When AI Becomes a Legal Liability: What US v. Heppner Means for MSPs and Their Clients


Your clients are using AI tools every day. Most of them never think twice about it.

They type sensitive questions into ChatGPT. They draft communications about active disputes inside free AI platforms. They use generative tools to think through problems, including legal ones, without any awareness of what those platforms do with that information.

On February 10, 2026, a federal court made clear that this behavior carries serious legal consequences.

The ruling in United States v. Heppner, handed down by Judge Jed S. Rakoff of the Southern District of New York, is one of the most significant AI-related legal decisions to date. It directly affects every organization you support, and it gives MSPs a meaningful opportunity to step into a strategic advisory role that goes well beyond technical support.

Here is what happened, why it matters, and what to do about it.


What Happened in US v. Heppner

The defendant in this case was facing federal securities fraud charges. After receiving a grand jury subpoena and realizing he was under criminal investigation, he turned to a publicly available generative AI tool to help him outline his potential legal defense.

He acted on his own. No attorney directed him to use the tool. No enterprise platform with privacy protections was involved. He opened a free, public AI application and started typing.

The FBI seized those AI-generated materials during a search. When Heppner attempted to claim attorney-client privilege and work product protection over them, the court denied both claims.

The ruling is specific to its facts, but the precedent it establishes is far-reaching. Your clients need to understand it now.


Why the Privilege Claims Failed

Judge Rakoff identified three reasons the privilege claims could not stand. Each one is directly relevant to how employees across your client organizations are using AI today.

The AI tool is not an attorney.

Attorney-client privilege requires an actual communication with legal counsel. A generative AI chatbot is not a lawyer. Information entered into a public AI platform is not a privileged communication, no matter how legal the question may be.

Public AI platforms are not confidential.

This is the issue that should concern your clients most. Most publicly available AI tools include privacy policies that permit the collection of user data, the use of that data for model training, and in some cases disclosure to third parties. The moment a client’s employee types sensitive business, legal, or strategic information into a public AI tool, that information has been shared with a third party. From a legal standpoint, this can constitute a complete waiver of privilege, even if that was never the employee’s intent.

Without attorney direction, work product protection does not apply.

The work product doctrine protects materials prepared by or at the direction of an attorney in anticipation of litigation. When an employee independently consults a public AI tool without any attorney involvement, those materials do not qualify as protected work product. They are fully discoverable.


This Risk Lives in Every Department

This is not a problem limited to executives under investigation or legal departments managing active litigation.

This risk exists across every department in every organization you support right now.

Consider what is likely happening inside businesses you serve today. An HR manager types the details of an active employee dispute into a free AI tool to think it through. A finance director pastes confidential contract terms into ChatGPT to help draft a vendor response. A business owner uses a public AI platform to work through a regulatory inquiry before calling their attorney.

In each of these scenarios, sensitive information has been shared with a third-party platform. Privilege may be waived. The information may be stored, analyzed, or in certain circumstances disclosed. This is the exact scenario the Heppner ruling addresses, and it is happening inside organizations that have no AI use policy in place.


Shadow AI Multiplies the Risk

If your clients do not have a formal AI governance policy, there is a strong likelihood that employees are already using unauthorized AI tools without the knowledge of IT, legal, or leadership. This is shadow AI, and it creates more than cybersecurity exposure. As Heppner makes clear, it creates direct legal liability.

As outlined in Chapter 13 of Rewired MSP, shadow AI tools deployed without proper oversight can leak sensitive information, create untracked data flows, and introduce vulnerabilities that are hard to identify until they become serious incidents. Just as shadow IT once created problems by introducing unapproved hardware and software, shadow AI takes those risks to another level entirely.

When employees use public AI platforms to process sensitive business information, they are making decisions about data confidentiality that they are not qualified to make. They do not know what the platform retains. They do not know whether their inputs become training data. They do not know whether their communications are discoverable in litigation. They only know the tool gave them a helpful answer.


Five Actions MSPs Should Take Right Now

The Heppner ruling gives MSPs a concrete reason to initiate AI governance conversations with every client. Here is a practical starting framework.

1. Audit AI usage across your client environments.

Deploy discovery tools to identify which AI applications are in use, by whom, on which data, and under what terms. As the Rewired MSP framework puts it, regular scans and monitoring should be conducted to identify shadow or rogue AI deployments. This requires a combination of technical tools and ongoing user education. You cannot govern what you cannot see.
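As a rough illustration of what a discovery pass can look like, the sketch below flags outbound traffic to known public AI platforms in a proxy or DNS log export. The domain list, log columns, and CSV format here are illustrative assumptions, not a definitive inventory; a real audit would use your gateway's actual export format and a maintained list of AI services.

```python
import csv
import io

# Hypothetical starter list of public AI endpoints; extend with your own
# inventory and keep it current as new tools appear.
PUBLIC_AI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def flag_ai_usage(log_csv: str) -> list[dict]:
    """Return log rows whose destination host matches a known AI domain."""
    hits = []
    for row in csv.DictReader(io.StringIO(log_csv)):
        host = row.get("dest_host", "").lower()
        # Match the domain itself or any subdomain of it.
        if any(host == d or host.endswith("." + d) for d in PUBLIC_AI_DOMAINS):
            hits.append(row)
    return hits

# Assumed log shape: timestamp, user, destination host.
sample_log = """timestamp,user,dest_host
2026-02-10T09:14:02,hr-manager,chatgpt.com
2026-02-10T09:15:40,finance-dir,vendor-portal.example.com
2026-02-10T09:17:11,owner,claude.ai
"""

for hit in flag_ai_usage(sample_log):
    print(f"{hit['user']} -> {hit['dest_host']}")
```

A pass like this only surfaces usage; pairing the findings with user education is what turns the audit into governance.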

2. Help clients build an AI Acceptable Use Policy.

Every organization needs a written policy that defines approved AI tools, prohibits certain categories of sensitive input, limits retention of AI-generated outputs, and establishes a clear escalation process. MSPs should also require contractual guarantees from AI vendors that client data will never be used for further model training without explicit client agreement.
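One way the "prohibited categories of sensitive input" rule can be backed technically is a DLP-style pre-prompt screen. The sketch below is a minimal, assumed example: the category names and patterns are placeholders a policy team would define, not a complete or reliable detector.

```python
import re

# Illustrative prohibited-input categories from a written AI Acceptable
# Use Policy, expressed as simple patterns. Real deployments would use a
# proper DLP engine; these two rules are placeholder assumptions.
PROHIBITED_PATTERNS = {
    "legal_matter": re.compile(r"\b(litigation|subpoena|grand jury)\b", re.IGNORECASE),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the policy categories a prompt would violate, if any."""
    return sorted(name for name, pat in PROHIBITED_PATTERNS.items() if pat.search(text))

print(screen_prompt("Draft a reply about the grand jury subpoena, SSN 123-45-6789"))
# -> ['legal_matter', 'ssn']
```

A screen like this does not replace the written policy or legal review; it simply gives employees a guardrail at the moment they are about to paste something they should not.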

3. Replace public AI tools with enterprise-grade alternatives where possible.

Where AI use is approved and appropriate, guide clients toward enterprise-licensed platforms that offer contractual guarantees around data retention, privacy, and confidentiality. The difference between a public free tool and a properly governed enterprise deployment is often the difference between discoverable and protected.

4. Train employees at every level, not just leadership.

The risk does not live only in the executive suite. It lives in every employee with a browser and a question they want answered quickly. Practical, plain-language training on what types of information should never be entered into a public AI prompt is essential. Workshops, case studies, and real-world examples are especially effective at creating a vigilant and proactive user base.

5. Ensure legal counsel is involved before AI touches anything sensitive.

Educate clients that if AI will be used in any context involving active litigation, regulatory inquiry, HR disputes, or contract negotiations, their attorney needs to be part of defining those processes first. That is the only pathway to preserving work product protection.


The Strategic Opportunity in Front of MSPs

The Heppner ruling is a clear signal that courts and regulators are catching up to AI adoption, and doing so in ways that will catch organizations off guard.

MSPs are already trusted to protect clients from cybersecurity threats, data breaches, and compliance failures. AI governance is the natural next layer of that same responsibility.

As Rewired MSP frames it, the true value of an MSP in the AI era is not measured by technical capability alone. It is measured by the safety, empowerment, and resilience you provide to those you serve. By prioritizing responsible AI, protecting against shadow IT and shadow AI, and resisting the pull of empty trends, MSPs become architects of trust and guardians of their clients’ futures.

Your clients are not thinking about privilege waivers when they open ChatGPT. They are thinking about getting their question answered. That gap in awareness is exactly where your value as a strategic partner lives.

The MSPs who bring this conversation proactively will not just protect their clients from legal exposure. They will deepen the relationship in a way that no competitor offering a lower monthly rate can replicate.


Go Deeper: Rewired MSP

Chapter 13 of Rewired MSP covers the full framework for guiding clients to ethical and responsible AI adoption, including how to evaluate AI vendors, build governance policies, run employee training, and position your MSP as a trusted advisor in the AI era.

If you are serious about leading your clients through the risks and opportunities of artificial intelligence, this is required reading. Get your copy of Rewired MSP on Amazon today.


Frequently Asked Questions

What is US v. Heppner and why does it matter for businesses?

US v. Heppner is a February 2026 federal ruling from the Southern District of New York in which Judge Jed S. Rakoff held that materials created using a public AI tool were not protected by attorney-client privilege or the work product doctrine. It matters for businesses because it establishes that employees who use public AI platforms to process sensitive legal or strategic information may inadvertently waive the legal protections they assumed they had.

Can attorney-client privilege apply to AI-generated content?

It depends on the circumstances. Based on the Heppner ruling, attorney-client privilege is unlikely to apply when a public AI tool is used, no attorney directed the creation of the content, and the platform’s privacy policy allows data collection or third-party sharing. Privilege may be preserved in more controlled enterprise environments where an attorney is involved and confidentiality can be reasonably maintained.

What is shadow AI and why is it a legal risk?

Shadow AI refers to the use of AI tools within an organization without the knowledge or approval of IT or leadership. It creates legal risk because employees using unauthorized public AI platforms may be inadvertently sharing privileged or confidential business information with third parties, potentially waiving legal protections and exposing the organization to discovery in litigation.

What should an AI Acceptable Use Policy include?

At a minimum, an AI Acceptable Use Policy should define which tools are approved for use, specify categories of information that may never be entered into any AI system, establish retention limits for AI-generated outputs, and create a clear escalation process for legal or sensitive use cases. It should be reviewed by legal counsel before implementation and updated as new tools and regulations emerge.

How can MSPs help clients manage AI legal risk?

MSPs can help by auditing existing AI usage, identifying shadow AI deployments, recommending enterprise-grade tools with appropriate data protections, building and implementing AI Acceptable Use Policies, and training employees on what types of information should never be shared with public AI platforms. This positions the MSP as a strategic advisory partner rather than simply a technical support provider.

What is the difference between attorney-client privilege and the work product doctrine?

Attorney-client privilege protects confidential communications between a client and their attorney made for the purpose of obtaining legal advice. The work product doctrine protects documents and materials prepared by or at the direction of an attorney in anticipation of litigation. Both protections can be waived if sensitive information is shared with unauthorized third parties, including public AI platforms.
