Attorney-Client Privilege and Cloud AI: The Structural Problem Law Firms Can't Negotiate Away

Disclaimer: This is not legal advice. Attorney-client privilege is jurisdiction-specific and governed by your state's ethics rules. Talk to your state bar and your own counsel before making decisions about AI tools and client information.

Every law firm conversation starts the same way. "Can we use cloud AI if we have an NDA with the vendor?"

No. Not because NDAs are worthless. Because privilege isn't a contract. It's a rule of evidence. Breaking it doesn't violate a vendor's terms of service. It breaks the law.

ABA Model Rule 1.6 and the Third-Party Disclosure Problem

Model Rule of Professional Conduct 1.6 requires lawyers to maintain confidentiality of client information. It's broad and it's strict. Secrets, strategies, plans, everything a client tells you in confidence for the purpose of obtaining legal advice. All of it.

Rule 1.6(a) does carve out space for disclosure when it's reasonably necessary to carry out the representation. Billing services, document management systems, paralegals, co-counsel. Client information has to leave the strictest vault of privilege sometimes because the work demands it.

The operative word is "necessary." If you can do the work without the third-party disclosure, you must. Share unnecessarily, and you risk waiving privilege.

Cloud AI providers are where this breaks down. When you send client information to OpenAI, Anthropic, or any other cloud service, you're disclosing that information to a third party that isn't your employee, isn't under your supervision, and isn't bound by your client retainer. The provider's systems process data from millions of customers, and your specific query may touch infrastructure in multiple countries.

Was that disclosure necessary? For most law firm cloud AI workflows, the honest answer is no. You could use local inference. The disclosure happens because cloud services are convenient and cheap, not because the work requires it.

ABA Formal Opinion 477R: The 2017 Opinion That Still Governs

In 2017, the ABA issued Formal Opinion 477R on securing client information that lawyers transmit and store electronically, cloud services included. It doesn't ban cloud use. It establishes conditions: the cloud provider must be contractually bound by confidentiality and security obligations comparable to the lawyer's own, and the lawyer must exercise reasonable supervision and due diligence over the provider.

That sounds workable. Here's the catch embedded in the same opinion: the lawyer remains responsible for any breach by the cloud provider. If the vendor leaks data, the client sues the lawyer, not the vendor, because the lawyer made the disclosure decision.

477R was issued in 2017, before today's large language models existed. No major state bar has since clarified how it applies to AI systems that train on input data, modify model behavior based on usage patterns, or operate under terms of service that explicitly reserve the right to use submissions for improvement. The legal terrain isn't settled. It's actively moving, and not in law firms' favor.

Where State Bars Are Heading

The California State Bar's 2023 practical guidance on generative AI advises lawyers to obtain informed client consent before using these tools with confidential client information. Informed consent means the client understands what the tool is, who operates it, where the data goes, and how it might be used. Many clients, once they understand that, say no.

Through 2024, New York and several other states issued ethics opinions raising explicit concerns about AI use with privileged information. The emerging consensus is narrow: cloud AI with client data is permitted only with explicit client consent and demonstrable necessity.

Five years ago this was a gray area most firms quietly ignored. Now it's moving toward prohibition unless you meet strict conditions. That trajectory isn't reversing.

Why Terms of Service Don't Fix This

A vendor's NDA is not privilege protection. Here's the structural reason why.

Privilege governs whether a court will compel you to disclose information. If you've disclosed client information to a third party, even under a confidentiality agreement, a court may hold that privilege is waived. The information is no longer protected.

A vendor's promise not to use your data doesn't restore privilege. It creates a breach-of-contract claim if they break the promise. But the evidence has already been disclosed. Once disclosed, a court can compel it.

Walk through the hypothetical. You send a client's sensitive tax strategy to a cloud AI under an NDA with the vendor. Two years later, your client gets audited, and the IRS subpoenas the chat history. Maybe the vendor deletes conversations daily and nothing comes back. Either way, you arguably waived privilege the moment you shared with a third party, NDA or not.

The NDA protects against data breach. It does not protect against disclosure. Those are fundamentally different things.

How Privilege Breaks in Practice

Most privilege failures in law firms aren't intentional. They're casual. A partner wants to draft a motion faster, copies the complaint into a cloud chat, and nobody's thinking about waiver. It's just convenient.

Then opposing counsel requests all communications related to the motion. Your firm discloses the chat history. Opposing counsel subpoenas the vendor's servers. The vendor says they don't retain conversation data, which may be true, but the motion is now on record as having been processed by a cloud system.

A sophisticated opposing counsel argues privilege was waived the moment you disclosed to a third party. Some courts will agree, particularly in jurisdictions that read privilege narrowly. Internal attorney strategy that was fully privileged before is now potentially discoverable.

The cost isn't just that one motion draft. It's potentially every communication connected to it.

What Local Inference Changes

When V4-Flash runs on your own hardware, inference happens entirely on your server. Your data never leaves your premises. No third-party disclosure occurs, and privilege isn't waived.

This isn't a theoretical advantage. It's structural. There's no confidentiality agreement to parse, no vendor terms of service to audit, no question about whether the disclosure was "necessary." The information stays in your control, full stop.

Local deployment doesn't eliminate the need to think carefully about how you use AI with client data. You still need internal policies. You still need staff training. You still need access controls. But you eliminate the disclosure question entirely, and that's the question that kills you in court.

Building Internal Policies Around Local AI

Even running locally, good practice requires structure.

Designate which matters can use AI assistance and which cannot. Some firms limit AI to internal support work (drafting, research, and initial analysis) and exclude it from final client deliverables without explicit consent. Others permit AI assistance in client communications as long as the system isn't cloud-based.

Control access. Not every staff member should touch sensitive client data. OpenWebUI's admin interface allows granular user permissions. Restrict which staff query which models. Build an audit trail.
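The role-based restriction and audit trail described above can be sketched in a few lines. This is a minimal illustration, not OpenWebUI's actual API: the helper name `log_query`, the role-to-model map, and the JSONL log file are all hypothetical, chosen to show the shape of a query gate that both enforces access and leaves a record.

```python
import json
import time
from pathlib import Path

# Hypothetical append-only audit log for local model queries.
AUDIT_LOG = Path("audit_log.jsonl")

# Illustrative policy: which roles may query which local models.
ALLOWED_MODELS = {
    "associate": {"general-drafting"},
    "partner": {"general-drafting", "sensitive-matters"},
}

def log_query(user: str, role: str, model: str, matter_id: str) -> bool:
    """Record a query attempt and return True only if the role may use the model."""
    allowed = model in ALLOWED_MODELS.get(role, set())
    entry = {
        "ts": time.time(),
        "user": user,
        "role": role,
        "model": model,
        "matter": matter_id,
        "allowed": allowed,
    }
    # Every attempt is logged, including denials, so the audit trail is complete.
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return allowed

# A partner may reach the sensitive model; an associate may not.
print(log_query("jdoe", "partner", "sensitive-matters", "M-1042"))    # True
print(log_query("asmith", "associate", "sensitive-matters", "M-1042"))  # False
```

The point of the sketch is the shape, not the specifics: deny by default, log every attempt, and keep the log somewhere staff can't edit it.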

Establish retention policies. How long do conversation histories live? Who can retrieve them? Local systems let you enforce this. Cloud systems leave the vendor in control of the answer.
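One advantage of local control is that a retention policy can be enforced mechanically rather than hoped for. Here is a minimal sketch, assuming conversation histories are stored as JSON files in a directory; the function name `purge_old_histories` and the 30-day window are illustrative, not any product's API.

```python
import time
from pathlib import Path

RETENTION_DAYS = 30  # illustrative firm policy, not a recommendation

def purge_old_histories(history_dir: Path, retention_days: int = RETENTION_DAYS) -> list[str]:
    """Delete conversation-history files older than the retention window.

    Returns the names of the files removed, so the run itself can be audited.
    """
    cutoff = time.time() - retention_days * 86400
    removed = []
    for f in history_dir.glob("*.json"):
        # Compare the file's last-modified time against the cutoff.
        if f.stat().st_mtime < cutoff:
            f.unlink()
            removed.append(f.name)
    return sorted(removed)
```

Run something like this on a schedule (cron, for instance) and log the returned list. With a cloud vendor, the equivalent policy is whatever their terms say it is.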

Document your decisions. If you restrict AI use on a particular matter, record why. If you use it only in limited ways, record how. When a privilege question surfaces later, you want evidence that you thought it through.

Air-Gapping: When the Maximalist Approach Makes Sense

Some firms go further and operate local AI hardware completely offline. No internet connection. No external API calls. Nothing leaves the network except physical mail.

Island Mountain systems can run on fully air-gapped networks: OpenWebUI as your interface, everything on internal hardware, fixed cost with no per-query cloud fees.

This is overkill for most practices. For firms handling M&A, CFIUS matters, or client information that is itself a trade secret, it's entirely reasonable.

The Client Consent Conversation

Even with local AI, consider getting explicit client consent. Most jurisdictions don't legally require it, but it's sound practice and takes thirty seconds.

"We use local AI systems to assist with research, drafting, and analysis. The systems run on our own hardware, not cloud services. Your information stays on our network. Do you consent to this use?"

Most clients say yes. The consent letter goes in the file and becomes evidence of thoughtful practice.

What You Should Do Right Now

If your firm is using cloud AI with client information, stop. The risk isn't speculative anymore. State bars are issuing opinions. Courts are beginning to address privilege waiver in the context of AI use. The trend is toward treating cloud disclosure as presumptively improper.

If you're considering it, don't. The convenience is real. So is the privilege exposure. Local AI is no longer a specialized or expensive solution. It's economically practical and structurally safer.

If you're already running local AI, document it. Train your staff. Build audit trails. Be able to explain exactly why you chose local systems and how you manage access.

For questions about configuring Island Mountain hardware for a law firm environment, including OpenWebUI access control and air-gapping, reach out. We work with practices of every size and can help you build a compliance-first AI workflow.

Summary: Cloud AI creates structural privilege waiver risk that no NDA resolves because disclosure to a third party can destroy confidentiality regardless of contractual protections.