One of the highest-risk scenarios I see: a partner feeds client information into an AI tool without understanding what happens to that data. The client finds out. The relationship explodes.
This isn't hypothetical. This is happening at firms right now. And the partners involved have no idea they've violated their engagement letters until the client tells them.
Client confidentiality and AI intersect in ways every professional services firm needs to understand.
The Core Problem
Your client agreement almost certainly says something like: "We will keep your information confidential and will not disclose it to third parties without your permission."
When you feed client information into consumer ChatGPT or any other unapproved AI tool, you're violating that agreement. You're sending client data to a third party (OpenAI, or whoever operates the tool).
It doesn't matter if you think the tool is secure. It doesn't matter if the AI doesn't "remember" the data. You've used it as a third-party processor without authorization.
That's a breach of contract, potentially a breach of professional responsibility, and definitely a client confidentiality violation.
What Your Client Agreements Should Say
Here's the uncomfortable truth: most client agreements don't explicitly address AI. You need to fix this.
Add language that covers AI use. Something like:
"We may use third-party software tools and services to enhance our delivery of services to you, including artificial intelligence tools. Any such use will comply with applicable law and professional ethics rules regarding confidentiality. We will not disclose your information to third-party AI systems without your prior written consent, except where we use tools that have been contractually verified to provide adequate confidentiality protections."
This language does two things: (1) it notifies clients that you might use AI, and (2) it clarifies that you won't use unauthorized tools.
Then, when you deploy approved tools, you can update client agreements or get explicit consent: "We use ChatGPT Enterprise for [specific purpose]. It has SOC 2 compliance and contractual protections."
Which Tools Are Safe to Use?
Safe (with caveats):
- ChatGPT Enterprise: Has SOC 2 compliance, contractual data protections, and doesn't train on your data.
- Claude (through Anthropic's API): Doesn't train on your data by default and the protections are contractual, but it requires technical integration rather than a ready-made app (see the sketch after these lists).
- Vendors with Business Associate Agreements (BAAs) or equivalent: A BAA (for health data) or a comparable data processing agreement means your data is contractually protected.
Not safe:
- ChatGPT Plus (consumer): By default, OpenAI can use your conversations for model training (you can opt out in settings, but a setting is not a contract). You have no contractual protection.
- Free tools: Unless explicitly protected, assume your data is being used for training or analysis.
- Unknown vendors: If you don't have a contract with the vendor and you don't know how they handle data, don't use them with client information.
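A note on the Claude caveat above. "Requires API integration" means someone on your team writes a small amount of code instead of opening a chat window. Here's a minimal sketch using Anthropic's Python SDK; the model name is illustrative (verify it against Anthropic's current documentation), and the prompt is a placeholder, not a suggestion to send client data before your vetting is done.

```python
# Minimal sketch of "API integration" with Claude, using Anthropic's
# Python SDK (pip install anthropic). The API key is read from the
# ANTHROPIC_API_KEY environment variable, so it never sits in the code.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative; check current model names
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            # Placeholder prompt. Even with contractual protections,
            # only send what your engagement letter permits.
            "content": "Summarize the key obligations in this engagement letter: ...",
        }
    ],
)

print(response.content[0].text)
```

The point isn't the code itself. It's that the API route gives you a contract and a documented data-handling posture, at the cost of a little engineering.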
The Data Protection Checklist
Before you use any AI tool with client information, verify:
1. Contractual protection — Does the vendor have a contract (BAA, DPA, or similar) that explicitly protects your data?
2. Training data policy — Does the vendor explicitly promise not to use your data for model training?
3. Data retention — How long does the vendor keep your data? Can you request deletion?
4. Subprocessors — Does the vendor use other vendors to process your data? If so, who?
5. Compliance certifications — Does the vendor have SOC 2, ISO 27001, or equivalent certifications?
If you can't answer all five of these, don't use the tool with client data.
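If your firm tracks vendor reviews in a system, this checklist reduces to an all-or-nothing gate. Here's an illustrative Python sketch; the VendorReview structure and its field names are my own, not any standard.

```python
# Illustrative sketch of the five-point gate as code. The VendorReview
# structure and field names are hypothetical, not from any standard.
from dataclasses import dataclass

@dataclass
class VendorReview:
    has_data_protection_contract: bool   # BAA, DPA, or similar in place
    no_training_on_our_data: bool        # explicit written promise from the vendor
    retention_and_deletion_known: bool   # retention period and deletion path documented
    subprocessors_disclosed: bool        # we know who else touches the data
    certified: bool                      # SOC 2, ISO 27001, or equivalent

def approved_for_client_data(review: VendorReview) -> bool:
    """All five checks must pass; anything unverified counts as a failure."""
    return all(vars(review).values())

# Example: a vendor that checks every box except subprocessor disclosure.
review = VendorReview(True, True, True, False, True)
print(approved_for_client_data(review))  # False -> don't use with client data
```

The design choice worth copying is the default: anything you haven't verified is recorded as False, so "we're not sure" fails the gate.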
Client Disclosure
Do you need to tell every client you're using AI? Not necessarily; it depends on your engagement letter and your industry.
Tell them if:
- Your engagement letter requires disclosure of methodology
- You're using AI for analysis that directly affects client advice
- Your industry has specific disclosure requirements (healthcare, law, finance often do)
You can probably skip disclosure if:
- You're using AI for internal administration (email management, document organization)
- Your engagement letter allows you to use tools as you see fit
- The AI is just assisting human judgment, not replacing it
But when in doubt, tell them. Transparency builds trust. And it prevents the scenario where a client finds out you're using AI and feels betrayed.
A Practical Framework
For each AI tool you're considering, ask these questions:
- Is there a contract protecting our data?
- Can we get explicit client consent?
- Do we need to update our engagement letters?
- What's our liability if something goes wrong?
- Are we comfortable explaining this to a court?
If you can't confidently answer at least four of these five, don't deploy the tool.
Client confidentiality isn't negotiable. It's the foundation of professional services. AI makes it more complex, but not less important.
Want to discuss AI strategy for your firm?
Book a free 30-minute assessment — no pitch, just practical insights.
Book a Call