Regulatory guidance on AI and privacy is still evolving, but the fundamentals haven't changed. If you understood data privacy before AI, you need to understand one additional layer: what happens when your data flows through an AI system.

Most of the questions I hear from firms are variations on the same concern: "Can we send client data to Claude or another AI platform without violating privacy regulations?" The answer is more nuanced than yes or no, but it's navigable.

The Core Rule: Classification Before Deployment

The first step is knowing what data you're talking about. Data breaks into categories that matter for privacy law:

  - Public or non-sensitive data: already published, or nothing in it identifies a person.
  - Personal data: anything that identifies an individual; this is GDPR and state privacy law territory.
  - Regulated data: health and financial records, which carry sector-specific rules on top of generic privacy law.
  - Client-confidential data: material your engagement letters and NDAs restrict, whether or not privacy law applies.

Firms I work with that have clear policies around this don't have privacy problems with AI. Firms that just dump data into systems and hope for the best do.
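To make "classify before you send" concrete, here is a minimal sketch in Python. The category names, the allow list, and the may_send_to_ai helper are illustrative assumptions, not a standard taxonomy; the point is that classification is an explicit step that happens before any data reaches an AI system.

```python
from enum import Enum


class DataCategory(Enum):
    """Illustrative categories; align these with your own data inventory."""
    PUBLIC = "public"                     # already published, no restrictions
    INTERNAL = "internal"                 # firm data with no personal information
    PERSONAL = "personal"                 # GDPR / state privacy law territory
    REGULATED = "regulated"               # health or financial data with sector rules
    CLIENT_CONFIDENTIAL = "confidential"  # covered by engagement letters and NDAs


# Categories your policy allows to leave the firm for a given AI vendor.
# Populate this from your vendor reviews, not from guesswork.
AI_VENDOR_ALLOWED = {DataCategory.PUBLIC, DataCategory.INTERNAL}


def may_send_to_ai(category: DataCategory) -> bool:
    """Default-deny: only categories explicitly on the allow list go out."""
    return category in AI_VENDOR_ALLOWED
```

The design choice that matters is the default: anything not explicitly on the allow list stays in.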

GDPR and EU Data Transfers in 2026

GDPR is now mature. The rule: you can't transfer personal data of EU residents outside the EU unless there's a legal mechanism (adequacy decision, standard contractual clauses, binding corporate rules).

Most major AI platforms (Claude, ChatGPT, etc.) now offer data processing agreements (DPAs). If you're processing EU residents' personal data through these platforms, you need one in place. Period. Your legal team can get it directly from the vendor; it's standard practice by now.

The practical implication: before you send data to any AI system, confirm a DPA exists. If your AI vendor can't provide one, don't send EU resident data to them.

CCPA and US State Privacy Laws

CCPA applies if your firm does business with California residents and meets the law's revenue or data-volume thresholds. It requires notice at or before collection about what data you collect and why, gives consumers rights to access and delete their data, and obligates you to honor "do not sell or share" requests.

When you send California resident data to an AI platform for processing, you're disclosing it to a third party. CCPA can treat that as a "sale" or "share" unless the vendor qualifies as a service provider: a written contract must restrict them to processing the data on your behalf and prohibit using it for their own purposes.

Solution: same as GDPR. Get a DPA. Confirm the vendor won't use your data for training or other secondary purposes.

Other states (Colorado, Connecticut, Utah, Virginia, and a growing list) have similar laws; most are already in force, and more take effect through 2025-2026. The principle is the same: know what you're sending, where it's going, and what the vendor will do with it.

Industry-Specific Constraints: Health and Finance

If you're in healthcare or finance, you have additional rules that override generic privacy law:

  - Healthcare: HIPAA governs protected health information. Any AI vendor that touches PHI needs a signed Business Associate Agreement (BAA), not just a DPA.
  - Finance: GLBA and your regulators' vendor-management expectations restrict how customer financial data can be shared with service providers, and they expect documented due diligence.

If your work touches regulated data, your first question on any AI tool should be: "What compliance certifications do you have?" Not "Is it approved?" but "Is it certified for the data I need to process?"

Client Confidentiality: The Contract Layer

Privacy law is one constraint. Contracts are another. Your engagement letters and NDAs with clients probably say something like "You will not disclose client information to third parties without written consent."

Sending client data to an AI platform is disclosing it to a third party. Even if that vendor is compliant with privacy law, you may need explicit client consent first. Many firms now include an "AI processing" clause in their engagement letters that says: "We may use AI tools for analysis, provided data is not used for third-party training and is processed in accordance with all privacy laws."

If your client matters are highly sensitive (government work, litigation, M&A), you might not get that consent. Plan for manual workflows in those cases.

The Practical Framework for 2026

Here's what I tell every firm I advise:

  1. Classify your data before you use an AI tool.
  2. Check that the tool offers a DPA for personal data (GDPR/CCPA) and a BAA for health data.
  3. Confirm in your client contracts that AI processing is permitted (or get consent case-by-case).
  4. For truly sensitive data (litigation, government), plan manual workflows.
  5. Document your decisions. If you're audited, "We thought about it" is less credible than "We evaluated these tools and chose this one because it met these requirements."

This is not paralyzing. Most established AI platforms now meet these requirements. But you need to be intentional, not accidental.
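For firms that want to operationalize steps 1 through 5, one option is a small pre-flight gate in front of your AI integrations. This is a sketch under assumptions: the VendorRecord fields, the category strings, and the log format are placeholders for whatever your own vendor reviews and engagement terms actually say.

```python
import json
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-preflight")


@dataclass
class VendorRecord:
    """What you know about an AI vendor, filled in from your own review."""
    name: str
    has_dpa: bool          # data processing agreement signed (GDPR/CCPA)
    has_baa: bool          # business associate agreement signed (HIPAA)
    trains_on_data: bool   # vendor may use your inputs for training


def check_preflight(vendor: VendorRecord, data_category: str,
                    client_consented: bool) -> bool:
    """Apply the framework: classify, check agreements, check consent, document."""
    reasons = []

    if vendor.trains_on_data:
        reasons.append("vendor may train on submitted data")
    if data_category in ("personal", "health", "financial") and not vendor.has_dpa:
        reasons.append("no DPA covering personal or regulated data")
    if data_category == "health" and not vendor.has_baa:
        reasons.append("no BAA covering health data")
    if data_category == "client_confidential" and not client_consented:
        reasons.append("client has not consented to AI processing")

    allowed = not reasons

    # Step 5: document the decision so an audit sees more than "we thought about it".
    log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "vendor": vendor.name,
        "data_category": data_category,
        "allowed": allowed,
        "reasons": reasons,
    }))
    return allowed
```

For example, a vendor with a DPA but no BAA would be refused for health data, and the refusal and its reasons would end up in the log rather than in someone's memory.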

The firms that get this right in 2026 will have fewer compliance surprises and more confidence in their AI implementations.

Want to discuss AI strategy for your firm?

Book a free 30-minute assessment — no pitch, just practical insights.

Book a Call