By September 2025, the regulatory landscape for AI has clarified significantly from the chaos of 2022–2023. There's still uncertainty, but we're moving from "nobody knows the rules" to "here are the rules."
This matters for your firm. Here's what you need to know and what you should be doing.
What Happened in 2025
The EU AI Act Went Into Effect
The EU's AI Act, adopted in 2024, has been phasing into effect through 2025. Key points:
- Risk-based approach: Regulation depends on risk level (prohibited, high-risk, general-purpose).
- Professional services mostly fall in "high-risk" or "general-purpose" categories. Not prohibited, but with requirements.
- Requirements include: Documentation, transparency (tell users when AI is used), testing, and governance.
- Penalties: Fines for non-compliance are significant (up to €35 million or 7% of global annual turnover for the most serious violations).
GDPR Enforcement on AI Increased
By 2025, regulators clarified: GDPR applies to AI like any data processing. Key developments:
- Using large language models with personal data requires a lawful basis (contract, consent, legitimate interest)
- Data processing agreements must cover AI vendors (Anthropic, OpenAI, Google must agree to be data processors)
- The big three (Claude, GPT-4, Gemini) have clear contractual terms for this. Smaller tools are riskier.
- Enforcement is increasing. Fines are real.
US Approach: Sector-Specific, Not Comprehensive
The US has not passed comprehensive AI regulation. Instead:
- Financial services have specific AI guidance (SEC, banking regulators)
- Healthcare has FDA guidance on AI/ML systems
- Privacy laws (CCPA, state privacy laws) apply to AI
- No federal "AI Act" but pressure for one continues
Professional Liability Insurance Adapted
By mid-2025, insurance companies addressed AI:
- AI is no longer automatically excluded from coverage
- Coverage applies if you have documented governance and used AI responsibly
- Reckless AI use (using unapproved tools with client data) is still excluded
What This Means for Professional Services
1. You Can Use AI Responsibly (And Legally)
The key takeaway: regulation doesn't prohibit AI use in professional services. It requires responsible use.
Responsible means:
- Clear policies on which data may be used with which tools
- Data processing agreements with vendors
- Documentation of how AI is used
- Audit trails and governance
- Transparency with clients when AI is used for their work
2. GDPR is Your North Star
If you comply with GDPR (which most professional services firms must already do), you're most of the way to compliance with AI regulation. GDPR is the strictest major regime, so following it puts you ahead of most requirements.
3. Documentation Matters
Regulators care about evidence that you thought about the risks. Document your governance:
- Written policy on AI use
- Data classification (what data goes where)
- Vendor assessment (why you chose which tools)
- Impact assessment (how is using AI affecting data handling?)
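The data classification piece above can be expressed as policy-as-code, so the rule "which data goes where" is enforceable rather than just written down. This is a minimal sketch; the tool names and classification tiers are illustrative assumptions, not recommendations.

```python
# Data-classification policy as code: which data class is allowed in which tool.
# Tool identifiers and tiers below are hypothetical examples for illustration.

ALLOWED_TOOLS = {
    "public": {"claude_business", "chatgpt_enterprise", "chatgpt_free"},
    "internal": {"claude_business", "chatgpt_enterprise"},
    "client_confidential": {"claude_business"},  # only tools with a signed DPA
}

def is_permitted(data_class: str, tool: str) -> bool:
    """Return True if firm policy allows this data class in this tool."""
    return tool in ALLOWED_TOOLS.get(data_class, set())

print(is_permitted("client_confidential", "chatgpt_free"))    # False
print(is_permitted("client_confidential", "claude_business")) # True
```

A table like this doubles as the documentation regulators ask for: the policy file itself is the evidence that you thought about where data is allowed to go.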
Key Compliance Requirements (As of September 2025)
For Any Firm Using AI with Client Data
- Data Processing Agreement with AI Vendors. Confirm that Claude, ChatGPT, Gemini (whichever you use) will act as data processors under GDPR. The major vendors offer standard DPAs; get the agreement in writing.
- Written AI Use Policy. Document what data can be used with what tools. Example: "Client confidential data only in Claude Business tier, never in free ChatGPT."
- Transparency with Clients. Tell them when AI is used. Especially if it's client data. No surprises.
- Audit and Logging. Track who used which tools with what data. Basic logging is fine for smaller firms.
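The "basic logging" bar above is genuinely low. A sketch of what a smaller firm's audit log could look like, assuming a simple append-only CSV (field names and file path are illustrative, not a standard):

```python
# Minimal AI-usage audit log: one CSV row per use of an AI tool.
# File path and field names are hypothetical examples.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_usage_log.csv")
FIELDS = ["timestamp", "user", "tool", "data_class", "purpose"]

def log_ai_use(user: str, tool: str, data_class: str, purpose: str) -> None:
    """Append one audit record; write a header row if the file is new."""
    new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "tool": tool,
            "data_class": data_class,
            "purpose": purpose,
        })

log_ai_use("a.smith", "claude_business", "client_confidential", "draft memo")
```

A spreadsheet maintained by hand meets the same bar; the point is that "who used which tool with what data, when" is answerable on request.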
For Firms in EU or Serving EU Clients
- All of the above, plus:
- Impact Assessment for High-Risk Use. If you're using AI for high-stakes client decisions, document why it's safe.
- Vendor Due Diligence. Ensure your AI vendors comply with the EU AI Act. (The big three do.)
- Explicit Consent. For some use cases, get client consent to use AI on their work.
For Financial Services or Regulated Industries
- All of the above, plus:
- Follow industry-specific guidance (SEC, FCA, etc.)
- Model governance and testing requirements
- Documentation for regulatory examination
What's Coming (September 2025 Outlook)
Next 12 Months
- More US regulation. Expect sector-specific guidance, possibly a comprehensive AI bill.
- Clearer vendor liability. Regulations will address: if AI makes a mistake, who's responsible? Vendor or user?
- Bias and fairness standards. Expect more scrutiny on whether AI models are fair and non-discriminatory.
- International harmonization. Different countries' regulations will start converging (though not perfectly).
By End of 2026
- Most jurisdictions will have clearer AI policy.
- "AI compliance" will be standard business practice, not differentiator.
- Insurance will fully account for AI liability.
- Liability frameworks will clarify (less uncertainty on blame).
The Practical Playbook
If you haven't done this yet, here's the minimum for September 2025:
Phase 1: Documentation (This Month)
- List the AI tools your firm uses (Claude, ChatGPT, etc.)
- Document what data each tool is used for
- Write one-page AI policy covering data classification
- Get executive sign-off
Time: 4–8 hours. Cost: $0 (if internal).
Phase 2: Vendor Agreements (Next 30 Days)
- For each tool, verify it has a data processing agreement (DPA) available
- Sign the DPA
- Store it with your contracts
Time: 2–4 hours. Cost: $0–$1K (if legal review needed).
Phase 3: Client Communication (Next 60 Days)
- Add to engagement letter: "We use AI tools to enhance our work, including [list tools]. Your data is protected under [agreement]."
- If anything is sensitive, offer to discuss or get consent
Time: 2 hours. Cost: $0.
Phase 4: Monitoring (Ongoing)
- Basic audit log (who used what, when)
- Quarterly review of policy (is it working?)
- Monitor regulatory updates
Bottom Line
By September 2025, the regulatory space for AI is clearer than it was. The message is consistent: you can use AI responsibly. Being responsible means documenting your governance and being transparent with clients.
This isn't onerous. Most professional services firms have the maturity to do this. The ones that do will be compliant and confident. The ones that don't will face increasing regulatory pressure.
Want to discuss AI strategy for your firm?
Book a free 30-minute assessment — no pitch, just practical insights.
Book a Call