Six months ago, AI regulation was theoretical. Now the EU AI Act is in effect (partially), and other jurisdictions are moving fast. For professional services firms, that changes how you can deploy AI. Not by much, but meaningfully.
Here's what you need to know and what you need to do.
The EU AI Act Reality
The EU AI Act categorizes systems by risk level. A handful of practices (like social scoring) are banned outright. High-risk applications (systems that could affect fundamental rights) need specific safeguards: documentation, testing, human oversight, and transparency. Lower-risk applications (mostly everything else) carry at most light transparency obligations.
For professional services, most workflows are low-risk:
- Document analysis and extraction: low-risk
- Proposal generation: low-risk
- Client intake and classification: low-risk
- Research synthesis: low-risk
- Scheduling and coordination: low-risk
High-risk would be something like "AI decides whether to hire a candidate" or "AI determines loan eligibility." You're probably not doing that.
The practical requirement for low-risk: document what you're using AI for, how it works, who's responsible for reviewing output, and how you handle errors. That's it.
What Compliance Actually Requires
Documentation: A short write-up for each AI system covering what it does, which model it uses, whether it's trained or fine-tuned on your data, what safeguards are in place, and who is responsible for oversight.
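One lightweight way to keep these write-ups consistent is a structured record per system. A minimal sketch in Python; the field names here are illustrative assumptions, not anything the Act prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One-page summary of a deployed AI system (illustrative fields)."""
    name: str                   # e.g. "Client intake classifier"
    purpose: str                # what the system does, in one sentence
    model: str                  # underlying model or vendor
    fine_tuned: bool            # trained/fine-tuned on firm data?
    safeguards: list[str] = field(default_factory=list)
    oversight_owner: str = ""   # person responsible for reviewing output

# Hypothetical example entry for one workflow.
record = AISystemRecord(
    name="Client intake classifier",
    purpose="Routes incoming client enquiries to the right practice group",
    model="hosted LLM (vendor-managed)",
    fine_tuned=False,
    safeguards=["human review before routing", "error escalation log"],
    oversight_owner="Intake team lead",
)
```

A folder of records like this, one per workflow, is the "one-page summary" from the checklist below in machine-readable form.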
Testing: Before deploying, test on representative data. Document accuracy rates, failure modes, edge cases. Keep records.
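A pre-deployment test can be as simple as scoring the system against a labeled sample and keeping the failures for the record. A sketch of that loop; `classify` is a hypothetical stand-in for whatever your AI workflow actually does:

```python
def classify(document: str) -> str:
    """Stand-in for the AI workflow under test (hypothetical)."""
    return "contract" if "agreement" in document.lower() else "other"

# Representative labeled examples (in practice: 50+ real documents).
labeled_examples = [
    ("This Agreement is entered into by...", "contract"),
    ("Invoice #2041 for services rendered", "other"),
    ("Master Services Agreement, dated...", "contract"),
    ("Meeting notes from client call", "other"),
]

failures = []
correct = 0
for text, expected in labeled_examples:
    predicted = classify(text)
    if predicted == expected:
        correct += 1
    else:
        failures.append((text, expected, predicted))  # keep for the compliance record

accuracy = correct / len(labeled_examples)
print(f"accuracy={accuracy:.0%}, failures={len(failures)}")
```

The point is less the code than the artifact it produces: an accuracy number and a list of failure cases you can file with the system's documentation.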
Monitoring: Track how the AI performs in production. Are accuracy rates holding up? Are edge cases emerging? Keep logs.
Transparency: If your AI affects clients (e.g., it classifies their intake or suggests pricing), disclose that it's AI-assisted. Simple statement: "This analysis uses AI to assist our review. A human has verified the results."
Human Oversight: Someone is responsible for reviewing AI output and catching errors. Document who and what their review process is.
The Practical Checklist
For each AI workflow you deploy:
- [ ] Document the system (one-page summary is fine)
- [ ] Test on 50+ real examples, record accuracy
- [ ] Define the human review process
- [ ] Identify who's responsible for monitoring
- [ ] Set up logging to track performance
- [ ] Document how you handle errors when they occur
- [ ] If it affects clients, write a disclosure statement
This takes maybe 10-15 hours per workflow. Not nothing, but not prohibitive.
The Bigger Picture
Regulation is tightening, but not in a way that should stop you from deploying AI. The intent is to prevent harms and ensure transparency. For most professional services uses, you can easily meet that bar.
The firms that will struggle: ones that deployed AI with no documentation, no testing, no monitoring, and no oversight. If that's you, you need to retrofit compliance. Start with your riskiest workflow and work backward.
The firms that will thrive: ones that planned for compliance from the start. You have a documentation process. You test. You monitor. You're ready for any regulation that comes.
What's Coming
Other jurisdictions are following the EU's lead. The US hasn't passed a comprehensive AI law yet, but sectoral regulations are emerging (banking, healthcare, etc.). The UK, Canada, and others are developing frameworks. Few, if any, are likely to be as strict as the EU AI Act, but all of them will require some form of documentation and transparency.
If you build for EU compliance now, you're ahead of the curve for whatever comes next.
Want to discuss AI strategy for your firm?
Book a free 30-minute assessment — no pitch, just practical insights.
Book a Call