I had a conversation with a partner at a mid-market consulting firm recently. They'd just deployed an AI-powered recommendation system to help generate client advice. It was working well. Then someone asked: "What does our malpractice insurance actually cover here?"
They hadn't asked before deploying. This is more common than it should be. So let me walk through what you actually need to think about before your AI system gives advice to clients or affects client deliverables.
The Baseline: Who's Liable?
Let's be clear on something first: the professional—you, your partner, your firm—remains liable for advice you give to clients, whether you arrived at it with or without AI. If the advice is wrong, you're responsible. AI doesn't change that.
What AI does change is how that liability works:
- If a human made the mistake, it's a judgment error.
- If an AI made the mistake and a human didn't catch it, it's a process error. Process errors are harder to defend, because a missed review looks less like an honest judgment call and more like negligence.
Courts are still figuring out how to treat AI-assisted professional advice, but the trend is clear: professionals are expected to understand and validate the systems they're using.
The Insurance Problem
Most professional liability policies were written before generative AI existed, and insurers are still working out what they cover. Here are the questions to ask your broker:
1. Does the policy cover advice generated by AI?
Some insurers are already excluding AI-generated advice from coverage. Others haven't updated their policies yet. You need to know which camp you're in. If you discover six months after deployment that your insurer won't cover it, that's a $1M problem.
2. What's the disclosure requirement?
Do you need to tell clients that you're using AI in your process? Many lawyers already disclose their tools and processes to clients. But "we used legal research AI" may carry different insurance implications than "we used AI to generate the recommendation." Talk to your insurer about when disclosure is required.
3. What about data confidentiality?
If your AI is cloud-based and processes client data, does your policy cover data breaches? What if the vendor suffers a breach? What if the vendor goes out of business and your client data ends up somewhere sketchy? These are edge cases, but they're in the liability universe.
4. What documentation do you need?
If something goes wrong and a client sues, you'll need to show: "Here's our process. Here's how we validated the AI. Here's how we reviewed the output. Here are our test results." Not having documentation is a liability amplifier. Documentation that shows you acted responsibly is a liability reducer.
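To make "documentation" concrete, here's a minimal sketch of what an audit record for AI-assisted advice could look like. Everything in it is an illustrative assumption, not a standard: the field names, the JSONL log, the matter ID format. The point is the shape, not the schema: for each piece of AI-assisted advice, record what produced the draft, who reviewed it, and what the reviewer decided.

```python
# A minimal sketch of an audit record for AI-assisted advice.
# All names here are illustrative assumptions, not a standard.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AdviceAuditRecord:
    matter_id: str        # client matter or engagement reference
    model_version: str    # which model or vendor release produced the draft
    prompt_summary: str   # what the system was asked (no client secrets)
    output_hash: str      # hash of the raw AI output, stored separately
    reviewer: str         # the professional accountable for the advice
    review_outcome: str   # "approved", "edited", or "rejected"
    reviewed_at: str      # ISO-8601 timestamp of the human review

def log_review(record: AdviceAuditRecord, path: str = "advice_audit.jsonl") -> None:
    """Append one review record to an append-only JSONL audit log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_review(AdviceAuditRecord(
    matter_id="2024-0042",
    model_version="vendor-model-2024-06",
    prompt_summary="Draft restructuring options memo",
    output_hash="sha256:9f86d08...",
    reviewer="j.smith",
    review_outcome="edited",
    reviewed_at=datetime.now(timezone.utc).isoformat(),
))
```

An append-only log like this is cheap to keep, and it's exactly the kind of evidence the "here's our process" defense depends on.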
Beyond Insurance: Legal Risk
Insurance is one thing. But there are other legal risks that aren't about malpractice:
Intellectual Property
If your AI is trained on client data or uses client intellectual property, you might be creating IP issues. If you generate advice that incorporates a client's trade secret without proper handling, that's a problem. Talk to your IP counsel about what data can be used in your AI system.
Regulatory Risk
Different industries and jurisdictions have different rules about delegating professional judgment to systems. A lawyer using AI to draft a will might face regulation. A financial advisor using AI to generate investment recommendations definitely will. Understand your regulatory environment before deploying AI in client-facing work.
Custody and Control
Who controls the AI? You or a vendor? If a vendor changes the model and your advice quietly changes with it, the client will still hold you responsible. These custody questions matter. Prefer systems where you control the model and the data over systems where you depend on a vendor's update schedule.
Red Flags: When to Pump the Brakes
Some AI use cases carry enough liability risk that you should be especially cautious:
- Regulated industries. Legal, financial, healthcare advice. If there's a regulator involved, that regulator is probably not AI-friendly. Get ahead of that.
- High-stakes decisions. Advice that affects whether someone gets something (a loan, a contract, a job) needs careful handling. The person who doesn't get it will blame the AI.
- Untested systems. If you're the first to use a particular AI approach for your type of work, you're taking extra risk. Later firms can learn from your mistakes.
- Black-box systems. If you can't explain why the AI gave the advice it gave, that's a problem. You'll struggle to defend it in court.
What You Should Be Doing
Before you deploy any AI system that affects professional advice or client deliverables:
- Talk to your insurance broker. Disclose what you're planning. Get confirmation that it's covered. Get it in writing.
- Consult your outside counsel. Liability matters are legal matters. Have a lawyer assess the risk specific to your jurisdiction and industry.
- Document your process. How do you validate the AI? How do you review its output? When do you reject it? Who's accountable? Documenting this now is gold if you ever need to defend the decision. (A sketch of one way to enforce the accountability step follows this list.)
- Disclose to clients. I'd default to transparency: "We use AI to assist with X, but our professionals review and validate everything." This reduces liability because clients know what they're getting.
- Start conservatively. Use AI for low-risk work first. High-stakes advice comes later, after you've learned.
- Get professional indemnity right. Professional indemnity insurance exists for a reason. Make sure your coverage limits account for your new AI risks.
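Here's what that accountability step can look like in practice, as a minimal sketch. Nothing here is a real product or library; the class and method names are assumptions for illustration. The structural idea is what matters: unreviewed AI output simply cannot be released into a client deliverable.

```python
# A minimal sketch of a human-review gate. The names are illustrative
# assumptions, not a real library. The point is structural: AI output
# cannot reach a client deliverable without a named reviewer.

class ReviewRequired(Exception):
    """Raised when unreviewed AI output is about to leave the firm."""

class ReviewGate:
    def __init__(self):
        self._approved_by = None  # the accountable professional, if any

    def approve(self, reviewer: str) -> None:
        # Record who accepted professional responsibility for the output.
        self._approved_by = reviewer

    def release(self, ai_output: str) -> str:
        # Refuse to release anything no professional has signed off on.
        if self._approved_by is None:
            raise ReviewRequired("AI output has no accountable reviewer")
        return ai_output

# Usage: the reviewer, not the AI, owns the advice that goes out.
gate = ReviewGate()
draft = "AI-generated recommendation text"
gate.approve("j.smith")
deliverable = gate.release(draft)
```

The enforcement detail matters less than the principle: make the unreviewed path impossible, not just discouraged.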
The Real Talk
Insurance companies are adapting to AI. Eventually, using AI responsibly will be the standard of care in many professions, and not using it might be the liability. But we're not there yet. Right now, using AI is still somewhat novel, and novel things attract extra liability scrutiny.
The firms managing this best are the ones treating AI as a professional responsibility issue, not just a tech issue. They're getting insurance clarity, consulting counsel, and documenting their process. They're not moving fast and breaking things. They're moving carefully and building things they can defend.
That's the approach I'd recommend. The short-term speed gain from rushing AI deployment is never worth the long-term liability exposure.
Want to discuss AI strategy for your firm?
Book a free 30-minute assessment — no pitch, just practical insights.