You haven't officially approved ChatGPT or any AI tools in your firm. You haven't issued guidance. You haven't set policy.
But your team is using it anyway. Some openly, some quietly. Some to draft emails, some to analyze client documents. Some carefully, and some recklessly.
This is what I call the Shadow AI problem. And you need to address it now, not after something goes wrong.
The Reality
You can't prevent your team from using ChatGPT. It's free, it's easy, and it's everywhere. You could block it at the network level, but resourceful people will simply use it on their phones or personal devices.
So the question isn't "will they use it?" The question is "will they use it safely?"
Right now, without guidance, they're probably not.
What Shadow AI Looks Like
A junior associate is working on a client contract. They're not sure how to approach a clause. They paste it into ChatGPT and ask for interpretation.
That confidential contract is now on OpenAI's servers.
An office manager is preparing for a team meeting. They paste the agenda — which includes staffing issues, project concerns, and financial updates — into ChatGPT to brainstorm discussion points.
Your internal strategic information is now in OpenAI's systems.
A senior partner is researching a regulatory question. They describe the client's situation in detail to ChatGPT and ask for guidance.
You've now sent identifiable client information to a third party without consent.
These aren't hypothetical. This is happening at your firm right now. You just don't know about it because nobody's asking permission.
Why This Matters
Compliance risk: If you're in a regulated industry, ungoverned use of AI tools can put you in breach of your regulatory obligations. Your auditors will ask about it, and your compliance team should be concerned.
Client risk: If a client finds out their confidential information was pasted into a public AI system, that's a trust problem. Possibly a legal problem.
Data security risk: OpenAI may use consumer ChatGPT conversations to improve its models unless users opt out (its enterprise offerings operate under different terms). Even if your data is never used for training, it sits in a third party's systems. That's exposure.
Quality risk: If your team is using ChatGPT without verification, they might be getting wrong answers and treating them as reliable.
What You Should Do
Step 1: Acknowledge it's happening. Have a leadership conversation. Your team is using ChatGPT. That's not a failure of discipline. That's a signal that they see value in it. The question is how to channel that responsibly.
Step 2: Create a simple policy. Don't overthink this. Something like:
- ChatGPT and similar tools are approved for non-confidential work.
- No client information can be pasted into public AI systems.
- No internal strategy, financial data, or HR information can be shared.
- Always verify outputs, especially for regulatory or legal interpretation.
- Raise questions and edge cases with leadership before acting.
Step 3: Communicate it. Don't send a memo and assume people read it. Have a conversation. Explain the why. Ask for questions.
Step 4: Offer alternatives. If you've shut down ChatGPT for client-facing work but your team needs AI help with analysis, what's the alternative? Local tools? Enterprise agreements? Be clear about what they should do instead.
Step 5: Monitor and evolve. Spot-check occasionally. Ask your team what they're using ChatGPT for. If you see problems, address them. If you see new opportunities, evolve your policy.
The Honest Conversation
Here's what you should tell your team:
"We know many of you are using ChatGPT. It's a powerful tool. We want you to use it. But we need to use it safely. That means no confidential client information, no internal strategy, no personal data. If you need AI help with that kind of work, tell us and we'll find a better solution. Our job is to make sure you have good tools that don't put our clients or our firm at risk."
That kind of conversation leads to adoption and trust. "Stop using ChatGPT" leads to shadow usage and hidden risk.
The Bottom Line
The Shadow AI problem isn't about stopping your team from using AI. It's about making sure they use it safely. That only happens if you have a clear conversation about it.
Have that conversation this month. Don't wait.