The week after ChatGPT launched, I did something I normally don't do: I treated a tool like it mattered before I fully understood it. I just used it. Constantly. For eight days straight.
I drafted emails for a fictional law firm. I summarized conference notes. I wrote FAQ responses for a fake healthcare clinic. I asked it to explain tax credits, review a contract draft, and diagnose why a software project was over budget. I pushed it hard and I let it fail.
Here's the honest assessment: it's better than you think it is, but not for the reasons you might hope.
What ChatGPT Does Well
First drafts of routine communication. I asked ChatGPT to draft an email declining a request for a discount while keeping the client relationship intact. The first draft was stiff and corporate. But the second draft — after I fed it the tone from three previous emails — was genuinely good. It was the kind of thing a junior team member would produce, but faster.
Summarization of unstructured information. I dumped a 45-minute meeting transcript into ChatGPT and asked for action items. It got them right. Not perfectly — it missed one nuance about timing — but it was 80% of the way there. A person would still need to review it, but a junior associate could do that in five minutes instead of fifteen.
Template generation and adaptation. I asked it to write an onboarding email for new clients in a law firm context. It generated something reasonable that would have taken someone an hour to draft from scratch. A partner would still need to edit it for firm voice, but the structure was solid.
Explaining concepts in different ways. I asked it to explain the same tax concept five different ways, each aimed at a different audience level. It did. A CFO preparing to communicate with staff could use this as a starting point to find the right framing.
What ChatGPT Does Poorly
Making judgment calls. I gave it a contract scenario and asked whether to sign. It gave me a balanced answer about what to look for. But it couldn't actually advise me. It hedged every judgment because it has no context about what matters to my business, my risk appetite, or my negotiating position.
Working with proprietary or confidential information. Here's the part that matters for compliance: every conversation with ChatGPT goes to OpenAI's servers. If you paste client data, contracts, financial information, or anything regulated, you've sent it to a third party. For firms in healthcare, law, or finance, this is a deal-breaker without explicit security and compliance review.
Knowing what it doesn't know. ChatGPT has a habit I'll call "confident hallucination." If you ask it something outside its training data, it doesn't say "I don't know." It makes something up that sounds plausible. I asked it to cite the section of a specific regulation. It made up a section number. It sounded real. Only if you checked would you find out it was invented.
This is the dangerous part. A junior person using this tool without verification could confidently relay false information to a client or to leadership.
Long-form strategic thinking. I asked ChatGPT to write a business case for adopting a new software platform. It produced something that looked like a business case, but it was surface-level. No real analysis. No probing of assumptions. No devil's advocate. It won't replace a serious strategy conversation.
The Three-Layer Framework for ChatGPT
Here's how I'm thinking about where ChatGPT fits:
Layer 1 — Speed Up Routine Work: Email drafting, meeting summaries, FAQ responses, template generation. Tasks that take a junior person 30 minutes and take ChatGPT 30 seconds. A senior person still reviews, but you save time. No compliance risk if the content isn't confidential.
Layer 2 — Augment Decision-Making: Ask it to explain a concept different ways. Ask it to play devil's advocate on a proposal. Use it to stress-test your thinking. But it never makes the decision. You do. ChatGPT is a sparring partner, not an advisor.
Layer 3 — Stay Away From: Confidential information, regulatory decisions, anything that requires domain expertise you can't verify, anything where you need the model to know what it doesn't know. Don't paste client contracts. Don't ask it to interpret regulations for you. Don't let it drive client-facing advice.
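For teams that want to turn the three layers into a working rule, the framework can be sketched as a simple triage checklist. Everything below is illustrative: the keyword lists and the `triage` function are hypothetical placeholders, and a real firm would define its own policy with legal and compliance input rather than trusting keyword matching.

```python
# Illustrative sketch of the three-layer triage described above.
# Keyword lists are placeholders, not a real compliance policy.

ROUTINE = {"email draft", "meeting summary", "faq", "template"}
SPARRING = {"explain", "devil's advocate", "stress-test"}
OFF_LIMITS = {"client contract", "regulation", "confidential", "patient"}

def triage(task: str) -> str:
    """Classify a task description into one of the three layers.

    Off-limits terms are checked first, so a confidential task is
    blocked even if it also looks routine.
    """
    t = task.lower()
    if any(k in t for k in OFF_LIMITS):
        return "Layer 3: stay away. Keep it out of ChatGPT entirely."
    if any(k in t for k in SPARRING):
        return "Layer 2: augment. Use it as a sparring partner; you decide."
    if any(k in t for k in ROUTINE):
        return "Layer 1: speed up. Draft with ChatGPT, senior review after."
    return "Unclassified: default to human handling until reviewed."
```

For example, `triage("Summarize this confidential client contract")` lands in Layer 3 because the off-limits check runs before anything else, which is the point of the framework: confidentiality overrides convenience.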
The Real Limitation: It Has No Context
ChatGPT doesn't know your firm. It doesn't know your clients. It doesn't know your industry's regulatory space or your competitive position. It can't learn from feedback. Every conversation starts from scratch.
This is fine for generic tasks. It's catastrophic for anything where context matters.
A healthcare provider asking ChatGPT to draft a patient communication email needs to paste in all the context (HIPAA rules, patient demographics, clinic voice). It won't remember that context in the next conversation. Your team member will have to re-explain it every time.
Should Your Firm Use It?
Yes. But not for everything.
Start with the routine stuff. Have your team test it on email drafts for non-critical communication. Have someone summarize a non-confidential meeting. Generate a template outline and let a real person build on it.
Then talk to your leadership about data security. If you're in a regulated industry, get legal and compliance involved before your team treats ChatGPT like the office copier, something that's just there to use.
And be honest with your team about what it is. It's a tool that's very good at making plausible text very fast. It's not good at knowing when it's wrong. Your job is to verify.
That's the realistic read on ChatGPT in December 2022. It's useful. But it's not magic. And the firms that will win are the ones that use it for what it's good at, not the ones that hope it will do more than it can.