I've watched enough firms launch AI initiatives to see the patterns. Some fail outright. Most don't get results. A few actually work.
Here's the pattern behind each.
The Three Types That Fail
Type 1: The "Cool Technology" Project
"We're going to use AI to [insert cool idea that's not really a problem we have]."
"We're going to build an AI chatbot to answer client questions." Problem: Your clients don't have that many simple questions. Most questions need judgment.
"We're going to use AI to predict which clients will churn." Problem: You don't have enough client data yet. You're building a cathedral no one needs.
"We're going to use machine learning to optimize staffing." Problem: Your staffing decisions are already pretty good. You're solving a problem that doesn't matter.
The pattern: Cool technology first, actual problem second. Backwards.
Type 2: The "Magic Solution" Project
"We'll implement [tool] and it'll fix [large systemic problem]."
"We'll implement an AI intake system and our client onboarding will be 50% faster." Problem: Client onboarding is slow because your process is complex, not because intake is slow. AI isn't the lever.
"We'll use ChatGPT and we won't need to hire new people." Problem: You still need people, and the work won't change that much. You're hoping for magic.
The pattern: Big expectations pinned to a small lever, then disappointment.
Type 3: The "Nobody Owns It" Project
"We bought a tool. It's up to the team to figure out how to use it."
No training. No guidance. No clear use cases. The tool sits unused because nobody knows what to do with it.
Even great tools fail this way.
The pattern: Technology push instead of use-case pull. Nobody champions it. It dies.
The One That Actually Works
The projects that actually deliver results share these characteristics:
1. Start with a real, specific problem
"Our intake process takes three hours per client. We hate it. It's boring work. We want to cut it by 40%."
2. Someone owns it
The operations director, a partner, or a manager says "I'm responsible for this. I'm making it work."
3. Quick measurement
Run a two-week pilot. Measure time saved. Get real numbers.
4. Tools that exist today
Use an off-the-shelf tool like ChatGPT. Not a custom build. Not something experimental. Something you can actually use today.
5. Clear training and adoption path
"Here's how you use this. Here's what we expect. Here's how to get help if it's not working."
6. Actually use the time saved
When your team finishes intake in two hours instead of three, they do something with that hour. They don't just leave early. The time creates value.
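The "quick measurement" step above needs nothing fancier than a few lines of arithmetic. Here is a minimal sketch; the baseline and per-client times are hypothetical numbers invented for illustration, not real pilot data.

```python
# Hypothetical pilot: compare average intake time during a two-week
# AI-assisted pilot against the pre-pilot baseline.
baseline_hours = 3.0  # average intake time per client before the pilot (assumed)
pilot_hours = [2.2, 1.9, 2.1, 2.0, 1.8]  # per-client times during the pilot (assumed)

avg_pilot = sum(pilot_hours) / len(pilot_hours)
saved_per_client = baseline_hours - avg_pilot
reduction = saved_per_client / baseline_hours

print(f"avg {avg_pilot:.1f}h/client, saved {saved_per_client:.1f}h ({reduction:.0%} cut)")
```

If the printed reduction is nowhere near the target you set in step 1, you have your answer in two weeks instead of two quarters.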
The Success Pattern
Real problem → Owner assigned → Quick pilot → Measure → Train → Scale
That's it. Simple and obvious, yet firms skip steps all the time.
The Framework for Project Selection
Here's how to decide which problem to tackle first:
- Is it specific? "Intake is slow" is not specific. "Intake takes 3 hours and 2 hours is spent on phone calls and note-taking" is specific.
- Is it repeatable? Does it happen multiple times per week? If it's once a year, it doesn't matter if AI cuts it by 50%.
- Is there clear ownership? Is there a person who will champion this? If not, don't start.
- Can we measure success? Can we count hours saved? Quality improvement? If we can't measure it, we won't know if it worked.
- Can we do it with tools that exist today? No custom development. No waiting. Something that's ready to use now.
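The five questions above amount to a go/no-go screen, so for readers who want it as something executable, here is a minimal sketch. Every name here — the fields, the once-a-week repeatability threshold — is an assumption made for illustration, not an established tool.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ProjectIdea:
    """A candidate AI project, scored against the selection checklist."""
    name: str
    is_specific: bool          # concrete problem statement, not "X is slow"
    runs_per_week: int         # how often the task recurs
    owner: Optional[str]       # named champion, or None if nobody owns it
    is_measurable: bool        # can we count hours saved or quality gains?
    uses_existing_tools: bool  # no custom development, no waiting


def passes_screen(idea: ProjectIdea) -> tuple[bool, list[str]]:
    """Return (go/no-go, list of failed criteria)."""
    failures = []
    if not idea.is_specific:
        failures.append("not specific")
    if idea.runs_per_week < 1:  # once a year doesn't matter, even at 50% savings
        failures.append("not repeatable")
    if idea.owner is None:
        failures.append("no owner")
    if not idea.is_measurable:
        failures.append("no way to measure success")
    if not idea.uses_existing_tools:
        failures.append("needs custom development")
    return (not failures, failures)


# Example: the intake problem from earlier in the piece.
intake = ProjectIdea(
    name="Cut intake from 3 hours to under 2",
    is_specific=True, runs_per_week=5,
    owner="operations director",
    is_measurable=True, uses_existing_tools=True,
)
go, why_not = passes_screen(intake)
```

The point of writing it down this way is that any single failed criterion is a stop sign: the function returns every failure at once, so you know exactly what to fix before restarting.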
What You Should Do
Take your AI project idea and ask: Which type is this?
If it's Type 1, 2, or 3, stop. Rethink it. Make it concrete and owned.
If it follows the success pattern, go ahead. You've got a shot.
The firms that win aren't the ones with the most AI. They're the ones who use the right AI on the right problems with clear ownership and measurement.
That's not complicated. It's just discipline.