As of August 2025, I've been advising firms on AI for three years. I've been wrong a lot. This post covers the biggest mistakes I made and what I've learned from them. If you're building AI into your firm, I hope these save you some pain.
Mistake 1: I Overestimated How Much Capability Mattered
In 2022–2023, I spent a lot of mental energy comparing models. Which one was smarter? Which one handled long documents better? Which one was faster?
I was wrong about what mattered. By 2024, I'd realized that capability differences matter far less than adoption and workflow integration.
A firm using Claude effectively beats a firm with "better" models that nobody uses. The question isn't "which model is best?" It's "which model will your team actually adopt?"
What I Did Wrong
I recommended Model A over Model B because the benchmarks showed A was 3% better. The firm adopted neither because neither was integrated into their workflow.
What I Should Have Done
Recommended the model that was already accessible through tools they used. If they use Slack, recommend ChatGPT. If they use Google Workspace, recommend Gemini. Integration matters more than capability.
The Lesson
Technology adoption is mostly organizational, not technical. By August 2025, I focus on adoption-first, capability-second.
Mistake 2: I Underestimated the Importance of Governance
In early engagements (2022–2023), I focused on "getting AI working." Governance was an afterthought.
Big mistake. The firms that succeeded weren't the ones with the best models. They were the ones with clear policies on data, approved tools, audit trails, and accountability.
What I Did Wrong
I advised a firm to start using Claude for client work without setting up clear data classification or audit procedures. They did, and six months later, they had no visibility into what was running where. They shut down the program, scared of liability.
What I Should Have Done
Governance first. "Here are the three tools you can use. Here's what data you can use them with. Here's how we audit and log. Now adopt." That's the sequence that works.
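The sequence above can be sketched as a simple governance gate. This is a minimal illustration, not a real policy engine; the tool names, data classes, and function names are all hypothetical placeholders.

```python
# Sketch of a governance gate: an allowlist of tools, the data classes
# each tool is approved for, and an audit log of every request.
# All tool names and data classes here are illustrative assumptions.

APPROVED_TOOLS = {
    "claude": {"public", "internal"},   # not approved for client data
    "chatgpt": {"public"},
    "gemini": {"public", "internal"},
}

audit_log = []

def check_and_log(user, tool, data_class):
    """Allow a request only if the tool is approved for that data class,
    and record the decision either way."""
    allowed = data_class in APPROVED_TOOLS.get(tool, set())
    audit_log.append({"user": user, "tool": tool,
                      "data_class": data_class, "allowed": allowed})
    return allowed

print(check_and_log("alice", "claude", "internal"))  # True
print(check_and_log("bob", "chatgpt", "client"))     # False
```

Even a toy gate like this answers the questions that sank the program in the story above: what ran, where, with what data, and who asked.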
The Lesson
Governance enables adoption; it doesn't restrict it. The firms that move fastest are the ones with the clearest governance.
Mistake 3: I Thought Pilot Projects Would Scale Into Production
I lost count of pilots that "worked great" but never scaled. A team would run a successful proof of concept, then struggle to go live.
I assumed the success would translate. It doesn't. There's a chasm between "we built an AI thing that works" and "we integrated it into operations, trained the team, and it runs reliably."
What I Did Wrong
I celebrated pilots. "Great! That worked. Now roll it out across the team." The team didn't know how to use it. The tool had no governance. It broke when edge cases appeared. Rollout failed.
What I Should Have Done
Treated rollout as a separate project. Pilots prove feasibility. Rollout proves sustainability. They're different problems.
The Lesson
Plan for operationalization from the start. How will this run? Who owns it? What happens when it breaks? If you can't answer those questions in the pilot, the production rollout will fail.
Mistake 4: I Overestimated How Much AI Would Change Professional Services
Three years ago, I thought AI would radically transform the work. Fewer advisors, more automation, completely new business models.
It hasn't. By August 2025, AI has made work more productive, not fundamentally different. Advisors do the same job 20–40% faster. The business model is the same.
What I Did Wrong
I advised firms to plan for dramatic reduction in headcount as AI took over work. Some took me seriously and planned for layoffs that never happened. Some got paranoid about disruption that didn't come.
What I Should Have Done
Been more humble about predicting change. AI is a tool that makes people better at their jobs. It's not replacing the jobs.
The Lesson
The biggest impact of AI in professional services is augmentation, not replacement. The consultant who uses AI well becomes 1.5x more valuable, not obsolete. Plan accordingly.
Mistake 5: I Didn't Push Hard Enough on Measurement
Too many AI initiatives I've advised on lack clear metrics. "We'll use AI to improve research" is not a measurable goal. "We'll reduce research time from 6 hours to 4 hours per project" is.
I let firms get away with vague success criteria. As a result, they can't tell if AI is actually working.
What I Did Wrong
I focused on helping them implement. I didn't push back on measurement. Six months later, they'd say "AI is working, but we don't have hard numbers." They'd scale a program with no evidence of ROI.
What I Should Have Done
Required measurement before implementation. "Before we roll this out, let's define success. What are we measuring? How will we know this is working?"
The Lesson
No measurement means no learning. Get rigorous about metrics from day one.
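The "6 hours to 4 hours" example above can be made concrete with a few lines of arithmetic. This is a minimal sketch; the rates, volumes, and tool costs are hypothetical numbers, not data from a real engagement.

```python
# Sketch of turning a vague goal into a measurable one, using the
# research-time example from the text. All figures are illustrative.

def hours_saved(baseline_hours, current_hours, projects_per_month):
    """Monthly hours saved versus the pre-AI baseline."""
    return (baseline_hours - current_hours) * projects_per_month

def roi(monthly_hours_saved, hourly_rate, monthly_tool_cost):
    """Simple ROI: value of time saved, net of tooling cost, as a ratio."""
    value = monthly_hours_saved * hourly_rate
    return (value - monthly_tool_cost) / monthly_tool_cost

# Research drops from 6 to 4 hours across 20 projects a month,
# at a $200 blended rate, with $1,000/month in tool spend.
saved = hours_saved(6, 4, 20)   # 40 hours/month
print(roi(saved, 200, 1000))    # (8000 - 1000) / 1000 = 7.0
```

The point isn't the formula; it's that "is AI working?" becomes a number you can track month over month, which is exactly what the firms in the story couldn't produce.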
Mistake 6: I Thought Custom Solutions Would Win
In 2023–2024, I advised some firms to build custom AI systems. "Buy a platform for this. Build custom for that."
By 2025, I've realized that custom usually loses. Off-the-shelf solutions (Claude, ChatGPT, Salesforce with AI) almost always win against custom because they're maintained, updated, and integrated into larger ecosystems.
What I Did Wrong
I suggested custom when the problem seemed bespoke. "Your workflow is unique, so you need custom AI." The custom projects took longer, cost more, and became technical debt the firm didn't want to maintain.
What I Should Have Done
Recommended off-the-shelf first. Only suggest custom when no existing solution comes close (rare) or the custom cost is trivial and maintaining it is manageable.
The Lesson
Software is a commodity now. Build on platforms, not custom systems. The value comes from integration, not differentiation.
Mistake 7: I Didn't Focus Enough on Change Management
I'm an engineer at heart. I focused on technical implementation. I should have spent more time on adoption and resistance.
The technical part is easy. Getting people to use new tools is hard. I underestimated that challenge.
What I Did Wrong
I'd complete an implementation and hand it to the firm. "Here's your AI system." Three months later, adoption was 20% and falling.
What I Should Have Done
Stayed involved in adoption. Trained the team multiple times. Addressed concerns. Celebrated wins. Made it easy to use and hard not to use.
The Lesson
Change management is 50% of AI adoption. Technical excellence accounts for 30%. Everything else is 20%. Invest accordingly.
What I Got Right (For Balance)
It's not all mistakes:
- Focus on professional services. Staying in one vertical let me see patterns that generalist consultants miss.
- Emphasis on ROI. I always pushed for business value, not just technology. That grounded me in reality.
- Skepticism of hype. I didn't get caught up in "AI will change everything." Measured skepticism was healthy.
- Long-term view. I advised for sustainability, not just quick wins. That builds trust.
What I'd Tell Myself in 2022
If I could go back:
- Governance first, capability second.
- Adoption is harder than implementation. Plan for it.
- Pilot success doesn't equal production success.
- Measure everything or don't do it.
- People matter more than technology.
- Change management is as important as technical architecture.
- Stay humble. You'll be wrong a lot.
Going Forward
By August 2025, I'm more focused on helping firms work through AI as a business problem, not a technical problem. The technology is solved. The adoption, governance, and organizational change—that's where the real work is.
I'm better at my job because I've made these mistakes. I hope you can learn from them without making them yourself.
Want to discuss AI strategy for your firm?
Book a free 30-minute assessment — no pitch, just practical insights.
Book a Call