It's been almost two years since I started working full-time on AI strategy with professional services firms. Long enough to see patterns. Long enough to have perspective. It felt right to pause here, heading into the holidays, and think about what's actually working—and what remains stubbornly difficult.
What I'm Genuinely Grateful For
The models actually work. This might sound obvious, but it wasn't guaranteed. When ChatGPT first launched, the skeptics said: "It's a parlor trick. It's not useful for real work." Two years later? It's clear the skeptics were wrong. These models are genuinely useful for professional work. Not perfect, but useful. You can build real businesses on top of them.
The cost trajectory is staggering. I've watched the price per token drop roughly 10x, then 4x again. Inference that cost $0.01 per 1,000 tokens now costs $0.00025 per 1,000 tokens — a 40x reduction. This isn't a 10% improvement year over year. This is Moore's Law in action. It changes the economics of everything.
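To make "changes the economics of everything" concrete, here's a quick back-of-the-envelope in Python using the per-token prices above. The workload numbers (documents per month, tokens per document) are illustrative assumptions, not any specific vendor's pricing or any real firm's volume:

```python
# Back-of-the-envelope: what a 40x drop in inference cost means
# for a document-heavy workload. Rates are the illustrative
# figures from the text; volumes are hypothetical.

OLD_RATE = 0.01      # dollars per 1,000 tokens (two years ago)
NEW_RATE = 0.00025   # dollars per 1,000 tokens (today)

def monthly_cost(rate_per_1k, docs_per_month, tokens_per_doc):
    """Total inference spend for a month of document processing."""
    total_tokens = docs_per_month * tokens_per_doc
    return rate_per_1k * total_tokens / 1000

# Example: a firm summarizing 50,000 documents/month at ~4,000 tokens each
old = monthly_cost(OLD_RATE, 50_000, 4_000)
new = monthly_cost(NEW_RATE, 50_000, 4_000)

print(f"Old cost: ${old:,.2f}/month")
print(f"New cost: ${new:,.2f}/month")
print(f"Reduction: {old / new:.0f}x")
```

At the old rate that workload was a line item worth arguing about; at the new rate it's a rounding error — which is why use cases that were uneconomical two years ago are suddenly viable.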
We have actual competition. OpenAI, Anthropic, Google, Meta, even smaller players. Each pushes the others to improve. We're not waiting for one company to set the terms. Claude, GPT-4o, Gemini—we have real options. The competition is brutal, and that's good for everyone except maybe venture capitalists who overpaid for AI startups.
The open-source movement is serious. Llama 2, Mistral, and others have made it possible to run capable AI models on your own infrastructure if you want to. That sounds technical, but it matters strategically: it reduces vendor lock-in and gives firms optionality. You don't have to depend on OpenAI or Anthropic.
We're past the "CEO wants AI" phase. Early on, C-suite interest in AI was disconnected from reality. "We need AI!" without any thought about what for. Now? Executives ask sensible questions. "What's the ROI? What are the risks? How do we measure success?" This is boring but healthy.
The early adopters are sharing what works. I'm seeing actual case studies now, not hype. Firms that deployed AI are publishing honest assessments of what worked and what didn't. This shared knowledge is accelerating everyone else's learning curve.
What Still Frustrates Me
Governance is still vague. We have compliance regulations for almost everything. But the guidance on appropriate AI use in professional services is scattered across different jurisdictions, regulators, and frameworks. Firms are making it up as they go, which is inefficient and risky.
We're still conflating different risks. Using AI to summarize emails is very different from using it to generate legal advice. But I see firms applying the same restrictive policies to both. This kills adoption of harmless applications.
Hallucinations are still a real problem. We talk about it like it's solved, but it's not. LLMs will confidently generate plausible-sounding facts that are completely false. For professional services, where accuracy matters, this is a real constraint. You need human oversight, and that overhead reduces ROI.
Most firms still don't have a clear AI strategy. I talk to a lot of managing partners. Most of them have ChatGPT on the approved list (or on a DLP blocklist). But almost none of them have a coherent view of: where does AI fit in our business model? What competitive advantage does it give us? Where should we invest heavily and where should we stay out? This gap is costing them money.
The talent market is weird. There's huge demand for AI expertise, but the supply of people who understand both AI AND professional services is tiny. This creates a bottleneck. Firms need "prompt engineers" and "AI leads" and other roles that didn't exist three years ago. But how do you hire for a role with no established track record? How do you evaluate whether someone's actually good at it?
The liability picture is still murky. Who's responsible if AI-generated work has an error? You? The vendor? Both? We still don't have clear case law on this. This ambiguity slows adoption, even for low-risk applications.
The Two-Year Perspective
Two years ago, the question was: "Is AI real?" Now it's: "How do we capture value from AI?" That's progress. But capturing value requires thinking clearly about strategy, not just deploying the latest tool.
The firms that will win in 2025-2026 aren't the ones with the most AI integrations. They're the ones with the clearest sense of:
- What AI is good for in their business (client service, internal efficiency, or both)
- What competitive advantage it might give them
- How they'll train their people to use it effectively
- What their governance and risk policies actually are
I'm grateful for the progress we've made. I'm frustrated by how much we still need to figure out. But that's the nature of being early on an adoption curve. The uncertainty is the price of opportunity.
Looking Forward
I'm grateful for the firms trusting me to help them work through this. I'm grateful for the founders and researchers pushing AI forward. I'm grateful for the skeptics asking hard questions—they push us to be more rigorous.
And I'm grateful for the problems that still exist. They're where the value creation happens next.
Want to discuss AI strategy for your firm?
Book a free 30-minute assessment — no pitch, just practical insights.
Book a Call