There's an obvious problem with my job: I advise firms on AI strategy using AI. The potential for bias and self-fulfilling prophecy is real. If I'm using Claude to think through AI adoption recommendations, aren't I just getting back what Anthropic would want me to say?
Fair question. Here's how I actually approach it, the guardrails I use, and why I think it works.
The Risk I'm Aware Of
Let me be direct about the conflicts:
- I partner with Olyra AI, which builds AI solutions. I have incentive to recommend AI broadly.
- I use Claude extensively. I have incentive to recommend it.
- I'm advising on a domain I'm deeply embedded in. It's hard to see where I'm wrong.
I can't eliminate these conflicts. But I can be transparent about them and design my process to counteract them.
How I Actually Use AI in Advisory Work
My process is deliberate and bounded:
1. Problem Framing Without AI
I don't use AI to define the problem. That's all human judgment. What's the firm's current state? What are their constraints? What do they actually need?
I've seen consultants let AI shape the problem, and that's where bias sneaks in. The model suggests "you need a comprehensive AI strategy," and suddenly that's what you're solving for, whether it's right or not.
I do this part with human thinking only: working alone, and in conversation with the client.
2. Structured Brainstorming With AI
Once I understand the problem, I use Claude to generate options. But with structure:
- I give Claude a specific scenario and ask for divergent approaches. "Here's a $12M firm with 40 people, heavy on services delivery, weak on technology. What are three very different approaches to AI adoption?"
- I ask for downsides explicitly. "For each approach, what are the biggest risks?"
- I ask Claude to argue against its own recommendations. "Why might approach one actually fail? What would have to be true?"
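The three-step sequence above can be sketched as a reusable prompt template. This is purely illustrative: the helper name, scenario wording, and parameters are my own, and in practice each prompt would be a follow-up turn in the same conversation so the model critiques its own earlier answers.

```python
# Sketch of the three-stage brainstorming structure: diverge, surface risks,
# then self-critique. Names and scenario text are illustrative only.

def brainstorm_prompts(scenario: str, n_approaches: int = 3) -> list[str]:
    """Build the three prompts in order: divergent options, explicit
    downsides, and an argument against the model's own recommendation."""
    return [
        # 1. Ask for deliberately different options, not one "best" answer.
        f"{scenario} What are {n_approaches} very different approaches "
        "to AI adoption?",
        # 2. Force the downsides into the open for every option.
        "For each approach, what are the biggest risks?",
        # 3. Make the model argue against its own output.
        "Why might approach one actually fail? What would have to be true?",
    ]

scenario = ("Here's a $12M firm with 40 people, heavy on services delivery, "
            "weak on technology.")
for prompt in brainstorm_prompts(scenario):
    print(prompt)
```

The point of encoding the sequence is discipline: the risk and self-critique prompts get asked every time, not only when the first answer happens to look doubtful.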
This is where AI becomes a thinking partner instead of an echo chamber.
3. Challenge and Judgment
Here's the critical part: I reject AI output frequently. Sometimes Claude suggests something that's technically sound but practically unworkable. Sometimes it's too optimistic on timeline or cost. Sometimes I disagree with the prioritization.
My job isn't to validate Claude's thinking. It's to improve on it by combining AI-generated options with 30 years of pattern recognition. That combination is worth something.
4. Testing Against Real Data
I don't rely on AI for current state analysis. I use actual client data, market research, and benchmarking data. Claude helps me structure and interpret it, but the data is real.
This matters. If you let AI synthesize everything, including the baseline, you get coherence without grounding.
Where I Don't Use AI
This is equally important:
- Negotiating and relationship work. Firm partnerships, stakeholder management, difficult conversations. This is human judgment only.
- Assessing cultural fit. Will this approach work in this firm's culture? That requires understanding humans, not analyzing text.
- Making final recommendations. I present options and my analysis, but the client decides. I don't use AI to rank options or optimize the recommendation. That feels like I'm pre-deciding their choice.
The Real Advantage of AI in Strategy Work
It's not that AI is smarter at strategy. It's that AI helps me move faster and think more systematically:
- Generating options I might not think of (especially divergent or unconventional approaches)
- Stress-testing ideas quickly (what breaks if we assume X?)
- Structuring analysis (laying out scenarios, dependencies, risk factors in a systematic way)
- Connecting dots across domains (what's happening in adjacent industries that applies here?)
The insight is still human. The scaffolding is AI-assisted.
What I Tell My Clients
Full transparency: I use AI in my own advisory work. Here's what that means:
- I've thought through options more systematically because Claude helps me structure them.
- I've considered more edge cases because I've pushed back on AI-generated thinking.
- I'm biased toward AI adoption in ways I'm aware of and trying to counteract.
- But the final judgment is mine, grounded in 30 years and your specific context.
I think this is an advantage if you know it's happening. It's a liability if it's hidden.
The Bigger Point
As of June 2025, everyone is using AI. The question isn't whether to use it; it's how to use it in ways that amplify your judgment without replacing it.
For me, that means being explicit about where the AI thinking ends and the human judgment begins. For your firm, it means doing the same when you deploy AI in your advisory work.
Clients don't want AI; they want better thinking. If AI helps you think better, use it. If it makes your thinking worse, don't.
Want to discuss AI strategy for your firm?
Book a free 30-minute assessment — no pitch, just practical insights.