There's an obvious problem with my job: I advise firms on AI strategy using AI. The potential for bias and self-fulfilling prophecy is real. If I'm using Claude to think through AI adoption recommendations, aren't I just getting back what Anthropic would want me to say?

Fair question. Here's how I actually approach it, the guardrails I use, and why I think it works.

The Risk I'm Aware Of

Let me be direct about the conflicts: I advise firms on AI adoption while using AI tools to shape those recommendations, and the vendor whose model I use has an obvious interest in more adoption, not less.

I can't eliminate these conflicts. But I can be transparent about them and design my process to counteract them.

How I Actually Use AI in Advisory Work

My process is deliberate and bounded:

1. Problem Framing Without AI

I don't use AI to define the problem. That's all human judgment. What's the firm's current state? What are their constraints? What do they actually need?

I've seen consultants let AI shape the problem, and that's where bias sneaks in. The model suggests "you need a comprehensive AI strategy," and suddenly that's what you're solving for, whether it's right or not.

I do this thinking with the client, not with a model.

2. Structured Brainstorming With AI

Once I understand the problem, I use Claude to generate options, but within a structure I set, not as an open-ended oracle.

This is where AI becomes a thinking partner instead of an echo chamber.

3. Challenge and Judgment

Here's the critical part: I reject AI output frequently. Sometimes Claude suggests something that's technically sound but practically unworkable. Sometimes it's too optimistic on timeline or cost. Sometimes I disagree with the prioritization.

My job isn't to validate Claude's thinking. It's to improve on it by combining AI-generated options with 30 years of pattern recognition. That combination is worth something.

4. Testing Against Real Data

I don't rely on AI for current-state analysis. I use actual client data, market research, and industry benchmarks. Claude helps me structure and interpret the material, but the data is real.

This matters. If you let AI synthesize everything, including the baseline, you get coherence without grounding.

Where I Don't Use AI

This is equally important: problem framing stays fully human, the underlying data is real rather than AI-synthesized, and the final recommendation is my judgment, not a model's output.

The Real Advantage of AI in Strategy Work

It's not that AI is smarter at strategy. It's that AI helps me move faster and think more systematically.

The insight is still human. The scaffolding is AI-assisted.

What I Tell My Clients

Full transparency: I use AI in my own advisory work, and I tell clients exactly where it enters the process and where it doesn't.

I think this is an advantage if you know it's happening. It's a liability if it's hidden.

The Bigger Point

As of mid-2025, everyone is using AI. The question isn't whether to use it—it's how to use it in ways that amplify your judgment without replacing it.

For me, that means being explicit about where the AI thinking ends and the human judgment begins. For your firm, it means doing the same when you deploy AI in your advisory work.

Clients don't want AI; they want better thinking. If AI helps you think better, use it. If it makes your thinking worse, don't.

Want to discuss AI strategy for your firm?

Book a free 30-minute assessment — no pitch, just practical insights.

Book a Call