While everyone's focused on ChatGPT and GPT-4, there's another AI company worth paying attention to: Anthropic. They've built a language model called Claude that's available in early access, and it deserves a closer look.

It's not going to replace ChatGPT for everyone. But it might be what you actually want for your firm.

What Is Claude?

Claude is a large language model built by Anthropic, a company founded by former OpenAI researchers who were concerned about AI safety. They've built Claude with a specific philosophy: helpful, harmless, and honest.

It does the same things ChatGPT does — writing, analysis, research, coding — but with different design priorities.

Key Differences from ChatGPT

Safety focus: Anthropic designed Claude specifically to be less likely to generate harmful content or hallucinate. It's more cautious about what it claims to know.

Better reasoning: Early testing suggests Claude is better at step-by-step reasoning and logic problems than ChatGPT. More careful, less error-prone.

Honesty about limitations: When Claude doesn't know something, it's more likely to say so instead of making something up. That's a contrast to ChatGPT's tendency to state wrong information with full confidence.

Constitutional AI training: Anthropic trains Claude against a written set of principles (a "constitution"), having the model critique and revise its own outputs rather than relying solely on human feedback. In practice, this means Claude is more likely to push back on requests it thinks are problematic (which might be good or frustrating depending on your use case).

Why This Matters for Professional Services

Regulatory and compliance concerns: If you're in healthcare, law, or finance, you want AI that's designed from the ground up to be safe and cautious. Claude's design philosophy is built around that.

Accuracy for analysis: Professional services is all about accuracy. You need to be able to trust the analysis. Claude's tendency to admit uncertainty is actually valuable here.

Liability and risk: If your firm uses ChatGPT and it confidently generates wrong legal or medical analysis, that's on you. Claude is designed to be more conservative about claiming expertise it doesn't have.

The Catch

Claude is not as widely available as ChatGPT. It's in early access. You can request access, but it's limited.

And on raw capability, it's probably not better than GPT-4 yet. It's more conservative, which is good for risk management but can mean it hedges or declines on things it could actually answer well.

Should You Test It?

Yes. If you're a professional services firm evaluating AI, you should test Claude alongside ChatGPT and GPT-4.

Run it on the same test cases. Compare accuracy, confidence, and willingness to admit uncertainty. See how your team feels using it.
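
If your team wants to make that comparison more structured than ad hoc chatting, both vendors offer APIs. Below is a minimal sketch in Python, assuming you have API keys for both services and their current official SDKs installed; the model names, test prompts, and the ask_claude/ask_gpt helper names are illustrative placeholders, not recommendations.

```python
# Minimal side-by-side evaluation sketch (assumes ANTHROPIC_API_KEY and
# OPENAI_API_KEY are set in the environment, and the official SDKs are installed:
#   pip install anthropic openai
# Model names and prompts below are placeholders; substitute your own.)
from anthropic import Anthropic
from openai import OpenAI

anthropic_client = Anthropic()
openai_client = OpenAI()

# Replace with real problems your team faces.
TEST_CASES = [
    "Summarize the key obligations in this engagement letter: ...",
    "What are the main compliance risks in outsourcing payroll processing?",
]

def ask_claude(prompt: str) -> str:
    # Placeholder model name; use whichever Claude model you have access to.
    response = anthropic_client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

def ask_gpt(prompt: str) -> str:
    # Placeholder model name; use whichever GPT model you have access to.
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for prompt in TEST_CASES:
    print("PROMPT:", prompt)
    print("Claude:", ask_claude(prompt))
    print("GPT:   ", ask_gpt(prompt))
    print("-" * 60)
```

Even a script this simple lets you log both models' answers to identical prompts, then have a reviewer score them for accuracy, appropriate caution, and willingness to admit uncertainty.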

You might find that Claude's caution is exactly what you need, or you might find you prefer ChatGPT's confidence. But you won't know until you test.

The Broader Point

The AI world is diversifying. It's not just OpenAI anymore. Google has Bard. Anthropic has Claude. Others are building models.

For your firm, that diversity is good. It means you have options. You're not locked into one vendor or approach.

The firms that win won't necessarily use the best AI. They'll use the AI that fits their specific needs, compliance requirements, and culture.

For some, that's ChatGPT. For others, it might be Claude.

What You Should Do

Request access to Claude (anthropic.com). When you get it, run a few tests on real problems your team faces. Compare to ChatGPT and GPT-4. Decide what feels right for your firm.

Don't assume the most popular tool is the best tool for you. Sometimes the quieter alternative is exactly what you need.