By April 2026, your firm probably has several AI tools deployed. Some are working. Some aren't. Some are sitting unused while your team keeps doing work manually. It's time to audit.
This is the unglamorous work of AI adoption. Not deploying new tools, but fixing old ones. It's also where most firms leave real money on the table.
The Audit Framework: Four Questions
Question 1: What AI Tools Do We Actually Have?
You'd be surprised how many firms can't answer this. Start with an inventory:
- Paid AI platforms or subscriptions (ChatGPT Pro, Claude, etc.)
- AI-powered integrations in your existing systems (Salesforce AI, project management AI, etc.)
- Standalone tools deployed for specific workflows
- Free tools people are using unofficially
Get a complete list. Ask your teams: "What AI tools do you use regularly?" You'll be shocked at the answers.
Question 2: Is Anyone Actually Using Them?
Many deployments look good in week one. By month three, adoption has collapsed and people have reverted to their old ways.
For each tool, measure:
- Active users: What percentage of the target team has used it in the last 30 days?
- Usage frequency: How often are people using it?
- Completion rate: If it's a workflow tool, what percentage of processes run through it vs. alternative methods?
If adoption is below 50% of the target team more than three months after deployment, that's a red flag. Either the tool doesn't solve a real problem, or you deployed it without addressing resistance.
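If your platforms export usage logs, these numbers take minutes to pull. Here's a minimal sketch, assuming a hypothetical `usage_log.csv` with `user` and `last_used` columns; the file name, columns, and team size are illustrative placeholders, not any vendor's actual export format.

```python
# Minimal adoption check. Assumes a hypothetical usage_log.csv with
# columns: user, last_used (ISO dates). Adapt to whatever export
# your AI platform actually provides.
import csv
from datetime import date, timedelta

TARGET_TEAM_SIZE = 20          # illustrative: people who *should* be using the tool
WINDOW = timedelta(days=30)    # "active" = used within the last 30 days

today = date.today()
active_users = set()

with open("usage_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        last_used = date.fromisoformat(row["last_used"])
        if today - last_used <= WINDOW:
            active_users.add(row["user"])

adoption = len(active_users) / TARGET_TEAM_SIZE
print(f"Active users: {len(active_users)}/{TARGET_TEAM_SIZE} ({adoption:.0%})")
if adoption < 0.5:
    print("Red flag: below 50% adoption of the target team.")
```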
Question 3: Is It Delivering Measurable Value?
For tools people are actually using, measure outcomes:
- Time saved: How much faster is the task with AI vs. without? Measure on a real project.
- Quality improvement: Are outputs better? Fewer errors? Higher client satisfaction?
- Cost per transaction: If it's automating something, what's the per-unit cost reduction?
- Capacity expansion: Are you doing more work with the same team, or the same work with a smaller team?
If you can't measure at least one of these, the tool probably isn't delivering value. That doesn't mean you should discard it immediately; it means your measurement framework is weak. But weak measurement usually correlates with weak results.
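To make "time saved" concrete, time the same task with and without the tool on a real project and do the arithmetic. A back-of-the-envelope sketch follows; every number in it is a made-up placeholder you'd replace with your own measurements.

```python
# Back-of-the-envelope value check. All figures are illustrative.
hours_per_task_manual = 3.0      # measured on a real project, without AI
hours_per_task_with_ai = 1.2     # measured on the same task, with AI
tasks_per_month = 40
hourly_cost = 85.0               # fully loaded cost per hour
tool_cost_per_month = 600.0      # subscription / license cost

hours_saved = (hours_per_task_manual - hours_per_task_with_ai) * tasks_per_month
gross_value = hours_saved * hourly_cost
net_value = gross_value - tool_cost_per_month

print(f"Hours saved per month: {hours_saved:.0f}")
print(f"Net value per month:  ${net_value:,.0f}")
# If net_value hovers near zero, or you can't fill in these inputs
# from real measurements, the tool probably isn't paying for itself.
```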
Question 4: What's the Failure Mode?
For tools that aren't working, diagnose why:
- Wrong problem: The tool solves a problem that isn't actually painful for your team. (Rare among deployed tools, but it happens.)
- Wrong tool: The tool could solve the problem, but it's not reliable, or the UX is bad, or it doesn't integrate with your existing systems.
- Deployment failure: The tool is fine, but you didn't train people, didn't make it mandatory, or didn't incentivize adoption. (Common.)
- Accuracy/trust issue: People don't trust the AI output, so they verify everything manually and the time savings evaporate.
Most failures are deployment failures and trust issues: people problems, not tool problems. Knowing that changes what you fix.
The Decision Framework: Keep, Fix, or Cut
After auditing, you have three categories:
Keep (Active, High-Value Tools)
These are working. Usage is 70%+. Value is measurable. Action: protect them. Make sure they stay integrated into workflows. Allocate budget for upgrades. Make adoption mandatory and consistent across your team.
Fix (Low Adoption or Unclear Value)
These have potential but something's broken. Before you cut them, diagnose:
- If it's a deployment issue: invest in training, simplify the workflow, make it easier to use.
- If it's a tool issue: try an alternative or configure it differently.
- If it's a trust issue: invest in accuracy measurement and validation. Show people the tool is reliable.
Give a "fix cycle"—usually 30-60 days. If adoption and value don't improve, cut it.
Cut (Low Value, Low Adoption, High Distraction)
These are wasting budget and organizational focus. Cut them. Reallocate the budget to tools that are working or new tools that solve real problems.
Cutting is a feature of good management. You're not admitting failure; you're being disciplined about resource allocation.
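The keep/fix/cut call can be made mechanical. Here's a sketch of the decision rule described above, using the 70% and 50% thresholds from this post; the exact cutoffs are judgment calls, not industry standards, so tune them to your firm.

```python
# Keep / Fix / Cut decision rule, using the thresholds from this post.
# Cutoffs are judgment calls; adjust them to your firm.
def triage(adoption: float, has_measurable_value: bool, months_deployed: int) -> str:
    if adoption >= 0.70 and has_measurable_value:
        return "keep"   # protect it: budget, training, mandatory use
    if adoption < 0.50 and months_deployed > 3 and not has_measurable_value:
        return "cut"    # reallocate budget and attention
    return "fix"        # diagnose, run a 30-60 day fix cycle, then re-triage

print(triage(adoption=0.85, has_measurable_value=True,  months_deployed=6))  # keep
print(triage(adoption=0.30, has_measurable_value=False, months_deployed=5))  # cut
print(triage(adoption=0.60, has_measurable_value=False, months_deployed=2))  # fix
```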
The Conversation With Your Team
Frame the audit as "we're optimizing our AI stack," not "auditing your mistakes." Invite input. Ask:
- "What AI tools have made your work meaningfully better?"
- "What AI tools are in your way?"
- "What's a problem we haven't solved with AI yet?"
The gap between what management thinks is working and what teams actually use is usually large. Their input will surprise you.
What This Usually Reveals
In my experience, firms audit their AI stack and find:
- 30-40% of deployed tools are unused or rarely used
- 20-30% are delivering clear value and being used consistently
- 30-40% are in the "unclear" category—people use them, but impact is hard to measure
After the audit, firms typically cut 1-2 tools, invest heavily in 2-3 tools that are working, and fix 2-3 tools that have potential. Net result: cleaner tech stack, clearer value, better team buy-in.
The Q2 2026 Opportunity
April is spring cleaning month. Use it to clean up your AI deployments. Measure what you have. Kill what's not working. Double down on what is. Then you enter Q2 with clarity about where your AI adoption is actually driving value.
That clarity is worth more than the newest AI tool.
Want to discuss AI strategy for your firm?
Book a free 30-minute assessment — no pitch, just practical insights.