
How AI Is Actually Used in Business (And Where It Goes Wrong)

In the last six months, I've reviewed about forty AI implementations across scale-ups and mid-market companies. The gap between what leaders think is happening and what's actually happening with AI in their organisations is wider than you'd expect.


The tools are plentiful, but businesses are struggling to adopt them: they haven't decided what problem they're solving, and they've underestimated the governance gap between a demo and production.

Here's what I'm seeing work in practice, what's quietly failing and the uncomfortable truths about AI in business that nobody wants to admit on LinkedIn.

Where AI Actually Works: Three Workflows That Survived Contact With Reality

The AI implementations that stick share one thing: they solve a specific, repeatable problem that people already understood before AI existed.

Customer service triage and response drafting

Teams use AI to categorise incoming queries, pull relevant context from knowledge bases and draft responses for human review. The key word is draft: humans still own the send button.

This works because it removes the blank-page problem without removing judgment. One client cut first-response time by 40% without hiring more people. The workflow stuck because it made an existing job easier.
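
To make the shape of this concrete, here's a minimal sketch of the triage-and-draft pattern in Python. The `complete()` function is a stand-in for whichever model API you use, and the categories are invented for illustration; the point is the structure, with a human owning the final send.

```python
from dataclasses import dataclass

CATEGORIES = ["billing", "technical", "account", "other"]

def complete(prompt: str) -> str:
    """Placeholder for your model call (OpenAI, Anthropic, local, etc.)."""
    raise NotImplementedError

@dataclass
class Draft:
    category: str
    reply: str
    approved: bool = False  # flipped by a human reviewer; AI never sends

def triage(query: str, knowledge_base: list[str]) -> Draft:
    # 1. Categorise the incoming query.
    category = complete(
        f"Classify this query as one of {CATEGORIES}. Reply with one word.\n\n{query}"
    ).strip().lower()
    # 2. Pull relevant context (naively, by tag match) from the knowledge base.
    context = "\n".join(doc for doc in knowledge_base if category in doc.lower())
    # 3. Draft a reply for human review; the human owns the send button.
    reply = complete(f"Using only this context:\n{context}\n\nDraft a reply to:\n{query}")
    return Draft(category=category, reply=reply)
```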

Document analysis and summarisation

Legal, finance and operations teams are using AI to extract key information from contracts, reports and compliance documents. Instead of replacing lawyers or analysts, it's doing the tedious pre-work so they can focus on interpretation and decision-making.

A client in financial services now processes due diligence packs in two days instead of five. The humans still make the calls; AI just gets them to the decision point faster.
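
A sketch of that pre-work pattern, under the same assumptions as above (a placeholder `complete()` model call, and a field list I've made up for illustration): the model extracts fixed fields and flags gaps, so the analyst verifies rather than reads from scratch.

```python
import json

FIELDS = ["parties", "effective_date", "termination_clause", "liability_cap"]

def complete(prompt: str) -> str:
    """Placeholder for your model call."""
    raise NotImplementedError

def extract_key_terms(contract_text: str) -> dict:
    raw = complete(
        "Return a JSON object with exactly these keys, using null for any "
        f"field you cannot find: {FIELDS}\n\nContract:\n{contract_text}"
    )
    terms = json.loads(raw)
    # Flag gaps so the analyst knows where to look first.
    terms["needs_review"] = [k for k in FIELDS if terms.get(k) is None]
    return terms
```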

Internal knowledge search and synthesis

Companies with scattered documentation (Notion, Confluence, SharePoint, Google Drive) are using AI to make their own knowledge accessible. Staff ask questions in plain English, then AI pulls from multiple sources and synthesises an answer. This only works if the underlying documentation is decent: AI can't fix a knowledge management problem, it just exposes it faster.
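
For illustration, a deliberately naive sketch of the retrieve-then-synthesise pattern. Real systems use embeddings rather than keyword overlap, and `complete()` again stands in for your model API; requiring citations is what makes weak documentation visible.

```python
def complete(prompt: str) -> str:
    """Placeholder for your model call."""
    raise NotImplementedError

def top_chunks(question: str, docs: dict[str, str], k: int = 3) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(
        docs.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer(question: str, docs: dict[str, str]) -> str:
    sources = top_chunks(question, docs)
    context = "\n\n".join(f"[{name}]\n{text}" for name, text in sources)
    # Requiring citations surfaces stale or missing documentation,
    # which is the point: the tool exposes knowledge gaps, it can't fix them.
    return complete(
        "Answer using only the sources below, citing [name] for each claim. "
        f"If the sources don't cover it, say so.\n\n{context}\n\nQ: {question}"
    )
```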

What these workflows have in common is their clear human-in-the-loop design, narrow scope and augmentation of existing processes rather than wholesale replacement.


The Governance Gap Nobody Talks About

Here's where it gets uncomfortable. Most organisations I work with have no idea how many of their people are using AI tools, or what risks those tools have introduced.

Last month, Anthropic, one of the companies positioning itself as the responsible AI player, found itself in a public dispute when it emerged Claude had been used in a Pentagon operation without its knowledge. If a company built around AI safety can't track how its own tools are being deployed, what chance does a mid-market business have?

This isn't hypothetical. I've seen sales teams pasting customer data into ChatGPT to draft proposals. Developers using AI coding assistants on proprietary codebases without security review. Finance teams feeding sensitive P&L data into Claude to generate board reports.

None of this was malicious. People were trying to do their jobs faster. But the organisation had no visibility into what had already left the building, and no guardrails to prevent it.

The term for this is Shadow AI — staff adopting AI tools outside official channels because the approved process is too slow or doesn't exist. We saw this with cloud adoption ten years ago, but the stakes are higher because the tools have access to more data and make more autonomous decisions.

AI Agents: The Next Level of Risk Most Boards Haven't Thought About

If Shadow AI is the current problem, AI agents are the emerging one.

AI agents don't just answer questions. They take actions: booking meetings, sending emails, updating CRMs, pulling data from multiple systems.

Last week, 1Password open-sourced a benchmark specifically to test whether AI agents leak credentials. They built it because the risk is real and nobody else was measuring it.

The problem is what's being called 'judgment hallucination': people over-delegate to agents because the agents sound confident and seem smart. The agent says it's done something, the human assumes it's correct and nobody checks.

I've seen teams deploy AI agents to handle refunds, update inventory and manage procurement without asking what happens and who’s accountable when the agent gets it wrong.
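
There are ways to bound that risk. One pattern, sketched below with invented action names and thresholds: register every action an agent is allowed to take with a risk limit, and block anything above it until a human approves.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str
    run: Callable[..., object]
    max_auto_value: float  # above this, a human must approve

# Everything the agent may do is registered explicitly; anything else is refused.
REGISTRY = {
    "issue_refund": Action(
        "issue_refund", lambda amount: f"refunded {amount}", max_auto_value=50.0
    ),
}

def execute(action_name: str, value: float, approver: Callable[[str], bool], **kwargs):
    action = REGISTRY.get(action_name)
    if action is None:
        raise PermissionError(f"Agent requested an unregistered action: {action_name}")
    if value > action.max_auto_value and not approver(
        f"Agent wants to run {action_name} worth {value}. Approve?"
    ):
        return "blocked: awaiting human approval"
    result = action.run(**kwargs)
    # Log every attempt, approved or not: accountability needs a record.
    print(f"AUDIT: {action_name} value={value} result={result!r}")
    return result
```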

The question is whether organisations will govern AI before or after something expensive breaks.


What Responsible AI Implementation Actually Looks Like

Here's what I tell clients who want to adopt AI without the governance theatre:

Start with use cases, not tools. Identify three specific workflows where AI could help. Write down what success looks like, who owns the decision if AI gets it wrong and what data the tool will touch. You’re only ready to deploy when you have those answers.

Put humans in the loop. Every AI output should be reviewed by a person who understands the context. If you start with full automation and something breaks, you've lost trust and probably money. Over time, you can automate the easy stuff.

Build guardrails people will follow. Blanket bans don't work; people just use the tools and hide it. Instead, provide approved tools and clear guidance on what data is safe to use, and explain why it matters. Make the safe path the easy path.
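
A guardrail doesn't have to be elaborate. Here's a minimal sketch of a pre-send check, using simplistic placeholder patterns rather than a real data-loss-prevention policy: flag obviously sensitive data before it reaches an external tool, and tell the person why.

```python
import re

# Simplistic placeholder patterns, not a complete DLP policy.
BLOCKED_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def safe_to_send(text: str) -> tuple[bool, str]:
    """Check text before it goes to an external AI tool; explain any block."""
    for label, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(text):
            return False, (
                f"Blocked: this looks like it contains a {label}. "
                "Mask it first, or use one of the approved internal tools."
            )
    return True, "ok"
```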

Measure what's actually happening, not what you think is happening. Track what’s being used, what’s being asked and where things go wrong. If you don't have visibility, you don't have governance.
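
One way to get that visibility, sketched here with illustrative field names and a local log file standing in for wherever you actually ship logs: route every model call through a single chokepoint that records usage by default.

```python
import json
import time

def logged_call(user: str, tool: str, prompt: str, call_fn) -> str:
    """Route a model call through one chokepoint that always writes a log line."""
    record = {
        "ts": time.time(),
        "user": user,
        "tool": tool,
        "prompt_chars": len(prompt),  # log size, not content, if privacy requires it
    }
    try:
        response = call_fn(prompt)
        record["status"] = "ok"
        return response
    except Exception as exc:
        record["status"] = f"error: {exc}"
        raise
    finally:
        # Append-only usage log; a real setup would ship this to central logging.
        with open("ai_usage.log", "a") as fh:
            fh.write(json.dumps(record) + "\n")
```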

Treat AI like any new capability. It's not magic. It's a tool that requires the same discipline as any other business system — training, process design, accountability and periodic review.

AI works in business when it solves a specific, repeatable problem within a workflow humans already understand. It fails when organisations treat it as a strategy rather than a tool, or when they deploy it faster than they can govern it.

The biggest risk is the gap between enthusiasm and oversight. Shadow AI is already happening in your organisation. AI agents are next. The question is whether you'll put guardrails in place before or after something breaks.

If you're a leader trying to figure out where AI actually fits in your business — and how to implement it without the governance theatre or the chaos — I run half-day workshops that map real use cases, design practical guardrails and get teams moving without overwhelming them. Book an AI training session or download the Lean AI Startup Playbook to see the frameworks I use with clients.


Martin Sandhu

Fractional CTO & Product Consultant

Product & Tech Strategist helping founders and growing companies make better technology decisions.

Connect on LinkedIn
Now accepting applications

The Startup Launchpad

A 90-day programme for founders who are building or have built and want results, not theory. 6 modules. Limited places.

Want to apply these ideas?

Let's talk about how to put this into practice for your business.

Martin