Most mid-market AI investments will fail. Not because the technology is wrong. Because the business was not ready for it. Here is how to tell if yours is.

Assured Velocity

The Velocity Brief

Straight talk on transformation, technology, AI, data and people leadership for mid-market operators. No vendor spin. No consultant waffle. Just what actually works.

Issue 03  ·  June 2026

This Month

The AI question every mid-market board is avoiding.

In This Issue

01   The Hard Truth

02   The AI Readiness Framework

03   Real World Case Study

04   Richard Danks: AI Governance

05   Joe Kay: AI and Operational Reality

06   Brian Ford: AI in Programmes

07   Tom Henry: AI and Process


01   The Hard Truth

Your board is being asked to invest in AI by people who have never run an operation.

The pressure to adopt AI is now coming from every direction simultaneously. The vendor community is selling it. The board is reading about it. The operations team is experimenting with it. The CFO is asking when the productivity gains will show up in the numbers.

And somewhere in the middle of all of that noise, the question that actually matters is not being asked. Not "which AI tool should we buy" and not "what can AI do for us." The question is: are we ready for AI - and if not, what would it actually take to get there?

The three ways mid-market AI investments fail

Deployed on dirty data. The AI tool is capable. The data it is fed is inconsistent, incomplete or poorly defined. The outputs are confident and wrong.

Deployed on broken process. The AI automates a workflow that was not fit for purpose before automation. You now have a faster broken process - and it is harder to change because it is embedded in a tool.

Deployed without governance. Nobody owns the outputs. Nobody monitors accuracy over time. Nobody has defined what the AI is allowed to decide and what requires human review. Six months later the tool is either unused or uncontrolled.

None of these are technology failures. They are readiness failures. And they are entirely preventable - if you ask the right questions before you commit the budget.


02   The AI Readiness Framework

The AI Readiness Assessment - 6 Questions for Your Board

Answer these honestly before committing to any AI investment. Not as an IT team. As a board. Each question maps to a domain where unresolved problems will undermine AI deployment regardless of which tool you choose.

1

Data and Intelligence Domain

Is the data you plan to feed this AI tool accurate, consistently defined, and complete enough to produce reliable outputs? If your management accounts have integrity issues (see Issue 02), your AI outputs will have the same integrity issues - presented with more confidence and less visibility.

Ask: Has the data feeding this tool been audited in the last 12 months?


2

Process and Experience Domain

Is the process you are automating or augmenting with AI actually the right process - or a process that has grown organically and never been properly reviewed? AI applied to a poor process does not improve the process. It embeds it permanently and makes it far harder to change.

Ask: Has this process been mapped and validated in its current form before we automate it?


3

Technology Governance Domain

Who owns the AI tool - its outputs, its accuracy over time, its integration with your other systems, and its vendor relationship? If the answer is the person who championed the purchase, you do not have governance. You have enthusiasm. Those are different things with very different shelf lives.

Ask: Who is accountable for this tool 18 months after go-live?


4

People and Change Domain

Has your team been involved in the design of this deployment - or are they being presented with a tool and expected to change how they work? AI adoption fails at the human layer more often than the technical one. The team that has to work with the tool every day will find workarounds faster than you can close them if they were not part of the decision.

Ask: Who in the team has been involved in designing how this will actually work day to day?


5

Business Governance Domain

What decisions is this AI tool making or influencing - and do you have a defined policy on which decisions require human review? Regulatory exposure, customer-facing decisions, financial outputs - each carries a different risk profile. Most mid-market businesses have not defined their AI decision boundary before deployment.

Ask: What is this AI allowed to decide without a human in the loop?


6

Strategy and Operating Model Domain

Does this AI investment connect to a specific strategic outcome - or is it a response to competitive pressure and fear of being left behind? Both can be valid starting points. But only one of them produces a coherent business case, measurable success criteria, and a leadership team that knows what it is actually trying to achieve.

Ask: What specific outcome are we measuring this investment against in 12 months?

The Honest Score

If you can answer all six questions with confidence and documentary evidence - your AI investment has a strong foundation. If three or more feel uncomfortable, or generate disagreement in your leadership team, you have found your readiness gap. Closing it before deployment is a fraction of the cost of discovering it after. Every pound spent on readiness protects ten pounds of investment.
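For boards that want to capture the result of the exercise, here is a minimal sketch of a scorer. The domain names follow the six questions above; the traffic-light thresholds (three or more gaps = readiness gap, matching the guidance in the paragraph above) are otherwise an illustrative assumption, not a prescribed standard.

```python
# Minimal sketch of the six-domain readiness check described above.
# Thresholds are illustrative: 3+ unanswered domains mirrors the
# "three or more feel uncomfortable" guidance; the rest is assumed.

DOMAINS = [
    "Data and Intelligence",
    "Process and Experience",
    "Technology Governance",
    "People and Change",
    "Business Governance",
    "Strategy and Operating Model",
]

def readiness_verdict(answers: dict) -> str:
    """answers maps each domain to True if the board can answer that
    domain's question with confidence and documentary evidence."""
    gaps = [d for d in DOMAINS if not answers.get(d, False)]
    if len(gaps) >= 3:
        return "Readiness gap - close before deployment: " + ", ".join(gaps)
    if gaps:
        return "Caution - resolve before go-live: " + ", ".join(gaps)
    return "Strong foundation - proceed with defined success measures"
```

Nothing about the verdict is clever - the value is in forcing a yes/no per domain rather than a vague overall feeling of readiness.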


03   Real World Case Study

The £60M Insurance Broker Who Deployed AI Twice - and Only Got Value the Second Time

Anonymised for client confidentiality

This business moved fast on AI. Understandably - their sector is competitive, margins are tight, and the operational efficiency gains from AI-assisted document processing and client communication are genuinely available to businesses that get it right.

First deployment: an AI tool to categorise and route incoming client queries. It went live in six weeks and was quietly abandoned by the team within four months. The tool was misrouting queries at a rate that created more work than it saved. The client experience deteriorated. The team worked around it. The tool was technically functioning.

"The AI works fine. The team just does not use it."

Operations Director, 4 months post go-live

What the readiness review found before the second deployment:

Data: Query categories had never been formally defined. The AI was classifying against labels that different team members used differently. Garbage in, garbage out - at speed.

Process: The routing workflow had three manual exceptions built in by institutional memory. None had been documented. The AI had no way to replicate them.

People: The team was not involved in the first deployment. They were shown the tool the week before go-live. Nobody owned the exceptions. Nobody had a clear escalation path when the AI got it wrong.

Governance: No defined accuracy threshold. No monitoring. No review cycle. No owner of the tool post go-live.

Second deployment - 10 weeks of readiness work first, then go-live:

Query category taxonomy defined and agreed across all teams

All routing exceptions documented and built into the workflow design

Team involved in testing and calibration for six weeks pre go-live

Named tool owner, weekly accuracy review, defined escalation path

Routing accuracy at 91% within 8 weeks of go-live - saving 1.4 FTE in query handling

First Deployment Cost

£34,000

Value delivered: zero

vs

Second Deployment Cost

£41,000

1.4 FTE saving. Payback: 7 months.
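The payback figure can be sanity-checked with back-of-envelope arithmetic. The fully loaded annual cost per FTE used below is an assumption chosen for illustration - the case study does not disclose it.

```python
# Back-of-envelope payback check for the second deployment.
# The fully loaded annual cost per FTE (GBP 49,000) is an
# illustrative assumption, not a figure from the case study.
deployment_cost = 41_000          # second deployment, GBP
fte_saved = 1.4
annual_cost_per_fte = 49_000      # assumed fully loaded cost, GBP

annual_saving = fte_saved * annual_cost_per_fte       # 68,600 GBP/year
payback_months = deployment_cost / (annual_saving / 12)
print(round(payback_months, 1))   # prints 7.2 - consistent with the quoted 7 months
```

Running the same check against your own FTE costs before go-live is a quick way to test whether a vendor's payback claim holds up.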


Partner Articles

From the Front Line

All four partners on AI - from four completely different angles. Because AI touches every domain, and every domain has a different failure mode.

Richard Danks

Partner · Technology, Governance and CTO Recovery  ·  DBA, MBA

AI Without Governance Is Not Innovation. It Is Unmanaged Risk.

I have spent my career in environments where technology governance failures have material consequences - regulated banking, defence, PE-backed businesses under pressure to perform. The pattern I am now seeing with AI deployments is identical to the patterns I have seen with every previous wave of transformative technology. The technology moves faster than the governance. And the gap between deployment and accountability is where the real risk lives.

AI governance in a mid-market business does not need to be complex. It needs to answer four questions. What decisions is this AI making or influencing? What is the accuracy standard it is being held to and who monitors it? What is the escalation path when it gets it wrong? And who is accountable for the tool - not at go-live, but 18 months from now?

The question is not whether AI can do the job. The question is whether your governance structure is capable of managing what happens when it does the job badly.

In regulated environments this is not optional - the FCA and ICO are both developing clearer expectations around AI decision-making and auditability. But even outside regulation, the commercial exposure is real. An AI tool making customer-facing decisions without a defined human review threshold is a liability that has not been priced. A tool producing financial outputs that nobody is cross-checking is a governance failure waiting to become a board conversation.

Build the governance structure before you go live. Define the decision boundary. Assign the owner. Set the accuracy threshold. Create the review cycle. It adds weeks to the deployment. It removes years of potential exposure.
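The structure Richard describes can be captured in a few lines. The sketch below is illustrative only - the decision categories, the 0.90 threshold, and the owner name are assumptions, not a prescribed policy.

```python
# Illustrative sketch of the governance structure described above:
# a decision boundary, an accuracy threshold, a review cycle and an
# owner. All values below are assumptions for illustration.
from dataclasses import dataclass

# Assumed sensitive categories that always require human review.
HUMAN_REVIEW_REQUIRED = {"customer_facing", "financial", "regulatory"}

@dataclass
class AIGovernancePolicy:
    owner: str                  # accountable 18 months after go-live
    accuracy_threshold: float   # minimum acceptable model confidence
    review_cycle_days: int      # cadence of the accuracy review

    def needs_human(self, decision_type: str, confidence: float) -> bool:
        """Apply the decision boundary: sensitive categories or
        low-confidence outputs always escalate to a human."""
        return (decision_type in HUMAN_REVIEW_REQUIRED
                or confidence < self.accuracy_threshold)

policy = AIGovernancePolicy(owner="Operations Director",
                            accuracy_threshold=0.90,
                            review_cycle_days=7)
```

Under this sketch, a high-confidence internal routing decision proceeds automatically, while anything customer-facing escalates regardless of confidence - which is exactly the boundary most mid-market deployments never define.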

Richard Danks specialises in technology governance and regulatory remediation across banking, defence and SaaS.  Connect on LinkedIn


Joe Kay

Partner · Process and Operational Excellence  ·  Lean Six Sigma Master Black Belt

AI Will Not Fix Your Operations. But It Will Reveal Exactly What Is Wrong With Them.

I have been a Lean Six Sigma Master Black Belt for most of my professional life. I have delivered operational improvement in environments where waste reduction of even 5% was worth tens of millions of pounds. And I will tell you something that most AI vendors will not: the fastest way to identify your operational problems is to try to deploy AI against them.

Not because the AI fixes them. Because the discipline of preparing for AI deployment - mapping the process, defining the data, documenting the exceptions, agreeing the decision rules - surfaces every assumption, workaround and undefined handoff in the operation. The AI readiness process is, in practice, one of the most rigorous operational diagnostics available.

Every time I have prepared a mid-market operation for AI deployment we have found process problems that nobody knew existed - because the people doing the work had simply absorbed them into their daily routine.

The practical implication is this: do not treat AI readiness as a pre-deployment checklist. Treat it as a structured operational review. Map the process end to end before you try to automate any part of it. Document every exception. Define every decision rule. Identify every data source and validate its accuracy. By the time you have done that work properly you will have a cleaner operation regardless of whether the AI deployment proceeds.

The businesses that get the most value from AI are the ones that treat the readiness phase as valuable in its own right - not as overhead to minimise on the way to go-live. The preparation is not the cost of the transformation. It is the transformation.

Joe Kay has delivered over £100m in operational savings across 100+ programmes including Network Rail, BCG, Aviva and HSBC.  Connect on LinkedIn


Brian Ford

Partner · Programme Recovery and Delivery Assurance  ·  PRINCE2 Practitioner

An AI Deployment Is a Programme. Treat It Like One.

I have seen AI deployments treated as product purchases. Buy the tool, configure it, train the team, go live. Six weeks. Done. The problem is that a meaningful AI deployment is not a product purchase - it is a change programme with a technology component. And like every change programme that is not governed properly, it will drift, stall and eventually be quietly abandoned by the people it was supposed to serve.

The difference between an AI deployment that sticks and one that does not is almost always programme discipline. Clear scope. Defined success criteria. Named workstream owners. A structured adoption plan that does not end at go-live. A board-level sponsor who is accountable for outcomes, not just delivery. A risk register that includes the human and operational risks - not just the technical ones.

If your AI deployment does not have a programme structure behind it, it does not have a delivery assurance mechanism. And without assurance, you will not know it has failed until long after it has.

The practical questions to ask before any AI deployment begins: What does success look like in measurable terms at 3, 6 and 12 months? Who is the named executive sponsor and what decisions can they make without escalation? What is the change management plan for the team that has to change how they work? What is the exit plan if the tool does not perform to the defined standard?

These are not questions about the technology. They are questions about the programme. Answer them before you go live and your probability of success increases substantially. Ignore them and you are relying on the technology alone to carry the change. In my experience it never does.

Brian Ford led the largest banking transformation in Europe at Bank of Ireland and has delivered programmes at EY, Capgemini, Barclays Capital and JPMorgan.  Connect on LinkedIn


Tom Henry

Partner · Process Transformation and Operations Excellence  ·  Lean Black Belt L2A

The Right Time to Deploy AI Is After You Know What You Are Actually Trying to Change.

Process transformation and AI are not the same thing - but they are closely related, and the sequence matters. In my experience the businesses that get the most from AI are the ones that understand their current-state process in detail before they decide where AI fits. Not because AI cannot help identify process problems - it can. But because if you do not understand the process first, you cannot evaluate whether the AI is actually improving it.

The specific risk I see most often in mid-market businesses is AI being deployed against a process that has never been mapped from the customer's perspective. The internal workflow gets automated. The customer-facing friction points - the ones that exist in the gaps between systems and teams - remain. The operation is faster. The customer experience is unchanged or worse. And because the AI is now embedded in the workflow, the friction is harder to address than it was before deployment.

Map the customer journey before you deploy AI in any process that touches a customer. Without that map you are optimising in the dark.

The practical sequence I recommend is: current-state process map first, customer journey map second, identification of the specific friction points where AI can genuinely reduce effort or improve experience third, then AI tool selection. Most businesses do this in reverse - tool first, use case second, process map never. The result is a tool that solves a problem the business defined after purchase rather than before.

AI is a genuine competitive advantage for mid-market businesses that are ready for it. The readiness question is not whether you have the budget or the technology appetite. It is whether you understand your current state well enough to know what you are actually changing - and how you will know if the change worked.

Tom Henry has delivered 50% lead time reductions and double-digit FTE savings across financial services, legal and energy sectors.  Connect on LinkedIn


The One Thing - Do This This Month

Run the 6-question AI readiness assessment with your leadership team.

One hour. Six questions. No preparation required. If you are planning any AI investment in the next 12 months - or you already have AI tools live that were not deployed against this framework - the answers will tell you exactly where your exposure is.

If you want an independent AI readiness review - covering all six domains, identifying your specific risk gaps, and giving you a prioritised action plan before you commit further budget - a Rapid Triage with Assured Velocity is 30 minutes. No pitch. No pressure.

Book a Free 30-Minute Rapid Triage

Next Month in The Velocity Brief

"Your operating model stopped fitting your business two years ago."

How mid-market businesses outgrow the structure that got them here - and the five signals that your operating model is now working against you rather than for you.

Assured Velocity

Fractional Transformation Office for Mid-Market Businesses (£10M to £100M)
Independent · Vendor-Neutral · Embedded in your execution

Transformation Domains

Strategy · Technology · Data · AI · Process · People · Delivery

Governance Domains

Technology Gov. · Programme Gov. · Project Gov. · Business Gov.

UK-based · Midlands and Nationwide    assured-velocity.co.uk    hello@assured-velocity.co.uk


You are receiving The Velocity Brief because you signed up or connected with us on LinkedIn.
Unsubscribe  ·  Forward to a colleague  ·  View online