Why most mid-market AI projects fail before they start

The majority of AI projects in mid-market organisations do not fail because the technology falls short. They fail because the data the technology depends on does not exist in the required form - and no one assessed this honestly before the investment was made.

AI systems - whether they are predictive models, large language model integrations, automated decision systems, or intelligent workflow tools - depend on data. Specifically, they depend on data that is available, accessible, accurate, consistent, and governed. In most mid-market businesses, at least two of those five conditions are not met.

The result is a familiar pattern: an AI initiative is announced, a vendor is selected, a pilot is run, the results are disappointing, and the conclusion drawn is that "AI is not ready for our business" - when the actual problem was that the business was not ready for AI.

"We spent six months and £180k on an AI demand forecasting project before anyone realised that our stock movement data had been recorded inconsistently for three years. The model was trained on data that did not reflect reality. We had to start again."

- Operations Director, manufacturing business

The five dimensions of AI data readiness

An honest AI readiness assessment covers five dimensions. Weakness in any one of them will constrain what AI can deliver, regardless of how much is invested in models, tools, or platforms.

1. Data availability

Do you have the data the AI use case requires, and has it been recorded consistently over a long enough period to train or validate a model? For predictive use cases, this typically means 18 to 36 months of clean historical data. Many mid-market businesses either do not have this, or have it in fragmented form across legacy systems that were never designed to integrate.
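The availability check described above can be sketched as a small script. This is a minimal, illustrative example - it assumes records carry a `date` field in ISO format, and the 24-month threshold is one point within the 18-to-36-month range, not a standard:

```python
# Hypothetical sketch: check whether a record history is long and
# consistently recorded enough for a predictive use case.
# Assumes each record is a dict with an ISO-format "date" field;
# the 24-month minimum is illustrative.
from datetime import datetime

def coverage_report(rows, date_field="date", min_months=24):
    """Return (months_spanned, missing_months, ok) for a list of records."""
    dates = sorted(datetime.strptime(r[date_field], "%Y-%m-%d") for r in rows)
    first, last = dates[0], dates[-1]
    months_spanned = (last.year - first.year) * 12 + (last.month - first.month) + 1
    seen = {(d.year, d.month) for d in dates}
    # Walk every month in the span and note the ones with no records at all.
    expected, (y, m) = [], (first.year, first.month)
    while (y, m) <= (last.year, last.month):
        expected.append((y, m))
        y, m = (y, m + 1) if m < 12 else (y + 1, 1)
    missing = [f"{yy}-{mm:02d}" for (yy, mm) in expected if (yy, mm) not in seen]
    ok = months_spanned >= min_months and not missing
    return months_spanned, missing, ok
```

A report like this does not prove the data is accurate - that is a separate dimension - but it surfaces the fragmented-history problem before a vendor is engaged rather than after.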

2. Data accessibility

Even if the data exists, can it be accessed in a usable form? Data locked in disconnected systems, manually maintained spreadsheets, or legacy databases without APIs requires extraction, transformation, and integration work before any AI application can use it. This work is frequently underestimated in AI project scoping.
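The extraction and transformation work referred to here is often mundane but voluminous. A minimal sketch of one such task - reconciling inconsistent column names and date formats from a manually maintained spreadsheet export - gives a feel for it; the alias and format mappings are invented for illustration:

```python
# Hypothetical sketch: normalising one row of a spreadsheet export.
# The column aliases and accepted date formats are illustrative -
# real integration work involves discovering these per source system.
from datetime import datetime

COLUMN_ALIASES = {"Part No": "sku", "part_number": "sku",
                  "Qty": "quantity", "Movement Date": "date"}
DATE_FORMATS = ("%d/%m/%Y", "%Y-%m-%d")

def normalise_row(raw):
    """Map source column names to canonical ones and ISO-format the date."""
    row = {COLUMN_ALIASES.get(k, k): v for k, v in raw.items()}
    for fmt in DATE_FORMATS:
        try:
            row["date"] = datetime.strptime(row["date"], fmt).date().isoformat()
            break
        except ValueError:
            continue  # try the next known format
    return row
```

Multiply this by every source system, every column, and every historical format change, and the scoping gap the paragraph describes becomes concrete.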

3. Data accuracy and completeness

Is the data accurate? Are there systematic gaps, known errors, or periods of poor data quality? AI models trained on inaccurate or incomplete data produce inaccurate or incomplete outputs. This sounds obvious, but it is consistently the dimension that receives the least attention during AI project initiation.
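Completeness, at least, is cheap to measure before a project starts. A minimal profiling sketch - the 5% missing-rate threshold is an assumption for illustration, not a recommendation:

```python
# Hypothetical sketch: flag fields whose missing-value rate exceeds a
# threshold before any model training begins. The 5% default is
# illustrative; acceptable rates depend on the use case.
def completeness_profile(records, max_missing_rate=0.05):
    """Return {field: missing_rate} for fields exceeding the threshold."""
    fields = set().union(*(r.keys() for r in records))
    flagged = {}
    for f in fields:
        missing = sum(1 for r in records if r.get(f) in (None, ""))
        rate = missing / len(records)
        if rate > max_missing_rate:
            flagged[f] = round(rate, 3)
    return flagged
```

A profile like this catches systematic gaps; it will not catch values that are present but wrong, which is why the people who enter the data need to be in the room as well.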

4. Data governance and ownership

Is there a named owner for each data set the AI application will depend on? Is there a process for maintaining data quality, resolving inconsistencies, and managing changes to the data structure over time? Without governance, AI applications that work at launch degrade in performance as the data they depend on drifts.
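The drift the paragraph describes can be monitored with very simple statistics. A minimal sketch, comparing a live feature's category frequencies against a training-time baseline using total variation distance - the function name and any alerting threshold are illustrative assumptions:

```python
# Hypothetical sketch: quantify how far a categorical feature's live
# distribution has moved from its training-time baseline.
# Returns total variation distance: 0.0 = identical, 1.0 = disjoint.
from collections import Counter

def drift_score(baseline, current):
    """Total variation distance between two samples of category labels."""
    b, c = Counter(baseline), Counter(current)
    nb, nc = len(baseline), len(current)
    categories = set(b) | set(c)  # Counter returns 0 for unseen keys
    return 0.5 * sum(abs(b[k] / nb - c[k] / nc) for k in categories)
```

Governance is then the human part: a named owner who receives the score, a threshold that triggers review, and a process for retraining or remediation when it fires.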

5. Data infrastructure

Does the organisation have the infrastructure to collect, store, process, and serve data to AI applications at the required scale and latency? This is not exclusively an enterprise concern. Mid-market businesses that attempt real-time AI applications without the underlying data infrastructure encounter performance and reliability problems that are expensive to fix once the application is already built.

How to assess your AI readiness honestly

A useful AI readiness assessment is not a vendor questionnaire or a maturity model scored by the people who want to use the AI. It is an independent review that:

  • Starts with the use case, not the technology. What specifically is the AI supposed to do, what data does that require, and does that data exist in the form and quality required?
  • Involves the people who actually work with the data. Data quality problems are visible to the people who enter and maintain data - not always to the people who report from it. A readiness assessment that does not include process owners and data users will miss the most important issues.
  • Produces a gap analysis, not a traffic light. The output should be a specific, actionable list of what needs to change before the AI use case is viable - not a general assessment of whether you are "ready" or "not ready".
  • Is independent of the AI vendor. AI vendors have a commercial interest in the assessment concluding that you are ready to proceed. Independent assessment serves a different purpose.

What comes before the AI roadmap

The sequence that works is: use case definition, data readiness assessment, data foundation work, pilot, then scale. The sequence that does not work - and that most mid-market AI initiatives follow - is: AI roadmap, vendor selection, pilot, data problems discovered, project stalls.

Data foundation work is not glamorous. It includes data quality remediation, integration work, governance structure design, and master data management. It is frequently more expensive and time-consuming than the AI application it is designed to support. But it is the difference between an AI investment that delivers and one that does not.

The businesses that have delivered AI ROI in the mid-market are almost without exception the ones that invested in data foundations before AI tools - and the ones that approached AI with specific use cases and measurable outcomes rather than broad transformation programmes.

The most common mistakes

  • Starting with the tool, not the use case. Procuring an AI platform before defining what it is supposed to do produces a solution looking for a problem.
  • Relying on vendor readiness assessments. If the vendor sells the assessment and the product, the assessment will conclude you are ready for the product.
  • Treating data readiness as a precondition to be ticked, not investigated. "We have three years of data in our ERP" is not a readiness assessment. It is a starting point for one.
  • Underestimating the data foundation timeline. Data quality remediation, integration work, and governance structure design typically take two to four times longer than planned. Building an AI roadmap that depends on this work being complete in three months when it takes nine produces predictable delays.
  • No governance plan for after go-live. AI applications that are not actively governed - with named data owners, quality monitoring, and a process for handling model drift - degrade over time. The investment in building them is not protected without a plan for maintaining them.

Assessing AI readiness before you invest?

Assured Velocity provides independent AI readiness assessment and data strategy for mid-market organisations. Vendor-neutral, no tool sales, no conflicts of interest.

Book a Scoping Call