Your management accounts look fine. Your data is not fine. Here is how to tell the difference before it costs you.

Assured Velocity

The Velocity Brief

Straight talk on transformation, technology, AI, data and people leadership for mid-market operators. No vendor spin. No consultant waffle. Just what actually works.

Issue 02  ·  May 2026

This Month

Your data is lying to you - and your team has stopped questioning it.

In This Issue

01   The Hard Truth

02   The 5 Data Integrity Failures Framework

03   Real World Case Study

04   Joe Kay: Data and Operational Truth

05   Richard Danks: Data Governance

06   Brian Ford: Programme Data Integrity

07   Tom Henry: Customer Journey Data

08   The One Thing


01   The Hard Truth

Your board is making decisions on data nobody has verified.

This is not a technology problem. It is not an IT problem. It is a leadership problem - and it is happening inside the majority of mid-market businesses right now.

Management accounts arrive. The board reviews them. Decisions get made. Nobody in that room has asked the question that actually matters: is the data behind these numbers accurate, consistent and complete - or has it been assembled by a finance team that has learned to work around the gaps in your systems?

You probably have a data problem if...

Your management accounts are consistently later than they should be - and the explanation is always "the team is working on it."

Two people in your leadership team quote different numbers for the same metric in the same meeting.

Your last system migration was described as complete - but everyone still keeps a spreadsheet "just to be safe."

None of these are minor inconveniences. Each one is evidence that your data and intelligence domain has a structural problem - and that every decision made from that data carries undisclosed risk.

The dangerous version is not when the numbers are obviously wrong. The dangerous version is when they are plausible enough that nobody questions them - but wrong enough to send the business in the wrong direction.


02   The Framework

The 5 Data Integrity Failures in Scaling Mid-Market Businesses

These are not edge cases. They are the standard. Most mid-market businesses have at least three of the five. Use this as a diagnostic in your next leadership meeting.

1

The Definition Problem

Sales, finance and operations are all measuring revenue - but using different definitions. One team counts on invoice. One counts on receipt. One counts on order. All three are technically correct. None of them match. The board sees whichever version the presenter chose this month.

Fix: A single agreed data dictionary, owned at board level, not IT level.
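
To make the divergence concrete, here is a minimal sketch - illustrative only, with made-up order data and hypothetical column names - showing how the same three orders yield three different "May revenue" figures depending on whether you count on order, on invoice or on receipt:

    import pandas as pd

    # Illustrative only: three orders at different stages of the
    # order-to-cash cycle at month end. All figures are made up.
    orders = pd.DataFrame({
        "order_id":      ["A-101", "A-102", "A-103"],
        "amount":        [12_000, 8_000, 5_000],
        "invoiced":      ["2026-05-10", "2026-05-28", None],  # A-103 not yet invoiced
        "cash_received": ["2026-05-22", None, None],          # only A-101 paid
    })

    # Three "technically correct" definitions of the same metric:
    on_order   = orders["amount"].sum()                                       # 25,000
    on_invoice = orders.loc[orders["invoiced"].notna(), "amount"].sum()       # 20,000
    on_receipt = orders.loc[orders["cash_received"].notna(), "amount"].sum()  # 12,000

    print(f"Revenue on order:   £{on_order:,}")
    print(f"Revenue on invoice: £{on_invoice:,}")
    print(f"Revenue on receipt: £{on_receipt:,}")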


2

The Migration Ghost

Every business that has migrated a system in the last five years has data quality debt. Records carried across without validation. Duplicates that were not worth fixing at the time. Legacy codes still appearing in live reports. The migration was declared complete. The debt is still accruing.

Fix: A post-migration data quality audit - separate from the go-live sign-off.
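
A minimal sketch of what that audit can look for - assuming, for illustration, that migrated records are exported to a pandas DataFrame; the column names and the "LEG-" legacy prefix are hypothetical:

    import pandas as pd

    # Hypothetical extract of migrated records.
    records = pd.DataFrame({
        "product_code":  ["P-1001", "P-1001", "LEG-0042", "P-1003", "P-1004"],
        "customer_name": ["Acme", "Acme", "Birch Ltd", None, "Cole & Co"],
    })

    # 1. Duplicates carried across without validation
    dupes = records[records.duplicated(keep=False)]

    # 2. Legacy codes still appearing in live data
    legacy = records[records["product_code"].str.startswith("LEG-")]

    # 3. Mandatory fields left blank
    blanks = records[records["customer_name"].isna()]

    print(f"Duplicates: {len(dupes)} · legacy codes: {len(legacy)} · blank names: {len(blanks)}")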


3

The Shadow Spreadsheet

Somewhere in your business - probably in finance, operations and sales simultaneously - there are spreadsheets that exist because your system cannot be trusted for a specific purpose. These are not rogue files. They are rational responses to a system gap. But they create parallel data streams that diverge over time and can never be fully reconciled.

Fix: Map every shadow spreadsheet. Each one is a system capability gap in disguise.


4

The AI Trap

AI tools generate confident-looking outputs. When those tools are fed dirty data, inconsistent definitions or incomplete records, the output is still confident-looking. It is just wrong. Businesses deploying AI on top of unresolved data quality problems are not accelerating their intelligence - they are accelerating their errors. At scale. Automatically.

Fix: Resolve data quality before deploying AI - not after you notice the outputs are unreliable.


5

The Customer Journey Blind Spot

Most businesses track operational data - volumes, costs, timelines. Very few track the data that describes what the customer actually experienced. Complaint rates, resolution times, contact frequency, drop-off points in the service journey. This data exists in fragments across CRM, service desk, billing and ops systems. It is almost never assembled into a complete picture. Without it you are optimising the inside of the business while the customer experience deteriorates invisibly.

Fix: Map the customer journey data model as a separate exercise from operational reporting.
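
What assembling that picture can look like, as a minimal sketch - the three extracts and their column names are hypothetical, standing in for CRM, service desk and billing exports keyed on a shared customer ID:

    import pandas as pd

    # Hypothetical extracts from three systems.
    crm = pd.DataFrame({"customer_id": [1, 2, 3],
                        "segment":     ["SME", "SME", "Enterprise"]})
    service_desk = pd.DataFrame({"customer_id": [1, 1, 2]})  # one row per complaint
    billing = pd.DataFrame({"customer_id":   [1, 2, 3],
                            "late_invoices": [0, 2, 1]})

    # Count complaints per customer, then join everything into one journey view.
    complaints = (service_desk.groupby("customer_id")
                  .size()
                  .rename("complaints")
                  .reset_index())

    journey = (crm
               .merge(complaints, on="customer_id", how="left")
               .merge(billing, on="customer_id", how="left")
               .fillna({"complaints": 0}))

    print(journey)  # one row per customer, drawn from all three systems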

The Diagnostic

Count how many of the five you recognise. One or two is manageable with focused attention. Three or more means your data and intelligence domain has structural problems that will not resolve without deliberate intervention. If you are planning any AI, RPA or analytics investment - stop until you have addressed these. You are building on ground that will not hold.


03   Real World

The £38M Manufacturer Who Almost Bought a New ERP to Fix a Data Problem

Anonymised for client confidentiality

The operations director had been making the case for 18 months. The current ERP could not produce reliable stock valuations. Lead times were inconsistent with what the system was reporting. Customer complaints about delivery accuracy were climbing.

The board had been presented with a £420,000 proposal to replace the ERP with a more capable platform. The argument was compelling: the system is the problem. Replace the system and the reporting improves.

"The system cannot give us accurate stock. We need a system that can."

Operations Director, pre-engagement

What a four-week data and intelligence review found:

The ERP was functioning correctly. The data being entered into it was not. Three different teams were recording goods receipt at different points in the physical process - meaning stock that was still in transit routinely showed as stock on hand.

A 2021 acquisition had added 4,200 product codes that had never been mapped to the parent company naming convention. The system treated them as separate items. Reporting collapsed them inconsistently depending on who ran the report.

The customer delivery data lived in three systems - ERP, a logistics platform and a spreadsheet maintained by the customer service team. Nobody had ever connected them. The complaints were real. The cause was invisible in any single system.

What was done instead of buying a new ERP:

Goods receipt process standardised across all three teams - single point of entry, single definition

4,200 acquired product codes rationalised to 1,100 - duplicates merged or retired

Customer journey data model built connecting all three systems - first time a complete delivery picture existed

Stock accuracy improved from 67% to 94% within eight weeks of process change

Proposed ERP replacement: £420,000  vs  actual cost of fix: £18,000


Partner Articles

From the Front Line

This month Joe, Richard, Brian and Tom on the data problems they see most often - and what actually fixes them.

Joe Kay

Partner · Process and Operational Excellence  ·  Lean Six Sigma Master Black Belt

The Number on the Dashboard Is Not the Truth. It Is Someone's Best Effort at the Truth.

I have been running operational diagnostics for 25 years. In that time I have sat in hundreds of leadership meetings where someone has pointed at a number on a screen and said: that is wrong. And someone else has said: that is what the system says.

Both of those people are right. The number is what the system says. The system is reporting accurately on the data it has been given. The problem is that the data it has been given is the product of a dozen small process decisions - most of them made years ago, by people who have since left, solving problems that no longer exist in the same form.

Data quality is a process problem wearing a technology mask. The system is not lying. The process that feeds it is unreliable.

When I run a Lean diagnostic the data trail is one of the first things I follow. Not because I am looking for a data problem - because the data trail tells me where the process breaks. Late entries, manual overrides, fields left blank, workarounds that have become standard practice. Each one is a symptom. Together they describe the gap between how the business was designed to operate and how it actually operates.

The fix is almost never technical. It is almost always a process definition problem. Who enters what, when, in which system, to which standard. Written down, agreed, trained, and checked. That is not glamorous work. But it is the work that makes every dashboard, every report and every AI tool downstream actually trustworthy.

Before you invest in better analytics, better dashboards or AI - ask yourself one question. Could every person in your business describe, precisely and consistently, how the data behind your key operational metrics gets created? If the answer is no, you do not have an analytics problem. You have a process problem that analytics will not solve.

Joe Kay has delivered over £100m in operational savings across 100+ programmes.  Connect on LinkedIn


Richard Danks

Partner · Technology, Governance and CTO Recovery  ·  DBA, MBA

Data Governance Is Not an IT Project. It Is a Board Responsibility.

I have worked in regulated banking and defence environments where data governance failures have resulted in regulatory action, additional capital requirements and in one case a £50m remediation programme. The common thread in every one of those situations was not a rogue IT team or a negligent CTO. It was a board that had classified data governance as a technology workstream and therefore never asked the questions that would have caught the problem early.

Mid-market businesses are not regulated banks. But the principle holds. When nobody at board level owns the question of data quality - when it is assumed that the system handles it or the finance team handles it - the gap between what the business believes about its data and what is actually true widens silently and continuously.

Every board I have worked with that had a serious data problem had one thing in common: data quality was on nobody's agenda until something broke.

Data governance in a mid-market business does not require a dedicated team or an enterprise data platform. It requires three things. First, a clear owner at leadership level - not IT, not finance, not operations. Someone with cross-functional authority and a genuine mandate to set and enforce standards. Second, a defined data quality standard for your critical business metrics - what good looks like, how it is measured, and what the escalation path is when it is not met. Third, a regular review cycle - quarterly is enough - where data quality is reported to the board with the same rigour as financial performance.

This is not expensive. It is not technically complex. It is a governance decision. And like most governance decisions the cost of making it is a fraction of the cost of not making it - measured when the regulatory letter arrives, when the acquisition due diligence uncovers four years of inconsistent records, or when the AI initiative produces outputs that nobody can explain or defend.

Richard Danks specialises in technology governance and regulatory remediation across banking, defence and SaaS.  Connect on LinkedIn


Brian Ford

Partner · Programme Recovery and Delivery Assurance  ·  PRINCE2 Practitioner

If You Cannot Trust Your Programme Data, You Cannot Govern Your Programme.

Every programme I have ever been called in to recover had the same feature in the weeks before it visibly failed. The data being reported to the board was optimistic. Not deliberately falsified - optimistic. Milestone dates that were probable rather than confirmed. Costs that reflected the original estimate rather than the current forecast. Risk ratings that had not been updated since the last board pack was written.

Programme reporting is a data problem before it is anything else. The board can only govern what it can see clearly. When the data feeding the programme dashboard is assembled by the same team that is under pressure to show progress, the incentive structure works against accuracy. Nobody is lying. Everyone is presenting the most defensible version of a complicated situation. And the cumulative effect is a board that believes the programme is amber when it is red.

The most dangerous programme report is one that is accurate about the facts and misleading about the risk. That requires no dishonesty at all. Just optimism and time pressure.

When I go into a programme recovery the first thing I do is not review the plan. It is review the data behind the plan. Where did the milestone dates come from and when were they last validated against actual progress? What is the basis for the cost forecast and who signed off the assumptions? What does the risk register show and when was each item last actively reviewed rather than carried forward from the previous period?

In the largest banking transformation I led at Bank of Ireland, programme governance was built around data integrity first. Every milestone had a defined evidence standard - not a status update from the workstream lead, but a specific artefact that confirmed completion. Every cost line had an owner and a weekly reforecast. Every risk had a named owner and a dated action. The discipline was not popular in the first month. It was non-negotiable by month three because it was the only reason the board trusted what they were being told.

If your programme reporting relies on the goodwill and judgment of the people delivering the programme to be accurate - it will not be. Not because those people are untrustworthy, but because the incentives are wrong. Independent assurance, defined evidence standards, and a clear separation between delivery reporting and governance reporting are not bureaucracy. They are the data infrastructure that makes programme governance possible.

Brian Ford led the largest banking transformation in Europe at Bank of Ireland and has delivered programmes at EY, Capgemini, Barclays Capital and JPMorgan.  Connect on LinkedIn


Tom Henry

Partner · Process Transformation and Operations Excellence  ·  Lean Black Belt L2A

You Are Optimising the Wrong Thing. The Customer Journey Data Will Tell You What You Are Missing.

Most process improvement work starts from the inside. You map the workflow. You identify the waste. You reduce the steps, the handoffs, the delays. Your lead time comes down. Your cost per transaction falls. Your team feels the improvement. The board sees it in the numbers.

And then a quarter later your customer satisfaction score has not moved. Or it has moved in the wrong direction. Because the thing you optimised was your internal view of the process - not the customer's experience of it. Those two things are related, but they are not the same. And the data that describes the gap between them is almost never assembled in one place.

The internal process data tells you how long it took. The customer journey data tells you how it felt. You need both to know whether you are actually improving.

When I run a current-state diagnostic I always map two things simultaneously. The operational process - steps, handoffs, timelines, failure points. And the customer journey - touchpoints, wait experiences, communication gaps, moments where the customer is left without information or in the wrong queue. The two maps almost never align. The operational team has optimised for throughput. The customer is experiencing the parts of the process that the operational team does not see because those parts happen on the customer's side of the boundary.

I worked with a financial services business that had reduced their onboarding process from 14 days to 8 days. Internal metrics showed a 43% improvement. Customer satisfaction had fallen. When we mapped the customer journey we found that the 8-day process involved 6 separate contact points from different teams, each asking for information the previous team had already collected. The customer experienced a faster process that felt more chaotic than the slower one it replaced.

The customer journey data existed. It was in the CRM, the service desk, the complaints log and an NPS survey that nobody had connected to the process data. Once we assembled it, the fix was straightforward. But you cannot fix what you cannot see - and most businesses are only looking at half the picture.

Tom Henry has delivered 50% lead time reductions and double-digit FTE savings across financial services, legal and energy sectors.  Connect on LinkedIn


08   The One Thing - Do This This Month

Ask your finance team where the management accounts data actually comes from.

Not which system produces the report. How the underlying data gets created. Who enters it, when, and to what standard. Ask specifically about the three metrics your board relies on most heavily.

If the answer involves the phrase "it depends" or "we adjust for" or "the system does not quite handle" - you have found your data problem. And you have found it before it becomes a crisis.

If you want an independent view of your data and intelligence domain - including where the quality gaps are, which ones matter most, and what it would take to fix them - a Rapid Triage with Assured Velocity is 30 minutes. No pitch. No pressure.

Book a Free 30-Minute Rapid Triage

Next Month in The Velocity Brief

"The AI question every mid-market board is avoiding."

How to assess AI readiness honestly, what good deployment actually looks like at mid-market scale, and the three questions your board should be able to answer before committing a single pound to an AI initiative.

Assured Velocity

Fractional Transformation Office for Mid-Market Businesses (£10M to £100M)
Independent · Vendor-Neutral · Embedded in your execution

Transformation Domains

Strategy · Technology · Data · AI · Process · People · Delivery

Governance Domains

Technology Gov. · Programme Gov. · Project Gov. · Business Gov.

UK-based · Midlands and Nationwide  ·  assured-velocity.co.uk  ·  hello@assured-velocity.co.uk


You are receiving The Velocity Brief because you signed up or connected with us on LinkedIn.
Unsubscribe  ·  Forward to a colleague  ·  View online