When the board stops trusting the numbers
Boards rarely lose trust in management information suddenly. Distrust accumulates through a series of moments - a number that could not be reconciled in a board meeting, a figure that changed between the pack and the verbal update, a metric that everyone interprets differently, a report that finance and operations cannot agree on.
By the time a board formally acknowledges that it does not trust its management information, the problem has usually been visible for 12 to 18 months. It has been managed around - with verbal caveats, supplementary spreadsheets, and informal pre-reads - rather than addressed.
The consequences are significant. Decisions made on untrusted data are either deferred (paralysis) or made on instinct (risk). Neither is acceptable at board level in a business with material financial, operational, or regulatory exposure.
"The finance director and the operations director were presenting different numbers in every board meeting. Both were technically correct - they were just measuring different things. Nobody had ever agreed what we were supposed to be measuring."
- CEO, £35m professional services firm
The five root causes of untrusted MI
In most cases, untrusted management information is the visible symptom of one or more of these underlying problems:
1. No agreed definition of the metric
Finance calculates revenue recognition one way. Sales calculates it on bookings. Operations uses a third method. All three are internally consistent. None of them match. Until the organisation agrees on a single authoritative definition for each key metric - and documents it - different teams will continue to report different numbers that are all, in some sense, correct.
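As a hypothetical illustration - the deals, amounts, and recognition rules below are invented, not drawn from any real business - the same three transactions can yield three internally consistent "revenue" figures depending on which definition is applied:

```python
# Invented sample deals: booked value, amount invoiced to date,
# and the fraction recognised under delivery milestones.
deals = [
    {"value": 100_000, "invoiced": 50_000,  "recognised_pct": 0.40},
    {"value": 250_000, "invoiced": 250_000, "recognised_pct": 1.00},
    {"value": 80_000,  "invoiced": 0,       "recognised_pct": 0.25},
]

# Sales' definition: revenue = total value of signed bookings
bookings = sum(d["value"] for d in deals)

# Billing view: revenue = amounts invoiced to date
invoiced = sum(d["invoiced"] for d in deals)

# Accounting view: revenue = value recognised against delivery
recognised = sum(d["value"] * d["recognised_pct"] for d in deals)

# Three different numbers, each "correct" under its own definition
print(bookings, invoiced, recognised)  # 430000 300000 310000.0
```

Until one of those three calculations is documented as the authoritative definition, every team's number is defensible and none of them reconcile.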
2. Multiple source systems with no governance
Mid-market businesses that have grown through acquisition, system proliferation, or organic complexity typically have the same data in multiple systems - with no mechanism to ensure they stay aligned. Each team trusts the system they use. No one trusts the consolidated view because no one knows which system won when the numbers differ.
3. Manual transformation between source and report
When data is extracted from source systems, manipulated in spreadsheets, and then presented in a reporting tool, every step introduces the potential for error, interpretation, and version divergence. The more manual transformation steps there are, the less reliable and auditable the output.
4. Process failures that corrupt source data
Reporting tools are only as reliable as the data they are built on. If the underlying business processes - stock movement recording, invoice processing, customer status updates, project time recording - are inconsistent or poorly controlled, the MI built on top of them will be unreliable regardless of the quality of the reporting layer.
5. No data ownership or accountability
In most mid-market organisations, no one is specifically accountable for the quality of the data in the systems that feed MI. IT owns the systems. Finance owns the reports. Operations owns the processes. Nobody owns the data quality. The result is that data quality problems are everyone's problem and therefore nobody's priority.
How to diagnose your MI reliability problem
Before investing in new reporting tools, BI platforms, or data infrastructure, it is worth being precise about where the problem actually is. The wrong diagnosis leads to the wrong solution.
A structured MI reliability diagnostic typically covers:
- Metric inventory: what are the key metrics the board and leadership team need, and is there an agreed, documented definition for each?
- Source system mapping: where does each metric originate, and which system is the authoritative source?
- Transformation audit: what happens to the data between source and report - how many manual steps, spreadsheets, or intermediate tools are involved?
- Process quality assessment: are the business processes that generate the underlying data operating reliably and consistently?
- Reconciliation testing: can you reconcile the key numbers across systems, functions, and reporting periods? Where can you not?
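The reconciliation-testing step can be sketched mechanically. Assuming two hypothetical month-end extracts - the system names, metrics, figures, and 1% tolerance below are invented for illustration - the test is simply: which metrics break?

```python
# Hypothetical month-end figures pulled from two systems
finance_figures = {"revenue": 1_204_500, "debtor_days": 47, "headcount": 212}
operations_figures = {"revenue": 1_179_300, "debtor_days": 47, "headcount": 209}

TOLERANCE = 0.01  # flag relative differences greater than 1%

def reconcile(a, b, tolerance=TOLERANCE):
    """Return the metrics that do not reconcile within tolerance."""
    breaks = {}
    for metric in a.keys() & b.keys():
        base = max(abs(a[metric]), abs(b[metric]), 1)
        if abs(a[metric] - b[metric]) / base > tolerance:
            breaks[metric] = (a[metric], b[metric])
    return breaks

mismatches = reconcile(finance_figures, operations_figures)
# revenue and headcount diverge beyond tolerance; debtor_days agrees
```

The output of a test like this is a list of breaks to investigate - each one points back at a definition, system, or process problem rather than at the reporting layer.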
The diagnostic is not a technology exercise. It is a business and process exercise that identifies whether the problem is in definitions, systems, processes, or governance - and which of those is the primary driver.
The path to a single trusted version of the truth
There is no universal sequence, but the most reliable approach follows this logic:
Fix the definitions first
No amount of data infrastructure investment produces a trusted single version of the truth if the organisation cannot agree what it is measuring. This is a business problem, not a technology problem. It requires leadership alignment, not a new BI platform.
Then fix the process failures that corrupt source data
If the problem is that source data is unreliable because of process failures - incomplete entries, inconsistent recording, timing differences - fixing the reporting layer does not fix the problem. It just makes the inaccuracy more visible and better formatted.
Then rationalise source systems where practical
In businesses with multiple systems holding the same data, rationalisation - consolidating to a single authoritative source - dramatically reduces the governance complexity. This is often part of an ERP or platform project, but it can also be done incrementally through better integration and master data management.
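One incremental form of that governance is a simple system-of-record registry: for each data domain, name the system whose value wins when the copies disagree. The domains, system names, and values below are invented for the sketch:

```python
# Hypothetical registry: which system is authoritative for each data domain
SYSTEM_OF_RECORD = {
    "customer_master": "crm",
    "invoicing": "erp",
    "project_time": "psa",
}

def authoritative_value(domain, values_by_system):
    """Resolve conflicting copies by deferring to the registered source system."""
    source = SYSTEM_OF_RECORD[domain]
    if source not in values_by_system:
        raise KeyError(f"system of record '{source}' supplied no value for {domain}")
    return values_by_system[source]

# The same customer is held differently in two systems; the CRM wins
name = authoritative_value(
    "customer_master", {"crm": "Acme Ltd", "erp": "ACME LIMITED"}
)
```

The registry itself is trivial; the hard work is the leadership agreement it encodes. But once it exists, "which system won" stops being a per-report argument.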
Then automate the transformation
Removing manual transformation steps - replacing extract-manipulate-load spreadsheet workflows with automated data pipelines - reduces both error rates and the time cost of monthly close and reporting cycles. This is where BI platform investment and data engineering typically sit.
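A minimal sketch of what replacing one manual spreadsheet step looks like, assuming a hypothetical CSV extract - the column names, sample rows, and validation rule are illustrative only. The point is that an automated step rejects bad rows visibly, where a spreadsheet paste would silently absorb them:

```python
import csv
import io

# Stand-in for a raw system extract that would previously be pasted into a spreadsheet
RAW_EXTRACT = """invoice_id,customer,amount
INV-001,Acme Ltd,1200.50
INV-002,Beta plc,
INV-003,Acme Ltd,880.00
"""

def load_and_validate(raw):
    """Parse the extract, rejecting rows that would corrupt downstream totals."""
    rows, rejected = [], []
    for row in csv.DictReader(io.StringIO(raw)):
        try:
            row["amount"] = float(row["amount"])
            rows.append(row)
        except (TypeError, ValueError):
            rejected.append(row["invoice_id"])  # surfaced, not silently zeroed
    return rows, rejected

rows, rejected = load_and_validate(RAW_EXTRACT)
total = sum(r["amount"] for r in rows)  # INV-002's blank amount is flagged, not summed as 0
```

In a real pipeline the rejected list would feed an exception report back to the process owner - which is also where the accountability described in the next step attaches.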
Then assign accountability
A data quality owner - whether that is a Head of Data, a Finance Systems Manager, or a defined role within the finance or operations function - is the mechanism that sustains improvement over time. Without it, the investment in definitions, process, and technology degrades back to the previous state within 12 to 24 months.
What does not fix it
The most common mistake is investing in new reporting tools before fixing the underlying problems. A new dashboard built on the same inconsistent data is a faster way to produce the same unreliable numbers. It does not restore trust - it simply delivers numbers the board does not trust more quickly.
The second most common mistake is treating MI reliability as a technology project and assigning it to IT. The root causes are almost always in business process, data governance, and metric definition - none of which are owned by IT. Technology is part of the solution, but it cannot be the owner of it.
Does your board trust the numbers?
Assured Velocity provides MI reliability assessment and data strategy for mid-market organisations. Start with a 30-minute scoping call to identify where the problem actually is.