The AI agency market has expanded rapidly. That expansion has not been even. Most growth has happened at the enterprise end - large budgets, long procurement cycles, and the kind of logos that agencies put on their websites. Mid-market businesses occupy a different world, and an agency that cannot navigate that world will not deliver.
A manufacturer with £30m revenue, a logistics business at £80m, a professional services firm at £50m - these are not small enterprises. They have real complexity, real integration challenges, and real operational risk. But they do not have a Chief AI Officer, a dedicated transformation budget, or a programme management office standing behind the engagement. The agency has to work differently in that context. Not every agency can.
The problem is that no agency will tell you this. They will show you case studies, reference clients, and a technology stack. They will use the word "pragmatic" a great deal. What they will not tell you is whether they have ever had to get a sceptical operations director on board, navigate a finance team with no appetite for speculative investment, or integrate AI outputs with a legacy system that nobody has touched in seven years.
Before you start: The most important thing to establish is not whether an agency has worked with mid-market businesses. Almost all of them will claim they have. The question is whether they understand what makes mid-market different - and whether their delivery model is actually designed for it.
## Why mid-market is a different problem
There is a structural assumption embedded in most AI agency delivery models: that the client organisation has someone, or several people, whose job it is to manage the engagement. In enterprise, this is usually true. In mid-market, it almost never is.
The person sponsoring an AI initiative in a £40m business is also running something else - the operations function, the finance team, or in many cases, the whole company. They are not in a position to attend weekly steering groups, review detailed status reports, or escalate blockers through a change management hierarchy. The agency has to compensate for that. If it cannot, the engagement will drift.
| Dimension | Enterprise client | Mid-market client |
|---|---|---|
| Sponsor bandwidth | Focused on this engagement as primary responsibility | Sponsor owns 3-5 other functions simultaneously |
| Data readiness | Data warehouse, data governance in place | Fragmented across legacy systems, spreadsheets, and tribal knowledge |
| Change capacity | Internal change team or external support | Change lands on the same people the AI is meant to help |
| Integration context | Vendor-supported integration pathways | Heterogeneous legacy stack, limited documentation, workarounds in production |
| Risk appetite | Can absorb a failed workstream, hedged across parallel initiatives | One failed initiative uses up political capital for 18 months |
| Decision speed | Monthly SteerCo, documented approval gates | Fast when committed, stalled when uncertain - no formal mechanism |
An agency that has only worked in enterprise settings will design its engagement model around the enterprise column of every dimension above. Steering committees. Status packs. Escalation paths. In a mid-market business, that apparatus becomes the engagement - and nothing actually gets built.
## The five signals of genuine mid-market experience
These are the things that experienced mid-market agencies know and articulate without being asked. If you have to prompt for them, the experience probably is not there.
## The assessment framework
Use this framework across three dimensions: commercial orientation, delivery model, and organisational experience. Each dimension has indicators that distinguish genuine mid-market experience from adapted enterprise practice.
## Red flags and green flags
In conversation with an agency, certain responses tell you more than any case study. These are the answers that most reliably separate genuine mid-market experience from adapted enterprise practice.
| You ask... | Red flag | Green flag |
|---|---|---|
| "How do you handle a sponsor who can only give you a few hours a month?" | They describe a lighter version of their standard steering group model. | They explain how their delivery model is designed to make progress between check-ins, not depend on them. |
| "What happens if our data is fragmented and poorly documented?" | "We have a data preparation phase" - but no specifics on scope or duration. | They have a view on which AI use cases survive poor data environments and which do not, and scope accordingly. |
| "Tell me about a time an engagement was harder than expected." | They describe a technical challenge - a difficult integration, an unexpected API constraint. | They describe a human challenge - resistance from middle management, a sponsor who went cold, a board that moved the goalposts. |
| "How do you measure success?" | A dashboard of activity metrics - prompts processed, hours logged, features deployed. | They ask what operational outcome matters to the business and commit to measuring against that, even if it takes time to show up. |
| "How do you make sure staff actually use what you build?" | "We include training in the engagement" - usually a few sessions at the end of the build phase. | An explicit adoption model that starts at scoping, embedding usage into existing workflows rather than adding new ones. |
| "What does your engagement look like after go-live?" | They offer a support contract or a retainer with vague scope. | A defined post-go-live period with specific success criteria and a clear statement of when the engagement has achieved what it set out to do. |
## Evaluating their case studies
Case studies are the most commonly gamed part of an agency's pitch. Here is how to read them more effectively.
### Check the revenue band, not the name
A case study featuring a £200m business tells you almost nothing about how the agency will perform with a £40m business. Ask directly: what was the annual revenue of this client at the time of the engagement? If they hesitate or the answer is vague, the case study is probably from a business that does not resemble yours.
### Look for the constraint narrative
Every genuine mid-market engagement involves constraint. Data gaps. Limited internal resource. Integration complexity with systems that have no modern API. If the case study reads like a clean story with a capable client team and modern infrastructure, it is either an enterprise story or it has been significantly sanitised.
### Ask who owned the engagement on the client side
The answer tells you everything about the engagement context. "The Chief Digital Officer and their transformation team" means enterprise. "The FD, who was also running the finance function" means mid-market. The second answer suggests the agency has genuinely navigated a resource-constrained sponsor environment.
### Ask what changed after the project ended
The most revealing question about any AI engagement is whether the outputs are still in use. Many AI projects deliver a proof of concept that is quietly retired after the agency leaves. If the agency cannot tell you confidently that the capability is still running and still valued, the engagement did not achieve what it claimed.
The case study question most agencies cannot answer well: "Can you connect me with a mid-market client in a similar sector, and would they be willing to tell me what the engagement was actually like - including what did not go as expected?" Agencies with genuine mid-market experience have clients who will give this reference without hesitation.
## Questions to ask before you engage
- How do you price engagements, and what happens if the scope changes once you are inside?
- What is the minimum engagement you will take, and why?
- How do you handle a situation where the data does not support the original use case?
- What does a failed engagement look like for you, and has that happened?
- How often do you need meaningful input from the sponsor, and what happens when that input does not come?
- Who in your team works directly with the client, and how senior are they?
- What does your handover model look like, and how do you define when an engagement is complete?
- What is your approach when a key stakeholder becomes resistant mid-engagement?
- What is the revenue range of your typical mid-market client?
- How many of your current reference clients would describe themselves as mid-market rather than enterprise?
- Have you delivered AI capability into a business with significant legacy infrastructure? What did that require?
- What sectors have you worked in at the £20m to £100m revenue band?
- How do you measure adoption, and at what point after go-live?
- Can you give us an example of a build that did not get adopted, and what you learned from it?
- What support is included in the engagement, and what does ongoing support look like?
## The mid-market AI readiness gap
One additional dimension worth assessing is whether the agency understands the difference between an AI-ready mid-market business and one that requires foundational work before AI can deliver anything meaningful. Many agencies will not volunteer that distinction, and getting the assessment wrong is costly for the client, not the agency.
An agency that proposes a build before assessing your stage is either not experienced enough to know, or experienced enough to know and choosing not to say so. The right agency will tell you honestly where you are - and, if necessary, will tell you that they are not the right first step.
## Assess the team, not the pitch
The people who pitch mid-market AI engagements are rarely the people who deliver them. This is not unique to AI agencies - it is a structural problem in professional services. But it is more acute in a mid-market context, where the engagement typically runs with a small team and where a single inexperienced delivery lead can cause significant damage before anyone notices.
Before you commit, ask to meet the delivery team. Ask specifically who will be present on-site or in the operational environment during the first month of the engagement. Ask about their direct experience with businesses of your scale.
A reliable test: ask the delivery lead to describe a mid-market engagement where they personally had to navigate a significant internal resistance problem. Not the firm - them, personally. The answer will tell you whether they have genuinely done this work or whether their experience has been managed from above by someone more senior.
## Agency assessment scorecard
Rate each dimension 0 to 5 based on what you have observed in your conversations, proposal review, and reference calls.
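If you want the weighting to be explicit rather than buried in a spreadsheet, a short script is enough. The dimension names below are drawn from the sections of this article; the weights are illustrative assumptions, not a recommendation - adjust them to reflect what matters most in your context.

```python
# Minimal scorecard sketch. Dimension names follow this article's sections;
# the weights are illustrative assumptions, not a prescribed framework.
DIMENSIONS = {
    "Commercial orientation": 1.0,
    "Delivery model": 1.0,
    "Organisational experience": 1.0,
    "Case study relevance": 1.0,
    "Delivery team experience": 1.5,  # assumed weighting: hardest gap to recover from
    "Independence": 1.0,
}

def score(ratings: dict[str, float]) -> float:
    """Turn 0-5 ratings per dimension into a weighted percentage score."""
    for name, rating in ratings.items():
        if name not in DIMENSIONS:
            raise ValueError(f"Unknown dimension: {name}")
        if not 0 <= rating <= 5:
            raise ValueError(f"Rating out of range for {name}: {rating}")
    total = sum(DIMENSIONS[name] * rating for name, rating in ratings.items())
    maximum = 5 * sum(DIMENSIONS.values())
    return round(100 * total / maximum, 1)
```

A uniform rating of 3 across every dimension yields 60%, which is a useful calibration point: an agency that is merely adequate everywhere should not read as a strong candidate.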
## The independence problem
There is one further dimension that rarely appears in agency assessments but matters enormously for mid-market businesses: whether the agency you are speaking to has a financial interest in the recommendation it gives you.
Many AI agencies are, in practice, implementation partners for one or more AI platforms. They earn referral fees, implementation margins, or ongoing platform revenue from the technology they recommend. An enterprise business has procurement teams and legal review that help surface these conflicts. A mid-market business typically relies on the agency's judgement because it does not have the internal expertise to second-guess it.
Before you engage, ask directly: do you have any commercial relationship with the technology vendors you are likely to recommend? Do you receive referral fees, implementation margins, or ongoing platform revenue from those vendors?
The structural test: ask the agency what they would recommend if the most appropriate answer for your business was not to deploy any new AI capability at all. The willingness to give that answer, and the quality of the reasoning behind it, tells you more about their independence than any disclosure statement.
## What a genuinely suitable agency sounds like
## Conclusion
Assessing an AI agency's mid-market experience is not a due diligence exercise - it is a risk management exercise. The cost of getting it wrong is not just a failed project. It is eighteen months of political capital spent on something that did not deliver, a board that is now sceptical of any further AI investment, and operational staff who have been asked to change how they work twice with nothing to show for it.
The framework in this article will not eliminate that risk. No framework will. But it will help you distinguish between agencies that have genuinely navigated mid-market complexity and those that are offering you an enterprise model in smaller clothing.
Ask the uncomfortable questions. Request references from comparable businesses. Meet the delivery team before you sign anything. And pay attention when an agency is honest about the conditions that make an engagement viable - that honesty is itself a signal of the experience you are looking for.
## Need an independent view before you commit?
Assured Velocity is independent and vendor-neutral. We hold no implementation revenue and carry no platform partnerships. If you are evaluating an AI agency or proposal, a fixed-scope advisory review gives you a board-ready view before capital is committed.