Most programs end the same way. Go-live happens. The team disperses. Everyone moves on to the next priority. Six months later, nobody can quite remember why the original timeline slipped, whether the promised benefits arrived, or what they would do differently next time.

The post implementation review exists to prevent exactly that. Not as a formality, not as a blame exercise, but as the structured mechanism by which an organization becomes genuinely better at delivering change over time.

In practice, it is one of the most consistently skipped activities in program management - and one of the most consequential things to skip.

What a post implementation review is (and isn't)

A post implementation review (PIR) is a structured assessment conducted after a program or project has gone live, designed to evaluate whether it delivered what it promised and to capture the learning that should inform future work.

It is not a retrospective - that is a continuous improvement activity within an agile delivery cycle. It is not a benefits realization review, though the two are closely related. And it is not a blame session, though organizations with poor psychological safety often experience it as one.

The purpose is threefold: accountability for outcomes against the original business case; organizational learning from what actually happened; and identification of residual risks or actions that did not close at go-live.

A PIR asks whether the organization got what it paid for - and whether it is genuinely more capable as a result of the program. Neither question is comfortable to answer honestly. That discomfort is exactly the point.

Why most organizations never run one

The PIR is structurally at risk from the moment a program ends. Several forces conspire against it.

Project fatigue. By go-live, teams are exhausted. The instinct is to close down, not to reopen a conversation about what went wrong. The program director wants to move on. The sponsor wants to declare victory. Nobody is pushing to run a review that might complicate that narrative.

No formal ownership. The governance structure that existed during the program - steering committee, workstream leads, PMO - typically dissolves at go-live. The PIR falls into a gap. Unless someone explicitly owns it in the program closure plan, it simply does not happen.

Fear of accountability. If the program overspent, missed deadlines, or failed to deliver the projected savings, a well-run PIR will surface that clearly. For organizations where admitting this has career consequences, the PIR becomes a document that gets quietly deprioritized.

Budget exhaustion. Program budgets rarely include a meaningful allocation for post-go-live review. When the budget is gone, any activity that requires time from senior people is difficult to justify.

The cumulative result: the same mistakes appear in the next program's risk register, identified anew, re-mitigated in the same incomplete way, and eventually materializing in the same form.

When to run it

Timing is one of the most underestimated variables in PIR design. Too early, and the review captures immediate operational turbulence rather than structural outcomes. Too late, and the people with the relevant knowledge have moved on and the candid recollections have faded.

For most implementations, the practical window is 60 to 90 days after go-live. This allows enough time for the new operating model to stabilize, for early benefits signals to become visible, and for the team to have emotional distance from the delivery phase - while keeping key people accessible and memory intact.

For major ERP implementations, significant technology transformations, or business-wide operating model changes, extend this to 90 to 120 days. These programs have longer stabilization tails and the benefits case typically runs over a 12-month horizon, so the PIR should be designed as a staged activity - a 90-day initial review followed by a 12-month benefits checkpoint.

The date should be agreed and calendared as part of program closure, not arranged reactively after the fact.
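As a rough illustration of how a closure plan might calendar these checkpoints, here is a minimal sketch using the windows suggested above. The function name and return shape are purely illustrative, not part of any standard methodology.

```python
from datetime import date, timedelta

def pir_checkpoints(go_live: date, major_program: bool = False) -> dict:
    """Suggested PIR dates relative to go-live.

    Standard programs: initial review window of 60-90 days.
    Major programs (ERP, business-wide operating model change):
    90-120 days, plus a 12-month benefits checkpoint.
    """
    window_start, window_end = (90, 120) if major_program else (60, 90)
    checkpoints = {
        "review_window_opens": go_live + timedelta(days=window_start),
        "review_window_closes": go_live + timedelta(days=window_end),
    }
    if major_program:
        # Staged activity: initial review plus a later benefits checkpoint
        checkpoints["benefits_checkpoint"] = go_live + timedelta(days=365)
    return checkpoints

# Example: a major ERP go-live on 1 March 2025
print(pir_checkpoints(date(2025, 3, 1), major_program=True))
```

The point of computing the dates at closure, rather than "when things settle down", is that the review survives the dissolution of the program structure.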

What a good PIR covers

A PIR that only asks "did we deliver on time and on budget?" is a performance review, not a learning exercise. A useful PIR covers five areas.

Delivery vs. plan. Scope, cost, and timeline - with honest variance analysis. Not defensive narrative, but factual comparison between what was committed and what was delivered, with clear explanation of material differences. This is uncomfortable but necessary. Without it, the business case process for future programs is built on optimism rather than evidence.

Benefits realization status. Are the projected benefits materializing? On what evidence? Which benefits are on track, which are delayed, and which are now unlikely to arrive? This question should be answerable with data, not opinion. If the metrics were not instrumented during the program, that itself is a finding.

What we would do differently. Process, governance, resourcing, vendor management, stakeholder engagement. This is where the most durable organizational learning lives - not in the risk log, but in the honest answer to "if we did this again, what would we change?" Capture this at the level of specific decisions, not general principles.

Capabilities built or not built. A program should leave the organization genuinely more capable - not just with a new system or process, but with people who understand it, can maintain it, and can develop it further. Was that achieved? If not, what is the risk and who owns the gap?

Residual risk and open actions. Every program closes with a tail of unresolved items - defects not yet fixed, integrations not yet tested, training not yet completed. The PIR should create a formal register of these items with named owners and dates, and confirm that they are in someone's BAU responsibility, not floating in a decommissioned program structure.
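The register described above need not be elaborate - a structured list with a named owner, a date, and a BAU home per item is enough. This sketch shows one possible shape; the field names and example entries are illustrative, not a standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ResidualItem:
    """One open item carried past go-live."""
    description: str
    owner: str        # a named individual, not a team
    due: date
    bau_home: str     # the BAU function that now holds the item
    status: str = "open"

# Hypothetical entries for illustration
register = [
    ResidualItem("Invoice-matching defect DEF-214 unresolved",
                 "J. Smith", date(2025, 7, 31), "Accounts Payable"),
    ResidualItem("Warehouse integration load test incomplete",
                 "R. Patel", date(2025, 8, 15), "IT Operations"),
]

# Items with no owner or no BAU home are exactly the gap the PIR
# exists to catch: risk floating in a decommissioned program structure.
orphaned = [i for i in register if not i.owner or not i.bau_home]
```

The useful discipline is the two mandatory fields: an item without an owner and a BAU home is not closed, whatever its status says.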

Who should be in the room

Who is in the room largely determines what findings a PIR is capable of producing.

The core participants should include the program sponsor, the program director or delivery lead, key business workstream owners, and a representative from finance or the CFO's office who can speak to the benefits case. The IT or technology lead should be present for any technology-heavy program.

What a PIR should not be is a closed session run entirely by the program team reviewing its own work. The program team has an inherent interest in defending the decisions it made. That is not a criticism - it is human nature. Independent facilitation, whether internal (a different function) or external, materially improves the quality of the findings.

Senior leadership attendance is not optional. A PIR where the sponsor delegates to a junior team produces findings that carry no weight and go nowhere. The accountability conversation requires the person who made the original commitments to be in the room.

What to do with the findings

The most common failure mode in post implementation reviews is not poor facilitation or shallow analysis. It is producing a findings document that gets filed and never acted on.

Findings without owners are observations. They do not improve anything. Every finding that requires action must have a named individual accountable for it, a clear description of the action, and a date by which it will be resolved. This is an action register, not a report.

The findings should feed directly into three places. First, the benefits realization tracking process - the PIR closes the loop on the original business case and updates the projected outcome with actual evidence. Second, the program governance framework for the next major initiative - specific changes to how the organization plans, governs, and delivers. Third, the business case assumptions used in future investment decisions - if the last program delivered 60% of its projected benefits, that calibration should inform what the next one promises the board.
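The calibration in the third point is simple arithmetic. The 60% figure is the article's own example; the function below is an illustrative sketch of applying an observed realization rate to a new projection, not a prescribed method.

```python
def calibrated_benefit(projected: float, realization_history: list[float]) -> float:
    """Discount a new business-case projection by the observed
    benefits-realization rate from past PIRs.

    realization_history: fraction of projected benefits each past
    program actually delivered (e.g. 0.6 = 60%).
    """
    if not realization_history:
        return projected  # no evidence yet; take the projection at face value
    rate = sum(realization_history) / len(realization_history)
    return projected * rate

# If the last program delivered 60% of its case, a new 2.0m projection
# is worth planning around 1.2m until evidence says otherwise.
print(calibrated_benefit(2_000_000, [0.6]))
```

Even this crude average is more honest than the default, which is to let every new business case assume 100% realization regardless of track record.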

Organizations that do this consistently build what is sometimes called program delivery maturity - not through formal methodology adoption, but through the accumulation of honest, institutionalized learning from each program they run. Over time, their estimates become more accurate, their risk registers more relevant, and their delivery outcomes more predictable.

The PIR is not the glamorous part of a transformation. It does not have a launch event or a go-live moment. But for organizations that take it seriously, it is the part that makes the next transformation meaningfully better than the last one.

Planning a major implementation - or reviewing one that has just landed?

Assured Velocity provides independent program assurance, PIR facilitation, and benefits tracking for mid-market organizations. Start with a scoping call.