Why Your Engagement Evaluation Is Post-Rationalisation
If evaluation criteria are defined after engagement ends, you are describing activity — not testing success.
If you designed your evaluation framework after your engagement ran, you didn’t evaluate your engagement. You described it.
A common end-of-project conversation.
The engagement has concluded. The project manager asks the team to pull together an evaluation report for the funder. Someone opens a blank document and starts listing what happened: sessions held, participants reached, survey responses received, key themes identified.
The report is completed. It shows strong participation numbers, diverse representation, and clear thematic outputs. The engagement is assessed as successful.
Nobody asks whether the engagement achieved what it was designed to achieve, because nobody defined what it was designed to achieve before it started. The evaluation describes the process. It does not assess it.
Evaluation is one of the most consistently mishandled steps in the community engagement sequence — not because practitioners don’t value it, but because it is almost universally treated as something that happens after an engagement concludes rather than something that must be designed before it begins.
The result is a widespread practice of post-rationalisation dressed as evaluation. Teams document what happened, find evidence that things went reasonably well, and report against metrics that were selected because the process performed well against them. This is not without value. But it is not evaluation.
Evaluation designed after the fact measures what happened. Evaluation designed before tells you whether you succeeded.
The difference between description and evaluation
Description answers: what did we do? Evaluation answers: did it work?
Both are useful. But they are only the same thing if you defined ‘work’ before you started. If you define it after, you are not measuring whether you succeeded — you are finding the definition of success that best fits what you achieved. That is the structural logic of post-rationalisation, regardless of the intent behind it.
Real evaluation requires that you state, before implementation begins, what you are trying to achieve and how you will know whether you achieved it. This sounds straightforward. In practice it requires answering some uncomfortable questions: What does meaningful participation look like for this project? Which stakeholder groups must we hear from for the engagement to be valid? How will we know whether community input actually shaped the decision?
Description vs Evaluation
Description: what happened during the engagement process.
Evaluation: whether the engagement achieved the outcomes it was intended to achieve.
Process measures and outcome measures
A complete evaluation framework includes both process measures and outcome measures. Most engagement evaluations only include the former.
Process Measures
Did we reach the stakeholder groups identified in mapping?
Did participation methods address identified barriers?
Did participants feel respected and heard?
Did the process meet the commitments of the chosen engagement level?
Outcome Measures
Did community input change options or recommendations?
Were stakeholder perspectives reflected in the final outcome?
Were under-represented groups included in decision inputs?
Was the reasoning for decisions communicated clearly?
Without outcome measures, it is possible — and common — to report a highly successful engagement process that produced no meaningful community influence on the decision it was designed to inform.
Common failure pattern: The evaluation report shows 92% of participants felt their views were listened to. It does not show whether those views influenced the decision. The engagement was a positive experience. Whether it served its purpose is unknown, because no outcome measure was defined before it began.
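For teams that keep their framework in a simple script or register rather than a slide deck, the distinction can be made concrete. Below is a minimal sketch in Python; the field names, project name, and example measures are illustrative assumptions, not a prescribed template.

```python
# A minimal sketch only: one way to lock process and outcome measures in
# writing before implementation. Field names, the project name, and the
# example measures are illustrative assumptions, not a prescribed template.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Measure:
    question: str   # what the measure asks
    evidence: str   # how it will be evidenced
    kind: str       # "process" or "outcome"


@dataclass(frozen=True)
class EvaluationFramework:
    project: str
    measures: tuple[Measure, ...] = field(default_factory=tuple)

    def outcome_measures(self) -> list[Measure]:
        return [m for m in self.measures if m.kind == "outcome"]


framework = EvaluationFramework(
    project="Hypothetical foreshore masterplan engagement",
    measures=(
        Measure("Did we reach the stakeholder groups identified in mapping?",
                "participation tracking by group", "process"),
        Measure("Did community input change options or recommendations?",
                "decision log linking input themes to option changes", "outcome"),
    ),
)

# A framework with no outcome measures can only describe, not evaluate.
assert framework.outcome_measures(), "Define at least one outcome measure before launch."
```

The tooling is beside the point. What matters is that the outcome measures exist, in writing, before the first session runs.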
Track engagement performance — not just participation numbers.
Get the one-page field reference and use it in your next engagement project.
The metrics trap
The dominance of input metrics in engagement evaluation — sessions, participants, responses — is understandable. They are easy to collect, easy to report, and easy to compare across projects. They also carry an implicit assumption that more is better: more participants, more sessions, more responses equals more successful engagement.
This assumption is not always wrong, but it often is. A project that reached 500 participants — all of whom were already engaged, already informed, and already broadly supportive — may have achieved far less than a project that reached 80 participants from communities who had never engaged with the organisation before and held views significantly different from those already captured.
Input metrics tell you the scale of the engagement process. They do not tell you whether the engagement improved the decision.
More participants doesn’t mean better engagement. It means bigger engagement. They’re not the same thing.
What evaluation design before implementation looks like
Evaluation questions to answer before engagement begins
Who must we hear from for this engagement to be valid? Name specific stakeholder groups — not broad categories. If you cannot reach them, that is a finding, not a footnote.
What does meaningful participation look like for each group? Not attendance — genuine contribution. How will you know the difference?
How will we know whether input influenced the decision? This requires a documented connection between what was heard and what was decided. It requires Step 9 of the sequence — real-time tracking — to be in place.
What are our process quality commitments, and how will we check them? Post-session evaluations, facilitator debrief notes, participation tracking by group.
What will we do differently if mid-process evaluation shows we are falling short? Build in a review point during the engagement — not just at the end — so that course corrections are possible.
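Where a team wants a hard stop rather than a good intention, the questions above can be written down as a pre-launch checklist and checked mechanically. A minimal sketch follows; the keys and wording are assumptions for illustration, not a fixed template.

```python
# A minimal sketch only: the five questions above as a pre-launch checklist.
# Keys and wording are assumptions for illustration, not a fixed template.
PRE_IMPLEMENTATION_QUESTIONS = {
    "who_must_we_hear_from": None,                 # named stakeholder groups
    "what_meaningful_participation_looks_like": None,
    "how_influence_on_the_decision_is_evidenced": None,
    "process_quality_commitments_and_checks": None,
    "mid_process_review_point_and_triggers": None,
}


def unanswered(questions: dict) -> list[str]:
    """Return the questions that still have no answer; an empty list means ready."""
    return [key for key, answer in questions.items() if not answer]


missing = unanswered(PRE_IMPLEMENTATION_QUESTIONS)
if missing:
    print("Not ready to launch. Still unanswered:", ", ".join(missing))
```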
Sequence dependency
Evaluation design (Step 8) cannot be completed without outputs from earlier steps. It depends directly on defined objectives, an honestly chosen engagement level, and completed stakeholder mapping:
Objectives (Step 3): define what success means.
Engagement level (Step 4): defines the commitment standard.
Stakeholder mapping (Step 5): defines who must be reached.
Choosing evaluation criteria before these are clear leads to post-rationalisation. The sequence exists to prevent that.
Can AI help with this process, and how?
Where AI helps: Draft evaluation frameworks early, propose process/outcome metrics, and flag missing evidence pathways.
What stays human: Define what success means for this engagement and decide which findings require course correction.
Governance check: Lock evaluation criteria pre-implementation and track any metric changes with justification and approval.
Bottom line: AI can improve evaluation design rigour, but credibility comes from human-defined standards and transparent reporting.
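One way to honour that governance check is to treat every post-lock change to a metric as something that must be recorded, justified, and approved. A minimal sketch follows; the structure, example entry, names, and date are illustrative assumptions, and a shared register or spreadsheet serves the same purpose.

```python
# A minimal sketch only: a record for any change to locked evaluation criteria.
# The structure, example metric, names, and date are illustrative assumptions;
# a shared register or spreadsheet serves the same governance purpose.
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class MetricChange:
    metric: str
    change: str
    justification: str
    approved_by: str
    approved_on: date


change_log: list[MetricChange] = [
    MetricChange(
        metric="Participation by under-represented groups",
        change="Target reduced from four named groups to three",
        justification="One group declined to participate; recorded as a finding",
        approved_by="Engagement lead",
        approved_on=date(2025, 3, 14),
    ),
]

# A change without a justification or an approver should be blocked, not logged.
assert all(entry.justification and entry.approved_by for entry in change_log)
```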
This post is part of a series on the sequence that drives effective community engagement. Read the full framework in our pillar post: Order of Operations — Why community engagement fails before the first session runs.
Next: The Thread You Can't Reconstruct
Evaluation Schedule and Governance Checkpoints
Define the evaluation schedule before implementation and tie checkpoints to delivery phases.
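A schedule like that can be as simple as a list of checkpoints attached to delivery phases. A minimal sketch follows; the phase names and checks are assumptions to illustrate the shape, not a required schedule.

```python
# A minimal sketch only: evaluation checkpoints tied to delivery phases and
# written down before implementation. Phase names and checks are assumptions
# to illustrate the shape, not a required schedule.
EVALUATION_CHECKPOINTS = [
    {"phase": "Design sign-off",
     "check": "Evaluation criteria locked and approved"},
    {"phase": "Mid-engagement review",
     "check": "Participation by group compared against stakeholder mapping; course corrections agreed"},
    {"phase": "Pre-decision",
     "check": "Links between community input and options documented (Step 9 tracking)"},
    {"phase": "Post-decision",
     "check": "Outcome measures assessed against the criteria defined before launch"},
]

for checkpoint in EVALUATION_CHECKPOINTS:
    print(f"{checkpoint['phase']}: {checkpoint['check']}")
```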
