Why Your Engagement Evaluation Is Probably Post-Rationalisation
Defining success after the process ends isn’t evaluation. Here’s what real evaluation design looks like — and why it has to happen first.
If you designed your evaluation framework after your engagement ran, you didn’t evaluate your engagement. You described it.
A common end-of-project conversation.
The engagement has concluded. The project manager asks the team to pull together an evaluation report for the funder. Someone opens a blank document and starts listing what happened: sessions held, participants, survey responses, key themes.
The report is completed. It shows strong participation numbers, diverse representation, and clear thematic outputs. The engagement is assessed as successful.
Nobody asks whether the engagement achieved what it was designed to achieve, because nobody defined what it was designed to achieve before it started. The evaluation describes the process. It does not assess it.
Evaluation is one of the most consistently mishandled steps in the community engagement sequence — not because practitioners don’t value it, but because it is almost universally treated as something that happens after an engagement concludes rather than something that must be designed before it begins.
The result is a widespread practice of post-rationalisation dressed as evaluation. Teams document what happened, find evidence that things went reasonably well, and report against metrics that were selected because the process performed well against them. This is not without value. But it is not evaluation.
Evaluation designed after the fact measures what happened. Evaluation designed beforehand tells you whether you succeeded.
The difference between description and evaluation
Description answers: what did we do? Evaluation answers: did it work?
Both are useful. But they are only the same thing if you defined ‘work’ before you started. If you define it after, you are not measuring whether you succeeded — you are finding the definition of success that best fits what you achieved. That is the structural logic of post-rationalisation, regardless of the intent behind it.
Real evaluation requires that you state, before implementation begins, what you are trying to achieve and how you will know whether you achieved it. This sounds straightforward. In practice it requires answering some uncomfortable questions: What does meaningful participation look like for this project? Which stakeholder groups must we hear from for the engagement to be valid? How will we know whether community input actually shaped the decision?
Description vs evaluation
Description: 247 participants across six sessions, representing 11 stakeholder groups.

Evaluation: We set a target of reaching the five priority stakeholder groups identified in our mapping. We reached four. The fifth — residents of the South Ward — was under-represented despite two targeted outreach efforts. We know why and have built a revised approach into the next round.
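The shift from description to evaluation can be made concrete in a few lines. The sketch below (group names and the threshold are illustrative, not from any real project) checks actual attendance against priority stakeholder groups defined at the planning stage — so a large headline total cannot hide a missing group:

```python
# Minimal sketch: checking reach against priority stakeholder groups
# defined BEFORE the engagement began. Group names and the minimum
# threshold are illustrative assumptions.

PRIORITY_GROUPS = {          # fixed at the planning stage, not after
    "South Ward residents",
    "Local business owners",
    "Youth (16-24)",
    "Seniors",
    "Renters",
}

def coverage_report(attendance: dict[str, int], minimum: int = 10) -> dict:
    """Compare who actually participated against the pre-defined targets.

    attendance maps stakeholder group -> participant count. A group counts
    as 'reached' only if it meets the minimum threshold, so a large overall
    total cannot mask an under-represented group.
    """
    reached = {g for g, n in attendance.items()
               if g in PRIORITY_GROUPS and n >= minimum}
    missed = PRIORITY_GROUPS - reached
    return {
        "reached": sorted(reached),
        "missed": sorted(missed),   # a finding, not a footnote
        "valid": not missed,
    }

report = coverage_report({
    "South Ward residents": 4,      # below threshold despite outreach
    "Local business owners": 31,
    "Youth (16-24)": 18,
    "Seniors": 22,
    "Renters": 15,
    "General public": 157,          # large, but not a priority group
})
# report["missed"] == ["South Ward residents"]; report["valid"] is False
```

The point of the sketch is the ordering: `PRIORITY_GROUPS` exists before the sessions run, so the report can state a gap rather than retrofit a definition of success around the 247 people who showed up.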
Process measures and outcome measures
A complete evaluation framework includes both process measures and outcome measures. Most engagement evaluations only include the former.
Process measures assess whether the engagement was conducted as intended: Did we reach the stakeholder groups we set out to reach? Were the engagement tools effective at overcoming the barriers we identified? Did participants feel heard and respected? Did we operate in accordance with our IAP2 commitments? These measures are important. They tell you whether the process was well-run.
Outcome measures assess whether the engagement achieved its purpose: Did community input influence the decision? Were the recommendations or preferences expressed by participants reflected in the outcome — and where they were not, was there a clear and communicated reason? Did the process build the understanding and trust that justified the investment? These measures are harder to define and harder to collect, which is precisely why they are so often absent.
Without outcome measures, it is possible — and common — to report a highly successful engagement process that produced no meaningful community influence on the decision it was designed to inform.
Common failure pattern: The evaluation report shows 92% of participants felt their views were listened to. It does not show whether those views influenced the decision. The engagement was a positive experience. Whether it served its purpose is unknown, because no outcome measure was defined before it began.
The metrics trap
The dominance of input metrics in engagement evaluation — sessions, participants, responses — is understandable. They are easy to collect, easy to report, and easy to compare across projects. They also carry an implicit assumption that more is better: more participants, more sessions, more responses equals more successful engagement.
This assumption is not always wrong. But it is often not right, either. A project that reached 500 participants — all of whom were already engaged, already informed, and already broadly supportive — may have achieved far less than a project that reached 80 participants from communities who had never engaged with the organisation before and held views significantly different from those already captured.
Input metrics tell you the scale of the process. They do not tell you the quality of the data, the breadth of the perspectives captured, or the degree to which underrepresented communities were genuinely included. Those things require outcome and quality measures defined in advance.
More participants doesn’t mean better engagement. It means bigger engagement. They’re not the same thing.
What evaluation design before implementation looks like
Designing your evaluation framework before implementation means answering five questions at the work-planning stage, before any sessions are scheduled.
Who must we hear from for this engagement to be valid? Name specific stakeholder groups — not broad categories. If you cannot reach them, that is a finding, not a footnote.
What does meaningful participation look like for each group? Not attendance — genuine contribution. How will you know the difference?
How will we know whether input influenced the decision? This requires a documented connection between what was heard and what was decided. It requires Step 9 of the sequence — real-time tracking — to be in place.
What are our process quality commitments, and how will we check them? Post-session evaluations, facilitator debrief notes, participation tracking by group.
What will we do differently if mid-process evaluation shows we are falling short? Build in a review point during the engagement — not just at the end — so that course corrections are possible.
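The five questions above amount to a document that exists before the first session. As a rough sketch, they could be captured as a simple data structure — the field names and example values here are hypothetical, not a CE Canvas format:

```python
# Illustrative sketch: an evaluation framework captured as data at the
# planning stage, so success criteria exist before any session runs.
# All field names and values are hypothetical examples.
from dataclasses import dataclass

@dataclass
class EvaluationPlan:
    priority_groups: list[str]              # 1. who must be heard for validity
    participation_criteria: dict[str, str]  # 2. what genuine contribution means
    influence_evidence: str                 # 3. how input-to-decision links are tracked
    process_checks: list[str]               # 4. process quality commitments
    midpoint_review: str                    # 5. trigger for mid-process correction

plan = EvaluationPlan(
    priority_groups=["South Ward residents", "Renters", "Youth (16-24)"],
    participation_criteria={
        "Youth (16-24)": "at least one substantive comment, not just attendance",
    },
    influence_evidence="decision log linking each recommendation to what was heard",
    process_checks=[
        "post-session surveys",
        "facilitator debrief notes",
        "participation tracking by group",
    ],
    midpoint_review="after session 3: if any priority group is absent, revise outreach",
)
```

Whether it lives in a dataclass, a spreadsheet, or a one-page template matters less than the timestamp: every field is filled in before implementation, which is what makes the end-of-project report an evaluation rather than a description.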
The planning sequence dependency
Evaluation design (Step 8) cannot be completed without the outputs of Steps 3, 4, and 5. Your objectives define what you are measuring. Your engagement level defines the standard against which you are measuring. Your stakeholder mapping defines who must be reached for the process to be valid. Evaluation design is the step that makes those earlier decisions accountable.
Companion Resource
Download Engagement Evaluation Planner
Get the one-page field reference and use it in your next engagement project.

Part of the CE Canvas series: Order of Operations
This post is part of a series on the sequence that drives effective community engagement. Read the full framework in our pillar post: Order of Operations — Why community engagement fails before the first session runs.
Next: The Step That Determines Whether Communities Trust You Next Time
Ready to Build Your Engagement Plan?
CE Canvas provides AI-guided templates and best practice frameworks to help you create comprehensive community engagement plans in minutes, not hours.
About CE Canvas Team
The CE Canvas team blends deep experience in community engagement with innovative product design to transform how organisations connect with their stakeholders.