In the modern attention economy, consultation and engagement professionals have never had to work so hard to attract an audience.
“Statistically insignificant” is an easy challenge that tends to focus on raw numbers instead of sound representation. Conversely, the number of “events held”, “comments left” or “leaflets printed” can easily sound impressive to all parties.
Thankfully, historical data suggests that representation is not in bad shape – while engagement levels among young people are generally poor, response rates across other minority segments (such as people with a disability) are often close to the UK national average.
In order to understand what success looks like, we must also understand what normal looks like. Generally speaking, consultations rarely attract more than 1% of the local population affected by the proposals. In fact, the average number of responses to any consultation in the UK is approximately 1,500.
It follows that criticising a consultation based on its response rate alone is fundamentally flawed. For example, ten responses might represent 100% of the people affected by an issue – such as residents of a single street. It’s better to consider a range of factors, including the quality of responses (such as whether or not they were informed).
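To illustrate why raw counts mislead, here is a minimal sketch (all figures are hypothetical, chosen only to mirror the street-level example above) comparing the same ten responses against the whole local population and against the group actually affected:

```python
# Hypothetical figures: a street-level consultation in an area of 200,000 people
total_population = 200_000
affected_population = 10   # residents of the affected street
responses = 10

rate_vs_population = responses / total_population   # looks "statistically insignificant"
rate_vs_affected = responses / affected_population   # everyone affected actually replied

print(f"Response rate vs whole population: {rate_vs_population:.3%}")   # 0.005%
print(f"Response rate vs affected group:   {rate_vs_affected:.0%}")     # 100%
```

The same ten responses look negligible or exemplary depending entirely on the denominator chosen.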
Dig a little deeper and you discover that the issue under consultation is also a factor. Some issues are highly salient whereas others fail to get any traction. Moreover, some issues attract more responses from certain groups – such as the predominance of women responding to a consultation concerning IVF cycles.
Hence, if you want to measure success, there are many factors to consider in relation to what normal looks like for any given issue, population and consulting organisation (a rough sketch of how a few of these might be calculated follows the list). For example:-
- Amount of conflict and consensus
- Process complaint / dissatisfaction levels
- The number of requests for extra information
- Ratio of contributors to lurkers
- Amount of community control
- Political equity
- Information provision and transparency
- Level of confidence in results (related to safeguards)
- Costs per interaction and costs avoided (including carbon)
- Perception of the design
- Repeat visits / repeat engagement
- Who was NOT involved and why
- Proportion of identified stakeholders reached
- Spillover on other channels (e.g. print)
- The range of engagement methods (e.g. online / offline / public meeting)
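Several of these measures reduce to simple ratios once the underlying data is captured. The sketch below is a rough, hypothetical illustration – the field names and figures are invented for the purpose of the example – of how a contributor-to-lurker ratio, stakeholder reach and cost per interaction might be derived:

```python
from dataclasses import dataclass

@dataclass
class ConsultationStats:
    page_visitors: int            # people who viewed the consultation
    contributors: int             # people who left a response or comment
    stakeholders_identified: int  # stakeholders mapped at the outset
    stakeholders_reached: int     # stakeholders who engaged in some way
    total_cost: float             # delivery cost in GBP

    def contributor_to_lurker_ratio(self) -> float:
        lurkers = max(self.page_visitors - self.contributors, 1)
        return self.contributors / lurkers

    def stakeholder_reach(self) -> float:
        return self.stakeholders_reached / self.stakeholders_identified

    def cost_per_interaction(self) -> float:
        return self.total_cost / max(self.contributors, 1)

# Hypothetical example figures
stats = ConsultationStats(page_visitors=4_000, contributors=300,
                          stakeholders_identified=120, stakeholders_reached=90,
                          total_cost=15_000.0)
print(f"Contributors per lurker: {stats.contributor_to_lurker_ratio():.2f}")
print(f"Stakeholder reach:       {stats.stakeholder_reach():.0%}")
print(f"Cost per interaction:    £{stats.cost_per_interaction():.2f}")
```

The point is not the specific numbers but that, once these fields are routinely captured, the benchmarking described above becomes a by-product of the exercise rather than an afterthought.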
And at the other end of the spectrum, there are additional, more subjective criteria which can be applied to understand whether consultation and engagement was meaningful. For example:-
- The degree of change as a result of the exercise
- The timeliness of the process
- If the exercise was genuine based on the narrative and choices available
- If the results were accurate and accurately presented
- If there was sufficient information presented to help participants make informed views
- If decision makers absorbed the output of the consultation with due care and attention
- If there were any errors, such as in the provision of information supporting arguments
- If the feedback loop was closed
Of course, benchmarking against all of this is rarely the modus operandi of our public institutions and is more frequently a case for the courts. This is partly because asking questions about the quality of a consultation or engagement exercise, as part of the exercise itself, is seen as a recipe for disaster.
But it’s important that lessons are learned so that incremental improvements can result. Our view is that this is where technology can help – many of the metrics and measures are already collected, and the insights are ripe for machine analysis. Watch this space!