Community engagement programs are widely adopted by Australian emergency management organisations as a way to help communities recognise hazards and risks and prepare for emergency events. However, evaluation of these programs remains a challenge. A study with 30 community engagement practitioners and managers from Australian emergency management organisations, councils and not-for-profit organisations examined how they use measurement and evaluation of community engagement for preparedness. The findings suggest that while community engagement teams understand the importance of measuring the effects of engagement and preparedness activities, most still do not link engagement activities to the higher-level engagement outcomes that influence communities.


Introduction

Helping people to recognise and prepare for natural hazards has become an imperative over the last decade. The states of Tasmania and Queensland were subject to unprecedented summer temperatures during Australia’s hottest summer on record (Bureau of Meteorology 2019). They also suffered damaging bushfires in ecologies that were thought to be safe from fire damage (Blackwood 2019, Forbes & Tatham 2018). The negative effects of these and similar events around the world are growing. The United Nations Office for Disaster Risk Reduction reported that between 1998 and 2017, disaster-affected countries reported tangible losses worth US$2,908 billion, an increase of US$932 billion over 1978–1997 (Below & Wallemacq 2018). In Australia, the tangible costs of disasters average A$13.2 billion a year, which is expected to grow to A$39 billion a year without factoring in the cost of climate change (Deloitte Access Economics 2017). Intangible costs are expected to be even greater. Between 1987 and 2016, 971 people in Australia lost their lives, 4,370 were injured, 24,120 lost homes and 9.02 million people were affected in some way by disaster (Deloitte Access Economics 2017, p.18).

Much of the motivation for individuals and communities to prepare for natural hazards is generated by the emergency management sector and by local government efforts, particularly those of community engagement teams. Community engagement programs are generally measured in two ways:

  • headcounts or numbers of events, detailing the number of people attending or spoken to (see emergency services agencies’ annual reports of 2017–18)
  • measuring the increases in preparedness levels of individuals, specific communities and state populations (such as those undertaken by Elsworth et al. 2010, Rhodes et al. 2011 as well as by agencies).

However, there is potential to measure community engagement in a more meaningful way. Recent approaches to the evaluation of community engagement take social or economic modelling approaches, employing a cost-benefit analysis model to evaluate interventions (Coles & Quintero-Angel 2018, Gibbs et al. 2015, Street & Carr-Hill 2008). These include measuring direct impacts of community engagement on health outcomes or even lives saved, as well as economic and social indicators. Complex outcomes have been measured, such as the contribution of community engagement to social capital and social networks, the identification of community influencers as motivators and the collective valuing of community-led actions for preparedness (Street & Carr-Hill 2008). But these efforts are rare.

Community engagement for preparedness

A key change in disaster preparedness in the past 20 years has been in the application of community engagement frameworks for community outreach. Community engagement can be considered as a pattern of activities implemented by agencies to collaborate with, and through, community members. The aim is to address, respond to or mitigate issues that affect the health, wellbeing or social status of the community (Bowen, Newenham-Kahindi & Herremans 2010; Fawcett et al. 1995; Johnston 2010; Scantlebury 2003).

The Australian Institute for Disaster Resilience (2018, p.2) defines community engagement as ‘the process of stakeholders working together to build resilience through collaborative action, shared capacity building and the development of strong relationships built on mutual trust and respect’. Community engagement facilitates community-to-agency relationships (Johnston et al. 2018) with a clear aim to build capacity in communities to contextualise and understand risk and take appropriate actions to prepare. Yet, evaluations of community engagement activities to achieve these aims are limited, leaving community engagement and emergency management practitioners with little information about the real contribution of engagement activities.

Evaluating community engagement

Evaluation is regarded as ‘the systematic application of research procedures to understand the conceptualisation, design, implementation, and utility of interventions’ (Valente 2001, p.106). Macnamara (2017) expands on the role of evaluation for governance and accountability, particularly around the use and reporting of publicly funded community engagement campaigns and the need to use meaningful engagement measures that ‘involve cognition, emotional connection and participation in conversations, as well as even deeper levels of interactivity such as collaboration’ (Macnamara 2014, p.17).

While evaluation models exist that offer insight for communication-based program planners (Macnamara 2015), there is general agreement that the foundation of any evaluation effort is to set measurable objectives and to measure meaningful outcomes including any effects (Watson 2012). Programmatic reporting of outputs, outcome and effects is regarded as best practice in evaluation (Argyrous 2018, Gregory & Macnamara 2019). However, Macnamara (2015) highlights several barriers to conducting evaluation that need to be resolved in practice. These include:

  • a lack of budget
  • a lack of knowledge
  • a lack of standard measures
  • a lack of interest by management
  • a perception that evaluation is too complex for practice (Macnamara 2015, pp.374–375).

Emergency managers have been concerned with evaluation for some time. Gilbert (2007) highlighted the importance of evaluating community engagement activities and found that measurement of impacts on communities was mostly absent in the areas of emergency management examined. The first national approach to evaluation was presented in the Guidelines for the Development of Community Education, Awareness and Engagement Programs (Elsworth et al. 2010), in which a realist synthesis approach to measurement was recommended. This important report also presented evaluations of activities and programs from across Australia and, for the first time, shared the evaluation techniques and results.

The National Strategy for Disaster Resilience – Community Engagement Framework (Australian Institute for Disaster Resilience 2013) outlined an approach to engagement reflective of the widely used International Association for Public Participation (IAP2) model. The strategy framework included purpose, goals and loose objectives but provided no mention of, nor guidance on, evaluation or the need for measurement. For disaster recovery, the report A Monitoring and Evaluation Framework for Disaster Recovery Programs (Argyrous 2018) provided a framework for evaluating the effectiveness of disaster recovery programs. However, since Elsworth and colleagues’ (2010) guidelines, only periodic evaluations of some programs and activities have been shared by state emergency management agencies and local governments across Australia. This sharing has been driven by committed community engagement practitioners or the researchers they have connected with (e.g. Dean 2015, Phillips et al. 2016, Redshaw et al. 2017, Webber et al. 2017). Anecdotally, it appears that some emergency agencies are working towards, or have achieved, embedded programs of evaluation. However, there is currently no universal guideline or imperative for community engagement practitioners in the emergency management sector to measure the effects of the activities and programs they undertake.1 By improving the quality and consistency of evaluation, agencies and councils can better determine the effectiveness of their programs and improve subsequent programs.

Community engagement practitioners face challenges in measuring the success of community engagement in developing individual and community preparedness. Practitioners need to share their findings with others to enable past practices to be assessed and better practices to become accepted and adopted (Astill et al. 2018). Evaluation data are valuable because they warrant claims about the outcomes and effects that have occurred because of the engagement activities. Community engagement in emergency management needs such an initiative and can draw on work done outside the sector. For instance, work by Johnston and Taylor (2018) provides a roadmap for improved evaluation.

Their study identified three levels, or tiers, of measurement of engagement. The tiers span low-level manifestation, or output, indicators; mid-level understanding and connecting, or outcome, indicators; and impact indicators that suggest higher-level action and change (Johnston & Taylor 2018, p.7; see also Watson 2012). Table 1 provides a summary of the tiers for measuring engagement.

Table 1: Tiers of engagement.

Tier 1 – Low level: presence, occurrence, manifestation

Indicators of activity:
  • counts and amounts
  • social media (i.e. likes, page visits, click-throughs)
  • monitoring of social media and traditional media
  • reading, viewing, visiting, impressions, awareness changes.

Tier 2 – Mid level: understanding, connecting

Indicators of relationship qualities:
  • trust, reciprocity, credibility, legitimacy, openness, satisfaction, understanding
  • interaction quality
  • diffusion (patterns and networks)
  • dialogue.

Indicators of engagement dimensions at the individual level, measuring affective, cognitive or behavioural outcomes:
  • antecedent and outcome.

Tier 3 – Higher level: action, impact

Indicators of social embeddedness:
  • of self and others
  • social awareness and civic (greater good) indicators
  • acknowledgment of others (diversity, empowerment)
  • action, change and outcomes at the social level
  • engagement in ecological systems
  • recognition of diverse perspectives
  • social capital
  • emergency agency and coordinated actions.

In Table 1, Tier 1 engagement measures or outputs are the lowest level of evaluation. Output evaluation measures and reports on activities such as practitioner tasks (the doing and creating), counts and amounts, website likes and visits and social and media monitoring (Johnston & Taylor 2018). Examples of Tier 1 measurement techniques can be seen in emergency agency and local government annual reports as well as in Dufty’s (2008) evaluation of SES FloodSmart and StormSmart programs.

Tier 2 outcome indicators illustrate a higher level of attitudinal and behavioural results from engagement activities. Measurement assesses the types of connections and relationships. Community engagement seeks changes in knowledge and in perceptions of efficacy; indicators at this tier identify behavioural changes such as families and communities creating and practising disaster plans. Foster (2013) demonstrated Tier 2 measurement and evaluation in a study on emergency agency home visits, as did Every and colleagues (2015) in their work on the South Australian Community Fire Safe program.

Tier 3 measures changes in behaviour (action), attitudes and social networks. The impacts can be viewed as sustainable changes that help create resilience. Examples of impact indicators include participation in community-based programs or social change and action as a result of engagement. Gibbs and co-authors (2015), in one of the few examples of economic modelling in emergency management, showed that the Victorian Country Fire Authority Community Fireguard program prevented property loss worth $732,747 and reduced fatalities at a costed value of $1.4 million per Fireguard group every 10 years. ‘Even if the risk of major bushfire event in a region were one in 100 years, the estimated cost savings in a 100-year period is $217,116 per group’ (Gibbs et al. 2015, p.375).
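The arithmetic behind such estimates follows a standard expected-value logic. As a minimal sketch (the symbols are illustrative and do not reproduce the actual model used by Gibbs and co-authors):

    % Illustrative expected-value form of a cost-benefit estimate (LaTeX)
    % p = probability of a major event occurring during the planning horizon
    % L = losses avoided per event (property damage plus costed fatalities)
    % C = cost of running the program over the same horizon
    E[\text{net saving}] = p \cdot L - C

Under this logic, even a 1-in-100-year event risk can justify sustained program investment when the avoided losses per event are large, which is the point of the 100-year estimate quoted above.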

So how are Australian community engagement practitioners enacting evaluation? The research question that emerged from this review is: ‘How do community engagement practitioners understand the evaluation of engagement in an Australian emergency management context?’

Answers to this question are important because organisational support for evaluation of community engagement and subsequent learning to guide decision-making strengthens both the outcomes from community engagement and the way it is valued (Stewart 2017). How community engagement is approached and measured can change how emergency services organisations operate. Effective community engagement can move organisations closer to their communities. Owen and colleagues (2017) found that organisations need to learn and change to develop a ‘maturity’ that allows the experiences to be generalised across the organisation and the sector.

Method

A two-stage qualitative research design was used, combining content analysis and in-depth interviews. Stage one comprised an analysis of documents supplied by emergency agencies, local councils and not-for-profit organisations. The content analysis examined community engagement policy, practice and implementation, and the documents were searched for key performance indicators and for reporting language against those indicators. Annual reports for 2017–18 were also examined.

Stage two comprised 30 semi-structured interviews with community engagement practitioners from participating agencies, local councils and not-for-profit organisations. Interview questions drew on the findings of the first stage of data collection. The interviews were conducted from October 2018 to January 2019 by telephone and online using the meeting software Zoom. Ethics approval was granted by the Queensland University of Technology Human Research Ethics Committee (approval number 1800000931).

Purposive sampling was used; participants included 30 community engagement practitioners and operational staff (9 males and 21 females). Participants were recruited from a list of emergency services organisations across Australia, with additional snowball sampling used to recruit participants who could provide information about non-agency initiatives that staff thought worked well.

All states and territories were represented in the sample. Participants represented all non-metropolitan fire agencies and all but two State Emergency Services. The sample also included three local councils, a nationwide aid agency and a local community centre. Sampling criteria were applied at three levels: disaster type, type of agency and location. The sampling was designed to capture perspectives from organisations that respond to one type of hazard as well as organisations that respond to many different types of hazards. Table 2 summarises the participant organisations represented.

Table 2: Types of organisations represented in the study sample.
Agency type Number
Emergency management agencies* 25
Local government area councils 3
Not-for-profit organisations and others 2
Total 30
* Includes oversight agencies.

Table 3 shows the representation of organisations by state and territory.

Table 3: Number of organisations by state and territory.
State or territory Number
Queensland * 10
Victoria 8
New South Wales 4
Western Australia 3
Tasmania 2
South Australia 1
Australian Capital Territory 1
Northern Territory 1
Total 30
* Includes local government that has emergency management functions in that state for mitigation, preparedness and recovery phases.

The interviews took between 40 and 80 minutes and were recorded and transcribed verbatim. Participants were asked questions about their role, their community engagement approaches, their evaluation activities and how evaluation has helped them to identify what works and what does not.

Analysis

The content analysis sample comprised the annual reports of 14 emergency services organisations and local government agencies as well as community engagement charter documents. Interview data were analysed through iterative stages of thematic analysis involving topic, analytical and interpretive coding (following Glaser 1992). Coding consistency between the two coders was maintained using a coding guidebook.
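Intercoder consistency of this kind can also be checked quantitatively. The following is a minimal sketch in Python, assuming the two coders’ labels are available as parallel lists; Cohen’s kappa is one common agreement statistic, though the study does not state whether a numeric measure was used:

    from collections import Counter

    def cohens_kappa(coder_a, coder_b):
        """Cohen's kappa for two coders' categorical labels on the same items."""
        n = len(coder_a)
        # Observed agreement: proportion of items both coders labelled identically.
        p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        # Chance agreement, from each coder's marginal label frequencies.
        freq_a, freq_b = Counter(coder_a), Counter(coder_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical labels assigned to ten interview excerpts by each coder.
    a = ["tier1", "tier1", "tier2", "tier3", "tier1", "tier2", "tier1", "tier2", "tier1", "tier3"]
    b = ["tier1", "tier2", "tier2", "tier3", "tier1", "tier2", "tier1", "tier1", "tier1", "tier3"]
    print(f"kappa = {cohens_kappa(a, b):.2f}")  # kappa = 0.67

Values above roughly 0.6 are conventionally read as substantial agreement.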

Findings

Varying evaluation processes

An analysis of the data found varying attitudes and approaches to evaluation. Most organisations used some type of monitoring and evaluation, and practitioners expressed positive attitudes towards evaluation. Responses indicated that they recognised the role and importance of evaluating community engagement for emergency preparedness, but also that evaluation could be complex and was often a difficult or under-resourced function. Only a very few participant organisations had a formal, organised and scientific approach to evaluation.

Table 4: Key performance indicators included in community engagement reports and charters.
Annual reports and community engagement charters Number
Included specific community engagement key performance indicators and reported against these. 8
Included specific community engagement key performance indicators but did not report against these. 2
Did not include specific key performance indicators but reported some community engagement measurement. 2
Did not include specific key performance indicators and did not report. 2

Table 4 shows the extent to which the 14 documents included community engagement key performance indicators and reported against them.

A range of evaluation techniques were used by the participants. For a few, evaluation of community engagement aimed at improving individual and community preparedness was comprehensive and systematic. Others used ad hoc measurements reflecting a Tier 1 approach of using counts, contacts, people attending events or other output-based activities. Very few practitioners articulated a comprehensive evaluation system that reported mid- and high-level outcomes and effects. Few participants made the link between evaluation and the achievement of higher-order strategic objectives.

It was evident that participating organisations had a commitment to evaluating and reporting community engagement activities at some level. Using Johnston and Taylor’s (2018) tier typology, the commitment to measurement shown in the annual reports and community engagement charter documents was analysed, and each participating organisation was classified according to the tier level at which it most frequently operated.

Table 5: Community engagement evaluation tier most frequently used.
Tier level Number of organisations undertaking activity at the tier level
Tier 1 11
Tier 2 2
Tier 3 1

Data collection for Tier 1 activities was the most common, as outputs such as the number of people attending an event, people reached by door-to-door campaigns, website visitors and social media followers are comparatively easy to count.

There were significantly fewer attempts to measure Tier 2 activities and outcomes. Examples included surveys to measure recall of campaign messages (using online or face-to-face formats), sustained knowledge and practice outcomes from training and qualitative interviews to gain insights into behaviour change.

Tier 3 outcomes were typically assessed through ‘after action reviews’ following emergency events, in which teams reflected on lives saved and on how many people enacted their emergency plans. Some participants viewed qualitative data as ‘very beneficial’ when collected immediately after an emergency event. Two emergency agencies conducted large random-sample surveys of community preparedness levels at the state level.
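For context, the precision of such random-sample surveys depends on sample size. A minimal sketch of the standard margin-of-error arithmetic for a surveyed proportion (the figures are illustrative and not drawn from any participating agency):

    import math

    def margin_of_error(p_hat, n, z=1.96):
        """95% margin of error for a proportion from a simple random sample."""
        return z * math.sqrt(p_hat * (1 - p_hat) / n)

    # Hypothetical state survey: 1,000 respondents, 42% report having an emergency plan.
    moe = margin_of_error(0.42, 1000)
    print(f"42% +/- {moe:.1%}")  # 42% +/- 3.1%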

Participants from several organisations noted that ‘conversations’ with members of the public were valuable tools to determine the overall success of community engagement programs. Conversations allowed for in-depth insights into the personal experiences of community members.

Evaluation: whose job is it?

The study data indicated that community engagement staff had varying capacities for evaluation. Some participants acknowledged that their agency was developing an evaluation framework. For others, evaluation was a new aspect of their function. Some indicated that previous attempts at evaluation had not delivered relevant information. A few participants pointed to specialist roles that had responsibility for evaluation and that supported the community engagement functions of the organisation. These specialist roles seemed to be tied to a person rather than to a job function or organisational capacity.

About one-third of participants noted that evaluation is linked to the overall community engagement strategy. They noted that evaluation was something that community development people, usually located in local councils, do or should do as part of their role. Many agreed that evaluation needed to be ‘embedded into community engagement’ activities for it to have meaning. Yet many participants noted that ‘evaluation is not our remit (job)’ and ‘we don’t have the time, money or skills’. Some indicated that they did not operate at a program level that could be evaluated more easily. Additionally, participants who worked with other organisations were concerned that their own specific contributions to community engagement would be difficult to tease out from cross-agency activities.

Some organisations are undertaking evaluation in a meaningful way. Organisations that win government grants often have budgets for an evaluation component, albeit at the end of a project. External consultants are often commissioned to provide an objective account of a program’s outcomes and effects. There was evidence of skilled evaluation ‘experts’ joining some organisations and bringing a stronger evaluation perspective. However, most of the 30 participating organisations did not have a dedicated monitoring and evaluation specialist. Collecting data can be overwhelming, and some participants noted that they need ways to systematise the data (or intelligence) that comes informally to a project.

Participants also noted that ‘closing the loop’ is important (Hurst & Ihlen 2018). Closing the loop means at least two things: first, using the results for improved organisational learning about community engagement; second, using the results to inform community engagement at the strategic planning level.

Adaptable, scalable evaluation tools

All participants reported that they wanted to improve their evaluation capacity, even those working in organisations with evaluation experts. They indicated that adaptable, scalable tools and toolkits would assist them to undertake meaningful evaluation.

To succinctly present these perceptions of tools and approaches, participant answers were used to create a word cloud reflecting interview content related to evaluation tools and approaches. The findings show that participants recognised the importance of finding ways to authentically understand what people actually think: their focus on people and engagement was linked closely with concepts such as ‘need’, ‘think’ and ‘communities’. Practitioners also focused on what ‘processes’ ‘work’, couched in terms of understanding community requirements and actions.

Key points raised in the word cloud reflect the importance of processes and planning related to evaluation and how the outcomes of engagement are reflected over time. Research tools such as surveys were featured, but not in a dominant way. This suggests that practitioners understood the importance of evaluation, but tools for evaluation were either not accessible or not used. Very few participants detailed specific evaluation tools or methods.

Figure 1: Word cloud of themes related to community engagement evaluation.
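For readers wishing to produce a similar visualisation, the following is a minimal sketch using the open-source Python wordcloud package, assuming interview transcripts are available as plain-text files (the study’s actual tooling is not stated):

    from pathlib import Path
    from wordcloud import WordCloud, STOPWORDS

    # Concatenate all interview transcripts into one body of text.
    text = " ".join(p.read_text() for p in Path("transcripts").glob("*.txt"))

    # Exclude common English stopwords plus filler terms that would dominate.
    stopwords = STOPWORDS | {"yeah", "um", "interviewer", "participant"}

    cloud = WordCloud(width=800, height=400, background_color="white",
                      stopwords=stopwords, collocations=False).generate(text)
    cloud.to_file("engagement_wordcloud.png")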

Participants noted that they wanted better survey methods to measure attitude and behaviour change. Others wanted adaptable and scalable field tools to measure the outcomes of events such as workshops, training and community engagement activities. Interviews, surveys and post-incident reports can be time-consuming tasks. Monitoring and evaluation templates may help organisations build capacity, standardise approaches to evaluating community engagement programs and provide practitioners with a suite of tools that are easily accessible and appropriate.

Discussion

Based on the document analysis and the practitioner answers to the research question, three ways were identified in which measuring community engagement contributes to emergency management.

First, measurement can quantify levels of community preparedness, and preparedness levels have a significant impact on operations during an emergency response phase. Second, it allows a community to understand its own level of preparedness. Finally, measuring the effects of preparedness programs provides tangible evidence of the economic and social impacts of community engagement investments, such as quantifying lives and property saved (as shown in Coles & Quintero-Angel 2018, Gibbs et al. 2015).

The interview data suggest that community engagement practitioners want a clearer link between the organisation’s strategic plan and its monitoring and evaluation of outcomes. Two solutions could provide guidance: create a culture of evaluation of community engagement and establish clear strategic connections to community engagement functions.

Astill and colleagues (2018) argued that community engagement needs to take a ‘community of practice’ approach. Such an approach brings ‘together complementary knowledge and skill sets of research teams that included disaster management, geo-spatial mapping, health impact assessment and community resilience with the wide range of stakeholders planning for, preparing and responding to events when they occur’ (p.51). Creating a culture of evaluation is one way to bring about the benefits of community engagement activities.

Organisational cultures are based on shared values, experiences and behaviours. Evaluation of community engagement activities needs to be part of that culture. It needs to be routinised and internalised. Organisations need to collect data from their activities and learn from those data.

Taking a strategic approach to community engagement is also needed. The first step in evaluating community engagement is to identify a baseline of the community’s level of preparedness. Good program evaluation begins with gathering baseline data before the start of a project. Baseline data allow for planning and for assessing subsequent progress and levels of success (or not) against the original aims. Baseline data can describe the existing level of community preparedness in both quantitative and qualitative terms.

The next step is to set engagement goals. Goals are broad, general statements of a desired future state. Projects, programs or campaigns may have one overarching goal or several modest goals. Goals can be abstract; objectives, however, are the concrete and measurable steps needed to accomplish them. Influencing or changing people’s behaviours, knowledge and attitudes are sometimes difficult objectives, but they are central to effective community engagement. Objectives set the evaluation criteria that allow community engagement achievements to be measured.

All strategy objectives should be SMART: specific, measurable, achievable, relevant and time-bound. Community engagement objectives are best when they are measurable, action- or outcome-specific, audience-specific and achievable by a specific date or timeframe. Objectives can target informational (knowledge), attitudinal and behavioural levels of impact, each of which can be measured. Conducting baseline research and articulating goals with SMART objectives are therefore the foundation for evaluating community engagement.
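To make this concrete, consider a hypothetical SMART objective such as ‘increase the proportion of households in the district with a practised emergency plan from 20% to 35% within 12 months’. A minimal sketch of evaluating such an objective against baseline and follow-up survey data (all names and figures are illustrative):

    def evaluate_objective(baseline_yes, baseline_n, followup_yes, followup_n, target):
        """Compare baseline and follow-up survey proportions against a SMART target."""
        baseline_rate = baseline_yes / baseline_n
        followup_rate = followup_yes / followup_n
        return {
            "baseline": round(baseline_rate, 3),
            "follow_up": round(followup_rate, 3),
            "change": round(followup_rate - baseline_rate, 3),
            "target_met": followup_rate >= target,
        }

    # Hypothetical survey: 'Does your household have a practised emergency plan?'
    print(evaluate_objective(baseline_yes=80, baseline_n=400,
                             followup_yes=150, followup_n=420, target=0.35))
    # {'baseline': 0.2, 'follow_up': 0.357, 'change': 0.157, 'target_met': True}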

The findings suggested a level of discomfort among some practitioners with evaluation processes and tasks, either because of the time they would take or because the practitioners lacked a sound knowledge and skills base in this area. This points to a need for team structures to factor in measurement and evaluation, and then to recruit to ensure the team has the skills and commitment for these aspects of the job. The patchy inclusion of key performance indicators relating to genuine evaluation of community engagement programs in the Australian emergency agency and local council annual reports examined in this study reinforces this point. Including such key performance indicators in the annual report would signal the organisation’s commitment to community preparedness and enable teams to devote resources to measurement and evaluation.

Conclusion

This study aimed to understand how community engagement evaluation is conceptualised and undertaken in Australian emergency management practice. The findings suggest that evaluation of community engagement activities may be missing from current engagement programs, making it problematic to determine the effectiveness and value of engagement. Study participants recognised the importance of evaluation and its role in demonstrating the level of impact their efforts have on communities. However, they acknowledged that evaluation is often undervalued, under-resourced or reported only as outputs. Standardisation of evaluation and monitoring practice would support the resourcing and reporting of community engagement outcomes, as would support for measurement and evaluation in future preparedness and recovery doctrine for the sector.

This study is a starting point for enhancing evaluation in preparedness activities. However, the study has limitations. Participants varied widely in experience and qualifications and reported a variety of community engagement evaluation approaches. Future studies would benefit from an increased sample size to reflect this diversity. In addition, future research could focus on the evaluation of participatory or co-design frameworks of community engagement, in which community members, stakeholders, organisations and other relevant groups co-create and design emergency preparedness activities and participate in the evaluation stage.

Acknowledgements

The authors acknowledge project partners of the NSW State Emergency Service, the Inspector-General of Emergency Management Queensland, the Western Australia Department of Fire and Emergency Services, the Queensland Fire and Emergency Services, Cairns Regional Council Queensland, Ipswich City Council Queensland and Tablelands Regional Council Queensland.


Footnotes

1. This may change. There is consultation underway for a review of the Australian Institute for Disaster Resilience Community Engagement Framework (Handbook 6).
