Evaluating emergency management after an event: gaps and suggestions

Neil Dufty

Peer-reviewed Article

 


Abstract

Post-event evaluations of emergency management are critical to helping emergency services providers and communities learn to build disaster resilience. This paper identifies five main types of formal post-event evaluation of emergency management used in Australia. It argues that these evaluations should be more consistent in their conduct and approach, more comprehensive in scope, and better timed. The paper also suggests that post-event evaluation reports should be released, particularly to the affected communities.


Article

Introduction

The performance of emergency services providers is usually judged quickly by the media and the public after a hazard event. For example, only days after Hurricane Sandy struck the eastern seaboard of the United States in 2012, the US press made judgements about the Federal Emergency Management Agency’s performance, particularly in comparison with Hurricane Katrina in 2005. Similar scrutiny has been directed at Australia’s emergency services providers (e.g. immediately after the 2009 Black Saturday fires and the 2011 Queensland floods).

Many of these ‘external’ post-event judgements are based on perceived public expectations of emergency management, media bias, and incomplete evidence. However, media evaluations tend to resonate with the public because they are usually persuasive and delivered almost immediately, whereas government inquiries and formal reviews may take up to a year to complete and release.

It is debatable whether emergency services providers should counter this ‘trial by media’ with objective and technical evaluations. It is argued here that at least a consistent, comprehensive and timely approach to the post-event evaluation of emergency management performance is required for future emergency agency and community resilience learning.

This article is essentially a ‘meta-evaluation’: an evaluation of evaluations. It is based on an investigation of a sample of Australian emergency management evaluations available on the Internet, and on the author’s experience in conducting emergency management evaluations.

Based on this research, the article examines:

  1. How is emergency management evaluated after an event in Australia?
  2. What are the gaps and issues?
  3. How can it be improved?

Evaluation and emergency management

Evaluation arguably is society’s most fundamental discipline. It is oriented to assessing and helping to improve all aspects of society including emergency management (Stufflebeam & Shinkfield 2007, p. 5). It is a critical element of personal, societal and organisational learning.

While many definitions of evaluation are used, the term generally encompasses the systematic collection and analysis of information to make judgements, usually about the effectiveness, efficiency and/or appropriateness of an activity (Australasian Evaluation Society 2010, p. 3).

Due to its importance to communities and countries in protecting lives and property, emergency management performance is heavily evaluated by governments and their emergency agencies. Exercises, drills, and after-action reviews are core internal emergency management evaluation activities. Other internal evaluations can be conducted in a range of areas including program delivery (e.g. training), system and staff performance, workforce satisfaction, and the extent of interoperability. Most of these evaluations are conducted by emergency services providers themselves, with a few outsourced to academic institutions and private consultants.

Post-event evaluations

The large majority of emergency management evaluations occur between events as part of agency preparedness. However, some evaluations are conducted as part of post-event learning, particularly to improve emergency management performance in future events.

From the research for this paper, five main types of formal post-event emergency management evaluation were identified in Australia. These are:

  • government inquiries and reviews
  • after-action reviews and operational debriefs
  • community meetings/debriefs
  • community surveys and other social research, and
  • independent evaluations.

Comrie (2013) distinguishes an inquiry, ‘a formal investigation to determine the facts of a case’, from a review, ‘a general survey or assessment of a subject or thing’. Government inquiries and reviews are conducted when governments deem a disaster significant enough to warrant this level of evaluation. Recent examples in Australia include the 2009 Victorian Bushfires Royal Commission, the Queensland Floods Commission of Inquiry, and the Victorian Review of the 2010–11 flood warnings and response.

Each of these government inquiries and reviews was conducted by government-appointed senior personnel. They investigated issues such as disaster risk reduction (structural and non-structural measures), operations of dams (for flood), insurance, emergency response (e.g. command and control, evacuation), agency organisational structure, warning systems and recovery arrangements.

The inquiries and reviews were guided by terms of reference and included evaluation techniques such as consultation with affected communities, emergency agency consultations, public hearings and written submissions. These techniques were used to collect review data, with subsequent data analysis informing the findings, judgement and recommendations. The Victorian Bushfires Royal Commission Final Report made 67 recommendations, the final report of the Queensland Floods Commission of Inquiry made 177 recommendations, and the report of the Victorian Review of the 2010–11 flood warnings and response made 93 recommendations. All interim and final reports were released to the public including via websites.

After-action reviews (AARs) and debriefs are held by emergency services providers soon after significant emergencies and declared disaster events. An AAR is distinct from a debrief in that it begins with a clear comparison of intended versus actual results (USAID 2006). Both generally focus on what was planned, what worked well, what did not work well, and what opportunities there are for improvement. AAR and debrief reports are normally not released to the public in Australia.

Some Australian emergency services providers have held community meetings or community debriefs soon after an event. Outside of government inquiries, these appear to occur in an ad hoc fashion, i.e. based on factors such as the priorities and resourcing of the agency, or political pressure. They provide an opportunity for communities to discuss aspects of preparedness, response and recovery, and, invariably, their thoughts on the performance of emergency services providers. In some cases, community meeting reports are released to the public; an example is the Review of the Tostaree Fire (Office of the Emergency Services Commissioner 2011, p. 50).

While some affected communities have complained that they were not consulted at all after an event, there has also been criticism of the way community meetings and debriefs are run when they are held. For instance, some communities have felt that post-event meetings chaired by emergency services providers did not allow for candid and open discussion, and have called for the use of skilled independent facilitators (see Molino Stewart 2009). This request is reinforced by concern that meetings may ‘get out of hand’ due to the vehemence and dominance of some participants.

‘Social research’ refers to research conducted by social scientists following a systematic plan. The main types of social research used in post-event evaluations of emergency management in Australia are community surveys (for quantitative data) and focus groups (for qualitative data). These can be standalone reports or form part of government inquiries and independent evaluations. Some are commissioned (Heath et al. 2011); others (e.g. Vachette & King 2011) are part of academic research. A particular focus for social research has been the performance of warning systems, as these systems sit at the interface between emergency management and communities.

Participants in the social research can include residents, businesses, special interest groups and potentially vulnerable groups (e.g. culturally and linguistically diverse communities, older people). Social research results usually enter the public domain as published articles and/or conference presentations, while only a few of the agency-commissioned reports are released to the affected communities and the public generally.

Independent post-event evaluations are normally conducted by private consultancies or academic institutions and are usually commissioned by emergency services providers. This outsourcing provides an objective and transparent appraisal of emergency management performance that would be difficult for emergency services providers, with their possible vested interests, to achieve themselves. This type of evaluation appears to occur due to factors such as agency priorities, funding availability, and political pressure.

Independent, post-event evaluation can examine aspects of emergency management performance such as command and control, interoperability, warning systems, public information, community education, and evacuation and recovery arrangements. It can also include social research to gauge community interactions with emergency management organisations before, during and after the event.

A key requirement of the independent evaluation is the development of a negotiated evaluation plan, preferably based on the evaluation terms of reference and the emergency agency’s performance management measures. As Owen (2006) stresses:

‘A major milestone that needs to be reached through negotiation is an evaluation plan. While there may be differences in emphasis in the degree of planning, effective use of evaluation findings is heavily dependent, in all arrangements and settings, on the degree to which the evaluator and clients agree on a plan for the evaluation. This is the up-front agreement that determines the directions the evaluation will take.’ (Owen 2006, p. 67)

Most independent post-event emergency management evaluations are not released to the public, possibly due to sensitivities. A recent example of an evaluation that was released to the public is the 2012 North East Victoria Flood Review (Office of the Emergency Services Commissioner 2012).

Gaps

There is inconsistency in the use of post-event emergency management evaluations in Australia. The agency AAR/debrief is the only consistently used method of post-event evaluation. Government inquiries and major reviews, with their associated large costs and effort, are understandably used only for major disasters. Other evaluation methods tend to be triggered by a range of factors; the result is that, generally, there is no consistent, planned approach.

A review of several evaluations released to the public shows that, apart from the AARs/debriefs, which follow a standard framework, there is little consistency in evaluation approach and measurables (e.g. performance indicators and benchmarks), even among evaluations released by the same emergency services provider.

Other than the government inquiries/reviews, few of the post-event evaluations across the different types are released to the public.

The overall scope of the evaluations is narrow. Other than government inquiries, the evaluations tend to concentrate on specific aspects of emergency management (e.g. command and control, and emergency planning). Few consider the complex relationships between emergency agencies and communities that need to be examined to fully gauge the performance of emergency management in relation to the overall impact of the event.

The timing of the post-event evaluation is very important. Some evaluations are conducted several months after the event. This is appropriate for examining the recovery phase, but if the details of the response need to be assessed, then community meetings and social research should occur soon (e.g. within one month) after the event.

An improved approach

Consistency

To deliver a more consistent approach, post-event evaluation should, along with pre-event evaluation, be part of an emergency agency’s strategic and preparedness planning. From both a theoretical and a practical point of view, ‘planning’ and ‘evaluation’ are inseparable concepts. According to Khakee (1998):

‘As soon as actions are put together in a plan, option possibilities arise. They do so even when one does not prepare an explicit plan. An organisation can choose between several alternative actions. This in turn requires possibilities in order to judge possible results of the alternative actions. The latter is termed ‘evaluation’. In other words, evaluation is a necessary element of planning.’ (Khakee 1998, p. 359)

According to the 2009 Victorian Bushfires Royal Commission (p. 20),

‘if fire agencies are to lift their capability and performance and improve the response capacity of individuals and communities, they need to become true evidence-based learning organisations. The Commission proposes that the fire agencies adopt and fund a culture of reflective practice that routinely pursues current research, searches for best practice, and habitually evaluates policies, programs and procedures with a view to improving internal practice and that of the communities they serve.’

Some emergency agencies explicitly include in their corporate planning strategies a move towards being an evidence-based learning organisation. For example, the NSW State Emergency Service (NSW SES), in its NSW SES Plan 2011–2015, has a service delivery goal (Goal 5) related to being a learning organisation through evaluation. However, for all emergency services providers this learning should include regular post-event evaluations that are not limited to internal AARs/debriefs. Community input should form part of the evaluation process.

If possible, post-event evaluations should be conducted against a standard set of emergency management performance indicators and benchmarks to help gauge improvement over time (although it can be difficult to compare different emergency scenarios within, let alone across, hazards). Some emergency services providers have identified these measurables and are using them for post-event evaluations. For example:

‘as part of its role to provide assurance on the effectiveness of Victoria’s emergency management arrangements, the Office of the Emergency Services Commissioner (OESC) is developing a Performance Monitoring Framework to track the performance of elements of emergency management across all hazards. Once finalised, the Framework will enable the OESC to use a consistent post-incident approach to measure performance to support improvement across the emergency services sector.’ (Office of the Emergency Services Commissioner 2012)

Comprehensive scoping

The scope of the post-event evaluations should not only be introspective but also examine the external complex interrelationships of emergency management before, during and after an event. For instance, it may be that emergency management performance is heavily impacted by community behaviours (e.g. community unwillingness to evacuate may suggest poor performance even if community warnings are timely, relevant and tailored) and by aspects of disaster risk reduction (DRR) such as urban planning, structural mitigation works and building codes.

To visualise these interrelationships, Figure 1 shows a conceptual evaluation scoping ‘framework’ which links emergency management with DRR and communities prior to an event. Depending on the scope of the evaluation, other factors can be added to the Venn diagram such as governance, leadership and funding.

A post-event evaluation that includes an examination of prevention and preparedness could use the conceptual triumvirate shown in Figure 1 to investigate some of the influences on emergency management performance. For example, community hazard education and engagement provided by emergency agencies should involve learning across these three complex systems (Dufty 2012, p. 155). The performance of community hazard education and engagement in motivating appropriate preparedness behaviours is not only a function of emergency agency programs, but also of the learning emanating from DRR and of the psychological and sociological makeup of the affected communities.

For the response phase, the post-event evaluation should directly examine the interrelationship between emergency management and communities (with DRR removed, as it establishes the level of residual risk before the event). A key part of this interrelationship is the effectiveness of warning systems and disseminated public information.

For the recovery phase in Figure 1, DRR should be replaced in the evaluation scoping framework by ‘economic support’ (e.g. insurance, government assistance), as the performance of emergency management is largely influenced by this factor and the psychological and sociological dynamics of the affected communities.

Figure 1. A relationship that should be considered in the evaluation of emergency management performance.

A Venn diagram shows Emergency Management, Disaster Risk Reduction and Communities as three overlapping circles, each overlapping with the other two and all three overlapping in the centre.

Timing

As mentioned, post-event evaluations of emergency management are often conducted several months after an event. However, if the response is being evaluated, social research should occur as soon as possible after the event. When interviewing or meeting with people, it is important to be sensitive to the impact of the event on both emergency agency staff and community members. According to the American Psychological Association (2011):

‘there is not one ‘standard’ pattern of reaction to the extreme stress of traumatic experiences. Some people respond immediately, while others have delayed reactions—sometimes months or even years later. Some have adverse effects for a long period of time, while others recover rather quickly’.

Providing evaluations to affected communities

Although there will always be media and public ‘evaluations’ (favourable and unfavourable) of the emergency management performance after an event, there are strong arguments for governments, through their emergency agencies, to provide formal evaluations to affected communities and the general public.

One of the priority outcomes of the National Strategy for Disaster Resilience (COAG 2011) is ‘information on lessons learned—from local, national, and international sources—is accessible and available for use by governments, organisations and communities’ in relation to risk reduction and emergency management. It is conceivable that this would include lessons learned after an event and that this evaluation should be co-ordinated and reported by emergency services providers.

There have been some direct requests from affected communities to receive post-event evaluations (e.g. Molino Stewart 2009). These communities want an objective assessment of the event and, if they participated in social research and meetings, want to know they have been heard. Furthermore, the Australian flood and fire emergency agencies have large numbers of volunteers who live in the affected communities. It may, in some circumstances, be difficult for them to cope with negative comments and innuendo (valid or not) in their communities after an event. An official post-event evaluation may help to ‘clear the air’ and provide an objective view on what occurred. It could also be used to acknowledge and help celebrate the achievements of the volunteers.

Conclusion

Post-event emergency management evaluations other than AARs/debriefs tend to be done on an ad hoc basis by Australian emergency services providers, possibly because they are not an integral part of agency preparedness planning and are open to the vagaries of funding and politics. Other than government inquiries into major disasters, few post-event evaluation reports are released to the affected communities.

A more consistent, comprehensive, and timely approach to Australian post-event emergency management evaluation is suggested. These evaluations should be reported to affected communities. This will help improve emergency agency and community learning for future hazard events, and overall disaster resilience.

References

American Psychological Association 2011, Managing traumatic stress: Tips for recovering from disasters and other traumatic events, fact sheet. At: www.apa.org/helpcenter/recovering-disasters.aspx.

Australasian Evaluation Society 2010, AES Guidelines for the Ethical Conduct of Evaluations. At: www.aes.asn.au/about-us/about-evaluation.html.

COAG 2011, National Strategy for Disaster Resilience: Building our nation’s resilience to disasters, Australian Government.

Comrie, N 2013, Review of State Flood Warnings and Response, Presentation to the 8th Victorian Flood Conference, Melbourne, February 2013. At: www.8thvicfloodconference.org.au/presentations/Neil%20Comrie%20-%20Review%20of%20State%20Flood%20Warnings%20and%20Response.pdf.

Dufty, N 2012, Learning for Disaster Resilience, Proceedings of the Australian & New Zealand Disaster and Emergency Management Conference held 16–18 April 2012, Brisbane Exhibition and Convention Centre, pp. 150–164. At: http://anzdmc.com.au/proceedings.pdf.

Heath, J, Nulsen, C, Dunlop, P, Clarke, P, Bürgelt, P, & Morrison, D 2011, Report on the February 2011 Fires in Roleystone, Kelmscott and Red Hill, research by the University of Western Australia funded by the Bushfire Cooperative Research Centre and the Fire and Emergency Services Authority of Western Australia. At: www.bushfirecrc.com/managed/resource/bushfire_final_report_0.pdf.

Khakee, A 1998, Evaluation and planning: inseparable concepts, The Town Planning Review, vol. 69, no. 4.

Molino Stewart 2009, May 2009 East Coast Low Flood Warning Community Feedback Report, NSW State Emergency Service. At: www.ses.nsw.gov.au/resources/research-papers/case-studies-/may-2009-east-coast-low-flood-warning.

NSW State Emergency Service 2011, NSW SES Plan 2011–2015.

Office of the Emergency Services Commissioner 2011, Review of the Tostaree Fire Report, Victorian Government. At: www.firecommissioner.vic.gov.au/our-work/review/fire-review.

Office of the Emergency Services Commissioner 2012, Report of the 2012 North East Victoria Flood Review, final report prepared by Molino Stewart Pty Ltd. At: www.oesc.vic.gov.au/home/reviews+and+inquiries/2012+north+east+victoria+flood+review.

Owen, JM 2006, Program evaluation: forms and approaches, 3rd edition, Allen & Unwin, Crows Nest, Australia.

Queensland Floods Commission of Inquiry 2012, Final report, Queensland Government.

Stufflebeam, DL & Shinkfield, AJ 2007, Evaluation theory, models and applications, Jossey-Bass, San Francisco.

Topping, K 2011, Strengthening resilience through mitigation planning, Natural Hazards Observer, vol. 36, no.2.

USAID 2006, After-action review: technical guidance, United States Government.

Vachette, A & King, D 2011, Cyclone Yasi: Experiences of Backpackers in Townsville and Cairns during Cyclone Yasi, Centre for Disaster Studies, James Cook University. At: www.jcu.edu.au/cds/public/groups/everyone/documents/technical_report/jcu_078897.pdf.

Victorian Bushfires Royal Commission 2009, Final report summary, State Government of Victoria.

Victorian Government 2011, Review of the 2010–11 Flood Warnings & Response.

About the author

Neil Dufty is a Principal of Molino Stewart Pty Ltd. He has conducted over 30 evaluations of emergency management across Australia. He recently conducted evaluations of community bushfire warnings, community alert sirens and flood emergency management.