In November 2023, the Melbourne School of Population and Global Health hosted a workshop with disaster and climate change practitioners and researchers to explore the possible uses, risks, ethics and opportunities of AI in mitigating the mental health and wellbeing effects of disasters and climate change. This paper presents the main findings from that workshop.
Climate-related disasters are increasing rapidly at a time of surging Artificial Intelligence (AI) technologies. Simultaneously, there is growing recognition of the health risks of climate change and discussion of ways these risks might be addressed through the use of AI.1,2 For example, there is an expanding discourse on the risks and benefits of AI use within health care systems.3,4 However, specific uses of AI to manage the health and wellbeing effects of disasters (which are projected to increase in frequency and severity due to climate change1) remain understudied.
At the November 2023 workshop, key concepts were identified through practitioner experimentation with a Large Language Model (LLM), undertaken to address gaps in knowledge identified in a rapid literature review. The review found that speculation predominates in the literature on AI for climate change, with little written on immediate AI applications for practitioners. Our experiment used GPT-4 and the AskYourPDF Plugin to prepare a grant application to support recovery and climate adaptation in a disaster-affected community that had experienced great material loss and the deaths of children and a teacher. The grant opportunities were real,5,6 evidence-based resources were uploaded as guidance7,8 and the disaster-affected community was a fictional composite of real cases.
The experiment highlighted both opportunities and concerns about LLMs. Participants indicated that GPT-4 was ‘good for summarising, brainstorming and getting things started’. Noted risks included the concern that using LLMs to construct grant ideas would ‘lead people to bypass asking the community what their primary concerns are [in disaster recovery]’. Some participants said it made them ‘feel dead inside’, in the sense of losing aspects of creativity and human-to-human interaction. Logistically, participants noted that LLM outputs are only as good as the questions asked, and they indicated a need for prompt-writing resources. Other concerns included how grant processes would need to adjust if GPT-4 use became widespread and applications began to look the same. Ethical concerns included the profit-driven setup of OpenAI, the perpetuation of racism and sexism by GPT-4,9 and a potential ‘narrowing effect’ if certain ideas are given more precedence than others. For example, GPT-4 suggested solar panels to address climate change but did not recommend supporting a community’s grieving during anniversaries of the disaster.
It is critical to confront the practical and ethical complexities of AI use. The concepts described here mark important areas for continued critique and experimentation within emergency and disaster management, research and planetary health.