Roadmap Towards Responsible AI in Crisis Resilience Management

Cheng-Chun Lee, Tina Comes, Megan Finn, Ali Mostafavi

Research output: Working paper / Preprint


Abstract

Novel data sensing and AI technologies are finding practical use in the analysis of crisis resilience, revealing the need to consider how responsible artificial intelligence (AI) practices can mitigate harmful outcomes and protect vulnerable populations. In this paper, we present a responsible AI roadmap that is embedded in the Crisis Information Management Circle. This roadmap includes six propositions that highlight and address important challenges and considerations specifically related to responsible AI for crisis resilience management. We cover a wide spectrum of interwoven challenges and considerations pertaining to the responsible collection, analysis, sharing, and use of information, including equity, fairness, bias, explainability and transparency, accountability, privacy and security, inter-organizational coordination, and public engagement. By examining issues around AI systems for crisis resilience management, we dissect the inherent complexities of information management and decision-making in crises and highlight the urgency of responsible AI research and practice. The ideas laid out in this paper are a first attempt at establishing a roadmap for researchers, practitioners, developers, emergency managers, humanitarian organizations, and public officials to address important considerations for responsible AI pertaining to crisis resilience management.
Original language: English
Publisher: Cornell University - arXiv
Number of pages: 18
Publication status: Published - 20 Jul 2022

Publication series

Series: arXiv.org
Number: 2207.09648
ISSN: 2331-8422

Keywords

  • resilience
  • crisis management
  • responsible AI
  • explainable AI
  • trust
  • complexity
