July 20, 2024

Pulse Bliss

Leveraging AI in emergency management and crisis response

Organizational, not technical, challenges have kept AI from scaling in EP&R

Combining the computational power of AI with the intuition and judgment of humans can make EP&R organizations both more efficient and more effective. However, effective human-machine teaming can be challenging. Based on EP&R’s past experience with AI and automation, some common pitfalls include the following.

Balancing data integration with security

Imagine orchestrating an ensemble in which each instrument plays in its own key: the resulting dissonance echoes the complexity of data integration in EP&R. Integrating data from diverse sources, formats, and standards across various agencies and systems can be as intricate as composing harmonious music. To harness its full potential, AI needs high-quality, consistent data that flows seamlessly. However, the pandemic exposed the challenges of integrating patient data at the scale of a national health care emergency. A patchwork US public health system with disparate, fragmented data sources can make integration a major challenge. In fact, more than a third of local health agencies were unable to access surveillance data from local emergency departments during the pandemic.14 The lack of integrated data led to delays in tracking and responding to the virus's spread, underlining the importance of efficient data integration in times of crisis.15
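
To make the integration challenge concrete, the sketch below shows two hypothetical agency feeds that report the same kind of surveillance event under different field names and date formats, normalized into one shared schema. All field names, formats, and values here are illustrative assumptions, not any real agency's data standard.

```python
# Illustrative sketch of the integration problem: two agencies report the
# same surveillance events under different field names and date formats.
# All field names and values are hypothetical, not a real standard.
from datetime import datetime

# Agency A uses ISO dates and a "cases" field; Agency B uses US-style
# dates, an uppercase county key, and a "case_count" field.
agency_a = [{"report_date": "2020-04-01", "county": "Adams", "cases": 12}]
agency_b = [{"date": "04/02/2020", "COUNTY": "Baker", "case_count": 7}]

def normalize_a(rec):
    return {
        "date": datetime.strptime(rec["report_date"], "%Y-%m-%d").date(),
        "county": rec["county"],
        "cases": rec["cases"],
    }

def normalize_b(rec):
    return {
        "date": datetime.strptime(rec["date"], "%m/%d/%Y").date(),
        "county": rec["COUNTY"].title(),
        "cases": rec["case_count"],
    }

# One shared schema lets downstream AI tools consume a consistent feed.
unified = [normalize_a(r) for r in agency_a] + [normalize_b(r) for r in agency_b]
print(unified)
```

Multiply this by dozens of agencies and formats, and the need for agreed-upon data standards before a crisis, rather than ad hoc translation during one, becomes clear.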

Data is more than mere information; it can be sensitive and critical. During emergencies, handling data while protecting privacy and security becomes paramount. The large stores of health data needed to fuel AI tools can make an attractive target for cybercriminals. For example, the WannaCry ransomware attack of 2017 compromised patient data at numerous hospitals worldwide.16

The solution to data interoperability is not necessarily storing all data in a centralized place. New technical advances can help realize the benefits of AI without increasing privacy risk. For example, homomorphic encryption allows computations to be performed on encrypted data, so models can draw on the features needed for analysis without ever exposing the underlying records. Federated learning sidesteps the need to share sensitive data altogether: rather than moving data between organizations, it moves AI models instead, allowing advanced models to be trained on several different data sets without organizations losing control of their data. These new technologies should be paired with new governance processes to help make the resulting insights widely available while protecting the privacy of the underlying data.
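
As a rough illustration of the federated learning idea, the sketch below trains a simple linear model across three simulated "agencies" by averaging locally trained weights, so raw records never leave their owners. The data, model, and training loop are illustrative assumptions, not any particular framework's API.

```python
# Minimal sketch of federated averaging: each agency trains locally on
# its own data and shares only model weights, never the records.
# All data and parameters here are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, epochs=20):
    """One agency's local update: gradient descent for linear regression."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three agencies hold disjoint, private data sets for the same task.
true_w = np.array([2.0, -1.0])
agencies = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    agencies.append((X, y))

# Federated rounds: broadcast global weights, collect local updates,
# and average them -- raw data never moves between organizations.
global_w = np.zeros(2)
for _ in range(10):
    updates = [local_train(global_w, X, y) for X, y in agencies]
    global_w = np.mean(updates, axis=0)

print(global_w)  # approaches true_w without pooling any raw data
```

Production systems layer secure aggregation and access controls on top of this basic loop, but the core privacy property is visible even in the sketch: only weights cross organizational boundaries.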

Overcoming resource constraints

While many government organizations experiment with AI tools as proofs of concept, they often struggle to adopt successful pilots on a larger scale, in part due to constraints in the size and timing of resources.

Adopting an enterprise-level AI tool is not just a procurement exercise. It requires continued funding to keep developing the tool, protect it against model drift, and ensure its sustained accuracy and utility. Similarly, resources aren't needed for technology alone but also for the workforce. New tools can create the need for new skills and even entirely new roles, such as data scientist or algorithm auditor. Organizations that adopt AI-specific roles are 60% more likely to achieve their project goals than those that do not.17 Reskilling the workforce for AI doesn't necessarily mean adding many data scientists or creating scores of new positions. Often, it means giving existing staff the skills they need to work effectively with AI: for instance, giving IT staff the skills to handle new forms of data traffic, frontline workers the skills to use AI tools to augment their work, and legal staff an understanding of how AI works to help ensure regulatory compliance.18

Navigating ethical and legal challenges

AI’s use in emergencies can have far-reaching consequences, making it very important to implement AI with equity, transparency, and accountability. The distribution of COVID-19 vaccines offers just one example that highlights these concerns, sparking debates globally on fairness and equity in deciding who should receive vaccines first.19 When used properly, AI can actually improve equity by identifying hidden patterns of bias or the roots of unequal outcomes.20

However, AI governance is needed at every step of the model lifecycle to help achieve equitable outcomes. From labeling training data with appropriate uses to periodically reevaluating model outputs to check for model drift, careful AI governance is a central tool for promoting the ethical use of AI.
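
One concrete governance step, periodically reevaluating model outputs for drift, can be sketched as a simple distribution comparison. The population stability index (PSI) below is a common drift heuristic; the 0.2 alert threshold and all score data are illustrative assumptions, not a regulatory standard.

```python
# Illustrative drift check: compare a model's recent score distribution
# against a baseline captured at deployment. The 0.2 PSI threshold is a
# common rule of thumb, not a standard; all data here are synthetic.
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between two score samples; larger values indicate more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    current = np.clip(current, edges[0], edges[-1])  # keep scores in range
    b, _ = np.histogram(baseline, bins=edges)
    c, _ = np.histogram(current, bins=edges)
    eps = 1e-6  # guard against empty bins
    b = b / b.sum() + eps
    c = c / c.sum() + eps
    return float(np.sum((c - b) * np.log(c / b)))

rng = np.random.default_rng(1)
baseline_scores = rng.normal(0.4, 0.1, size=1000)  # scores at deployment
stable_scores = rng.normal(0.4, 0.1, size=1000)    # same population later
shifted_scores = rng.normal(0.6, 0.1, size=1000)   # population has moved

psi_stable = population_stability_index(baseline_scores, stable_scores)
psi_shifted = population_stability_index(baseline_scores, shifted_scores)
print(psi_stable, psi_shifted)  # review is triggered when PSI exceeds 0.2
```

A check like this, run on a schedule and logged, turns the abstract mandate to "watch for model drift" into an auditable governance artifact.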

Good governance can help organizations use AI ethically, but broader legal challenges exist. With disasters that can span county, state, and even national boundaries, EP&R organizations may have to comply with a diverse set of potentially contradictory AI regulations. To achieve not just ethical use but also legal compliance, EP&R organizations need a deep understanding of how their AI tools work, what outcomes they produce, and how they are used in day-to-day operations.

Tailoring AI for varied disasters

Disasters come in many forms—public health, natural, and human-made—each with a unique cadence. For example, the responses to the 2014 Ebola outbreak in West Africa differed from those to the 2011 Fukushima nuclear disaster, emphasizing the importance of tailoring strategies to the unique nature of each emergency. Today’s AI tools feature a trade-off between breadth and accuracy. Large language models are powerful general-purpose tools that can deal with very different scenarios but may struggle with the accuracy of certain details. On the other hand, more bespoke AI models can be extremely precise in solving specific problems they were trained on, but their scope is limited to only that problem.

As a result, agencies should balance the need to be prepared for any scenario against the need for accurate, trustworthy conclusions, and do so while the technology itself is constantly evolving.