
Using AI to better understand natural hazards and disasters


As the realities of climate change take hold across the planet, the risks of natural hazards and disasters are becoming ever more familiar. Meteorologists, aiming to protect increasingly populous countries and communities, are tapping into artificial intelligence (AI) to gain an edge in early detection and disaster relief.

AI shows great potential to support data collection and monitoring, the reconstruction and forecasting of extreme events, and effective and accessible communication before and during a disaster.

This potential was in focus at a recent workshop feeding into the first meeting of the new Focus Group on AI for Natural Disaster Management. Supported by the International Telecommunication Union (ITU) together with the World Meteorological Organization (WMO) and UN Environment, the group is open to all interested parties.

“AI can help us tackle disasters in development work as well as standardization work. With this new Focus Group, we will explore AI’s ability to analyze large datasets, refine datasets and accelerate disaster-management interventions,” said Chaesub Lee, Director of the ITU Telecommunication Standardization Bureau, in opening remarks to the workshop.

New solutions for data gaps

“High-quality data are the foundation for understanding natural hazards and their underlying mechanisms, providing ground truth and calibration data, and building reliable AI-based algorithms,” said Monique Kuglitsch, Innovation Manager at Fraunhofer Heinrich-Hertz-Institut and Chair of the new Focus Group.

In Switzerland, the WSL Institute for Snow and Avalanche Research uses seismic sensors in combination with a supervised machine-learning algorithm to detect the tremors that precede avalanches.

“You record lots of signals with seismic monitoring systems,” said WSL researcher Alec Van Hermijnen. “But avalanche signals have distinct characteristics that allow the algorithm to find them automatically. If you do this in continuous data, you end up with very accurate avalanche data.”
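As a rough illustration of that kind of pipeline (not WSL’s actual system), the sketch below classifies windows of continuous seismic data with a supervised model, assuming scikit-learn and SciPy; the spectral features, sampling rate, and parameters are all invented for illustration.

```python
# Illustrative sketch only: a supervised detector for avalanche-like signals
# in continuous seismic data. Features, model, and parameters are invented
# here and are not those of the WSL system.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

FS = 100  # assumed sampling rate, Hz

def spectral_features(window):
    """Summarize one seismic window by simple spectral statistics."""
    freqs, psd = welch(window, fs=FS, nperseg=256)
    band = (freqs >= 1) & (freqs <= 30)        # low-frequency band of interest
    return np.array([
        psd[band].sum(),                       # band energy
        freqs[band][psd[band].argmax()],       # dominant frequency
        psd[band].std(),                       # spectral spread
        np.abs(window).max(),                  # peak amplitude
    ])

def train_detector(windows, labels):
    """windows: (n, samples) labelled snippets; labels: 1 = avalanche, 0 = noise."""
    X = np.array([spectral_features(w) for w in windows])
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)

def scan_continuous(clf, trace, win_s=30, step_s=10):
    """Slide over continuous data and return detection times in seconds."""
    win, step = win_s * FS, step_s * FS
    hits = []
    for start in range(0, len(trace) - win, step):
        feats = spectral_features(trace[start:start + win]).reshape(1, -1)
        if clf.predict(feats)[0] == 1:
            hits.append(start / FS)
    return hits
```

Scanning continuous data this way is what produces the “very accurate avalanche data” Van Hermijnen describes: every flagged window becomes a timestamped detection.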

Real-time data from weather stations throughout the Swiss Alps also feed a new snowpack stratigraphy simulation model used to monitor danger levels and predict avalanches.

Modelling for better predictions

Comparatively rare events, like avalanches, offer limited training data for AI solutions. How models trained on historical data cope with climate change remains to be seen.

At the Pacific Northwest Seismic Network, Global Navigation Satellite System (GNSS) data are monitored in support of tsunami warnings. With traditional seismic systems proving inadequate for very large earthquakes, University of Washington research scientist Brendan Crowell wrote an algorithm, G-FAST (Geodetic First Approximation of Size and Timing), which estimates earthquake magnitudes within seconds of an earthquake’s origin time.
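The idea behind geodetic magnitude estimation can be sketched with a peak-ground-displacement (PGD) scaling law of the form log10(PGD) = A + B·M + C·M·log10(R), inverted for magnitude M at each station. The coefficients below are illustrative values in the style of the published literature, not G-FAST’s operational ones.

```python
# Illustrative sketch of magnitude estimation from GNSS peak ground
# displacement (PGD). Coefficients are placeholders, not G-FAST's values.
import numpy as np

A, B, C = -4.434, 1.047, -0.138  # illustrative scaling-law coefficients

def magnitude_from_pgd(pgd_cm, dist_km):
    """Invert log10(PGD) = A + B*M + C*M*log10(R) per station, then average.

    pgd_cm  : peak three-component displacement per station (cm)
    dist_km : hypocentral distance per station (km)
    """
    m = (np.log10(pgd_cm) - A) / (B + C * np.log10(dist_km))
    return float(np.mean(m))

# Three hypothetical stations: displacement decays with distance
print(magnitude_from_pgd(np.array([12.0, 7.5, 3.1]),
                         np.array([40.0, 90.0, 180.0])))  # ~M 6.7
```

Because displacement, unlike seismic velocity records, does not saturate for great earthquakes, this kind of estimate remains usable at the largest magnitudes.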

In north-eastern Germany, deep learning on seismic waveforms produces probabilistic forecasts and helps to warn residents in affected areas. The Transformer Earthquake Alerting Model supports well-informed decision-making, said Jannes Münchmeyer, a PhD researcher at the GeoForschungsZentrum Potsdam.
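As a toy illustration of the approach (the published model is considerably more sophisticated), the sketch below encodes waveform windows from several stations with a small transformer and outputs a Gaussian over the quantity of interest, which is what makes the forecast probabilistic. All sizes and names are invented.

```python
# Toy sketch (PyTorch) of transformer-based alerting: self-attention over
# per-station waveform windows, with a probabilistic (Gaussian) output head.
# Architecture and dimensions are illustrative, not the published model.
import torch
import torch.nn as nn

class WaveformAlertModel(nn.Module):
    def __init__(self, win_len=400, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(win_len, d_model)   # one token per station window
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)          # mean and log-variance

    def forward(self, windows):                    # (batch, stations, win_len)
        tokens = self.embed(windows)
        encoded = self.encoder(tokens).mean(dim=1) # pool across stations
        mean, log_var = self.head(encoded).unbind(-1)
        return mean, log_var.exp().sqrt()          # Gaussian mean and std

model = WaveformAlertModel()
waveforms = torch.randn(1, 8, 400)                 # 8 stations, 4 s at 100 Hz
mu, sigma = model(waveforms)
```

Training such a head with a Gaussian negative-log-likelihood loss yields an uncertainty estimate, so a warning rule can threshold the probability that shaking exceeds a damaging level rather than a single point prediction.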

Better data practices for a resilient future

How humans react in a disaster is also important to understand. Satellite images of Earth at night – called “night lights” – help to track the interactions between people and river resources. A night-lights dataset for Italy is helping to manage water-related natural disasters, said Serena Ceola, Senior Assistant Professor at the University of Bologna.

Open data initiatives and public-private partnerships are also using AI in the hope of building a resilient future.

The ClimateNet repository promises an extensive database for researchers, while the CLINT (Climate Intelligence) consortium in Europe aims to use machine learning to detect and respond to extreme events.

Some practitioners, however, are not validating their models with independent data, reinforcing perceptions of AI as a “black box”, said Carlos Gaitan, Co-founder and CTO of Benchmark Labs and a member of the American Meteorological Society Committee on AI Applications to Environmental Science. “For example, sometimes you have only annual data for the points of observation, and that makes deep neural networks unfeasible.”

A lack of quality-controlled data is another obstacle in the environmental sciences, which still rely on human input. Datasets come in different formats, and high-performance computers are not available to all, Gaitan added.
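Gaitan’s validation point has a simple practical form: hold out entire observation sites, not random rows, so the test set is genuinely independent of training. A minimal sketch with invented data:

```python
# Sketch of independent validation: keep all rows from held-out stations
# together in the test set. Data and names are invented for illustration.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 600
station = rng.integers(0, 20, n)                 # 20 hypothetical stations
X = rng.normal(size=(n, 5))                      # e.g. reanalysis predictors
y = X @ rng.normal(size=5) + 0.3 * station + rng.normal(size=n)

# Group-aware split: no station appears in both train and test
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=station))

model = GradientBoostingRegressor().fit(X[train_idx], y[train_idx])
print("MAE on unseen stations:",
      mean_absolute_error(y[test_idx], model.predict(X[test_idx])))
```

A random row-wise split would leak station-specific signal into the test set and overstate skill; grouping by site avoids exactly the “black box” over-trust Gaitan warns about.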

AI to power community-centred communications

Communications around disasters require a keen awareness of communities and the connections that hold them together.

“Too often when we are trying to understand the vulnerability and equity implications of our work, we are using data from the census of five or ten years ago,” said Steven Stichter, Director of the Resilient America Program at the US National Academies of Sciences (NAS). “That’s not sufficient as we seek to tailor solutions and messages to communities.”

A people-centred approach is at the core of the Sendai Framework for Disaster Risk Reduction, which provides countries with concrete actions to protect development gains from the risk of disaster.

If AI can identify community influencers, it can help to target appropriate messages to reduce vulnerability, Stichter said.
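As a toy illustration of that idea, standard network-centrality measures can surface well-connected members of a community graph; everything below, including the graph itself, is invented.

```python
# Hypothetical sketch: ranking potential community influencers by
# betweenness centrality in a small, invented contact graph.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("ada", "ben"), ("ada", "cleo"), ("ada", "dev"),
    ("ben", "cleo"), ("dev", "efe"), ("efe", "fay"), ("dev", "fay"),
])

# Betweenness highlights members who bridge otherwise separate groups
scores = nx.betweenness_centrality(G)
influencers = sorted(scores, key=scores.get, reverse=True)[:2]
print(influencers)  # e.g. ['dev', 'ada']
```

Routing messages through such bridging members is one way tailored warnings could reach otherwise poorly connected parts of a community.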

With wider internet access and improved data speeds, information can reach people faster, added Rakiya Babamaaji, Head of Natural Resources Management at Nigeria’s National Space Research and Development Agency and Vice Chair of the Africa Science and Technology Advisory Group on Disaster Risk Reduction (Af-STAG DRR).

AI can combine Earth observation data, street-level imagery, data from connected devices, and volunteered geographical information. However, technology alone cannot solve these problems, Babamaaji added: people need to work together, using technology creatively to tackle them.

With clear guidance on best practices, AI will become steadily more accessible, interoperable, and reusable, said Jürg Luterbacher, Chief Scientist & Director of Science and Innovation at WMO. But any AI-based framework must also consider human and ecological vulnerabilities. “We also have to identify data biases, or train algorithms to interpret data within an ethical framework that considers minority and vulnerable populations,” he added.


Image credit: Camptocamp.org via Wikimedia Commons