<img src="https://news.mit.edu/sites/default/files/styles/news_article__cover_image__original/public/images/202411/MIT-ai-Flood-01-press.jpg?itok=nrm-0zHL" />
Visualizing the possible impacts of a hurricane on people's homes before it arrives can help residents prepare and decide if they should evacuate.
MIT scientists have developed a method that generates satellite images of the future to represent what a region would look like after a possible flood. The method combines a generative AI model with a physics-based flood model to create realistic bird's-eye images of a region, showing where flooding is likely to occur given the strength of an approaching storm.
As a test case, the team applied the method to Houston and generated satellite images showing what certain locations around the city would look like after a storm comparable to Hurricane Harvey, which hit the region in 2017. The team compared these generated images with real satellite images taken of the same regions after Harvey hit. They also compared them with AI-generated images produced without a physics-based flood model.
The team's physics-enhanced method generated satellite images of future flooding that were more realistic and accurate. The AI-only method, in contrast, generated images of flooding in places where flooding is not physically possible.
The team's approach is a proof of concept, intended to demonstrate a case where generative AI models can produce realistic and reliable content when combined with a physics-based model. Before the method can be applied to other regions to depict flooding from future storms, the model will need to be trained on many more satellite images to learn what flooding would look like elsewhere.
“The idea is: One day we could use this before a hurricane, where it would provide an additional layer of visualization for the public,” says Björn Lütjens, a postdoc in MIT's Department of Earth, Atmospheric, and Planetary Sciences, who led the research while he was a doctoral student in MIT's Department of Aeronautics and Astronautics (AeroAstro). “One of the biggest challenges is encouraging people to evacuate when they are at risk. Maybe this could be another visualization that helps increase that readiness.”
To illustrate the potential of the new method, which they have dubbed the “Earth Intelligence Engine,” the team has made it available as an online resource for others to try.
The researchers report their results today in the journal IEEE Transactions on Geoscience and Remote Sensing. Co-authors of the MIT study include Brandon Leshchinskiy; Aruna Sankaranarayanan; and Dava Newman, AeroAstro professor and director of the MIT Media Lab; along with collaborators from multiple institutions.
Generative Adversarial Images
The new study is an extension of the team's efforts to apply generative artificial intelligence tools to visualize future climate scenarios.
“Providing a hyperlocal perspective on climate appears to be the most effective way to communicate our scientific results,” says Newman, the study's senior author. “People relate to their own zip code, their local environment where their family and friends live. Providing local climate simulations becomes intuitive, personal, and relatable.”
For this study, the authors use a conditional generative adversarial network, or GAN, a type of machine learning method that can generate realistic images using two competing, or “adversarial,” neural networks. The first, “generator” network is trained on pairs of real data, such as satellite images of a region before and after a hurricane. The second, “discriminator” network is then trained to distinguish between real satellite images and those synthesized by the first network.
Each network automatically improves its performance based on feedback from the other network. The idea, then, is that this adversarial tug-of-war should ultimately produce synthetic images that are indistinguishable from reality. However, GANs can still produce “hallucinations” or objectively incorrect features in a realistic image that shouldn't be there.
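The adversarial setup described above can be sketched with two toy functions. Everything here is illustrative, not the paper's architecture: the image size, the generator's darkening heuristic, and the discriminator's scoring rule are all stand-ins for trained neural networks.

```python
import random

IMG_SIZE = 16  # a flattened 4x4 toy "satellite image"

def generator(pre_storm_image, noise):
    """Toy generator: map a pre-storm image plus random noise to a
    synthetic post-storm image (here, just darken pixels noisily)."""
    return [max(0.0, p - 0.3 * n) for p, n in zip(pre_storm_image, noise)]

def discriminator(pre_storm_image, post_storm_image):
    """Toy discriminator: score how 'real' a (before, after) pair looks,
    in [0, 1]. A real discriminator is trained; this one just checks
    that each pixel darkened by a plausible amount."""
    diffs = [pre - post for pre, post in zip(pre_storm_image, post_storm_image)]
    plausible = sum(1 for d in diffs if 0.0 <= d <= 0.5)
    return plausible / len(diffs)

# One adversarial "round": the generator synthesizes a candidate
# after-image, and the discriminator judges the resulting pair.
pre = [random.uniform(0.4, 1.0) for _ in range(IMG_SIZE)]
noise = [random.random() for _ in range(IMG_SIZE)]
fake_post = generator(pre, noise)
score = discriminator(pre, fake_post)
```

In training, each network's parameters would be updated against the other's feedback; the sketch above only shows the roles the two networks play.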
“Hallucinations can fool viewers,” says Lütjens, who began to wonder whether such hallucinations could be prevented, so that generative AI tools could be trusted to help inform people, particularly in risk-sensitive scenarios. “We were thinking: How can we use these generative AI models in a climate impact setting, where having reliable data sources is so important?”
Flood hallucinations
In their new work, the researchers considered a risk-sensitive scenario in which generative AI is tasked with creating satellite images of future floods that could be reliable enough to inform decisions about how to prepare and potentially evacuate people out of danger.
Policymakers can usually get an idea of where flooding might occur based on visualizations in the form of color-coded maps. These maps are the end product of a series of physical models that typically begin with a hurricane track model, which then feeds into a wind model that simulates the pattern and strength of winds over a local region. This is combined with a flood or storm surge model that forecasts how the wind might push any nearby body of water onto land. A hydraulic model then maps where flooding will occur based on local flood infrastructure and generates a color-coded visual map of flood elevations in a particular region.
“The question is: Can satellite image visualizations add another level to this, something that is a little more tangible and emotionally engaging than a color-coded map of reds, yellows, and blues, while at the same time being reliable?” says Lütjens.
The team first tested how generative AI alone would produce satellite images of future floods. They trained a GAN on real satellite images taken over Houston before and after Hurricane Harvey. When they tasked the generator with producing new flood images of the same regions, they found that the images resembled typical satellite imagery, but a closer look revealed hallucinations in some of them, in the form of flooding where it should not be possible (for example, in places at higher elevation).
To reduce hallucinations and increase the reliability of the AI-generated images, the team paired the GAN with a physics-based flood model that incorporates real physical parameters and phenomena, such as the path of an approaching hurricane, storm surge, and flood patterns. Using this physics-enhanced method, the team generated satellite images around Houston that depict the same flood extent, pixel by pixel, as predicted by the flood model.
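One simple way to reason about this constraint is to treat the physics model's output as a binary flood-extent mask and compare the generated imagery against it. The sketch below is an illustration of that idea only: the actual method conditions the generator on the flood model's output, whereas these toy functions merely detect and remove water pixels outside the physically predicted extent.

```python
WATER, LAND = 1, 0

def hallucinated_pixels(generated_flood, physics_mask):
    """Return (row, col) positions the generator painted as water
    where the physics model says flooding cannot occur."""
    return [
        (i, j)
        for i, row in enumerate(generated_flood)
        for j, g in enumerate(row)
        if g == WATER and physics_mask[i][j] == LAND
    ]

def constrain_to_mask(generated_flood, physics_mask):
    """Keep generated water only inside the physics model's
    predicted flood extent; everything outside it stays dry."""
    return [
        [g if m == WATER else LAND for g, m in zip(g_row, m_row)]
        for g_row, m_row in zip(generated_flood, physics_mask)
    ]

# A 3x3 example: the raw GAN output floods one physically dry cell.
mask = [[1, 1, 0],
        [1, 0, 0],
        [0, 0, 0]]
raw  = [[1, 1, 0],
        [1, 0, 1],   # water at (1, 2) is a hallucination
        [0, 0, 0]]
clean = constrain_to_mask(raw, mask)
```

After constraining, the generated flood extent contains no water anywhere the physics model rules it out, which is the pixel-level agreement the article describes.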
“We show a tangible way to combine machine learning with physics for a use case that is risk-sensitive, requiring us to analyze the complexity of Earth's systems and project future actions and possible scenarios to keep people out of danger,” says Newman. “We are eager to get our generative AI tools into the hands of decision-makers at the local community level, which could make a significant difference and perhaps save lives.”
The research was supported, in part, by the MIT Portugal Program, the DAF-MIT Artificial Intelligence Accelerator, NASA, and Google Cloud.