Imagine you are in a plane with two pilots, a human and a computer. Both have their “hands” on the controls, but they are each watching for different things. If both are paying attention to the same thing, the human flies the plane. But if the human gets distracted or misses something, the computer quickly takes over.
Meet Air-Guardian, a system developed by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). As modern pilots contend with an avalanche of information from multiple monitors, especially during critical moments, Air-Guardian acts as a proactive co-pilot: a partnership between human and machine, rooted in an understanding of attention.
But how exactly does the system determine attention? For humans, it uses eye tracking; for the neural system, it relies on “saliency maps,” which pinpoint where attention is directed. These maps serve as visual guides that highlight key regions within an image, helping to understand and decipher the behavior of complex algorithms. Air-Guardian identifies early signs of potential risk through these attention markers, rather than intervening only during safety breaches, as traditional autopilot systems do.
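The idea of comparing the two attention signals can be illustrated with a toy sketch. The code below is not Air-Guardian's actual metric or decision rule; it is a hypothetical stand-in that scores the overlap between a human gaze heatmap and a model saliency map (both assumed to be non-negative arrays of the same shape) and hands control to the guardian when the two diverge.

```python
import numpy as np

def attention_agreement(human_gaze: np.ndarray, saliency: np.ndarray) -> float:
    """Cosine similarity between a human gaze heatmap and a model
    saliency map. An illustrative overlap score, not the paper's metric."""
    h = human_gaze.flatten()
    s = saliency.flatten()
    denom = np.linalg.norm(h) * np.linalg.norm(s)
    return float(h @ s / denom) if denom > 0 else 0.0

def choose_controller(human_gaze: np.ndarray,
                      saliency: np.ndarray,
                      threshold: float = 0.7) -> str:
    """Hypothetical decision rule: if the two attention maps agree,
    the human keeps flying; otherwise the guardian intervenes."""
    if attention_agreement(human_gaze, saliency) >= threshold:
        return "human"
    return "guardian"
```

For example, if both maps concentrate on the same image region the score is near 1 and the human stays in control; if they highlight disjoint regions the score drops toward 0 and the guardian steps in. The threshold value is arbitrary here.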
The broader implications of this system go beyond aviation. Similar cooperative control mechanisms could someday be used in cars, drones, and a broader spectrum of robotics.
“An interesting feature of our method is its differentiability,” says MIT CSAIL postdoc Lianhao Yin, lead author of a new paper on Air-Guardian. “Our cooperative layer and the entire end-to-end process can be trained. We specifically chose the causal continuous-depth neural network model because of its dynamic features in mapping attention. Another unique aspect is adaptability: the Air-Guardian system isn’t rigid; it can be adjusted based on the demands of the situation, ensuring a balanced partnership between human and machine.”
In field tests, both the pilot and the system made decisions based on the same raw images when navigating toward the target waypoint. Air-Guardian’s success was measured by the cumulative rewards earned during flight and by the shorter path to the waypoint. The guardian reduced the risk level of flights and increased the success rate of navigating to target points.
“This system represents an innovative approach to human-centric AI-based aviation,” adds Ramin Hasani, MIT CSAIL research affiliate and inventor of liquid neural networks. “Our use of liquid neural networks provides a dynamic, adaptive approach, ensuring that AI does not simply replace human judgment but complements it, leading to greater safety and collaboration in the skies.”
Air-Guardian’s true strength lies in its core technology. Using an optimization-based cooperative layer that integrates human and machine visual attention, together with closed-form continuous-time (CfC) liquid neural networks, known for their prowess in deciphering cause-and-effect relationships, the system analyzes incoming images for vital information. Complementing this is the VisualBackProp algorithm, which identifies the system’s focal points within an image, ensuring a clear understanding of its attention maps.
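One way to picture what a cooperative control layer does is as an attention-weighted blend of the two agents' commands. The sketch below is a deliberately simplified, hypothetical stand-in for Air-Guardian's optimization-based layer: it weighs the human's and the machine's control inputs by how much their attention maps agree, deferring to the human when attention aligns and shifting authority to the guardian when it does not.

```python
import numpy as np

def blend_controls(human_cmd, machine_cmd, human_att, machine_att):
    """Toy attention-weighted control fusion (not the paper's method).
    human_att / machine_att: attention heatmaps with values in [0, 1].
    Returns a convex combination of the two control vectors."""
    # Joint attention: regions both agents are focused on.
    joint = human_att * machine_att
    # Agreement weight in [0, 1]: fraction of the machine's attention
    # mass that the human's gaze also covers.
    w = joint.sum() / (machine_att.sum() + 1e-8)
    w = float(np.clip(w, 0.0, 1.0))
    # High agreement -> defer to the human; low agreement -> guardian.
    return w * np.asarray(human_cmd) + (1.0 - w) * np.asarray(machine_cmd)
```

In the real system this trade-off is solved as a differentiable optimization rather than a fixed formula, which is what allows the cooperative layer to be trained end-to-end, as Yin notes above.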
For future mass adoption, the human-machine interface still needs refinement. Early feedback suggested that an indicator, such as a bar, might be more intuitive for signaling when the guardian system takes control.
Air-Guardian heralds a new era of safer skies, offering a reliable safety net for those moments when human attention falters.
“The Air-Guardian system highlights the synergy between human expertise and machine learning, furthering the goal of using machine learning to assist pilots in challenging scenarios and reduce operational errors,” says Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, director of CSAIL, and senior author of the paper.
“One of the most interesting outcomes of using a visual attention metric in this work is the potential for allowing earlier interventions and greater interpretability by human pilots,” says Stephanie Gil, assistant professor of computer science at Harvard University, who was not involved in the work. “This is a great example of how AI can be used to work with a human, lowering the barrier to achieving trust by using natural communication mechanisms between the human and the AI system.”
This research was partially funded by the US Air Force (USAF) Research Laboratory, the USAF Artificial Intelligence Accelerator, Boeing Co., and the Office of Naval Research. The findings do not necessarily reflect the views of the US government or the USAF.