If you have seen cartoons like Tom and Jerry, you will recognize a common theme: an elusive target evades its formidable adversary. This "cat-and-mouse" game, whether literal or otherwise, involves pursuing something that slips further out of reach with every attempt.
Similarly, evading persistent hackers is a continuous challenge for cybersecurity teams. To keep them chasing what is out of reach, MIT researchers are working on an approach called "artificial adversarial intelligence" that mimics attackers of a device or network in order to test network defenses before real attacks occur. Other AI-based defensive measures help engineers further harden their systems against ransomware, data theft, and other hacks.
Here, Una-May O'Reilly, a principal investigator at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) who leads the Anyscale Learning For All (ALFA) group, discusses how artificial adversarial intelligence protects us from cyber threats.
Q: In what ways can artificial adversarial intelligence play the role of a cyberattacker, and in what ways does artificial adversarial intelligence portray a cyberdefender?
A: Cyberattackers exist along a spectrum of competence. At the lowest end, there are so-called script kiddies, threat actors who spray well-known exploits and malware in the hope of finding a network or device that hasn't practiced good cyber hygiene. In the middle are cyber mercenaries, who are better resourced and organized to prey on enterprises with ransomware or extortion. And at the high end are groups, sometimes state-backed, that can launch the hardest-to-detect "advanced persistent threats" (APTs).
Think of the specialized, nefarious intelligence that these attackers marshal: that is adversarial intelligence. Attackers build very technical tools that let them hack into code; they choose the right tool for their target, and their attacks have multiple steps. At each step they learn something, integrate it into their situational awareness, and then make a decision about what to do next. Sophisticated APTs can strategically pick a target and devise a low-and-slow plan so subtle that its execution escapes our defensive shields. They can even plant misleading evidence that points to a different hacker!
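The observe/learn/decide loop described above can be sketched schematically. This is a hypothetical toy, not the lab's actual agent; the function names (`run_campaign`, `choose_action`, `observe`) and the scan/exploit policy are illustrative assumptions.

```python
# Schematic sketch (hypothetical) of a multi-step campaign in which an
# agent carries situational awareness forward: act, observe the result,
# fold it into state, then decide the next step from everything learned.

def run_campaign(steps, choose_action, observe):
    state = {"knowledge": []}
    actions = []
    for _ in range(steps):
        action = choose_action(state)      # decision from current awareness
        result = observe(action)           # what the environment reveals
        state["knowledge"].append(result)  # integrate into situational state
        actions.append(action)
    return actions, state

# Toy policy: escalate only after reconnaissance has revealed an open port.
def choose_action(state):
    return "exploit" if "open_port" in state["knowledge"] else "scan"

def observe(action):
    return "open_port" if action == "scan" else "shell"

print(run_campaign(3, choose_action, observe)[0])  # → ['scan', 'exploit', 'exploit']
```

The point of the sketch is the feedback loop: the same `state` that each observation updates is what drives the next decision, which is what makes a multi-step attack more than a fixed script.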
My research goal is to replicate this specific kind of offensive, or attacking, intelligence: intelligence that is adversarially oriented (the intelligence human threat actors rely on). I use AI and machine learning to design cyber agents and model the adversarial behavior of human attackers. I also model the learning and adaptation that characterize cyber arms races.
I should also note that cyber defenses are quite complicated. They have evolved in complexity in response to escalating attack capabilities. These defense systems involve designing detectors, processing system logs, triggering appropriate alerts, and then triaging them into incident-response systems. They have to be constantly alert to defend a very large attack surface that is hard to track and highly dynamic. On this other side of the attacker-versus-defender competition, my team and I also invent AI in the service of these different defensive fronts.
Another thing stands out about adversarial intelligence: Tom and Jerry both learn from competing with one another! Their skills sharpen, and they lock into an arms race. One improves; then the other, to save its skin, improves as well. This tit-for-tat improvement goes onwards and upwards! We work on replicating cyber versions of these arms races.
Q: What are some examples from our everyday lives where artificial adversarial intelligence has kept us safe? How can we use adversarial intelligence agents to stay ahead of threat actors?
A: Machine learning has been used in many ways to ensure cybersecurity. There are all kinds of detectors that filter out threats. They are tuned to anomalous behavior and to recognizable kinds of malware, for example. There are AI-enabled triage systems. Some of the spam-protection tools on your cell phone are AI-enabled!
With my team, I design AI-enabled cyberattackers that can do what threat actors do. We invent AI to give our cyber agents expert computer skills and programming knowledge, so that they are able to process all sorts of cyber knowledge, plan attack steps, and make informed decisions within a campaign.
Adversarially intelligent agents (such as our AI cyberattackers) can serve as practice opponents when testing network defenses. A lot of effort goes into verifying a network's robustness to attack, and AI can help with that. Additionally, when we add machine learning to our agents, and to our defenses, they play out an arms race that we can inspect, analyze, and use to anticipate what countermeasures we might employ when we take steps to defend ourselves.
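The arms-race dynamic the answer describes can be simulated in miniature. This is a toy co-adaptation loop of my own construction, not the lab's methodology; `arms_race`, the probing rule, and the threshold update are all illustrative assumptions.

```python
# Toy attacker-vs-defender co-adaptation loop: the attacker probes near
# the defender's detection boundary; every time an attack slips through
# undetected, the defender tightens its threshold in response.
import random

def arms_race(rounds=10, seed=0):
    """Return a history of (attack_value, threshold, detected) tuples."""
    rng = random.Random(seed)
    threshold = 0.5  # defender flags any activity above this level
    history = []
    for _ in range(rounds):
        # Attacker samples an intensity near the current boundary.
        attack = rng.uniform(0, threshold + 0.2)
        detected = attack > threshold
        if not detected:
            # A successful (undetected) attack drives the defender
            # to adapt by lowering its tolerance.
            threshold = max(0.05, threshold * 0.9)
        history.append((round(attack, 3), round(threshold, 3), detected))
    return history
```

Even this crude loop shows the value claimed in the interview: by recording the history, we can inspect how each side's behavior shifted in response to the other, rather than only seeing the final state.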
Q: What new risks are they adapting to, and how do they do so?
A: It seems there is no end to new software being released and new system configurations being engineered. With each release, there are vulnerabilities that an attacker can target. These may be examples of weaknesses in code that are already documented, or they may be novel.
New configurations pose the risk of errors or new avenues of attack. We didn't imagine ransomware when we were dealing with denial-of-service attacks. Now we're juggling cyber espionage and ransomware alongside IP (intellectual property) theft. All of our critical infrastructure, including telecommunications networks and financial, health care, municipal, and water systems, is a target.
Fortunately, a great deal of effort is being devoted to defending critical infrastructure. We will need to translate that into AI-based products and services that automate some of those efforts. And, of course, we will need to keep designing smarter and smarter adversarial agents to keep us on our toes, or to help us practice defending our cyber assets.