Let's face it: while some IT professionals may have a knee-jerk reaction against AI given the current hype, it's only a matter of time before AI is integrated into many daily business processes, including cybersecurity controls. But right now, while the technology is still young, it can be difficult to understand the real implications and challenges of AI automation.
This article debunks several common myths about how AI can improve cybersecurity and offers IT and cybersecurity leaders recommendations on how to make informed decisions about what to automate.
Don't believe the myth that AI will replace all of your employees. Even if that were possible, we as a society are not ready for that leap. Imagine boarding a plane and realizing that no human pilot would ever enter the cockpit before departure. There would undoubtedly be a mutiny on board, and the passengers would demand that a pilot be present for the flight. As effective as autopilot is, it has its limitations, which is why people still want a human in charge.
Indeed, we did not see a purge of human personnel as the Industrial Revolution took hold. While machinery took over elements of manual labor, it did not replace humans themselves. Rather, machines brought greater efficiency, predictability, and consistency to the manufacturing process, and new jobs and even new industries were born that required new skills and greater diversity. Similarly, AI will bring new levels of efficiency, scalability, and accuracy to business processes, as well as create new opportunities and transform the labor market. In other words, you will still need cybersecurity personnel, but AI assistance will empower them.
Another major misconception is that AI automation will inevitably reduce costs. This may sound familiar; the same was said about the cloud not long ago. Organizations that migrated their data centers to the cloud found that while the cloud's OPEX cost structure has advantages over traditional CAPEX spending, the final costs are similar for large environments, in part because more sophisticated systems require more skilled (and expensive!) talent. Likewise, automation will change the distribution of costs, but not the overall costs.
Finally, a fully automated AI-based security solution is sometimes held up as a desirable goal. In reality, it is a pipe dream that raises serious questions of trust and auditability. What happens if that automation malfunctions or is compromised? How do you verify that its results remain aligned with business objectives? The truth is that we are in the early stages of this new automated AI paradigm, and no one really understands how AI automation could one day be exploited from a security perspective. AI and automation are not silver bullets (nothing is).
Certain processes are better suited to automation than others. Here is a three-point assessment that can help you decide whether a security process is a good candidate (a minimal code sketch illustrating the checklist follows the list):
- The process is repetitive and time-consuming when done manually.
- The process is well defined enough to be converted into an algorithm.
- The results of the process are verifiable, so a human can determine when something is wrong.
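To make the checklist concrete, here is a minimal Python sketch of how a team might triage candidate processes. The class, function, and example tasks are hypothetical illustrations, not part of any particular product or framework:

```python
from dataclasses import dataclass

# Hypothetical illustration: the class, criteria, and example processes below
# are not drawn from any specific product or framework.

@dataclass
class SecurityProcess:
    name: str
    is_repetitive: bool    # repetitive and time-consuming when done manually
    is_well_defined: bool  # well defined enough to be expressed as an algorithm
    is_verifiable: bool    # results can be checked by a human

def automation_candidate(p: SecurityProcess) -> bool:
    """A process qualifies only if it meets all three criteria."""
    return p.is_repetitive and p.is_well_defined and p.is_verifiable

# Triage two common security tasks
tasks = [
    SecurityProcess("log review", True, True, True),
    SecurityProcess("breach-response negotiation", False, False, False),
]
for task in tasks:
    verdict = "automate" if automation_candidate(task) else "keep human-driven"
    print(f"{task.name}: {verdict}")
```

The point of the all-three rule is deliberate conservatism: if a process fails even one criterion, a human should stay in the loop.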
You don't want your expensive security talent spending their days reviewing security logs, fixing misconfigurations, or interpreting alerts from prescribed metrics. By equipping them with AI-based security tools, you can expand their visibility, deepen their understanding of different threats, and accelerate their ability to respond to attacks.
More broadly, consider how professional sports teams invest in technology to improve the performance of their athletes. Similarly, you should provide your security teams with the automated tools they need to up their game. For example, insider threats pose a significant risk, but it is virtually impossible to monitor every user in the company, and dishonest employees are often detected only after they have caused at least some damage. AI-based solutions can be far more efficient at reducing this risk: a user and entity behavior analytics (UEBA) solution can detect subtle changes in a user's data access patterns, as well as deviations between their behavior and that of their peers, which indicate a potential risk that requires immediate review.
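As a simplified illustration of the peer-baseline idea behind UEBA, here is a short Python sketch. The users, access counts, and 3-sigma threshold are hypothetical; real products use far richer behavioral models:

```python
import statistics

# Hypothetical UEBA-style sketch: flag a user whose daily data-access volume
# deviates sharply from their peer group. All data and thresholds are made up.

peer_daily_access = {  # files touched per day by users on the same team
    "alice": [42, 38, 45, 40],
    "bob":   [35, 41, 39, 44],
    "carol": [40, 37, 43, 39],
}
user, count_today = "dave", 310  # dave suddenly accessed 310 files today

# Build a simple peer baseline from everyone's history
all_counts = [c for history in peer_daily_access.values() for c in history]
mean = statistics.mean(all_counts)
stdev = statistics.stdev(all_counts)

z_score = (count_today - mean) / stdev
if z_score > 3:  # the classic 3-sigma rule as a crude anomaly threshold
    print(f"ALERT: {user} accessed {count_today} files (z={z_score:.1f}); review now")
```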
Another area where AI can take your team's capabilities to a whole new level is threat hunting. Automated solutions can more accurately identify traces of attacks that were thwarted by your protection mechanisms and correlate them with your threat intelligence. Those traces may be early signs of a larger attack, giving you time to prepare for it.
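One basic form of this correlation is sweeping already-blocked events for overlap with a threat-intelligence feed, as in the Python sketch below. The indicators and log entries are purely illustrative (the IPs are from documentation ranges):

```python
# Hypothetical threat-hunting sketch: sweep blocked-connection logs for overlap
# with a threat-intelligence feed. Indicators and events are illustrative only.

threat_intel_iocs = {  # known-bad destinations from an intel feed
    "203.0.113.7",
    "evil-updates.example.net",
    "198.51.100.23",
}

blocked_events = [  # connections your defenses already thwarted
    {"src": "10.0.4.12", "dest": "203.0.113.7", "action": "blocked"},
    {"src": "10.0.4.31", "dest": "cdn.example.com", "action": "blocked"},
    {"src": "10.0.7.9", "dest": "evil-updates.example.net", "action": "blocked"},
]

# Even blocked traffic matters: repeated hits on known-bad infrastructure can
# signal reconnaissance ahead of a larger attack.
for event in blocked_events:
    if event["dest"] in threat_intel_iocs:
        print(f"Threat-intel match: {event['src']} -> {event['dest']}; investigate host")
```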
ChatGPT, Bard, and thousands of other remarkable new apps give executives the opportunity to experience AI in action. By working with your security teams, you can explore potential applications of the technology. But instead of moving ahead blindly, it's vital to carefully evaluate which processes make sense to automate. This due diligence will help IT leaders ensure that the risks of a proposed new technology do not exceed its benefits.
Ilya Sotnikov is a security strategist and vice president of user experience at Netwrix. He is responsible for technical enablement, UX design, and product vision and strategy.