IT Pros Intrigued by AI Tools for Offensive Security
IT pros are looking hard at the potential for artificial intelligence-driven tools to transform their approach to offensive security. That’s one takeaway from a poll of attendees at an ActualTech Media webinar this month.
In general, offensive security refers to proactive, adversarial security measures that identify, exploit, and fix vulnerabilities before malicious attackers can find and exploit them. To be clear, the “offensive” in offensive security means going on offense against your own networks and infrastructure, not “hacking back” against perceived attackers, which would be illegal in many cases and, given the challenges of attribution, would often be misdirected at fellow victims rather than the actual adversaries.
Offensive security consists of elements like penetration testing, red teaming, vulnerability management, exploit development, and social engineering. IT pros seem excited by the possibilities of turning over some of those processes to AI – 44 percent of poll respondents, a plurality, said they were exploring AI solutions, although they haven’t implemented them yet.
Reasons for the interest are clear across several offensive security categories. AI-driven penetration tests could, in theory, be conducted more frequently, more quickly, and less expensively than those run by human red teams. The same goes for exploit development. Generative AI is already improving bad actors’ social engineering attacks; the case for using it to sharpen internal training tools is obvious.
IT pros are still in the kicking-the-tires stage rather than the adoption stage, though. Only 7 percent of respondents described themselves as actively integrating AI-driven tools into their security processes. Meanwhile, skepticism is real. Eleven percent of respondents described themselves as skeptical about the effectiveness of AI in offensive security. Another sizable group, 24 percent, is continuing to rely on traditional methods with no immediate plans for AI adoption, which could be counted as another faction of the skeptic camp. The remaining 13 percent were unaware of AI applications in offensive security.
Sources of skepticism, or at least of a slow-walk approach, abound. One of the biggest is that the human constraints on pen testing and red teaming are well established, while equivalent guardrails for AI agents are not. A runaway AI agent could knock down networks or shut down the business by accident if the tool lacks robust controls against dangerous testing methods. Additionally, anything involving AI carries the potential for black-box results, in which IT can’t pinpoint why the model is flagging a particular vulnerability. Hallucinations and false positives are other AI-related risks. Then, of course, there are concerns that data about uncovered vulnerabilities or the configuration of the network could be accidentally exposed to attackers through inadequate security in the AI model itself.
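To make the guardrail concern concrete, here is a minimal, hypothetical sketch of the kind of scoping control an AI-driven testing tool would need before it touches anything: every proposed action is checked against an approved target range and a denylist of disruptive techniques, and anything outside that envelope is refused rather than attempted. The subnet, technique names, and function here are illustrative assumptions, not any vendor’s actual implementation.

```python
import ipaddress

# Hypothetical guardrail for an AI-driven testing agent (illustrative only).
# Targets outside the approved scope, or techniques on the denylist,
# are refused before any test traffic is generated.

APPROVED_SCOPE = [ipaddress.ip_network("10.20.0.0/16")]   # example in-scope lab subnet
DENIED_TECHNIQUES = {"dos", "data_destruction", "account_lockout"}

def action_allowed(target_ip: str, technique: str) -> bool:
    """Return True only if the target is in scope and the technique is permitted."""
    addr = ipaddress.ip_address(target_ip)
    in_scope = any(addr in net for net in APPROVED_SCOPE)
    safe_technique = technique.lower() not in DENIED_TECHNIQUES
    return in_scope and safe_technique

# Checks the agent would run before executing a proposed step:
print(action_allowed("10.20.5.17", "port_scan"))     # True: in scope, permitted
print(action_allowed("10.20.5.17", "dos"))           # False: disruptive technique blocked
print(action_allowed("198.51.100.4", "port_scan"))   # False: out-of-scope target
```

A human red team internalizes rules like these from the engagement contract; an autonomous agent only honors them if someone encodes and enforces them, which is exactly the gap skeptics point to.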
To the extent that providers of AI-driven offensive security tools can overcome those objections and prove out the benefits, our poll finds that a lot of IT decision-makers are seriously considering handing over some of their offensive security work to AI.