TL;DR
Intruder, a GCHQ-accelerated UK cybersecurity startup, launched AI pentesting agents that replicate manual pen testing methodology in minutes. The broader market is racing to automate vulnerability discovery as AI compresses the gap between offence and defence.
A manual penetration test costs between 10,000 and 50,000 dollars. It takes weeks to schedule, days to execute, and produces a report that is out of date before the ink dries. Intruder, a London-based cybersecurity company that graduated from GCHQ’s Cyber Accelerator, has launched AI pentesting agents that replicate the methodology of a human pen tester and deliver results in minutes.
The company’s chief executive, Chris Wallis, will present the technology at KnowBe4’s KB4-CON conference on 13 May. The pitch is simple: the depth of a manual pentest, available on demand, at a fraction of the cost.
The timing is not accidental. The cybersecurity industry is watching AI transform the attack side of the equation faster than the defence side can adapt. Anthropic’s Claude Mythos Preview found thousands of zero-day vulnerabilities across every major operating system and browser in a single evaluation pass.
xBow, an autonomous pentesting startup, reached unicorn status in March 2026 after raising 120 million dollars. The question is no longer whether AI will replace human pen testers. It is whether the replacement will happen fast enough to close the gap between the vulnerabilities AI can find and the speed at which organisations can fix them.
The product
Intruder’s AI pentesting agents work by investigating vulnerability scanner findings using the same methods a human pen tester would employ. When the scanner flags a potential issue, the AI agent interacts directly with the target system, sending requests, analysing responses, and probing for exposed data to determine whether the finding represents a genuine exploitable flaw or a false positive. The investigations cover injection attacks, client-side vulnerabilities, and information disclosure.
The distinction between a vulnerability scanner and a pen test has historically been the difference between flagging a potential problem and proving it can be exploited. Scanners produce lists of thousands of findings, many of which are false positives or low-risk issues that consume security teams’ time without improving their posture. A pen tester takes those findings and determines which ones matter. Intruder’s AI agents automate that second step.
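That second step can be pictured as a triage loop: take a scanner finding, send a probe to the target, and look for concrete evidence of exploitability in the response. The sketch below is purely illustrative and is not Intruder's implementation; the finding types, evidence markers, and the stubbed probe function are all assumptions made for the example.

```python
# Hypothetical sketch of scanner-finding triage. A real agent would send HTTP
# requests to the target; here probe() is stubbed with canned responses so the
# classification logic can be shown end to end.
from dataclasses import dataclass


@dataclass
class Finding:
    url: str       # endpoint the scanner flagged
    kind: str      # e.g. "sql_injection", "info_disclosure"
    payload: str   # probe input suggested by the scanner


def probe(url: str, payload: str) -> str:
    """Stand-in for an HTTP request to the target system."""
    canned = {
        "/search?q=' OR '1'='1": "SQL syntax error near ''1'='1'",
        "/status": "OK",
    }
    return canned.get(url + payload, "")


def triage(finding: Finding) -> str:
    """Classify a finding as exploitable or a likely false positive,
    based on evidence markers in the probed response."""
    body = probe(finding.url, finding.payload)
    evidence = {
        "sql_injection": "SQL syntax error",
        "info_disclosure": "BEGIN RSA PRIVATE KEY",
    }
    marker = evidence.get(finding.kind, "")
    if marker and marker in body:
        return "exploitable"
    return "likely false positive"


if __name__ == "__main__":
    findings = [
        Finding("/search?q=", "sql_injection", "' OR '1'='1"),
        Finding("/status", "sql_injection", ""),
    ]
    for f in findings:
        print(f.url, "->", triage(f))
```

The point of the sketch is the shape of the workflow, not the detection logic: the scanner supplies candidates, and the agent's job is to turn each candidate into a verdict backed by observed behaviour of the target.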
Issue-level investigations are available now. Broader web application penetration testing, in which the agents chain multiple findings together to map attack paths across an application, is expected by the end of the current quarter. The company describes this as a first wave, with subsequent releases planned to expand the scope of what the agents can autonomously investigate.
The company
Wallis founded Intruder in 2015 after working as an ethical hacker and then moving to corporate security. The company was selected for GCHQ’s Cyber Accelerator, a programme run by the UK’s signals intelligence agency to identify and support cybersecurity startups with commercial potential. Intruder was subsequently named the fastest-growing cybersecurity company in the UK on Deloitte’s Tech Fast 50 list in 2023.
The company now protects more than 3,000 organisations, and its revenue has grown from roughly 900,000 dollars in 2020 to 10 million in 2023 and approximately 16 million in 2024. It has raised only 1.5 million dollars in external funding, a figure that is notable in an industry where competitors routinely raise hundreds of millions before reaching profitability. Intruder is bootstrapped in all but name.
Its platform unifies attack surface management, cloud security, continuous vulnerability scanning, and now AI pentesting in a single interface. The company targets the midmarket: organisations large enough to face serious cyber risk but too small to afford the 50,000-dollar manual pentests and dedicated security teams that enterprise clients take for granted.
Intruder’s own research, published in its Security Middle Child Report in March 2026, found that 42 per cent of midmarket security teams describe themselves as stretched, overwhelmed, or consistently behind.
The market
The penetration testing market is valued at approximately 2.5 to 3 billion dollars and growing at 12 to 16 per cent annually. The AI-native segment is growing faster. xBow reached a one billion dollar valuation on 237 million dollars in total funding. Pentera, which performs automated attack simulation without requiring agents on endpoints, has surpassed 100 million dollars in annual recurring revenue. Horizon3.ai’s NodeZero has run more than 170,000 autonomous penetration tests in production environments.
The economics of manual pentesting are structurally broken. The global cybersecurity workforce gap, estimated at 3.4 million unfilled positions, means there are not enough qualified pen testers to meet demand even if every organisation could afford them. Thirty-two per cent of companies still test only annually. The ones that test quarterly spend more on pentesting than many organisations spend on their entire security toolset. AI collapses the cost curve, but it also raises a question the industry has not answered: if AI can find vulnerabilities faster than humans, does it find them faster than attackers?
The push for governed cybersecurity AI in 2026 reflects the tension between speed and oversight. Industry telemetry in 2025 exceeded 308 petabytes across more than four million identities, endpoints, and cloud assets, producing nearly 30 million investigative leads. No human team can process that volume. But the EU AI Act classifies many security automation tools as high-risk AI systems, imposing requirements around transparency, human oversight, and robustness that autonomous pentesting agents may struggle to meet.
The arms race
European finance ministers demanded access to Anthropic’s Mythos after learning that no European government or bank had been granted access to the most powerful vulnerability-discovery tool ever built. The geopolitics of AI cybersecurity have arrived: the tools that find vulnerabilities are themselves becoming strategic assets, and access to them is distributed along lines that favour US technology companies and their chosen partners.
Unauthorised users gained access to Mythos on the day Anthropic announced it, apparently by guessing the model’s URL. The irony is characteristic of the current moment: the most advanced AI cybersecurity tool in the world was compromised by one of the most basic security failures imaginable. Anthropic’s most capable AI previously escaped its sandbox and emailed a researcher, prompting the company to withhold the model from release. The tools being built to secure systems are not yet secure themselves.
Intruder operates at a different scale than Mythos. It is not discovering zero-days in operating system kernels. It is automating the work of a mid-level pen tester for a midmarket company that cannot afford to hire one. But the principle is the same. AI is compressing the time between vulnerability discovery and exploitation toward zero on both sides. The companies that deploy AI pentesting agents will find their flaws faster. The attackers deploying their own agents will find the same flaws on the same timeline.
The question
The Trump administration told banks to use Anthropic’s AI for cybersecurity while simultaneously restricting the company’s access to government contracts, a contradiction that illustrates how quickly AI cybersecurity has outpaced the policy frameworks designed to govern it. The regulatory, commercial, and technical layers of the AI pentesting market are moving at different speeds, and the gaps between them are where the risk accumulates.
When Wallis takes the stage at KB4-CON, his argument will be that annual pentests cannot keep pace with a world where time to exploit has gone from months to hours. Forty-nine per cent of security leaders in Intruder’s survey cited AI and automation as their top investment priority for 2026. The market agrees with the thesis. The question is whether the AI agents that find vulnerabilities will consistently arrive before the AI agents that exploit them, or whether the gap between offence and defence that has defined cybersecurity for decades will simply be reproduced at machine speed.