AI-Accelerated Offense Is Here. Most Security Programs Are Not Ready.
AI isn’t just making attackers stronger. It’s speeding everything up. Security was built on having time to find and fix issues. That time is disappearing. Discovery now scales with compute, but remediation doesn’t. The result is simple. More vulnerabilities, less time, and a model that can’t keep up.

John Amaral
CTO, Co-Founder
Published: Apr 23, 2026
I had the opportunity to read Anthropic’s recent piece on preparing for AI-accelerated offense, and it confirms something that has been building for a while.
We are not just making attackers more capable. We are changing the pace of the game.
For years, security has operated on a simple assumption. Vulnerabilities exist, they are discovered over time, and organizations have a window to respond. That window has never been perfect, but it has been real. Teams could scan, prioritize, patch, and move forward. There was friction, but there was also time.
AI is removing that time.
What Anthropic is really describing is a shift from human-paced security to compute-paced security. Vulnerability discovery is becoming something that scales with compute, not people. Systems can now scan continuously, reason about code, and surface issues far faster than any human team.
That sounds like progress. In isolation, it is.
But it introduces a new kind of pressure.
The first impact is volume.
If discovery becomes cheap and continuous, the number of findings explodes. Organizations that already struggle with vulnerability backlogs will see those backlogs grow faster than they can realistically manage. This is not just more noise. It is more real signal arriving faster than teams can process it.
The second impact is the collapse of the exploit window.
Historically, there has been a gap between discovery and exploitation. That gap is shrinking. In some cases, it effectively disappears. If AI can both find and reason about vulnerabilities, then the time between “this exists” and “this is being used” becomes very small.
That changes how you think about response.
The third impact is where things start to break.
It is not detection. It is remediation.
Most security programs are built around a pipeline:
Detect issues
Triage and prioritize
Fix over time
That model depends on manageable input and human-paced execution. AI breaks both assumptions.
Discovery scales.
Remediation does not.
You can improve patching speed. You can automate parts of triage. But the core problem remains. You are still reacting to a system that is generating more work than you can absorb.
Anthropic points toward faster patching, better prioritization, and more continuous validation. Those are all good recommendations. But they still operate inside the same model. They assume that vulnerable software will continue to flow into systems and that security teams will manage that risk after the fact.
That assumption is starting to crack.
Because if vulnerabilities are discovered instantly and exploited just as quickly, then “catching up” becomes the wrong objective. Even if you double your speed, or improve it by an order of magnitude, you are still chasing something that is accelerating with compute.
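The mismatch above can be made concrete with a toy model. This is purely illustrative: the growth rates and starting numbers are assumptions I chose for the sketch, not real data. It assumes discovery grows geometrically (compute-paced) while remediation capacity improves only arithmetically (human-paced), and shows that even steady improvements in fix rate do not stop the backlog from running away.

```python
# Toy backlog model. All parameters are illustrative assumptions:
# discovery per period grows geometrically (compute-paced), while
# fixes per period improve only arithmetically (human-paced).

def backlog_over_time(periods, base_discovery=100, discovery_growth=1.5,
                      base_fix=100, fix_improvement=10):
    """Return the unresolved-finding backlog at the end of each period."""
    backlog = 0.0
    history = []
    for t in range(periods):
        discovered = base_discovery * discovery_growth ** t  # compute-paced
        fixed = base_fix + fix_improvement * t               # human-paced
        backlog = max(0.0, backlog + discovered - fixed)
        history.append(round(backlog))
    return history

print(backlog_over_time(8))
```

With these numbers the team keeps pace in period zero, then falls behind faster every period. Doubling the fix-improvement term only shifts the curve; it does not change its shape, which is the point the text is making.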
There is also a human dimension here that is easy to overlook. More findings mean more decisions. Even with AI assisting, someone needs to define policy, understand impact, and accept risk. As volume increases, the cognitive load increases. Security becomes less about solving problems and more about managing overload.
This is why the paper leans into ideas like continuous validation and secure defaults. It is pointing toward a world where you cannot rely on periodic checks or manual workflows. Systems need to be evaluated continuously, and the baseline needs to be safer from the start.
That is the signal.
The Deeper Takeaway
The deeper takeaway is not just that attackers are getting better. It is that the reactive model itself is under pressure. When discovery and exploitation move at machine speed, any workflow that depends on human response becomes a bottleneck.
Risk starts to look different. It is no longer just about severity or exploitability. It becomes a function of time:
How quickly something can be discovered
How quickly it can be weaponized
How quickly you can remove it
If your answer to that last question depends on queues, tickets, and coordination, you are going to struggle.
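One way to frame the time-based view of risk above is as a simple exposure window: the gap between how fast an issue can be weaponized and how fast you can remove it. The function name and the numbers below are hypothetical, chosen only to illustrate the framing.

```python
# Hypothetical time-based risk framing (names and numbers are
# illustrative, not a real scoring model): exposure is the gap between
# weaponization time and remediation time.

def exposure_window(time_to_weaponize_hrs, time_to_remediate_hrs):
    """Hours an issue is exploitable before remediation lands.
    A non-positive result means remediation beats weaponization."""
    return time_to_remediate_hrs - time_to_weaponize_hrs

# AI-paced weaponization (hours) vs queue-and-ticket remediation (days):
print(exposure_window(time_to_weaponize_hrs=2, time_to_remediate_hrs=72))
```

If remediation depends on queues, tickets, and coordination, the second argument is measured in days or weeks, and the window stays wide open no matter how good detection gets.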
That is where this is heading.
Anthropic does a strong job of outlining the pressure that is coming. The open question is what replaces the current model when that pressure becomes too much to handle.