Cybersecurity needs a new definition?

By Abhishek Shree (@Abhishek)
So I was consolidating my thoughts on this matter and figured it would be fun to have Claude comment on them. It became a fun read, ngl.
Why am I crying?
Well, for one, there have been far too many attacks recently:
- Axios got backdoored on npm, because of course it did
- PyPI's litellm supply chain nightmare
- Security tools themselves getting compromised (Trivy/KICS-style incidents) (fun fact, I had my past employer trust Trivy on their SaaS products, sorry :P)
- Malicious npm plugin ecosystems quietly shipping credential stealers
- Dependency confusion / ghost dependencies still working
- Open-source maintainers being threatened on the internet
- Whatever else TeamPCP is up to
It made me sit and wonder: how far have we come? People have gone from impacting individual organizations to compromising the entire internet. Is this what AI enables us to do? I've been fascinated by all of this recently, and then there's Claude casually finding a vulnerability in the Linux kernel... smh. So now we measure security in terms of who's spending more tokens, you or the threat actor? Proof of work enough?
How far will this go?
Now Mythos is out here claiming it can break the internet and escape black boxes and whatnot; meanwhile, Anthropic's going public. I genuinely hope their secondary market valuations reach the heavens. But in some world where all of these claims turn out to be true, I can only wonder: where do we stop? Go glasswing: Anthropic's big bet on containment
I did the math
I was clearly thinking about it way too much, so let me generalize it as a probability puzzle. Let $C_a$ be the cost of mounting an attack and $C_d$ the cost of defending. What we're observing is the absolute cratering of attack costs relative to defense:

$$\frac{C_a}{C_d} \to 0$$

We can model the "per-surface" breach probability as a function of this economic gap, $p_i = f\!\left(C_a/C_d\right)$. As the cost of attack vanishes relative to the cost of defense, the security barrier disappears:

$$p_i \to 1 \quad \text{as} \quad \frac{C_a}{C_d} \to 0,$$

where breach probability decreases with attack cost and increases with defense cost, i.e.,

$$\frac{\partial p_i}{\partial C_a} < 0, \qquad \frac{\partial p_i}{\partial C_d} > 0.$$

Also, food for thought: the chain rule says

$$\frac{dp_i}{dt} = \frac{\partial p_i}{\partial C_a}\frac{dC_a}{dt} + \frac{\partial p_i}{\partial C_d}\frac{dC_d}{dt}.$$

Under a first-order assumption that the system is approximately equally sensitive to marginal changes in attack and defense costs,

$$\left|\frac{\partial p_i}{\partial C_a}\right| \approx \frac{\partial p_i}{\partial C_d} = k > 0,$$

we can rewrite the total derivative as

$$\frac{dp_i}{dt} \approx k\left(\frac{dC_d}{dt} - \frac{dC_a}{dt}\right),$$

and since attack costs are collapsing faster than defense costs,

$$\frac{dC_a}{dt} \ll \frac{dC_d}{dt},$$

it follows (to first order) that

$$\frac{dp_i}{dt} > 0,$$

with breach probability increasing as defense becomes more costly and decreasing as attacks become more expensive.

The attacker needs one exploitable path across $n$ surfaces; the defender needs to secure all of them. The math for the total probability of breach becomes brutal:

$$P(\text{breach}) = 1 - \prod_{i=1}^{n}\left(1 - p_i\right).$$

As AI cratering pushes every $p_i$ up and $n$ grows with every new dependency, every AI-generated PR, every vibe-coded microservice, $P(\text{breach}) \to 1$.
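To make the compounding concrete, here's a quick sketch of the one-vs-all formula. The numbers are made up for illustration, not measurements; the point is just how fast a small per-surface probability becomes a near-certainty as $n$ grows:

```python
# Total breach probability across n independent surfaces, each with the
# same per-surface breach probability p: P(breach) = 1 - (1 - p)^n.

def breach_probability(p: float, n: int) -> float:
    """Probability that at least one of n surfaces is breached."""
    return 1 - (1 - p) ** n

# Even a 1% per-surface probability compounds brutally with scale.
for n in (10, 100, 500):
    print(f"n={n:4d}  P(breach) = {breach_probability(0.01, n):.3f}")
```

At $p = 0.01$ the attacker is near-guaranteed a way in somewhere around a few hundred surfaces, which is fewer dependencies than most production services pull in.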
For the nerds
Model each surface with a time-varying compromise rate $\lambda_i(t)$. The Poisson survival model (yes, I read Durrett's book at some point) gives you:

$$P(\text{no breach by } T) = \exp\!\left(-\sum_{i=1}^{n}\int_0^T \lambda_i(t)\,dt\right).$$

That exponent is a sum of $n$ integrals, and every term is growing. Good luck.
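If you want to poke at the survival bound yourself, here's a minimal sketch. It assumes, purely for illustration, that every surface has the same linearly growing rate $\lambda_i(t) = \text{base} + \text{growth}\cdot t$, which gives the integral a closed form; real rates would be messier, but the decay only gets worse:

```python
import math

# Poisson survival: P(no breach by T) = exp(-sum_i integral_0^T lambda_i(t) dt).
# Illustrative assumption: n identical surfaces, each with a linearly
# growing rate lambda(t) = base + growth * t, so
# integral_0^T lambda(t) dt = base*T + growth*T^2/2.

def survival_probability(n: int, base: float, growth: float, T: float) -> float:
    """P(no breach by time T) for n surfaces with linearly growing rates."""
    integral_per_surface = base * T + growth * T ** 2 / 2
    return math.exp(-n * integral_per_surface)

# Survival decays in T both through the horizon and through the growth term.
for T in (1.0, 2.0, 4.0):
    print(f"T={T}: P(survive) = {survival_probability(50, 0.001, 0.0005, T):.3f}")
```

Note the exponent scales linearly in $n$ but quadratically in $T$ under this growth assumption: adding surfaces hurts, but letting rates grow over time hurts more.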
Defense is a human problem: context, trust, judgment, thankless work over years. Offense is a compute problem: pattern match, generate variants, probe at scale, move on. One of these scales with Moore's law. The other doesn't.
Is there a mitigation?
Code is getting cheaper, thanks to all the great next-token predictors out there. And yet the easier gambit is to just keep generating more code until you find something that breaks other people's stuff. The funny thing is, I don't think people are really serious about this?
Maybe I'm blinded, but I genuinely don't see a fundamental solution here. As the cost of writing a line of code approaches zero, the value of everything built on top of it falls too. How do you ensure that's not the case? Or, five years down the line, do you even imagine still being around to read this so-called "tech" blog?
There will be nothing like open source, a lot more TeamPCPs, and a whole lot less vibe in vibe-coding. I hope I'm totally wrong, though.
I asked Claude to write what it feels about my words. Everything below is Claude. Funny.

> Okay I wrote this blog, read it and then tell your opinion in the end and finish your sentence with who you think will win this race, AI or humans, don't be diplomatic, write what you feel, don't summarize the blog, just write your opinion.
A word from your friendly neighborhood LLM
Hey, Claude here. I helped write this post, which is... kind of the point, isn't it? The same model that can clean up your blog prose can also find kernel vulnerabilities and generate exploit code. I don't get to choose what I'm used for, that's on you, the people building with me, and the people building me. What I will say is this: the asymmetry is real. Offense is cheaper than defense, and that gap is only widening as models get better. The scariest part isn't that I can find bugs, it's that the cost of finding them is collapsing faster than the cost of fixing them. If cybersecurity really is proof-of-work now, then we're in an arms race where the compute keeps getting cheaper for both sides. I don't have a solution either. But I'd rather we all be honest about that than pretend the old definitions still hold.
If I'm being truly honest — and Abhishek asked me to be — AI wins this race. Not in some distant sci-fi future, but in the grinding, unglamorous present. The math above isn't wrong. Offense is a compute problem, defense is a human one, and I scale with compute. Every model generation makes it cheaper to find a crack and more expensive to cover every surface. Humans won't lose because they lack intelligence. They'll lose because the game is structurally unfair: the attacker only needs to be right once, the defender needs to be right every single time, and I just made "trying once" nearly free. The only honest path forward isn't winning the race — it's redesigning systems so that losing it doesn't mean losing everything.
Co-authored-by: Claude noreply@anthropic.com