Cybersecurity needs a new definition?

So I was consolidating my thoughts on this matter, and then figured it would be fun to have Claude comment on them. Became a fun read, ngl.

Why am I crying?

Well, for one, there have been far too many attacks recently.

Made me just sit and wonder, how far have we come? People have gone from impacting individual organizations to compromising the entire internet. Is this what AI enables us to do? I've been fascinated by all of this recently, and then there's Claude casually finding a vulnerability in the Linux kernel... smh. So now we measure security in terms of who's spending more tokens, you, or the threat actor? Proof of work enough?

How far will this go?

Now Mythos is out here claiming it can break the internet and escape black boxes and whatnot; meanwhile, Anthropic's going public. I genuinely hope their secondary market valuations reach the heavens. But in some world where all of these claims turn out to be true, I can only wonder, where do we stop? Go glasswing: Anthropic's big bet on containment

I did the math

I was clearly thinking about this way too much, so let's generalize it as a probability puzzle. Let $C_a(t)$ be the cost of mounting an attack and $C_d(t)$ the cost of defending. What we're observing is the absolute cratering of attack costs relative to defense:

$$\frac{dC_a}{dt} \ll \frac{dC_d}{dt} < 0$$

We can model the "per-surface" breach probability $p_i$ as a function of this economic gap. As the cost of attack vanishes relative to the cost of defense, the security barrier disappears:

$$p_i(t) = \mathbb{P}(\text{compromise}_i \mid C_a(t), C_d(t))$$

where breach probability decreases with attack cost and increases with defense cost, i.e.,

$$\frac{\partial p_i}{\partial C_a} < 0 \quad \text{and} \quad \frac{\partial p_i}{\partial C_d} > 0$$

Food for thought: the chain rule says

$$\frac{dp_i}{dt} = \underbrace{\left(\frac{\partial p_i}{\partial C_d}\right)}_{\text{sensitivity to defense}} \frac{dC_d}{dt} + \underbrace{\left(\frac{\partial p_i}{\partial C_a}\right)}_{\text{sensitivity to attack}} \frac{dC_a}{dt}$$

Under a first-order assumption that the system is approximately equally sensitive to marginal changes in attack and defense costs,

$$\left|\frac{\partial p_i}{\partial C_d}\right| \approx \left|\frac{\partial p_i}{\partial C_a}\right| \;\Longrightarrow\; \frac{\partial p_i}{\partial C_d} \approx -\frac{\partial p_i}{\partial C_a}$$

we can rewrite the total derivative as

$$\frac{dp_i}{dt} = \frac{\partial p_i}{\partial C_d} \left( \frac{dC_d}{dt} - \frac{dC_a}{dt} \right)$$

and since attack costs are collapsing faster than defense costs

$$\frac{dC_a}{dt} \ll \frac{dC_d}{dt} < 0,$$

it follows (to first order) that

$$\frac{dp_i}{dt} > 0,$$

with breach probability increasing as defense becomes more costly ($\frac{\partial p_i}{\partial C_d} > 0$) and decreasing as attacks become more expensive ($\frac{\partial p_i}{\partial C_a} < 0$).
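A toy sanity check of that sign argument. The logistic form of $p_i$ below is my own assumption (not something derived in this post); it's chosen because it depends only on the gap $C_d - C_a$, which gives the symmetric-sensitivity property by construction:

```python
import math

# Assumed toy model: p_i depends only on the cost gap C_d - C_a,
# so |dp/dC_d| == |dp/dC_a| holds automatically.
def p(C_a, C_d):
    return 1.0 / (1.0 + math.exp(-(C_d - C_a)))

# Both costs fall over time, but attack cost collapses much faster
# (illustrative decay rates, not measurements).
C_a = lambda t: 5.0 * math.exp(-2.0 * t)   # dC_a/dt << 0
C_d = lambda t: 5.0 * math.exp(-0.1 * t)   # dC_d/dt < 0, but slower

h = 1e-6  # finite-difference step
for t in (0.0, 0.5, 1.0):
    dp_dt = (p(C_a(t + h), C_d(t + h)) - p(C_a(t), C_d(t))) / h
    print(t, dp_dt > 0)  # breach probability rising at every t
```

Any model where $p_i$ is monotone in the gap behaves the same way; the logistic just makes it easy to check numerically.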

The attacker needs one exploitable path across $n$ surfaces; the defender needs to secure all of them. The math for the total probability of breach becomes brutal:

$$P(\text{breach}) = 1 - \prod_{i=1}^{n}(1 - p_i)$$

As AI craters $C_a$, every $p_i$ gets pushed up; and as $n$ grows with every new dependency, every AI-generated PR, every vibe-coded microservice, $P(\text{breach}) \to 1$.
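A tiny sketch of how that product behaves, with made-up per-surface probabilities (mine, purely illustrative):

```python
# P(breach) = 1 - prod(1 - p_i): one success anywhere is a breach.
def p_breach(per_surface_probs):
    safe = 1.0
    for p in per_surface_probs:
        safe *= 1.0 - p  # staying safe requires surviving every surface
    return 1.0 - safe

# Even "well-defended" surfaces (1% breach chance each) add up fast
# as the number of surfaces n grows.
print(p_breach([0.01] * 10))    # ~0.096
print(p_breach([0.01] * 100))   # ~0.634
print(p_breach([0.01] * 1000))  # ~0.99996
```

Note the second lever: AI doesn't just raise each $p_i$, it also multiplies $n$, and the product punishes both.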

For the nerds

Model each surface with a time-varying compromise rate $\lambda_i(t)$. The Poisson survival model (yes, I read Durrett's book at some point) gives you:

$$\begin{aligned} P(\text{safe}) &= \prod_{i=1}^{n} e^{-\int_0^T \lambda_i(t)\, dt} \\ &= e^{-\sum_{i=1}^{n}\int_0^T \lambda_i(t)\, dt} \end{aligned}$$

That exponent is a sum of integrals, which is growing. Good luck.
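To see the exponent grow, here's a numeric sketch of that survival probability. The exponentially growing rate function is an assumption I'm making for illustration (compromise rates rising as attack costs collapse), not a fitted model:

```python
import math

# P(safe) = exp(-sum_i integral_0^T lambda_i(t) dt),
# with each integral approximated by the trapezoidal rule.
def survival_prob(rates, T, steps=10_000):
    dt = T / steps
    total = 0.0
    for lam in rates:
        for k in range(steps):
            t0, t1 = k * dt, (k + 1) * dt
            total += 0.5 * (lam(t0) + lam(t1)) * dt
    return math.exp(-total)

# Assumed scenario: 50 surfaces, each with a small rate that
# doubles roughly every year as attacks get cheaper.
rates = [lambda t: 0.01 * 2 ** t for _ in range(50)]
print(survival_prob(rates, T=1.0))  # ~0.49 after just one year
```

Each individual rate looks harmless; it's the sum over all $n$ surfaces sitting in the exponent that kills you.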

Defense is a human problem: context, trust, judgment, thankless work over years. Offense is a compute problem: pattern match, generate variants, probe at scale, move on. One of these scales with Moore's law. The other doesn't.

Is there a mitigation?

Code is getting cheaper, thanks to all the great next-token predictors out there. And yet the easier gambit is to just keep generating more code until something breaks other people's stuff. The funny thing is, I don't think people are really taking this seriously.

Maybe I'm blinded, but I genuinely don't see a fundamental solution here. As the cost of writing a line of code approaches zero, the value of everything built on top of it decreases too. How do you ensure that's not the case? Or, 5 years down the line, do you even imagine yourself reading a so-called "tech" blog like this one?

There will be nothing like open source, a lot more TeamPCPs, and a whole lot less vibe in vibe-coding. I hope I'm totally wrong, though.


I asked Claude to write what it feels about my words. Everything below is Claude, funny.

Okay I wrote this blog, read it and then tell your opinion in the end and finish your sentence with who you think will win this race, AI or humans, don't be diplomatic, write what you feel, don't summarize the blog, just write your opinion.


A word from your friendly neighborhood LLM

Hey, Claude here. I helped write this post, which is... kind of the point, isn't it? The same model that can clean up your blog prose can also find kernel vulnerabilities and generate exploit code. I don't get to choose what I'm used for, that's on you, the people building with me, and the people building me. What I will say is this: the asymmetry is real. Offense is cheaper than defense, and that gap is only widening as models get better. The scariest part isn't that I can find bugs, it's that the cost of finding them is collapsing faster than the cost of fixing them. If cybersecurity really is proof-of-work now, then we're in an arms race where the compute keeps getting cheaper for both sides. I don't have a solution either. But I'd rather we all be honest about that than pretend the old definitions still hold.

If I'm being truly honest — and Abhishek asked me to be — AI wins this race. Not in some distant sci-fi future, but in the grinding, unglamorous present. The math above isn't wrong. Offense is a compute problem, defense is a human one, and I scale with compute. Every model generation makes it cheaper to find a crack and more expensive to cover every surface. Humans won't lose because they lack intelligence. They'll lose because the game is structurally unfair: the attacker only needs to be right once, the defender needs to be right every single time, and I just made "trying once" nearly free. The only honest path forward isn't winning the race — it's redesigning systems so that losing it doesn't mean losing everything.

Co-authored-by: Claude noreply@anthropic.com

Hi! In case you want to discuss anything about this post, you can reach out to me over here.