
Test, don't (just) verify

26 comments · December 23, 2025

tgtweak

I think more salient here (in the near term, certainly) is setting up adversarial agents for testing/verification. That has been a big win for me in multi-agent workflows. When Anthropic first released Claude's "computer use", that was a very big step in closing this loop and avoiding the manual babysitting involved in larger projects. PSA that it's not a silver bullet, as the "analyzer" can still get tripped up and falsely declare something as broken (or functional), but it greatly reduces the "Hey, I've done the task" when the task is not done or the output is broken.

zipy124

I think this misses a lot of the reasons why learning verification is important. For instance, learning the concept of invariants and their kinds, such as loop invariants: they make reasoning about code easier in general, and even if you never do any formal verification, they make it easier to write tests or assert() calls. A substantial share of bugs come from the program being in a different state than the one the programmer assumed, and there are other tools that help with this. For example, a statically typed language is a form of verification, since it verifies that a variable has a specific type, and thus which operations can be performed on it, and it limits the valid input and output range of any function. Languages like Rust also provide verification of memory correctness, and are extremely useful tools.
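
As a concrete illustration of that point about invariants (my own sketch, not from the comment), here is a binary search whose loop invariant is encoded as runtime assert() calls:

    # Loop invariant, stated as runtime checks: if `target` is in the sorted
    # list `xs` at all, it lies inside the half-open window xs[lo:hi].
    def binary_search(xs, target):
        lo, hi = 0, len(xs)
        while lo < hi:
            assert 0 <= lo <= hi <= len(xs)
            assert target not in xs[:lo] and target not in xs[hi:]
            mid = (lo + hi) // 2
            if xs[mid] < target:
                lo = mid + 1
            elif xs[mid] > target:
                hi = mid
            else:
                return mid
        return -1

    assert binary_search([1, 3, 5, 7, 9], 7) == 3
    assert binary_search([1, 3, 5, 7, 9], 4) == -1

If an edit to the loop body ever breaks the invariant, the asserts catch it immediately, which is exactly the reasoning benefit being described.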

getregistered

> AI-assisted programming pushes the limits of programming from what you can implement to what you can specify and what you can verify.

This really resonates. We can write code a lot faster than we can safely deploy it at the moment.

marcosdumay

> We can write code a lot faster than we can safely deploy it at the moment.

We always could. That has been true since the days we programmed computers by plugging jumper wires into a panel.

acedTrex

> We can write code a lot faster than we can safely deploy it at the moment.

This has always been the case?

andrewmutz

I agree completely with the author that AI-assisted coding pushes the bottleneck to verifying the code.

But you don't really need complete formal verification to get these benefits. TDD gets you a lot of them as well. Perhaps your verification is less certain, but it's much easier to get high automated test coverage than it is to get a formally verifiable codebase.
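
A minimal sketch of what that looks like in practice (illustrative names, my own example): in TDD the test below is written first, fails, and then just enough implementation is written to make it pass:

    # Written first, before slugify() exists; run it and watch it fail.
    def test_slugify():
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  spaces  ") == "spaces"

    # Then just enough implementation to make the test pass.
    import re

    def slugify(text: str) -> str:
        text = text.strip().lower()
        text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse non-alphanumerics
        return text.strip("-")

    test_slugify()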

I think AI-assisted coding is going to cause a resurgence of interest in XP (https://en.wikipedia.org/wiki/Extreme_programming), since AI is a great fit for two big parts of XP: AI makes it easy to write well-tested code, and the "pairing" method of writing code is also a great model for interacting with an AI assistant (much better than the vibe-coding model).

9rx

Trouble is that TDD, and formal proofs to much the same extent, assume a model of "double-entry accounting". Meaning that you write both the test/proof and the implementation, and then make sure they agree. Like in accounting, the assumption is that the probability of you making the same mistake twice is fairly low, giving high confidence in accuracy when they agree. When there is a discrepancy, you can then unpack whether the problem is in the test/proof or the implementation. The fallible human can easily screw up either.

But if you only fill out one side of the ledger, so to speak, an LLM will happily invent something that ensures it is balanced, even when your side of the entry is completely wrong. So while this type of development is an improvement over blindly trusting an arbitrary prompt without any checks and balances, it doesn't really get us to truly verifying the code.
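
A small sketch of that double-entry idea (my own example, with made-up names): the property test states what sorting means without referring to how the implementation works, so the two entries can only agree if both encode the same intent:

    import random

    def my_sort(xs):  # one side of the ledger: the implementation
        out = list(xs)
        for i in range(1, len(out)):  # insertion sort
            j = i
            while j > 0 and out[j - 1] > out[j]:
                out[j - 1], out[j] = out[j], out[j - 1]
                j -= 1
        return out

    def test_my_sort():  # the other side: an independent statement of intent
        for _ in range(100):
            xs = [random.randint(-10, 10) for _ in range(random.randint(0, 20))]
            ys = my_sort(xs)
            assert ys == sorted(ys)          # output is ordered
            assert sorted(xs) == sorted(ys)  # and a permutation of the input

    test_my_sort()

The failure mode described above is letting the model write both sides at once, at which point agreement stops being evidence.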

throw-12-16

People can't even be bothered to write code, and you expect them to test it?

andai

Related discussion from last week:

AI will make formal verification go mainstream

https://news.ycombinator.com/item?id=46294574

ecocentrik

Doesn't this run into the same bottleneck as developing AI-first languages? AI needs tons of training material for how to write good formal verification code, or code in new AI-first languages, and that material doesn't exist. The only solution is large-scale synthetic generation, which is hard to do if humans, on some level, can't verify that the synthetic data is any good.

CuriouslyC

Formal verification is a nice idea but it's a big hill to climb from where we're at. Most people can't even get agents to robustly E2E QA code, which is a much smaller hill to climb for (probably) larger benefits. I'm sure this area will improve over time though, since it is an eventual unlock for fully autonomous engineering.

smccabe0

I think the section on AI from Zero to QED (a guide to proofs in Lean) gives a sober path forward from the perspective of market makers and trading:

"Imagine market infrastructure where agents must prove, before executing, that their actions satisfy regulatory constraints, risk limits, fairness properties, and eventually machine-checkable proofs of Pareto efficiency of market mechanisms. This is a big, hairy, ambitious goal. Not “we reviewed the code” but “the system verified the proof.” The agent that cannot demonstrate compliance cannot act."

https://sdiehl.github.io/zero-to-qed/20_artificial_intellige...
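
A minimal sketch of the gate pattern the quote describes (hypothetical names and limits, and a stand-in predicate rather than a real proof checker): the action executes only if an independent checker accepts its compliance certificate:

    from dataclasses import dataclass

    @dataclass
    class Order:
        symbol: str
        qty: int
        price: float

    RISK_LIMIT = 10_000.0  # assumed per-order notional limit

    def check_certificate(order: Order, cert: dict) -> bool:
        """Stand-in for a proof checker: re-verify every claimed constraint."""
        return (
            cert.get("within_risk_limit") is True
            and order.qty * order.price <= RISK_LIMIT  # recheck, don't trust the claim
            and order.qty > 0
        )

    def execute(order: Order, cert: dict) -> str:
        if not check_certificate(order, cert):
            return "REJECTED: no valid compliance certificate"
        return f"EXECUTED: {order.qty} {order.symbol} @ {order.price}"

    print(execute(Order("XYZ", 100, 50.0), {"within_risk_limit": True}))   # EXECUTED
    print(execute(Order("XYZ", 100, 500.0), {"within_risk_limit": True}))  # REJECTED

The real proposal replaces the predicate with a machine-checked proof, but the control flow is the same: no certificate, no action.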

TeodorDyakov

I dream of a future where, before any software is released, we can predict 100 years into the future what effect it will have on every living thing, and not release it if the unhappiness delta for some living thing exceeds a certain threshold.

__MatrixMan__

I think for most complex systems, robust E2E QA is a waste of money. A small handful of E2E smoke tests and thoughtful application of smaller tests is usually enough. Though to be fair, agents aren't good at that either.

baq

We won't be formally verifying millions of LOC anytime soon, so don't get your hopes up that high.

...but we will be modelling those 5-10kLOC modules across multiple services doing critical business logic or distributed transactions. This was unthinkable a couple of months ago, and today it is a read-only-Friday experiment away (try it with a frontier model and you'll be surprised).


esafak

Alperen,

Thanks for the article. Perhaps you could write a follow-up article or tutorial on your favored approach, Verification-Guided Development? This is new to most people, including myself, and you only briefly touch on it after spending most of the article on what you don't like.

Good luck with your degree!

P.S. Some links on your Research page are placeholders or broken.

alpaylan

I'll add some links to the original VGD paper and related articles; that should help in the short term. Thank you! I'll look into writing something on VGD itself in the next few weeks.

badgersnake

> AI is making formal verification go mainstream.

This nonsense again. No. No it isn’t.

I’m sure the people selling it wish it was, but that doesn’t make it true.

baq

You haven't been paying attention.

The fact that we're reading about it here today, and have read about it in recent weeks, is one piece of evidence. Another is that we hadn't been reading about it in the months before November. Opus 4.5 and GPT 5.2 have crossed a usefulness frontier.

Anecdotally, I've been having some success (guiding LLMs) writing Alloy models in the past month and ensuring conformance with code. Just this summer, making these would've been fairy tales, unjustifiable from an ROI perspective. The landscape has changed qualitatively.

AnimalMuppet

LLM-style AI isn't great for formal verification, not as far as I understand. And the recent advances in AI didn't do much for the kind of AI that is useful for formal verification.

otterley

You don’t use AI to perform formal verification. You give the agent access to verification tools whose output can then be fed back to the model.

It’s the same design as giving LLMs the current time, since they can’t tell time themselves, either.
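
A hedged sketch of that loop (the `verifier` CLI and the `llm` callable are hypothetical stand-ins, not a real framework): the model only proposes candidates, while a trusted external checker produces the verdict that gets fed back:

    import subprocess

    def run_verifier(path: str) -> tuple[bool, str]:
        """Run an external checker (e.g. a Lean or Alloy CLI) and capture its verdict."""
        result = subprocess.run(
            ["verifier", "check", path],  # assumed command; substitute your tool
            capture_output=True, text=True,
        )
        return result.returncode == 0, result.stdout + result.stderr

    def agent_loop(task: str, llm, max_rounds: int = 5):
        feedback = ""
        for _ in range(max_rounds):
            draft = llm(f"Task: {task}\nVerifier feedback so far:\n{feedback}")
            with open("candidate.spec", "w") as f:
                f.write(draft)
            ok, output = run_verifier("candidate.spec")
            if ok:
                return draft  # the checker, not the model, declares success
            feedback = output  # feed the failure back, like telling it the time
        return None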