You should write "without bugs"
90 comments
January 23, 2025 · jp57
SAI_Peregrinus
Correctness of a program is usually distinct from performance.
One obvious exception is code processing secret data. There, performance variance creates observable side-effects (timing side channels) which can be used to determine the secrets' values.
Another is any sort of hard real-time system, where performance is critical to correctness. For example, a brake-by-wire system that took 10 seconds to respond to the pedal being pressed would be incorrect, because of poor performance.
Otherwise, I agree. There might be some other exceptions, but striving for correctness first is a good way to write code.
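The timing-side-channel point above can be made concrete with a comparison routine: a naive early-exit compare leaks how many leading bytes match, while a constant-time compare does not. This is a minimal sketch of the idea; real code handling secrets should use a vetted primitive such as Python's `hmac.compare_digest`:

```python
def naive_equal(a: bytes, b: bytes) -> bool:
    # Leaks timing: returns as soon as a byte differs, so the
    # running time reveals the length of the matching prefix.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Examines every byte regardless of where the first mismatch
    # occurs; the stdlib's hmac.compare_digest works similarly.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

Both functions return the same results; they differ only in whether execution time depends on the secret, which is exactly the case where correctness and performance behavior intertwine.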
Jtsummers
Even for hard real-time systems, focusing on correctness first is often the right way to go. At least in my experience, it's typically been much easier to make a correct system fast than a fast wrong system correct. With some particularly delightful (read: catastrophic) results when some folks really wanted to push their fast wrong code (fixed years ago, fortunately, and they had a good corporate culture change not long after so no point in naming and shaming).
SAI_Peregrinus
I agree. But performance can't be ignored, and for some systems like those I mentioned it's not distinct from correctness. Performance doesn't always mean "as fast as possible", e.g. for systems dealing with secret data it means "without leaking information via side channels", for something like petting a window watchdog it means "slow enough not to pet before the window opens, fast enough not to pet after the window closes".
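The window-watchdog constraint ("not too early, not too late") can be sketched as a single check. The names here are hypothetical; a real window watchdog is configured through hardware registers, but the correctness condition is the same:

```python
def pet_allowed(elapsed_ms: float, window_open_ms: float,
                window_close_ms: float) -> bool:
    """Return True only if petting the watchdog now is correct.

    Petting before window_open_ms or after window_close_ms both
    trigger a reset, so "too fast" and "too slow" are equally wrong.
    """
    return window_open_ms <= elapsed_ms <= window_close_ms
```

For example, with a window of 40-60 ms, petting at 50 ms is correct while petting at 30 ms (too fast) or 70 ms (too slow) is a failure.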
alaaalawi
Achieving correctness is really satisfying; however, it is hard. IMHO this tends to polarize the scene (proof fanatics on one extreme, and on the other side people who aren't even testing their code). Fleshing out what you are designing does help and goes a long way toward having fewer bugs. One old, relatively easy, and accessible formal toolkit that helps with that fleshing-out process is Z notation. One of the accessible books, old yet an easy read and rewarding, is
"Software Development with Z. A practical approach to formal methods in software engineering"
https://archive.org/details/softwaredevelopm0000word/page/n1... .
There are other notations developed later, but its simplicity and ease of use, even while scribbling on paper or in a word processor, get me back to using it every now and then.
fvdessen
I've a good track record of having my programs work without bugs; I don't think it's too hard. The way I work is to restrict myself to building blocks that I know work well and produce correct results. For example: using state machines, never breaking out of a loop, tackling all the edge cases before the body, using simple data structures, not using inheritance or exceptions, not interacting with files natively, not using recursion, etc.
When I face a programming problem I map the solution to those simple building blocks, and If I can't I try to change the problem first.
Formal methods are hard if you want to prove the correctness of a hard algorithm, but you can almost always do without a hard algorithm and use something really basic instead, and for the basic things you don't need formal methods.
The people who write the most bugs in my experience do it because they don't fully understand the building blocks they're using, and rely on other things for correctness like the type checker or unit tests. They view code as a stochastic process, write code they don't fully understand and have accepted the resulting bugs as a fact of life.
anonzzzies
It is a great time to work with proofs and formal verification; there are so many nice tools with different goals, like Idris, Agda, Lean, Coq, TLA+, etc., but also unit and system tests. However, clients generally don't pay for that; they pay for sloppy and buggy.
Ferret7446
> Program correctness can be proven.
It can be proven, in broadly the same sense that all of the atoms in your body could simultaneously "decide" to exist an inch to the right due to quantum field theory.
It is not practical to prove correctness for the vast majority of programs, and there are programs that are demonstrably incorrect, cannot be made correct, and yet are useful and still function anyway (e.g., closing TCP connections cannot be done "correctly").
hulitu
> It is not practical to prove correctness for the vast majority of programs,
That is the excuse I hear a lot from software developers, when all they do is test the expected behaviour of a program, without any edge cases.
And this is also the reason why iMessage and WhatsApp are full of one-click exploits.
strken
The comment you were replying to is about formal proofs of correctness, probably using tools like Rocq (formerly Coq) and TLA+.
These tools can take an extreme amount of time to use and require specialised skillsets that are difficult to hire for. They go far beyond standard software engineering practices like unit and E2E testing. It is genuinely, seriously not practical to verify the average startup's SaaS web app with TLA+; you'd go bankrupt.
Someone who's used them more than me might be able to comment on exactly how hard it would be, but I felt it was unfair to you not to explain.
AtlasBarfed
Oh yeah, just prove it correct. Simple
Wait, code is not simple. Are the requirements simple? Oh wait, in reality they aren't simple either.
The complexity of the formal specification must ramp up with the complexity of the code, because the task being asked is complex.
So where does the debugging end if the specification is wrong?
How do you prove the requirements are correct?
Unit testing is helpful but wasn't the panacea, because the most important bugs are in integration.
I'm also going to guess that formal proofs will have hellacious issues with system boundaries. They can't prove correctness of the state on the other end of the pipe.
So you overcome that, time and money. What happens when the requirements/specification changes? How much of the verification is invalidated?
Proof systems need to prove their economic validity in the software realm. Heck, where are the proof systems for subsections of the Linux kernel or other vitally important inner loops of computing? Those actually have stable specifications. Wasn't Go's multitasking formally verified/proven correct in some sense? Why haven't correctness tools tackled other things?
tliltocatl
Correctness is also distinct from applicability/usefulness. A program that misses some important edge cases is only marginally more useful than a program that takes an impractical amount of time for most inputs. So "correctness of a program is distinct from its performance" doesn't imply "performance is optional".
vendiddy
When you convince yourself of the program correctness, are you using techniques from the class?
Any advice on reasoning about correctness?
codeulike
I clicked on this because of the crazy title, but it's actually a really insightful article, e.g. "Conversely, there are people with commitment issues; they want to experiment non-stop and thus have no faith in robustness." ... like there's this belief that bugs will just happen anyway, so why worry about them. But the author's point is that a little bit of extra thought and work can make a lot of difference to quality.
wswope
> the author's point is that a little bit of extra thought and work can make a lot of difference to quality
Care to bring home the thesis on how that’s actually really insightful?
markerz
There are two examples that come to mind:
I’ve caught multiple production bugs in code review by just carefully reasoning about the code and changing some variable names to match my mental model. I didn’t quite understand how bad it was, but I could tell it wasn’t right so I left a comment saying “this seems wrong”. What happened after? It was merged without addressing and it triggered a P1 regression in production two weeks later. Like the author said, it takes time and energy to truly understand a program and think through all the modes of execution. BUT I think it’s a worthwhile exercise. It just doesn’t really get rewarded. In my experience, this happened at least twice a year over my last 10 years of working in software.
The other example is blindly accepting AI generated code, which I think is an extension of copying template / boilerplate code. You take a working example and massage it to do what you want. It includes so many things that either aren’t needed or don’t make sense, but it works at first glance so it’s submitted and merged. For example, build configs, JSON or YAML configs, docker container build files, helm charts, terraforms, gradle builds. It takes a lot to learn these things so we often just accept it when it works. It’s exhausting to learn it all but if you do, you’ll be much better at catching weird issues with them.
I think the problem is we trick ourselves into thinking we should spend more time coding than anything else, but it’s everything else that causes us more problems and we should build the muscles to handle everything else so that we waste less time on those mistakes. We’re often already really good at shipping features but terrible at finding and handling bugs or configs or builds or infrastructure or optimizing for performance.
atomicnumber3
"It just doesn’t really get rewarded."
This is the entirety of the problem. Also why open source programs are so often "surprisingly" high quality.
Bad reward functions in companies don't just fail to reward people who do good work. They *actively punish* them, because stack ranking is a zero-sum game. And as much as people joke about stack ranking and lambast the dinosaurs who used to do it on purpose, it's still how it all actually works. It's just distributed stack ranking: each manager and manager of managers has their own local stack rank that bubbles up into who gets fired and who gets promoted.
So people who throw shit at the wall and make products that sell but have a shitty user experience get promoted, and people who plod along and make things that work, or fix things that are broken (but not so much that they don't sell), filter to the bottom of the list and get cut, or leave when they don't get promoted or get raises.
Sure, there's some golden mix in between throwing shit at the wall and fixing key UX-ruining bugs. But these people still get outcompeted by people who purely ship n scoot.
warkdarrior
The author makes the insightful observation that they write non-buggy code by being careful, in contrast to the vast majority of developers who write code full of bugs. Being careful is left to the reader, but it should be easy. /s
ozim
The author makes the insightful observation that once you start paying attention deliberately, after some time you won't have to be deliberately careful, because you will be careful by default.
There are devs who don’t pay attention and devs who pay too much attention to context of the change they are implementing. I think author also outlined which things one might pay attention to so they would be considered careful.
debarshri
I think there's always the argument that you don't know what you don't know. How much thought do you put into writing code without bugs? A bug could be caused by the business logic, the language internals, or the runtime environment and its variations. I think what people often ignore is that writing a piece of software is an iterative process: you build, deploy and learn from the operation, and you fix and repeat.
If you keep thinking of all possible issues that could happen, that becomes a black hole and you don't deliver anything.
swatcoder
> writing a piece of software is an iterative process
Often, yes. Absolutely.
> you build, deploy and learn from the operation and you fix and repeat.
But no, not at all in this way. This is generally not necessary and definitely not a mindset to internalize.
Commercial software products are often iterative, especially for consumer or enterprise, because they need to guess about the market that will keep them profitable and sustainable. And this speculative character has a way of bleeding down through the whole stack, but not for the sake that "bugs happen!" -- just for the sake that requirements will likely change.
As an engineer, you should always be writing code with an absolutely minimal defect rate and well-understood capabilities. From there, if you're working on a product subject to iteration (most now, but not all), you can strive to write adaptable code that can accommodate those iterations.
swee69
> As an engineer, you should always be writing code with an absolutely minimal defect rate and well-understood capabilities.
I think the problem with the purists is that this is just a moral claim - it's not based on how businesses + marketplaces actually work. The lower you attempt to crank the defect rate (emphasis on the word "attempt"), the slower you will iterate. If you iterate too slow, you will be out-competed. End of discussion. This is as true in open-source as it is in enterprise SaaS. And in any case, you're just begging the question: how do we determine the "absolutely minimal" rate in advance?
> you can strive to write adaptable code that can accommodate those iterations.
This is a damaging myth that has wasted countless hours that could have otherwise been spent on fixing real, CURRENT problems - there is no such thing as writing "adaptable" code that can magically support future requirements BEFORE those requirements are known. If you were that good at predicting the future you would be a trader, not an engineer.
debarshri
I mostly agree with you.
In the first few iterations of writing the code, you often don't have a complete picture of the capabilities; capabilities change on the fly, dictated by changes in requirements. There is no baseline for what the minimal defect rate is. Over a period of time and iterations you build that understanding and improve the code and process.
I'm not saying that you shouldn't think before you write code, but overthinking often leads to unnecessary over-engineering and unwanted complexity.
hulitu
> I think what people often ignore writing piece of software is an iterative process, you build, deploy and learn from the operation and you fix and repeat.
I presume you didn't use any Microsoft operating system (or program). /s
teddyh
Writing code with fewer bugs is a function of experience. But, the reason is not entirely what you would think it is. Sure, a lot of it is anticipating problems previously experienced, and writing code that handles problems, or entire classes of problems, previously encountered.
However, with more experience comes a better understanding of the general metastructure of code, and therefore an ability to hold more code in your head at a time. (Compare for instance the well-known increased ability of chess masters to memorize chess boards, compared to non-chess players.)
When you’re an inexperienced programmer, you need to write the code down (and run it to test if it works) before you know if the code and algorithm solves the problem. This makes the inexperienced programmer take shortcuts while writing down the code, in order to get the code written down as fast as possible, while it is still clear in their mind.
The experienced programmer, on the other hand, can easily envision the entire algorithm in their head beforehand, and can therefore spare some extra attention for adding error checking and handling of outlier cases, while writing the code for the first time.
Also, as the article states, when you make a conscious habit of always writing code which checks for all errors and accounts for all outliers, it becomes easier with time; practice makes perfect, as it were. This is essentially a way to speed up the natural process described above.
ChrisMarshallNY
Experience has taught me to test.
A lot.
I always find bugs, when I test, no matter how "perfect" I think my code should be.
Also, I find that a lot of monkey testing is important. AI could be very beneficial, here. I anticipate the development of "AI Chaos Monkeys."
https://littlegreenviper.com/various/testing-harness-vs-unit...
hulitu
> Also, I find that a lot of monkey testing is important. AI could be very beneficial, here. I anticipate the development of "AI Chaos Monkeys."
Well, you just made me realise that there is still a use for those LLMs besides generating propaganda. The problem, I guess, will be that nobody will be willing to spend time on those bug reports.
ChrisMarshallNY
> nobody will be willing to spend time on those bug reports.
That's basically a question of culture.
Bad culture will result in bad results, no matter what tools and techniques we use.
jordansmithnz
If you actually want to write software without bugs:
Assume that your code will have bugs no matter how good you are. Correct for that by making careful architecture decisions and extensively test the product yourself.
There’s no silver bullet. If you put in enough time and make some good decisions, you can earn a reputation for writing relatively few bugs.
ChrisMarshallNY
Yup.
When I test, I always find bugs. Never fails.
Watching this posting dive down from the HN front page has been interesting (and expected).
ChrisMarshallNY
Hmm… Looks like it was “second chanced,” but it’s still struggling to stay relevant. Hasn’t really been shown much love.
It’s rather discouraging to see how discussions of Quality Development are treated, hereabouts.
Quality seems to be heresy.
skulk
I will remember this advice the next time I decide to write a bug.
charles_f
Yes, don't write bugs and your code will be bug free.
lizard
In a college chemistry lab we would have to write lab reports on our work. The instructor made it very clear that he had never given 100% on a lab report because there's always something to improve.
One of my CS cohort happened to be in the same class, so we teamed up for the first lab project. It was pretty straightforward: we collected whatever information and started working on our report. We didn't bother spending much time on it because we already knew we'd lose points for something or other.
When we got it back, there was a big, red "100" on top. We checked around and it did look like we were the only ones that got a perfect score, so we went to the instructor and, mostly jokingly, said, "What's up with this?" to which he stayed on beat and replied, "Do you want me to take another look?"
It's not hard to do good work, but you do have to make a habit of it. Re-read what you write, preferably out loud, to make sure it actually makes sense.
You'll still make errors and mistakes and you won't catch them all, but no one's going to care about a typo or two unless you draw attention to it with more glaring problems. And I think this is where metrics, especially things like code coverage, can actually be detrimental, because they bring attention to the wrong things.
Specifically, in places I've seen code coverage enforced, tests (written by consultants making 5-10x more than I do) tend to look like `assert read_csv("foo,bar") == [["foo", "bar"]]`: they execute enough of the function to satisfy the coverage requirements, but everyone is surprised when things break with a real CSV document.
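To make that CSV point concrete: a coverage-satisfying test and a test that exercises realistic input differ only in the data they feed in, which is exactly why line coverage is a poor proxy for correctness. A small sketch using Python's stdlib `csv` module (the `read_csv` helper here is illustrative, not the consultants' code):

```python
import csv
import io

def read_csv(text: str) -> list[list[str]]:
    # Thin wrapper over the stdlib parser, for illustration.
    return list(csv.reader(io.StringIO(text)))

# The coverage-style test: executes every line, proves very little.
assert read_csv("foo,bar") == [["foo", "bar"]]

# Tests with realistic documents: quoting, embedded commas, embedded newlines.
assert read_csv('a,"b,c"\nd,e\n') == [["a", "b,c"], ["d", "e"]]
assert read_csv('"line1\nline2",x\n') == [["line1\nline2", "x"]]
```

A hand-rolled `text.split(",")` parser would pass the first assertion and fail the other two, with identical coverage numbers.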
The corollary of the author's trick is that if you keep making excuses to produce poor work, you may subconsciously decline instead.
TypingOutBugs
This feels like fluff. You can think a few steps all you like but bugs will creep in, those you can’t think about, those in areas you don’t quite understand, those that require weird sequences of events.
layer8
The key, IMO, is having awareness about when you don’t quite understand something, because that means you can’t reason about the code to prove it correct to yourself in your head. And then, avoid shipping code in such a state at (almost) all cost. This awareness can be trained, and I suspect that the author’s virtually-bug-free shipping record is based on that. My personal experience is that bugs are nearly always caused by code where I ignored my inner uncertainty about the code.
listenallyall
I don't think the author's goal (or reality) is perfection, zero bugs. A fluent English speaker, even an English professor, will occasionally trip up on a word or write a confusing sentence. But if thoughtfulness and planning eliminate 80% of bugs or more, that's a big win.
corytheboyd
Just add other teams of people doing work in parallel, then make all the work depend on each other, and bugs will become even more inevitable. All the integration, end-to-end, contract, etc. testing in the world won’t save you, the savant incapable of writing bugs, from encountering and having to deal with bugs.
lcnPylGDnU4H9OF
> My “trick” during that final year was simple: I always tried to write correctly, not just when I was asked to, but all the time. After a year of subconscious improvements, I aced the exam.
Practice makes permanent. Perfect practice makes perfect.
txru
I'm trying this with learning piano, and I see the advice in a good number of places-- if I make a mistake in a phrase, I repeat the phrase 5-7 times correctly, instead of pushing through. It's been working out well so far-- I'm not 'burning in' my mistakes.
SketchySeaBeast
Ah, but perfect is the enemy of good.
Night_Thastus
Perfect is the enemy of ever shipping an actual product.
bccdee
That's a justification for making a product with fewer features, not for making a product that's packed with bugs.
ChrisMarshallNY
Hmm... Hasn't been my experience. I've been shipping a lot of really high-Quality stuff for decades.
It just takes a lot of work. No shortcuts.
But that WFM. YMMV.
chowells
This is pretty spot-on. A culture of testing everything will ossify code by not understanding what a "unit" is and amplifying developer churn by testing implementation details instead of the actual units. A culture of just getting features out the door will suffer under the weight of every change dragging on all future changes.
If you want to really get code that can be adapted whenever requirements change, you need to be thoughtful. Understand the code you write. Understand the code you choose not to write. Understand the code that was there before you got there. Think about the edge cases and handle them in a way that makes sense.
I'd call it "practicing writing code without bugs" rather than "writing code without bugs", though. In the end it's a practice. Is it going to be what you work towards every day, or is just an afterthought?
ChrisMarshallNY
> Understand the code you write. Understand the code you choose not to write. Understand the code that was there before you got there.
Yup. I have seen so many people write stuff that could just be a YAML script, tying together massive dependency trees that they have no clue about.
Then, they lose it, when things go pear-shaped.
However, if the goal is to sell the company before the chickens come home to roost, it's a feature, not a bug.
ChrisMarshallNY
I do my best. I usually have so few bugs in my final ship products, that it's not worth it to have a tracker.
Getting there, though, I have lots of bugs. It's just that I want them gone, before I pat my app on the butt, and send it out into the field.
I often see people use Voltaire's phrase "Perfect is the enemy of the good," to justify writing bug farms. I'm not sure that this is what he meant.
csours
Functional Core - are you unit testing side effects?
https://www.destroyallsoftware.com/screencasts/catalog/funct...
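The functional-core idea referenced above: keep the decisions in pure functions and push side effects to a thin shell, so the unit tests never touch I/O. A minimal sketch of my own (not the screencast's code), with hypothetical names:

```python
import os

def plan_retention(filenames: list[str], keep: int) -> list[str]:
    # Pure core: decide which backup files to delete, keeping the
    # `keep` last ones by name. No filesystem access, so it is
    # trivially unit-testable with plain lists.
    ordered = sorted(filenames)
    return ordered[:-keep] if keep > 0 else ordered

def apply_retention(directory: str, keep: int) -> None:
    # Imperative shell: the only place that touches the filesystem.
    for name in plan_retention(os.listdir(directory), keep):
        os.remove(os.path.join(directory, name))
```

The tests target `plan_retention` directly; the shell is small enough that a single integration test (or a careful read) covers it, instead of mocking `os` throughout the suite.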
taeric
I don't think anyone says you should just casually ship bugs. Quite the contrary, most are ok with the idea that, if you see a bug, fix the bug. But, there can be no doubt that there is diminishing returns on chasing down every potential bug.
This reads to me like the idea that a rich person walking down the road wouldn't pick up a $20 they happen to see at their feet. Of course they will. Why not?
What they don't do, is waste time walking around looking for spare money that has been dropped. Because that would almost certainly be a waste of time.
Similarly, use your tools to write as efficient and bug free code as you can. Make it as flexible and allow for any future changes you can accommodate along the way. But "along the way" should be "along the way of delivering things in a timely manner." If you stray from that, course correct.
atq2119
> What they don't do, is waste time walking around looking for spare money that has been dropped. Because that would almost certainly be a waste of time.
Why is it usually a waste of time? Because people rarely lose multiple bills of money, and if they do, our vision system is well equipped to spot the other bills quickly.
The opposite is often true with software in my experience.
When there is a bug, it's often because the software is in a state of imbalance and confusion, and there are multiple bugs nearby.
And humans tend to be relatively bad at spotting bugs.
So, when you see a bug it is usually worth spending a moment to reflect on whether you've fixed the bug properly and whether there are other bugs in the vicinity. It is likely to be worth it just for the bug fixes.
But there's also the learning effect that comes with it as described by TFA.
taeric
Again, if you see a bug, fix it. If you are already in a section of code, read through all of it. And heck, if you are not running late on anything, feel free to start trying to re-architect parts that you think are off.
If I'm just caught by a strawman at the start of the essay, apologies on that. I legit don't know anyone that casually encourages bugs as long as you have features. Tolerances are a thing, but so is negligence.
ChrisMarshallNY
About 30 years ago, there was this wonderful book, called Writing Solid Code[0]. Reading it, was a watershed, in my personal development.
It has many techniques described, that have since become Canon, and a number that have not aged so well.
One that is probably impractical, these days, is Step Through Your Code. He recommends stepping through every line of your code in a symbolic debugger, making sure that the code flow is what you expect, and the app state is appropriate.
Every now and then, I can do it. Often, when I’m already there, for something else. It really does work.
atq2119
It's not that I know people who casually encourage bugs, but I know plenty of people who don't habitually do the kind of thing you describe in your first paragraph.
ChrisMarshallNY
> I don't think anyone says you should just casually ship bugs.
I'm not going to link to it, but there's an old post from someone here, that pretty much sums up the zeitgeist.
They say that if the code quality on your MVP doesn't physically disgust you, you're probably focusing on code quality too much.
That is, quite literally, making the conscious decision to "casually ship bugs."
taeric
I mean, is that a general feel across industry, or something a rando online offered up as a witty quip? I have never worked anywhere where people were casual about shipping bugs. Some of the healthier places I've been have focused on not getting worked up when you do. Don't take pride in it, but don't sweat the small mistakes. You will make them, whether you want to or not.
ChrisMarshallNY
I wouldn’t know, but it was certainly presented as a “general feel,” and, anecdotally, almost every interaction that I’ve personally had with “modern” tech companies (in fact, with one, it resulted in a multimillion-dollar disaster) has shown me that “feel.”
The disaster I mentioned was particularly heartbreaking, because the tech was solid, and the people behind it, were good, but they absolutely refused to give respect to Quality, and everything went to shit.
Just look at the way any discussion of Quality Development gets treated on HN. This very post nosedived, ten minutes after it posted. The only reason that it’s still around, is because it must have been “second-chanced.”
In grad school I took a formal methods class where we proved properties about programs that completely changed how I think about bugs. The main things I took from the class were
1. Correctness of a program is distinct from its performance.
2. Program correctness can be proven.
3. Optimizing for performance often makes it harder to prove correctness.
I do not actually use formal methods in my work as a developer, but the class helped improve my program quality nonetheless. Now I generally think in terms of a program being correct rather than having no bugs. Technically these are the same thing, but the change of language brings a change of focus. I generally try to use the term "error" instead of "bug", for an incorrect program.
My strategy is to write the simplest correct version of the program first, convince myself that it is correct, and then optimize, if necessary without regressing on correctness. I generally use tests, rather than formal proofs, though, so of course there is still the possibility of uncaught errors, but this strategy works well overall.
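That strategy, simplest correct version first, then optimize without regressing, can be shown with a classic example: the naive implementation doubles as the correctness oracle for the optimized one. This is my illustration, not an example from the class:

```python
def fib_simple(n: int) -> int:
    # The obviously-correct version: mirrors the mathematical
    # definition directly, at exponential cost.
    if n < 2:
        return n
    return fib_simple(n - 1) + fib_simple(n - 2)

def fib_fast(n: int) -> int:
    # The optimized version: iterative, O(n) time, O(1) space.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The simple version serves as the oracle when testing the fast one,
# so optimizing cannot silently regress correctness.
assert all(fib_simple(n) == fib_fast(n) for n in range(15))
```

Keeping `fib_simple` around after shipping `fib_fast` is cheap, and it turns "convince myself it is correct" into a mechanical comparison over a range of inputs.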
Thinking this way also gives me guidance as to how to break down a program into modules and subprograms: anything that is too big or complex for me to be able to reason about its correctness must be subdivided into pieces with well-defined correctness.
It also has clarified for me what premature optimization means: it is optimizing a program before you know it's correct.
(EDIT: fixed "reason about its complexity" to say "reason about its correctness" in the penultimate paragraph.)