
Look Out for Bugs


30 comments · September 4, 2025

electric_muse

The whole “just read the code carefully and you’ll find bugs” thing works fine on a 500-line rope implementation. Try that on a million-line distributed system with 15 years of history and a dozen half-baked abstractions layered on top of each other. You won’t build a neat mental model, you’ll get lost in indirection.

What actually prevents bugs at scale is boring stuff: type systems, invariant checks, automated property testing, and code reviews that focus on contracts instead of line-by-line navel gazing. Reading code can help, sure, but treating it as some kind of superpower is survivorship bias.
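As a concrete sketch of that "boring stuff", here is a minimal hand-rolled property test with invariant checks in Java. The properties (sorting is idempotent and preserves length) and the fixed seed are illustrative choices; real property-testing libraries such as jqwik add smarter generators and input shrinking:

```java
import java.util.Arrays;
import java.util.Random;

public class Main {
    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed so runs are reproducible
        for (int trial = 0; trial < 1000; trial++) {
            int[] input = rng.ints(rng.nextInt(50), -100, 100).toArray();

            int[] once = input.clone();
            Arrays.sort(once);
            int[] twice = once.clone();
            Arrays.sort(twice);

            // Invariant 1: sorting twice changes nothing (idempotence).
            if (!Arrays.equals(once, twice)) {
                throw new AssertionError("not idempotent for " + Arrays.toString(input));
            }
            // Invariant 2: no elements appear or vanish.
            if (once.length != input.length) {
                throw new AssertionError("length changed for " + Arrays.toString(input));
            }
        }
        System.out.println("1000 random cases passed");
    }
}
```

The point is that the invariants are checked across many generated inputs, not a handful of hand-picked ones, so edge cases you didn't think of still get exercised.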

ratmice

One thing worth pointing out here: when reading, you rarely find the bug you actually set out to find, but you'll undoubtedly notice others, because you're reading the code with a mindset of "what awkward conditions failed to be handled appropriately such that xyz could happen?"

It is also valuable to form a hypothesis of how you think the code works, and then measure in the debugger how it actually works. Once you understand how these differ, that can help you restructure the code so its structure better reflects its behavior.

Time spent reading code is almost never fruitless.

Amorymeltzer

The jumping off point given in the lede of the post—<https://www.teamten.com/lawrence/programming/dont-write-bugs...>—ends with this:

>If you want a single piece of advice to reduce your bug count, it’s this: Re-read your code frequently. After writing a few lines of code (3 to 6 lines, a short block within a function), re-read them. That habit will save you more time than any other simple change you can make.

So, more focused on a ground-up, de novo thing as opposed to inheriting or joining a large project. Different models of "code" and different strokes for different folks, I guess, but the big takeaway I like from that initial piece is:

>I spent the next two years keeping a log of my bugs, both compile-time errors and run-time errors, and modified my coding to avoid the common ones.

It was a different era, but I feel like the act of manually recording specific bugs probably helps ingrain them better and helps you avoid them in the future. Tooling has come a long way, so maybe it's less relevant, but it's not a bad thing to think about.

In the end, a lot of learning isn't learning per se, but rather learning where the issues are going to be, so you know when to be careful or check something out.

geocar

> The whole “just read the code carefully and you’ll find bugs” thing works fine on a 500-line rope implementation. Try that on a million-line distributed system with 15 years of history and a dozen half-baked abstractions layered on top of each other. You won’t build a neat mental model, you’ll get lost in indirection.

Yes, yes, why bother reading your code at all? After all, eventually 15 years will pass whether you do anything or not!

I think if you read it while it's 500 lines, you'll see a way to make it 400. Maybe 100 lines. Maybe shorter. As this happens you get more and more confident that these 50 lines are in fact correct, that they do everything the 500 lines you started with did, and you'll stop touching it.

Then, you've got only 1.5M lines of code after 15 years, and it's all code that works: code you don't have to touch. Isn't that great?

Comparing that to the 15M lines of code that don't work, that nobody read, just helps make the case for reading.

> What actually prevents bugs at scale is boring stuff: type systems, invariant checks, automated property testing, and code reviews that focus on contracts instead of line-by-line navel gazing.

Nonsense. The most widely-deployed software with the lowest bug-count is written in C. Type systems could not have done anything to improve that.

> sure, but treating it as some kind of superpower is survivorship bias

That's the kind of bias we want: Programs that run for 15 years without changing are the ones we think are probably correct. Programs that run for 15 years with lots of people trying to poke at them are ones we can have even more confidence in.

vbezhenar

In my experience there's one approach which might not necessarily prevent bugs, but helps to reduce their number, and does not require much effort. I try to use it whenever possible.

1. Code defensively, but don't spend too much time on handling error conditions. Abort as early as possible. Keep enough information to locate the error later. Log relevant data. For example, just put `Objects.requireNonNull` on public arguments which must not be null. If they're null, an exception will be thrown which should abort the current operation. The exception's stack trace will include enough information to pinpoint the bug location and fix it later.

2. Monitor for these messages and act accordingly. My rule of thumb: zero stack traces in logs. A stack trace is a sign of a bug and should be handled one way or another.
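The fail-fast style in point 1 can be sketched like this. The `OrderService` class and its method names are hypothetical, invented for illustration; `Objects.requireNonNull` is the real JDK method:

```java
import java.util.Objects;

public class Main {
    // Hypothetical service illustrating the fail-fast style: validate
    // public arguments up front and let the exception's stack trace
    // pinpoint the caller's bug.
    static class OrderService {
        private final String warehouse;

        OrderService(String warehouse) {
            // The message names the argument, so the logged trace says what was wrong.
            this.warehouse = Objects.requireNonNull(warehouse, "warehouse must not be null");
        }

        String shipTo(String address) {
            Objects.requireNonNull(address, "address must not be null");
            return warehouse + " -> " + address;
        }
    }

    public static void main(String[] args) {
        OrderService svc = new OrderService("WH-1");
        System.out.println(svc.shipTo("221B Baker St"));
        try {
            svc.shipTo(null); // a simulated caller bug
        } catch (NullPointerException e) {
            // In production this trace goes to the log, to be monitored per point 2.
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The cost is one line per argument; the payoff is that the failure surfaces at the boundary where the bad value entered, not three layers deeper.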

With bug prevention, it's important to stay reasonable; there's only so much time in the world, and business people usually don't want to pay 10x to eliminate 50% of bugs. Handling theoretical error conditions also adds to the complexity of the codebase and might actually hurt its maintainability.

mamcx

> What actually prevents bugs at scale is boring stuff:

There is a layer above this: to really understand what the requirements are, and to check whether they are delivered. You can have perfect code that does nothing of consequence. It's the equivalent of `this function is not used by anything`, but more macro.

But of course, the problem is deciphering the code, and there what you say helps a ton.

alphazard

> type systems, invariant checks

Yes, something strange happens in large systems, where it's better to assume they work the way they are supposed to, rather than deal with how they (currently) work in reality.

It's common in industry for (often very productive) people to claim the "code is the source of truth", and just make things work as bluntly as possible, sprinkling in special cases and workarounds as needed. For smaller systems that might even be the right way to go about it.

For larger systems, there will always be bugs, and the only way for the number of bugs to tend to zero is for everyone to have the same set of strong assumptions about how the system is supposed to behave. Continuously depending on those assumptions, and building more and more on them will eventually reveal the most consequential bugs, and fixing them will be more straightforward. Once they are fixed, everything assuming the correct behavior is also fixed.

In large systems, it is worse to build something that works, but depends on broken behavior than to build something that doesn't work, but depends on correct behavior. In the second case you basically added an invariant check by building a feature. It's a virtuous process.

voihannena

I've also noticed that a strong type system and things like immutability have worked tremendously well for minimizing the number of bugs. They can't necessarily help with business rules, but a compiler can definitely clear out all the "stupid" bugs.

notpachet

Dealing with a 15-year old legacy codebase with strong types: awful but manageable. Without: not a chance.

coxley

You can definitely use this approach for large projects. No matter how big, at some point you are just reading a function or file. You don't need to read every single file to find bugs.

This can be combined with a more strategic approach like: https://mitchellh.com/writing/contributing-to-complex-projec...

markbnj

I once found a bug in code that was read to me over the phone while I sat in an airport waiting for a flight. So I agree that constructing a model of the program in your head is the key, and you can use various interfaces for that; some are better than others. When I first started learning to write programs we very often debugged from printed listings, for example. They rolled up nicely, but random access was very slow.

ChrisMarshallNY

In Writing Solid Code[0], Steve Maguire recommends stepping through every line of code, in a symbolic debugger.

Sounds crazy, but I usually end up doing that, anyway, as I work.

Another tip that has helped me is to add inline documentation to code after it's written (I generally add some, but not much, inline as I write it; most of my initial documentation is headerdoc). The process of reading the code helps cement its functionality in my head, and I also find bugs, just like he mentions.

[0] https://writingsolidcode.com/

lapcat

> In Writing Solid Code[0], Steve Maguire recommends stepping through every line of code, in a symbolic debugger.

> Sounds crazy, but I usually end up doing that, anyway, as I work.

This doesn't sound crazy to me. On the contrary, it sounds crazy not to do it.

How many bugs do we come across where we ask rhetorically, "Did this ever work?" It becomes obvious that the programmer never ran the code, otherwise the bug would have shown up immediately.

ChrisMarshallNY

True, dat.

Writing Solid Code is over 30 years old, and has techniques that are still completely relevant, today (some have become industry standard).

Reading that, was a watershed in my career.

jamil7

I also do this quite a lot but pair it with an automated test to repeatedly trigger the breakpoint with different values and round out the tests and code accordingly.

ChrisMarshallNY

That sounds like an excellent practice!

K0nserv

I agree with the idea of not making bugs in the first place. Overall I think this piece is great and includes good suggestions. Personally, though, I think the best weapon against writing bugs is making them impossible in the first place, à la "making invalid states unrepresentable".

Interestingly, there's a post from the last day arguing that making invalid states unrepresentable is harmful[0], which I don't think I agree with. My experience is that bugs hide in the crevices created when invalid states remain representable, and are often caused by the increased cognitive load of not having small reasoning scopes. In terms of reading code to find bugs, having fewer valid states, and fewer intersections of valid state, makes this easier. With well-defined and constrained interfaces you can reason about more code because you need to keep fewer facts in your head.
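A small sketch of the idea in Java (Java 17+ for sealed types and records; all names here are hypothetical): instead of one class with a nullable field plus a `connected` flag, where "connected but no peer address" is expressible, each state carries exactly the data that is valid for it:

```java
public class Main {
    // Each state owns only the data valid in that state, so invalid
    // combinations simply cannot be constructed.
    sealed interface Connection permits Connected, Disconnected {}
    record Connected(String peerAddress) implements Connection {}
    record Disconnected(String lastError) implements Connection {}

    static String describe(Connection c) {
        // instanceof patterns; with a pattern switch the compiler would
        // additionally enforce that every state is handled.
        if (c instanceof Connected con) {
            return "talking to " + con.peerAddress();
        }
        Disconnected d = (Disconnected) c;
        return "idle (last error: " + d.lastError() + ")";
    }

    public static void main(String[] args) {
        System.out.println(describe(new Connected("10.0.0.5:443")));
        System.out.println(describe(new Disconnected("none")));
    }
}
```

The reasoning scope shrinks accordingly: code handling `Connected` never needs to ask whether the address is present.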

electric_muse's point in a sibling comment ("The whole “just read the code carefully and you’ll find bugs” thing works fine on a 500-line rope implementation. Try that on a million-line distributed system with 15 years of history and a dozen half-baked abstractions layered on top of each other. You won’t build a neat mental model, you’ll get lost in indirection.") is a good case study in this too. Poorly scoped state boundaries are exactly what make that reasoning hard; there too, making invalid states unrepresentable and constraining interfaces helps.

0: https://news.ycombinator.com/item?id=45164444

gobdovan

Isn't this basically what a debugger gives you? You say "follow the control flow" and "track state," but those are exactly what I do when stepping through code with invariants and watchpoints. The only real difference I see is that reading doesn't require a reproducible example, while debugging does. Otherwise, the habits seem nearly identical.

lapcat

> The only real difference I see is that reading doesn't require a reproducible example, while debugging does.

You can manipulate values in a debugger to make it go down any code path you like.

foobarbecue

Why is "public static void ..." written in Cyrillic here? I guess this might be a joke?

kaathewise

The first programming language I learned was Java, and for us non-native speakers who didn't know English very well at that point, "public static void" did indeed sound like a magic spell. It sat behind both an understanding barrier and a language barrier.

zahlman

When I first saw Java, I had already seen multiple dialects of BASIC, plus Turing (a Pascal dialect), HyperTalk (the scripting language of HyperCard, and predecessor of AppleScript), J (an APL derivative), C and C++. I'm also a native speaker of English.

Your perception is still warranted. It was clear enough to me what all of that meant, but I was well aware that static is an awkward, highly overloaded term, and I already had the sense that all this boilerplate is a negative.

Joker_vD

To try to convey to native English speakers how the usual boilerplate feels to novice programmers (and non-native English speakers, I guess): like arbitrary magical incantations.

vlaaad

Does this person also identify performance issues by reading the code? This is completely impractical.

IshKebab

You totally can identify performance issues by reading code, e.g. spotting accidentally-quadratic behavior, failing to reserve vectors, or accidental copies in C++. Or, in more amateur code (not mine!), using strings to do things that can be done without them (e.g. rounding numbers; yes, people do that).

It's a lot easier and better to use profiling in general, but that doesn't mean I never read code and think "hmm, that's going to be slow".
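The accidentally-quadratic case is often visible from a few lines alone. A Java sketch of the classic instance, string concatenation in a loop (the method names are illustrative):

```java
public class Main {
    // Copies the entire prefix on every iteration: O(n^2) characters
    // copied in total, though it reads like a simple linear loop.
    static String joinSlow(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s += i + ",";
        }
        return s;
    }

    // Same result with amortized O(1) appends.
    static String joinFast(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(i).append(',');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        int n = 1000;
        // Identical output, very different cost curve as n grows.
        System.out.println(joinSlow(n).equals(joinFast(n)));
    }
}
```

A profiler will find this too, of course, but a reader can flag it before it ever ships on a hot path.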

jcgrillo

Practitioners of this approach to performance optimization often waste huge swaths of their colleagues' time and attention with pointless arguments about theoretical performance optimizations. It's much better to have a measurement first policy. "Hmm that might be slow" is a good signal that you should measure how fast it is, and nothing more.

lapcat

> Does this person also identify performance issues by reading the code? This is completely impractical.

This sounds like every technical job interview.

Nevermark

Once your code is optimized so that manual mental/notepad execution is fast enough, it will crush it on any modern processor.

152334H

Extremely funny post.

The author doesn't grasp how much of what they've written amounts to flexing their own outlier intelligence; they must sincerely believe the average programmer is capable of juggling a complex 500 line program in their heads.