
Discovery Coding

89 comments · January 29, 2025

lubujackson

Someone said on HN a while back that this sort of approach is the most sensible, because any design doc is just an incomplete abstraction until you get into the weeds and see where all the tricky parts are. Programming is literally writing plans for a computer to do a thing, so writing a document about everything you need before you write the plan is an exercise in frustration. And if you DO spend a lot of time making a perfect document, you've put in far more work than simply hacking around a bit to explore the problem space. Because only by fully exploring a problem can you confidently estimate how long it will take to write the real code. Oftentimes after the first draft I just need to clean things up a bit, or I only half-complete a bad solution before improving it. Yes, there are times you need to sit and have a think first, but less often than you might imagine.

Of course, at a certain scale and team size a tedious document IS the fastest way to do things... but god help you if you work on a shared code base like that.

I've always thought the rigid TDD approach and similar anti-coding styles really lend themselves to people who would rather not be programming. Or who at least have a touch of OCD and can't stand not having a unit test for every line of code. Because it really is a lot more work, both up front and in maintenance, to live that way.

Cyber-paper is cheap, so don't be afraid to write some extra lines on it.

fhd2

TDD is being done in weird ways by lots of people from what I've seen. I always understood the book's advice to never write code without a test as both aspirational, and productivity advice, not a hard rule.

My first job predated (at least our knowledge of) TDD and unit test frameworks. We would write little programs that would include some of our code and exercise them a bit during development. Later when everything was working and integrated, we'd throw it away. I believe that used to be called scaffolding (before Rails gave that term a different meaning).

When I got into unit testing and some degree of TDD a while later, I kinda kept the same spirit. The unit tests help me build the thing without needing ten steps to test that it works. Sure, I keep the tests, but primarily as documentation on how the parts of the system that are covered should behave. And when it's significantly easier to test something manually than to write a unit test, I tend towards that.

In languages that have good REPLs, I tend to write fewer tests, cause they function as a universal test scaffold.

Trying to reach 100 % test coverage and using unit tests for QA strikes me as strange. They're at most useful to quickly detect regressions. But most of these monster test suites become a burden over time from my experience. A pragmatic test suite rarely does. There's a lot of potential in having the right balance between unit tests, integration tests and manual testing. There's a lot of time wasted if the balance is off.

With this mindset, I totally write tests for a prototype if it looks like it'll save me time. Not even close to 100 % coverage though.

ChrisMarshallNY

> I believe that used to be called scaffolding

We always called them "Unit Tests." Same for what we now call Test Harnesses.

Sometime in the last decade or so, "Unit Test" has become a lot more formalized, to mean the code-only structural testing we see these days.

I tend to like using Test Harnesses[0], which are similar to what you described.

Unit tests are great, but I have found, in my own work, that 100% code coverage is often no guarantee of Quality.

I have yet to find a real "monkey testing" alternative. I suspect that AI may give us that, finally.

Oh, I also do "Discovery Coding," but I call it "Evolutionary Design."[1] I think others call it that, as well.

[0] https://littlegreenviper.com/various/testing-harness-vs-unit...

[1] https://littlegreenviper.com/various/evolutionary-design-spe...

CobrastanJorji

I think it's a dangerous philosophy for professional work. You know what happens to code that works? It ships.

If you code out a solution to the problem in order to discover the problem space, I think the idea here is that you then can go back and write a better solution that accounts for all of the stuff you discovered via refactoring and whatnot. But you're not going to do that refactoring. You're going to ship it. Because it works and you don't know of any problems with it. Are there scaling problems? Probably, you haven't run into any yet. Does your solution fit whatever requirements other interested parties might have? Who knows! We didn't do any designing or thinking about the problem yet.

pjc50

This depends on whether your organization benefits from getting things done, or whether it's better to do nothing than 90% of the solution. Lots of organizations build elaborate work-prevention infrastructure to make sure that nothing less than 100%, or even 110%, gets done.

> Does your solution fit whatever requirements other interested parties might have? Who knows! We didn't do any designing or thinking about the problem yet

Often people don't know what they want until you show them something, at which point they start telling you what's wrong with it. Design by "strawman".

Many heavily lauded startup businesses were built on code that barely worked and got backfilled later.

Veserv

Any person who would not go back and do that necessary redesign is unfit to call themselves a craftsperson.

Any organization that would not demand you do that necessary redesign is an organization unfit for producing critical systems.

Any organization that would go even further and prevent/de-prioritize you from doing that necessary redesign is unfit to call whatever it is that they do “engineering”.

Why would you want to work at such a dystopian hellscape if you have the choice?

Note that this is only about shipping the known incomplete design to customers because it “works”.

And to get ahead of it, yes, startups selling systems held together by spit and twine are perfect examples of “unfit for critical systems”; you should not bet critical things on them until they mature.

zamalek

Discovery code doesn't have to be bad code. My rule of abstractions is that I defer them until they are needed, even if I know that they will be needed in the next sprint or whatever. I'll do a much better job making a precise abstraction when faced with an existing solution and a new problem.

Even if discovery code were bad code, I'll take "bad" code 8 out of 10 times if the competition is abstractions for abstractions' sake (which "good" code often is). Bad code is often simple and direct, and therefore simple to comprehend and fix. Fancy code is a game of Jenga.

Also, requirements often change by the next sprint, so my well-laid plans would be moot either way.

kamaal

>>Discovery code doesn't have to be bad code.

One of the things a lot of people are NOT used to: always writing good code, no matter what you are writing it for.

A while back I was writing small snippets to fix a production issue. A colleague looked over my shoulder and said, "You don't have to do all that, just get it done for now."

He was quite surprised when he learned that this is how I always write code. It feels like people have different coding styles for different situations. They do tend to have a mode where they are doing it for production, and they are quite slow at it.

slt2021

>>You know what happens to code that works? It ships.

There is nothing wrong with that; it is up to the developer and the team to enforce and gatekeep quality,

or up to the business user to accept working code

smaudet

> You know what happens to code that works? It ships.

That's a process problem. Even professional writers have "drafts"; you shouldn't be shipping first-draft code regardless. I guess maybe you are the equivalent of a 2-cent rag, and then the product sucks, but we won't get into all the ways you can suck as a software org...

Programming is neither writing an article nor designing an engine; it's somewhere in the middle. You can apply "discovery" or "drafting" to the professional process as much as you can apply engineering design paradigms. The formal act of writing (unit) tests provides this opportunity, I find, to take the "discovery" implementations, harness them with specific "design" requirements, and produce a polished product.

TDD has this backwards: design, then develop, as if that would somehow produce fantastic designs (it doesn't, just a lot of test code). BDD (Behavior Driven Development) is better here; you aren't driven by your tests as much as by your behaviors. You may discover them, but once you do, you test that they continue to work correctly.

BobbyTables2

It’s very simple.

As soon as the code works, tell absolutely no one.

Then rewrite it….

zdragnar

This is basically waterfall and agile in a nutshell.

The other end of the spectrum is research and documentation taken to the extreme: a rigid, drawn-out development process that doesn't readily allow for any sort of iteration.

The only real question is what is the difference between a working prototype that gets thrown away and an actual MVP. You need buy in from the business up front, otherwise you won't be given the option to refactor or redo parts, you'll be told it is good enough and there are features that need shipping.

Jcampuzano2

If this were the prominent mentality, some of the greatest pieces of software there are would never have existed because they would run into too many brick walls to even try.

Do you never go back and redesign things later? Can we not ship unless we have all ends neatly tied up?

TheGoodBarn

I am an avid discovery coder and was actually daydreaming an outline for a similar article on my way home on the bus today. I think this is an extremely important concept at all levels of engineering and something we all need to adopt at one point or another in our careers / practice.

I think it follows a few topics, "the art of the POC/Spike" or just exploratory coding. These things give us a tangible, hands-on approach to understanding the codebase, and I think lend themselves to better empathy and understanding of a software system and less rash criticism of projects that may be unfamiliar.

This is particularly relevant to me right now as I am discovery coding a fairly large project at my company and working with product to lay out design and project planning. What's difficult to express from my current standing is how the early stages of these types of projects are more milestone / broad based than about isolated small key pieces. Sure, I can spend a week delivering design, architecture, epic, and outline docs for all the known and unknown features of the project (and I am). But at the same time I need to discover and test out base case / happy path solutions to the core business problem to more accurately understand the scope of the project.

I think it's something I particularly love about being a TL / IC at my company. I have the flexibility and trust to "figure it out" and the working arrangement to provide adequate professional documentation at the appropriate time. I am fortunate to have that buy-in from leadership and certainly recognize it as a unique situation.

All that being said:

1. Learn how to effectively isolate and run arbitrary parts of your system for YOUR understanding and learning.

2. Make it work, make it right, make it fast.

3. Learn to summarize and document your findings in a suitable fashion for your situation.

4. Encourage this throughout your team. Useful in all aspects from bug triage to greenfield work.

pranavmalvawala

I'm like this, but the problem sometimes is that I get overwhelmed by the stuff I'm working on. Honestly, though, I wouldn't do it any other way.

tibbar

One trick I find helpful is to start by coding a subset of the problem with the goal of understanding the structure better. A brute-force solution, a simulation, a visualization of data, etc. And then use the discoveries of that process to do the real planning.

colordrops

I've heard this sort of activity called "pathfinding".

yen223

It's a "spike" in Agile parlance, if you want to sell this approach to agile people

dr_kiszonka

I usually "play around" with a problem or data, but "pathfinding" sounds much better.

dusted

I call this exploratory programming, though my approach aligns more with the article I posted here than with the Wikipedia definition.

I primarily use this method as a step preceding the actual production-quality implementation. It’s not like a prototype—I don’t throw everything away when I’m done. Instead, I extract the valuable parts: the learned concepts, the finished algorithms, and the relevant functions or classes. Unit tests are often written as part of setting up the problem, so I lift those out as well.

I've greatly enjoyed this approach, particularly in JavaScript and TypeScript. Typically, I solve the difficult parts in a live environment and extract the solutions when I find them. I used to use my own "live environment" (hedon.js), but I eventually reversed the approach and built an environment around the built-in Node.js REPL (@dusted/debugrepl). I include this, at least in debugging and development builds, allowing me to live-code within a running system while having access to most, if not all, of the already-implemented parts of the program.

This approach lets me iterate at the function-call or expression level rather than following the traditional cycle of modifying code, restarting the program, reestablishing state, and triggering the desired call, something that annoys me to no end for all the obvious reasons.

anilakar

A large fraction of code I write at work is either network protocol reverse engineering or interfacing with physical devices. Peeling stuff open layer by layer is often the only way to approach a problem and if I had to document everything beforehand, I would end up writing the same program twenty times.

globalise83

I believe this is actually where Gen AI tools like Claude.AI come into their own. For example, in the past few weeks we needed to plan a complex integration project with both frontend and backend integrations, dependencies on data provided in the backend and frontend, the need to send data back and forth between the 3rd party and our backend, etc., and in total probably a half-dozen viable alternative ways of doing it.

Using Claude plus detailed prompting with a lot of contextual business knowledge, it was possible to build a realistic working prototype of each possible approach as separate Git branches and easily demo them within about two days. Doing this also captured multiple hidden constraints aka "gotchas" in the 3rd party APIs.

Building each of these prototypes in the working Java codebase would have been a massive, time-consuming and pointless activity when a decision still needed to be made on which approach to go with. But getting Claude AI to whip up a simplified replica of our business systems using realistic interfaces and then integrate the 3rd party was super-easy. Generating alternative variants was as simple as running a script to consolidate the source files and getting Claude to generate the new variant, almost without any coding needed.

And because this prototype was built in a few HTML and JS files and run using Node.js, there is literally zero possibility of it becoming part of the production codebase.

chriscbr

I really appreciate this essay.

I've never been a [traditional] artist, but I reckon that those working in the arts, and even in areas of the programming world where experimentation is more fundamental (indie game development, perhaps?), would intuit the importance of discovery coding.

Even when you're writing code for hairy business problems with huge numbers of constraints and edge cases, it's entirely possible to support programmers that prefer discovery coding. The key is fast iteration loops. The ability to run the entire application, and all of its dependencies, locally on your own machine. In my opinion, that's the biggest line in the sand. Once your program has to be deployed to a testing environment in order to be tested, it becomes an order of magnitude harder to use a debugger, or intercept network traffic, or inspect profilers, or do test driven development. It's like sketching someone with a pencil and eraser, but there are 5-10 second delays between when you remove your pencil and when the line appears.

Unfortunately, it seems like many big tech companies, even that would seem to use very modern development tooling otherwise, still tend to make local development a second class citizen. And so, discovery coders are second class citizens as well.

pj_mukh

Yeah, TIL I'm a discovery coder. I've always found planning early in greenfield projects kind of pointless. Planning is almost step 3 or 4. I almost always prototype the most difficult/opaque parts, build operations around testing and revising (how do you know something is good enough?), and then plan out the rest.

sfn42

Hard agree on local development. I always make apps run locally and include a readme that describes all the steps for someone else to run it locally as well.

Ideally that should be as simple as adding a local app settings file (described in the readme so people don't have to start reading the code to figure out what to put in it) for secrets and other local configuration (make sure the app isn't trying to send emails locally, etc.), and running docker compose up. If there are significantly more steps than that, there had better be good reasons for them.
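A minimal sketch of that setup, assuming a hypothetical web app with a Postgres dependency (the service names, ports, image tag, and settings-file path are all illustrative, not from the comment):

```yaml
# docker-compose.yml - everything needed to run the app locally
services:
  app:
    build: .
    ports:
      - "8080:8080"
    volumes:
      # Local-only secrets/settings, documented in the readme;
      # the file itself is gitignored.
      - ./appsettings.local.json:/app/appsettings.local.json:ro
    environment:
      EMAIL_ENABLED: "false"   # never send real emails from a dev machine
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev-only-password
```

With something like this in place, onboarding is one copied settings file plus `docker compose up`.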

hi-wintermute

This reminds me of a tongue-in-cheek phrase we used to use in college.

"Hours of coding can save minutes of planning."

"Discovery Coding" sounds fun, but be careful with your time!

boxed

The opposite is also true.

I've seen the opposite happen maaaaany times, but I can't remember ever having what you describe happen to me. Coding, if done top-down, will find the real problems really fast. Discussions don't have this property of touching reality.

Discussing a problem is like theology.

Coding it is like science.

One involves thinking real hard, the other involves hard reality.

elcapitan

Also, minutes of planning without understanding the topic can lead to months of coding.

esperent

> careful with your time!

In my experience, this aphorism applies equally to any form of coding, and probably to nearly any complex human activity.

If you love writing outlines and plans, you can waste just as much time on that as the discovery coder does on their pathfinding. Not to mention the amount of time you can waste on refactoring and reorganizing.

sfn42

Can't plan what you don't know. That's the point: you discover/explore what you need in order to make a proper plan.

nurbl

I find that I always learn something valuable by diving in and trying ideas out concretely. High-flying plans can also cause a lot of wasted coding on things that won't work out.

yellowapple

Reminds me of the alleged programming approach of Dr. Joe Armstrong of Erlang fame (RIP): write a program, then rewrite it, then rewrite it, and so on until it's good enough.

That's also how I tend to program, though usually as an accidental consequence of my ADD brain getting distracted, then being entirely dissatisfied with my code (or worse: I was too clever with it and it's indecipherable) when I come back to it, prompting yet another rewrite.

TheCapeGreek

Never thought of it this way, but it makes sense. My default response to probing/planning type questions from business is "uhhh no clue, I have to dive into the code first and find out" precisely because of this.

benwerd

The actual term for a "discovery writer" is a "pantser" - i.e., you're writing by the seat of your pants - and I think that's a reasonable term to adopt here too.

Confession: I'm a pantser in writing both code and prose. In both cases, coming back and writing a spec (an eng spec in the case of code, a synopsis in the case of prose) is a reasonable thing to do. Structure is good, but the point is that it shouldn't get in the way of actually getting started and making some progress.