I want to be a Journey Programmer Again
71 comments
·June 15, 2025
ctoth
hexhowells
OP here, I whipped this up in like 10 minutes after modelling the problem from a new perspective (I want to be less of a perfectionist with my blogs) so there are definitely grey areas I didn't consider/cover.
I do think LLMs can be good for certain boilerplate code whilst still allowing you to enjoy the problems you care about, and as far as my binary definitions this is more of a grey area.
I guess for me, this has introduced a slippery slope where if the LLM can also code the "fun" stuff, I'll be more inclined to use it, which defeats the whole purpose for me. Perhaps being able to identify which type of project I'm working on can help me avoid using LLMs and enjoy programming more again!
throwaway31131
Maybe you could ask the LLMs to stub out whatever you consider fun, leaving you with a LeetCode-style problem to solve. I could see that being fun. I actually really like LeetCode in the same way some people like doing Sunday crossword puzzles.
drbojingle
I'm 100% in the same boat. Bring on the brave new world and let me go higher
hnlmorg
> OP treats all implementation details as equally valuable parts of "the journey,"
Do they? That wasn’t my take away from the article.
My impression was that the author missed the enjoyment of problem solving because they overused AI. Not that they think all problems are equal.
For what it’s worth though, I do agree with your more general point about AI use. And in fact that’s how I’ve used AI code generation too. “Solve the tedious problem quickly so you can focus on the interesting one”.
Fraterkes
I get your point. I think the difficult thing is that these tools are not delineated: pre-ground ink does not have the capability to write your stories for you, but with llms we constantly have to reassess which parts of the thing we are building merit our attention.
If llms get better, will you have to decide whether you actually care about writing decision trees, or if instead you just want to, more generally, curate procedural interactions (or something)?
My point is: if these next few years every project becomes an exercise in soul-searching for which parts of my work actually interest me, it is maybe less work not to use these tools, or alternatively, to find something fulfilling that doesn’t involve making something.
hintymad
A trajectory question: has anyone thought about becoming a journeyman in their day-to-day work? Like a backend engineer switching to building machine learning models. Or a frontend engineer moving into optimizing LLM serving infrastructure. The challenge isn’t so much technical—it’s social.
Here's a typical scenario: you're a well-respected senior engineer at your company. Say you're an E8 at Meta. You spend your days in meetings, write great documentation, and read more papers than most, which helps you solve high-level architectural problems. You’ve built deep expertise in your domain and earned a strong reputation, both internally and in the industry.
But deep down, you know you’re rusty with tools. You haven’t written production code in years. You’re solid in math and machine learning theory from all the reading, but you’ve never actually built and shipped production ML models. You're fluent in linear algebra and whatnot, but you don't know shit about writing CUDA libraries, let alone optimizing them. When you check the job specs at companies like OpenAI, you see they’re using Rust. You might be able to write a doubly linked list in Rust, but let’s be honest—you’d struggle to write a basic web service in it.
So switching domains starts to feel daunting. To say the least, you'll lose the influence you've built. Even if you’re willing to take a pay cut, the hiring company might not even want you. Your experience may help a little, but not enough. You’d have to give up your comfort zone of leading through influence and dive back into the mess of writing code, fixing elusive bugs, and building things from scratch—stuff you used to love.
But now? You’ve got a family of five. You get distracted more often. Leadership fits your life better—you can rely more on experience, communication, intuition. Still, a part of you misses being a journeyman.
So how does someone actually make that move? Do you just bite the bullet and try? Stick to adjacent areas to play it safe? Join a company doing the kind of work you want, but stay in your current domain at first—say, a backend engineer goes to OpenAI but still works on infra? Or is there another path?
pyman
Feels like we're heading towards a world where computer languages disappear, and we just use human language to tell machines what to do. Kinda like how typewriters got replaced by computers in the 80s. Back then, people spent so much time making sure there were no typos, they'd lose focus on the actual story they were trying to write.
Same thing's happening now with code. We waste so much time dealing with syntax, fixing bugs, naming variables, setting up configs, etc, and not enough time thinking about the real problem we're trying to solve.
From Assembly to English. What do you reckon?
sanderjd
As much as I'm finding LLMs incredibly useful, this "world where computer languages disappear" doesn't resonate with me at all. I have yet to see any workflows where the computer language is no longer a critical piece of the puzzle, or even significantly diminished in importance.
I think there is an important difference between LLM-interpreted English, and compiler-emitted Assembly, which is determinism.
The reason we're still going from human prompt to code to execution, rather than just prompt to execution, is that the code is the point at which determinism can be introduced. And I suspect it will always be useful to have this determinism capability. We certainly spend a lot of time debugging and fixing bugs, but we'd spend even more time on those activities if we couldn't encode the solutions to those bugs in a deterministic language.
Now, I won't be at all surprised if this determinism layer is reimplemented in totally different languages, that maybe are not even recognizable as "computer language". But I think we will always need some way to say "do exactly this thing" and the current computer languages remain much better for this than the current techniques to prompt AI models.
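A small illustration of this determinism point (my own example, not from the comment): even a one-word English instruction like "round this number" is ambiguous, while code is forced to pick exactly one behavior. The `round_half_up` helper below is hypothetical; `round()` is Python's built-in.

```python
import math

# The English instruction "round this number" is ambiguous: round half up?
# Round half away from zero? Round half to even? Code pins down one choice.
def round_half_up(x: float) -> int:
    """Round halves away from zero: 2.5 -> 3, -2.5 -> -3."""
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

# Python's built-in round() makes a different, but equally deterministic,
# choice: banker's rounding (round half to even), so round(2.5) == 2
# while round_half_up(2.5) == 3.
```

Both behaviors are defensible readings of the English sentence; the value of the programming language is that it commits to exactly one of them, forever.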
lubujackson
I predict we enter a world where these wand waving prompts are backed by well-structured frameworks that eliminate the need to dig in the code.
Originally I thought LLMs would add a new abstraction layer, like C++ -> PHP, but now I think we will begin replacing swaths of "logically knowable" processes one by one, with dynamic and robust interfaces. In other words, LLMs, if working under the right restrictions, will add a new layer of libraries.
A library for auth, a library for form inputs, etc. Extensible in every way with easy translation between languages. And you can always dig into the code of a library, but mostly they just work as-is. LLMs thrive with structure, so I think the real next wave will be adding various structures on top of general LLMs to achieve this.
sanderjd
This is possible. But when I read something like this, I just wonder: Why would this be more efficient than doing this with the same component we already call "libraries" - that is, a normal library or component created with some computer language - and just using AI to create and perfect those libraries more quickly?
I'm not even sure I disagree with your comment... I agree that I think LLMs will "add a new layer of libraries" ... but I think it seems fairly likely that they'll do that by generating a bunch of computer code?
thenoblesunfish
English is not well-specified or unambiguous. Programming languages aim to be. This is a massive difference. Recall that laws are specified in English.
pyman
This is an interesting debate. For me, the real question is: What's the goal of any language (human or programming)?
In my opinion, it's to communicate intent, so that intent can be turned into action. And guess what? LLMs are incredibly good at picking up intent through pattern matching.
So, if the goal of a language is to express intent, and LLMs often get our intent faster than a software developer, then why is English considered worse than Python? For an LLM, it's the same: just patterns.
quesera
Laws attempt to solve this problem with verbosity. It works pretty well but of course the exceptions are always interesting.
But I think the domain of an AI-first PL would or could be much smaller. So the language is "lower-level" than English, but "higher-level" than any existing PL including AppleScript etc, because it would not have to follow the same kinds of strict parser rules.
With a smaller domain, I think the necessary verbosity of an AI-first PL could be acceptable and less ambiguous than law.
Disposal8433
> We waste so much time dealing with syntax, fixing bugs, naming variables, setting up configs
I definitely don't do that. It's a very small part of my job. And AFAIK, LLMs cannot generate assembly language yet, and CPUs don't understand English.
pyman
We live in a world with 7,000 human languages and around 8,000 programming languages. Most people only learn a handful, which limits how effectively they can express intent. This is inefficient.
In theory, one universal language would solve that, for both humans and machines.
Maybe the best solution isn't one language (English, Spanish, Golang, or Python), but one interface that understands all of them. And that's what LLMs might become.
sitzkrieg
ive used various llms to generate x86, mips, riscv assembly with mostly usable results. you tend to see what it was trained on pretty quickly if you go deep tho
raincole
> Back then, people spent so much time making sure there were no typos, they'd lose focus on the actual story they were trying to write.
Were you a published author in the 80s?
Because I highly doubt this was how writers in the 80s thought of their job.
pyman
No, but I've studied the history of computers and keyboards. There's plenty of evidence that writing with typewriters was much slower than using a computer. Writers were also more limited creatively, since they couldn't easily edit or move things around once the page was written.
ofjcihen
Slow doesn’t necessarily mean less creative. In fact it’s been argued that being slow and deliberate actually pulls you out of automated patterns of thinking and gives you time to mull over what you want to say.
This is even enhanced when you create an artificial barrier such as writing in all caps.
suzzer99
> Feels like we're heading towards a world where computer languages disappear, and we just use human language to tell machines what to do.
I agree, but it feels like we need a new type of L_X_M. Like an LBM (Large Behavior Model), which is trained on millions of different actions, user flows, displays, etc.
Converting token weights into text-based code designed to ease the cognitive load on humans seems wildly inefficient compared to converting tokens directly into UI actions and behaviors.
syx
While I agree with all the previous comments, your comment sparked an idea in me. I started imagining a future where we develop a new programming language optimized for LLMs to write and understand. In this hypothetical scenario, we would still need developers to debug and review the code to ensure deterministic outputs. Maybe this isn't so far-fetched after all. Of course, this is just speculation and imagination on my part.
mdaniel
Relevant: LLMunix - A Pure Markdown Operating System - https://news.ycombinator.com/item?id=44279456 - Jun, 2025 (1 comment)
horsellama
you’d need a training set covering all the useful cases. Something that we don’t have even now for mainstream languages
dataviz1000
Another good analogy is how "calculators", the people who performed mathematical calculations by hand, were replaced by machines. Sure, they were eventually put out of work; nonetheless, the mechanical and then electronic calculators eventually made entire industries so efficient that it increased everyone's wealth and created new positions and jobs.
We will be fine.
throwaway31131
I could never relate to the programmers who wrote code for the sake of writing code. I write a lot of code, but for me the code is a means, not an end.
So I look at tools like LLMs as just the latest incarnation of tools to reduce the number of hours the human has to spend to get to the end.
When I very first started programming, a very long time ago, the programmer actually had to consider where in memory, like at what physical address, things were. Then tools came along and it’s not a thing. You were not a programmer unless you knew all about sorting and the many algorithms and tradeoffs involved. Now people call sort() and it’s fine.

Now we have LLMs. For some things people think they’re great. Me personally I have not found utility in them yet (mostly because I don’t work on web, front end, or in python) but I can see the potential. But dynamic loaders and sort() didn’t replace me, I’m sure LLMs won’t either, and I’ll be grateful if it helps me get to the end with less time invested.
cube2222
Yeah, this,
LLMs to me are primarily:
1. A way to get over writers block; they can quickly get the first draft down, which I can then iterate on; I’m one of those people who generally first implement something in a dirty way just to get it working, and then do a couple more iterations / rewrites on it, so this suits my workflow perfectly. Same for writing a first draft of a design doc based on my brain dump.
2. A faster keyboard.
Generally, both of these mean that energetically, coding is quite a bit less mentally tiring for me, and I can spend more energy on the important/hard things.
jackdoe
> hollow destination.
I can say that in the last 2 years chatgpt/claude have added more code to my projects than me, and I have been programming for 25 years (counting the rejected tokens as well).
When I use copilot/cursor it is so violent, it interrupts my thoughts, it makes me a computer that evaluates its code instead of thinking about how my code is going to interact with the rest of the system, how it evolves and how it is going to fail and so on.
Accept/Reject/Accept/Reject.. and at the end of the day, I look back, and there is nothing.
One day, it lagged a bit, and code did not come out, and I swear I didn't know what to type, as if it was not my code. The next day I took time off work to just code without it. During that time I used it to write a st7796s spi driver and it did an amazing job, I just gave it 300 pages of docs, and told it what api to make and it made an amazing driver, I read it, and I used it, saved me half a day of work easily.
Life is what overcomes itself, as the poet said, I am not sure "destination programmers" exist. Or even if they do, I don't know what their "destination" means. If you want to get better, reflect on what you do and how you do it, and you will get better.
I wrote https://punkx.org/jackdoe/misery.html recently out of frustration, maybe you will resonate with it.
PS: there is no way we will be able to read llm's code in the near future, it will easily generate millions of lines for you per day, so we will need to find an interface to debug it, a bit like Geordi from Star Trek. LLMs will be our lens into complexity.
globnomulous
Students of ancient languages fall into one of two camps: those who use translations for 'assistance' and those who don't. Classroom experiences have shown me that the two groups of students learn vastly different skills.
The group who struggle through texts by themselves without relying on any shortcuts -- they just sit with the text -- probably won't become top-shelf philologists, but when you give them a sentence they haven't seen before from an author they've read, the chances are very good that they'll be able to make sense of it without assistance. These students learn, in other words, how to read ancient languages.
The group who rely on translations learn to do precisely that: rely on a translation. If you give them a text by an author they've 'read' before and deny them use of a side-by-side translation, they almost never have any clue how to proceed, even at the level of rudimentary parsing. Is that word the second-person-singular aorist imperative middle or is it the aorist infinitive active? They probably won't even know how to identify the difference -- or that there is one.
Our brains are built for energy conservation. They do what, and only what, we ask of them. Learning languages is hard. Reading a translation is easy. Given the choice between the harder skill and the easier, the brain will always learn the easier. The only way to learn the harder one is to remove the option: sit with the text; struggle.
So far I've been able to avoid LLMs and AI. I've written in other comments on HN about this. I don't want to talk to an anthropomorphic chat UI, which I call "meeting-based programming." I want to work with code. I want to become a more skillful SWE and better at working with programming languages, software, and systems. LLMs won't help me do this. All the time they save me -- all the time they steal from reading code, thinking about it, and consulting documentation -- is time they've stolen from the work I actually want to do. They'll make me worse at what I do and deprive me of the joy I find in it.
I've argued with teammates about this. They don't want to do the boring stuff. They say AI will do it for them. To me that's a Faustian bargain. Every time someone hands off the boring stuff to the machine, I'd wager they're weakening and giving up the parts of themselves that they'll need to call upon when they find something 'interesting' to work on (edit: and I'd wager that what they consider interesting will be debased over time as well, as programming effort itself becomes foreign and a less common practice.)
xandrius
One could say this about absolutely any technology.
Using a hoe is making you weaker than if you just used your bare hands. Using a calculator is making your brain lose skill in doing complicated arithmetic in your head.
Most have never built a fire completely from scratch, they surely are lacking certain skills but do/should they care?
But as with everything else, you can take technology to do more, things that might be impossible for you to do without it, and that's ok.
sotix
Does the hoe operate itself?
I took a statistics course in high school where we learned how to do everything on a calculator. I was terrible and didn’t understand statistics at the end of it. My teacher gave me a gentleman’s C. I decided to retake the course in college where my teacher taught us how to calculate the formulas by hand. After learning them by hand, I applied everything on exams with my calculator. I finished the class with a 100/100, and my teacher said there was no need for me to take the final exam. It was clear I understood the concept.
What changed between the two classes? Well, I actually learned statistics rather than how to let a tool do the work for me. Once I learned the concept, then I was able to use the tool in a beneficial way.
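The same idea can be sketched in a few lines of Python (a hypothetical example, not from the comment): once you can compute a statistic from its definition, the library call becomes a check rather than a crutch. The data values here are made up for illustration.

```python
# Compute the sample variance by hand from its definition, then
# confirm that the standard library's "calculator" agrees.
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

mean = sum(data) / len(data)                    # x-bar
squared_devs = [(x - mean) ** 2 for x in data]  # (x_i - x-bar)^2
by_hand = sum(squared_devs) / (len(data) - 1)   # divide by n - 1 for a sample

assert abs(by_hand - statistics.variance(data)) < 1e-9
```

Knowing why the denominator is n - 1 (and when to use `statistics.pvariance` instead) is exactly the understanding the calculator-only course never taught.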
bsder
> To me that's a Faustian bargain. Every time someone hands off the boring stuff to the machine, I'd wager they're weakening the parts of themselves that they call upon when they want to work on the 'interesting' stuff.
It's worse than that, people who rely too much on the AI never learn how to tell when it is wrong.
This is different from things like "nobody complains about using a calculator".
A calculator doesn't lie; LLMs on the other hand lie all the time.
(And, to be fair, even the calculator statement isn't completely true. The reason why the HP 12C is so popular is that calculators did lie about some financial calculations (numerical inaccuracy). It was deemed too hard for business majors to figure out when and why so they just converged on a known standard.)
mbil
Thanks for sharing. I’m a “destination” programmer most of the time, and I’ve welcomed LLMs into my work. I actually wrote about this from my perspective just the other day: https://matthewbilyeu.com/blog/2025-06-14/vibecoding-s-allur...
charlie0
The solution to this, imo, is to expand the definition of what it means to "program". I'm increasingly realizing that AI tools are the new programming substrate. I've been able to heavily automate workflows and I use the word workflows loosely here.
It's allowed me to tackle other parts of the knowledge stack that I would otherwise have no time for. For example, learning more about product management, marketing, and doing deeper research into business ideas. The programming has now gone strictly from coding to automating the flows related to these other jobs. In that sense, I'm still "programming", it just looks different and doesn't always involve an IDE. Bonus is my leverage has dramatically increased.
einpoklum
> I'm increasingly realizing that AI tools are the new programming substrate
Human programming is the old, and new, programming substrate - and the liberal substrate for what AI tools do. They're trained on it.
MarkusQ
To be honest, I had the same reaction when I started using high-level languages. I wasn't touching the metal (certainly not as much as I had been when solving problems sometimes involved things like repurposing unused bits on a multiplexed bus to talk to a new peripheral) and it somehow felt less real. But pretty quickly the range of problems I was addressing shifted, and everything clicked back into focus. I'd never _really_ been touching the metal and I always had been (and still was) in touch with it. Ditto giving up stick shift. And I imagine at some point artists felt the same thing when they transitioned to commercially prepared oil paint.
lelele
This. The mental shift resembles the one away from machine language, then away from assembly, then away from C... But programmers who still knew how things worked at lower levels had an edge on others.
socalgal2
I sense similar things to the OP. This feeling of not really thinking through some of the things I would have thought through before
At the same time, at least at the moment, this feels like just another tool. I'm old, started programming in the early 80s. Basic->Asm->C->C++ (perl-python-js-ts-go). Throughout my life things have gotten easier. Drawing an image on my Atari 800 or Apple II was way harder than it is on any PC today in JavaScript with the Canvas API or some library like three.js. Reading files, serialization, data structures, I used to have to write all that code by hand. I learned how to parse files, how to deal with endian issues, alignment issues, write portable code, etc but today I can play a video in 3 lines of JavaScript. I'm much happier just writing those 3 lines than writing video encoders/decoders by hand (did that in the 90s) and I'm much happier writing those 3 lines than integrating ffmpeg or some other video library into C++ or Rust or whatever. Similarly in 3D, I'm much happier using three.js or Unreal or Unity than writing yet another engine and 100+ tools.
ATM LLMs feel like just another step. If I'm making a game, I don't want the AI to design the game, but I do want the AI to deal with all the more tedious parts. The problem has been solved before, I don't need to solve it again. I just want to use the existing solution and get to the unique parts that make whatever I'm making special.
analog31
>>> Like many people I've become more reliant on LLM tools as time has passed...
"Time has passed", indeed. Like 9 months. This just reminded me in a quaint way how we've gotten used to such rapid progress.
danielbln
I love solving problems, ideally with somewhat creative solutions. Code is one way of accomplishing that, and there are many fun parts to that process. The composition of functionality, the design and structure and so on. The most enjoyment however I get from getting something solved, and if I have to leave the intricate dance with the code to the machine to get there faster and often better, I'll happily do it.
Let me share a problem I solved recently (well, a year ago god I'm getting old)!
I had to write this template loader for my space sim... reference resolution, type mapping, and YAML parsing. This isn't the code I wanted to write. The code I wanted to write was behavior trees for AI traders, I'm playing with an idea where successful traders can combine behavior trees yada yada, fun side project.
But before I could touch any of that, I had to solve this reference resolution problem. I had to figure out how to handle cross-references between YAML files, map string types to Python classes, recursively fix nested references. Is this "journey programming"? Sure, technically. Did I learn something? I guess. But what I really learned is that I'd already solved variations of this problem a dozen times before.
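The shape of that reference-resolution problem can be sketched in a few lines (my own reconstruction, not the commenter's actual code; the `resolve_refs` name, the `"ref:"` marker convention, and the space-sim template names are all invented for illustration):

```python
# Templates as they would come out of yaml.safe_load: plain dicts/lists
# that may contain "ref:<name>" strings pointing at other templates,
# nested arbitrarily deep.
def resolve_refs(node, registry):
    """Recursively replace 'ref:<name>' strings with the named template."""
    if isinstance(node, str) and node.startswith("ref:"):
        return resolve_refs(registry[node[4:]], registry)
    if isinstance(node, dict):
        return {k: resolve_refs(v, registry) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_refs(v, registry) for v in node]
    return node

# Hypothetical templates, e.g. loaded from two different YAML files:
registry = {
    "iron_ore": {"type": "resource", "base_price": 10},
    "mining_ship": {"cargo": ["ref:iron_ore"], "crew": 4},
}
ship = resolve_refs(registry["mining_ship"], registry)
```

A real loader would also need cycle detection and the string-to-Python-class type mapping the comment mentions, which is exactly the already-solved-a-dozen-times plumbing being described.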
This is exactly where I'd use Claude Code or Aider + plan.md now - not because I'm lazy or don't care about the journey, but because THIS isn't my journey. My journey is watching AI merchants discover trade routes, seeing factions evolve new strategies, debugging why the economy collapsed when I introduced a new resource.
OP treats all implementation details as equally valuable parts of "the journey," but that's like saying a novelist should grind their own ink. Maybe some writers find that meaningful. Most just want to write. I don't want to be a "destination programmer" - I want to be on a different journey than the one through template parsing hell.