We can’t circumvent the work needed to train our minds
131 comments
· September 10, 2025 · trjordan
klodolph
I’ve tried the second path at work and it’s grueling.
“Almost certainly succeed” requires that you mostly plan out the implementation for it, and then monitor the LLM to ensure that it doesn’t get off track and do something awful. It’s hard to get much other work done in the meantime.
I feel like I’m unlocking, like, 10% or 20% productivity gains. Maybe.
BinaryIgor
Exactly, same for me
rorylaitila
Yeah I think this is what I've tried to articulate to people that you've summed up well with "You've compressed all your thinking work, back-to-back, and you're just doing hard thing after hard thing" - Most of the bottleneck with any system design is the hard things, the unknown things, the unintended-consequences things. The AIs don't help you much with that.
There is a certain amount of regular work that I don't want to automate away, even though maybe I can. That regular work keeps me in the domain. It leads to epiphanies regarding the hard problems. It adds time and something to do in between the hard problems.
wduquette
In my experience, a lot of the hard thinking gets done in my back-brain while I'm doing other things, and emerges when I take up the problem again. Doing the regular work gives my back-brain time to percolate; doing hard thing after hard thing doesn't.
mrguyorama
Also at the end of the day, humans aren't machines. We are goopy meat and chemistry.
You cannot exclusively do hard things back to back to back every 8 hour day without fail. It will either burn you out, or you will make mistakes, or you will just be miserable.
Human brains do not want to think hard, because millions of years of evolution built brains to be cheap, and they STILL use something like 20% of our daily energy.
CuriouslyC
I stay at the architecture, code organization and algorithm level with AI. I plan things at that level then have the agent do full implementation. I have tests (which have been audited both manually and by agents) and I have multiple agents audit the implementation code. The pipeline is 100% automated and produces very good results, and you can still get some engineering vibes from the fact that you're orchestrating a stochastic workflow dag!
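In outline, that plan → implement → audit loop might look something like the sketch below. All the function names here are hypothetical placeholders standing in for agent calls, not a real agent API; the point is just the shape of the orchestration, where audits gate the output and failures trigger regeneration.

```python
from typing import Callable

def run_pipeline(
    plan: str,
    implement: Callable[[str], str],        # hypothetical agent call: plan -> code
    auditors: list[Callable[[str], bool]],  # hypothetical independent audit passes
    max_rounds: int = 3,
) -> str:
    """Orchestrate a plan -> implement -> audit loop: a stochastic DAG in miniature."""
    code = implement(plan)
    for _ in range(max_rounds):
        # Only ship code that every auditor signs off on.
        if all(audit(code) for audit in auditors):
            return code
        # Stochastic step failed an audit: regenerate and try again.
        code = implement(plan)
    raise RuntimeError("audits did not converge within max_rounds")
```

In a real setup the auditors would themselves be LLM calls or test runs; here they are just predicates, which is enough to show why the pipeline can be fully automated.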
danenania
I'd actually say that you end up needing to think more in the first example.
Because as soon as you realize that the output doesn't do exactly what you need, or has a bug, or needs to be extended (and has gotten beyond the complexity that AI can successfully update), you now need to read and deeply understand a bunch of code that you didn't write before you can move forward.
I think it can actually be fine to do this, just to see what gets generated as part of the brainstorming process, but you need to be willing to immediately delete all the code. If you find yourself reading through thousands of lines of AI-generated code, trying to understand what it's doing, it's likely that you're wasting a lot of time.
The final prompt/spec should be so clear and detailed that 100% of the generated code is as immediately comprehensible as if you'd written it yourself. If that's not the case, delete everything and return to planning mode.
mmargenot
> You have to remember EVERYTHING. Only then you can perform the cognitive tasks necessary to perform meaningful knowledge work.
You don't have to remember everything. You have to remember enough entry points and the shape of what follows, trained through experience and going through the process of thinking and writing, to reason your way through meaningful knowledge work.
rafaquintanilha
"It is requisite that a man should arrange the things he wishes to remember in a certain order, so that from one he may come to another: for order is a kind of chain for memory" – Thomas Aquinas, Summa Theologiae. Not ironically, I found the passage in my Zettelkasten.
mmargenot
It's weird to read this from zettelkasten.de, given that the method is precisely about cultivating such a graph of knowledge. "Knowing enough to begin" seems to me to be the express purpose of writing and maintaining a zettelkasten and other such tools.
skydhash
I would say they mean being able to recall, not having everything at once. It’s being able to answer the 5 why’s.
wduquette
I arrange my code to follow a certain order, so that I can get my head back into a given module quickly. I don't remember everything; there's too much over the weeks, months, and years. But I can remember enough to find what I need to know if I structure it properly. Not unlike, you know, a Zettelkasten.
mvieira38
Just to be clear, are you saying that to know something:
1- You may remember only the initial state and the brain does the rest, like with mnemonics
2- You may remember only the initial steps towards a solution, like knowing the assumptions and one or two insights to a mathematical proof?
I'd say a Zettelkasten user would agree with you if you mean 1
keremk
Actually, this is how LLMs (with reasoning) work as well. Pre-training is analogous to the human brain getting trained on as much information as possible. There is a yet-unknown threshold of sufficient pre-training, past which the models can start reasoning, use tools, and incorporate feedback to do something that resembles human thinking and reasoning. So if we don't pre-train our brains with enough information, we will have a weak base model. This is of course more of an analogy, since we still don't know how our brains really work, but it looks increasingly aligned with this hypothesis.
skybrian
This is task-specific. Consider having a conversation in a foreign language. You don't have time to use a dictionary, so you must have learned words to be able to use them. Similarly for other live performances like playing music.
When you're writing, you can often take your time. Too little knowledge, though, and it will require a lot of homework.
stronglikedan
I always tell people that I don't remember all the answers, only where to find them.
mallowdram
Of course you have to remember everything. Your brain stores everything, and you then get to add things by forgetting, but that does not mean you erase things. The brain is oscillatory, it works somehow by using ripples that encode everything within differences, just in case you have to remember that obscure action-syntax...a knot, a grip, a pivot that might let you escape death. Get to know the brain, folks.
chrisweekly
Interesting take. I respectfully differ. IIRC, Feynman said something akin to my POV:
Brains are for thinking. Documents / PKM systems / tools are for remembering.
IOW: take notes, write things down.
FWIW I have a degree in cognitive psychology (psychobiology, neuroanatomy, human perception) and am an amateur neuroscientist. Somewhat familiar w/ the brain. :)
mallowdram
Feynman wasn't a neurobiologist.
I'd read Spontaneous Brain by Northoff (Copernican, irreducible neuroscience) or oscillatory neurobiology Buzsaki.
The brain is lossless.
I would agree that external forms of memory are evolutionarily progressive, that ability to utilize the external forms requires a lossless relationship.
Once we grasp that the infinitely inferior externals of arbitrariness (symbols, words) are correlated through superior, lossless, concatenated internals (action-neural-spatial syntax), then until we can externalize that direct perception, the externals remain deeply inferior, lossy forms.
HPsquared
A bit like the memory palace. One memory leads to another. Not random-access.
palmfacehn
You only need the initial seed to restore the full state, provided you can reason your way from there. If you haven't applied yourself to problem solving, then perhaps you might need to memorize the full state.
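That seed metaphor has a literal analogue in programming: a seeded PRNG lets you regenerate an arbitrarily long sequence from one small number, rather than storing the whole sequence. A minimal illustration (the seed value and sequence length are arbitrary):

```python
import random

SEED = 42  # the "initial seed": the only thing you have to remember

def sequence(n: int) -> list[int]:
    """Re-derive the same n values from the seed alone, on demand."""
    rng = random.Random(SEED)
    return [rng.randint(0, 999) for _ in range(n)]

# The full "state" is never memorized; it is reconstructed identically each time.
assert sequence(1000) == sequence(1000)
```

Memorizing the full state is only necessary when you have no generator to reason your way from the seed.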
mmargenot
Executing on meaningful knowledge work also might require many different paths, depending on the context and the environment. To me it's more about the method of inquiry and how you begin than it is the specific content. Sure, more individual facts help to guide that inquiry, but at any given moment you're only truly going to be able to recall a subset of those.
tikhonj
> You have to remember EVERYTHING. Only then you can perform the cognitive tasks necessary to perform meaningful knowledge work.
If humans did not have any facilities for abstraction, sure. But then "knowledge work" would be impossible.
You need to remember some set of concrete facts for knowledge work, sure, but it's just one—necessary but small—component. More important than specific factual knowledge, you need two things: strong conceptual models for whatever you're doing and tacit knowledge.
You need to know some facts to build up strong conceptual models but you don't need to remember them all at once and, once you've built up that strong conceptual understanding, you'll need specifics even less.
Tacit knowledge—which, in knowledge work, manifests as intuition and taste—can only be built up through experience and feedback. Again, you need some specific knowledge to get started but, once you have some real experience, factual knowledge stops being a bottleneck.
Once you've built up a strong foundation, the way you learn and retain facts changes too. Memorization might be a powerful tool to get you started but, once you've made some real progress, it becomes unnecessary if not counterproductive. You can pick bits of info up as you go along and slot them into your existing mental frameworks.
My theory is that the folks who hate memorization are the ones who were able to force their way through the beginner stages of whatever they were doing without dull rote memorization, and then, once there, really do not need it any more. Which would at least partly explain why there are such vehement disagreements about whether memorization is crucial or not.
Ethee
I've been having conversations about this topic with friends recently, and I keep coming back to this idea that most engineering work, which I will define as work that begins with a question and without a clear solution, requires a lot of foundational understanding of the previous layer of abstraction. If you imagine knowledge as a pyramid, you can work at the top of the pyramid as long as you understand the foundation that makes up your level; to jump a level above or below would require building that foundation yet again. Computer science fits well into this model: you have people at many layers of abstraction who all work very well within their layer but might not understand much about the other layers. But regardless of where you are in the pyramid, understanding ALL the layers underneath will lead to better intuition about the problems of your layer. Farming out that understanding will obviously have a negative impact, not just on overall critical thinking, but on the way we intuit how the world works.
keiferski
I am sympathetic to memory-focused tools like Anki and Zettelkasten (haven't used the latter myself, though) but I think this post is a bit oversimplified.
I think there are at least two models of work that require knowledge:
1. Work when you need to be able to refer to everything instantly. I don't know if this is actually necessary for most scenarios other than live debates, or some form of hyper-productivity in which you need to have extremely high-quality results near-instantaneously.
(HN comments are, amusingly, also an example – comments that are in-depth but come days later aren't relevant. So if you want to make a comment that references a wide variety of knowledge, you'll probably need to already know it, in toto.)
2. Work when you need to "know a small piece of what you don't remember as a whole", or in other terms, know the map, but not necessarily the entire territory. This is essentially most knowledge work: research, writing, and other tasks that require you to create output, but that output doesn't need to be right now, like in a debate.
For example, you can know that X person say something important about Y topic, but not need to know precisely what it was – just look it up later. However, you do still need to know what you're looking for, which is a kind of reference knowledge.
--
What is actually new lately, in my experience, is that AI tools are a huge help for situations where you don't have either Type 1 or Type 2 knowledge of something, and only have a kind of vague sense of the thing you're looking for.
Google and traditional search engines are functionally useless for this, but you can ask ChatGPT a question like "I am looking for people who said something like XYZ." Previously this required someone to have asked the exact same question on Reddit or a forum; now you can get a pretty good answer from AI.
throwway120385
The AI can also give you pretty good examples of "kind" that you can then evaluate. I've had it find companies that "do X" and then used those companies to understand enough about what I am or am not looking for to research it myself using a search engine. The last time I did this I didn't end up surfacing any of what the AI provided. It's more like talking to the guy in the next cubicle, hearing some suggestions from them, and using those suggestions to form my own opinion about what's important and digging in on that. You do still have to do the work of forming an opinion. The ML model is just much better at recognizing relationships between different words and between features of a category of statements, and in my case they were statements that companies in a particular field tended to make on their websites.
skybrian
Live performance (like conversation or playing music) often relies on memory to do it well.
That might be a good criterion for how much to memorize: do you want to be able to do it live?
rzzzt
Pilots have checklists that they can follow without memorizing, but also memory items that have to be performed almost instinctively when they encounter the precondition events.
Etheryte
> If you can’t produce a comprehensive answer with confidence and on the whim the second you read the question, you don’t have the sufficient background knowledge.
While the article makes some reasonable points, this is too far gone. You don't need to know how to "weigh each minute spend on flexibility against the minutes spent on aerobic capacity and strength" to put together a reasonable workout plan. Sure, your workouts might not be as minmaxed as they possibly could be, but that really doesn't matter. So long as the plan is not downright bad, the main thing is that you keep at it regularly. The same idea extends to nearly every other domain, you don't need to be a deep expert to get reasonably good results.
cyanydeez
The US is, however, learning exactly what happens when rationality is not part of the equation. This is all a dance around what is a "fact" and how to string facts into a reasoning model that lets you predict or confirm other potential facts, etc...
It's simply different people we're talking about. Certain personalities are always going to gravitate to the "search for reason" model in life rather than "reason about facts".
nradov
At least in the field of sports science and exercise physiology, we have very little in the way of facts. Much of what we once thought were facts have been disproven or at least called into question by later research. So we need to be humble, and very circumspect in what we label as a "fact".
tibbar
Yeah, so much of this is not about facts so much as judgment: knowing the most important themes and factors for making a system work well. What are the parts that, if you get right, make everything else follow? Knowing what areas it's okay, even beneficial, to rediscover instead of trying to plan ahead of time.
j_bum
Do you have any go-to examples of “facts” that were disproven in this field?
throwway120385
Like what? There are some basics that have been studied and represent useful approximations. The general statement that your body "makes specific adaptations to imposed demands" seems to hold no matter what you do. There seems to always be some debate about what demands to impose to get a specific adaptation. For example, people have a very diverse array of opinions about how and when to stretch to achieve certain kinds of flexibility and if you do some reading you will find that these opinions follow from a body of work in a specific activity and that they don't always translate well to other activities.
ergonaught
The actual central point is that the brain requires conditioning via experience. That shouldn't be controversial, and I can't decide if the general replies here are an extended and ironic elaboration of his point or not.
If you never memorize anything, but are highly adept at searching for that information, your brain has only learned how to search for things. Any work it needs to do in the absence of searching will be compromised due to the lack of conditioning/experience. Maybe that works for you, or maybe that works in the world that's being built currently, but it doesn't change the basic premise at all.
vjvjvjvjghv
It’s the same with math. A lot of people say they don’t need to be able to do basic arithmetic because they can use a calculator. But I think that you can process the world much better and faster if at a minimum you have some intuition about numbers and arithmetic.
It’s the same with a lot of other things. AI and search engines help a lot but you are at an advantage if at least you have some ability to gauge what should be possible and how to do it.
hennell
I used to find it weird how many people would write an Excel formula on data they couldn't intuitively check. Even at a basic level, 'what percentage increase is a8 from a7': they enter a formula, then don't know if it's correct. I always wrote formulas on numbers I could reason with. If a8 is 120 and a7 is 100, you can immediately tell if you've gone wrong. Then you swap in 1,387 and 1,252 and know it's going to be accurate.
People do the same with AI, ask it about something they know little about then assume it is correct, rather than checking their ideas with known values or concepts they might be able to error check.
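The spot-check habit described above, testing a formula on numbers you can verify in your head before trusting it on real data, can be sketched as (the specific figures mirror the comment's example):

```python
def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# Sanity-check with values you can verify mentally: 100 -> 120 is +20%.
assert pct_increase(100, 120) == 20.0

# Only then trust it on the real, less intuitive numbers.
print(round(pct_increase(1252, 1387), 1))  # prints 10.8
```

The same discipline transfers to AI output: probe it first with a question whose answer you already know.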
RicoElectrico
With or without calculator some people have an aversion to calculation and that's the problem in my opinion. How much bullshit you can refute with back of the envelope calculations is remarkable.
This, and knowing by heart all the simple formulas/rules for area/volume/density and energy measurements.
The classic example being pizza diameter.
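The pizza example works because area scales with the square of the diameter, which intuition routinely underestimates. A quick back-of-the-envelope sketch (the sizes are illustrative):

```python
import math

def pizza_area(diameter_inches: float) -> float:
    """Area of a round pizza from its diameter."""
    return math.pi * (diameter_inches / 2) ** 2

# One 18" pizza vs. two 12" pizzas: the single large one has more area.
one_large = pizza_area(18)       # ~254 sq in
two_mediums = 2 * pizza_area(12)  # ~226 sq in
print(one_large > two_mediums)    # prints True
```

Knowing the area formula by heart is exactly what makes this refutation take ten seconds instead of a search.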
crims0n
I agree with the point being made, even if it is taken to an extreme. I would say you don't need to remember everything, but you do need to have been exposed to it. Not knowing what you don't know is a huge handicap in knowledge work.
“Try to learn something about everything and everything about something.”
AndyNemmity
Before the internet we asked people around us in our sphere. If we wanted to know the answer to a question, we asked, they made up an answer, and we believed it and moved on.
Then the internet came, and we asked the internet. The internet wasn't correct, but it was a far higher % correct than asking a random person who was near you.
Now AI comes. It isn't correct, but it's a far higher % correct than asking a random person near you, and often more correct than asking the internet, which is a random blog page by another random person who may or may not have done any research to come up with an answer.
The idea that any of this needs to be 100% correct is weird to me. I lived a long period in my life where everyone accepted what a random person near them said, and we all believed it.
buellerbueller
If you are asking random people, then your approach is incorrect. You should be asking the domain experts. Not gonna ask my wife about video games. Not gonna ask my dad about computer programming.
There, I've shaved a ton of the spread off of your argument. Possibly enough to moot the value of the AI, depending on the domain.
skybrian
This all assumes you have experts that you can talk to. But they might be difficult to find or expensive to hire. You wouldn't want to waste your lawyer's time on trivia.
skydhash
That is why experts often publish books and articles, which is then corrected by other experts (or random people if it’s a typo). I’ve read a lot of books and I haven’t met any of their authors. But I’ve still learned stuff.
AndyNemmity
Before the internet, I didn't have the phone number of domain experts to just call and ask these questions. Perhaps you did. For a lot of us, it was an entirely foreign experience to have domain experts at your fingertips.
skydhash
Didn’t you have books? And teachers?
tolerance
The author makes a lot of bold claims and I don't take his main one seriously re: remembering everything. I think he's being intentionally hyperbolic. But the gist is sound to me, if you can put one together. He needs an editor.
> To find what you need online, you require a solid general education and, above all, prior knowledge in the area related to your search.
>
> [...]
>
> If you can’t produce a comprehensive answer with confidence and on the whim [...] you don’t have the sufficient background knowledge.
>
> [...]
>
> This drives us to one of the most important conclusions of the entire field of note-taking, knowledge work, critical thinking and alike: You, not AI, not your PKM or whatever need to build the knowledge because only then it is in your brain and you can go the next step.
>
> [...]
>
> The advertised benefits of all these tools come with a specific hidden cost: Your ability to think. [This passage actually appears ahead of the previous one. –ed.]
This is best read alongside: https://news.ycombinator.com/item?id=45154088
bwfan123
Descartes' brief rules for the direction of the mind [1] is pertinent here, as it articulates beautifully what it means to do "thinking" and how that relates to "memory".
Concepts have to be "internalized" into intuition for much of our thinking, and if they are externalized, we become a meme-copy machine as opposed to a thinking machine.
[1] https://en.wikipedia.org/wiki/Rules_for_the_Direction_of_the...
trjordan
I was talking with somebody about their migration recently [0], and we got to speculating about AI and how it might have helped. There were basically 2 paths:
- Use the AI and ask for answers. It'll generate something! It'll also be pleasant, because it'll replace the thinking you were planning on doing.
- Use the AI to automate away the dumb stuff, like writing a bespoke test suite or new infra to run those tests. It'll almost certainly succeed, and be faster than you. And you'll move onto the next hard problem quickly.
It's funny, because these two things represent wildly different vibes. The first one, work is so much easier. AI is doing the job. In the second one, work is harder. You've compressed all your thinking work, back-to-back, and you're just doing hard thing after hard thing, because all the easy work happens in the background via LLM.
If you're in a position where there's any amount of competition (like at work, typically), it's hard to imagine where the people operating in the 2nd mode don't wildly outpace the people operating in the first, both in quality and volume of output.
But also, it's exhausting. Thinking always is, I guess.
[0] Rijnard, about https://sourcegraph.com/blog/how-not-to-break-a-search-engin...