Accumulation of cognitive debt when using an AI assistant for essay writing task
112 comments
June 16, 2025 · jsrozner
vishnugupta
> You can't just skim a math textbook and know all the math. You have to stop and think.
And most importantly you have to write. A lot. Writing allows our brain to structure our thinking. It enables us to have a structured dialogue with ourselves and to explore different paths. Thinking & pondering alone can only do so much and will soon reach their limits. Writing, on the other hand, enables one to explore thoughts nearly endlessly.
Given that thinking is so intimately associated with writing (could be prose, drawing, equations, graphs/charts, whatever) and that LLMs are doing more and more of the writing, it'll be interesting to see the effect of LLMs on our cognitive skills.
larodi
The impact of writing is immensely undervalued. Even writing with a keyboard or on a screen is a lot better than not writing. Exercising writing on any topic is still beneficial, and many psychologists recommend keeping a daily blog of some sort to help people observe themselves from the outside. The same goes for speaking - public speaking, if you want - and for therapeutic daily acting or role-play, which is also overlooked.
I’d love to see some sort of study comparing people who actively write their own stuff on social media and those who don’t.
If you want to spare your mind from GPT numbness - write or copy what it tells you to do by hand; do not abandon this process.
Or just write code, programs, essays, poems for fun. Trust me - it is fun, and you’ll get smarter and more confident. GPT is a very dangerous convenience gadget, and like sugar, Netflix, obesity, or long commutes it is not going away … but similarly, dosage and countermeasures are essential to cope with the side effects.
supriyo-biswas
> And most importantly you have to write. A lot. Writing allows our brain to structure our thinking.
There's a lot of talk about AI-assisted coding these days, but I've found similar issues: I'm unable to form a mental model of the program when I rely too much on these tools (among other problems, such as the model making unnecessary changes). This is one of the reasons why I limit their use to "boring" tasks like refactoring or clarifying concepts that I'm unsure about.
> it'll be interesting to see the effect of LLMs on our cognitive skills.
These discussions remind me a lot of this comic [1].
p_v_doom
Writing is pure magic. It allows so much reflection and so many insights that you wouldn't otherwise get. And writing as part of the reading process allows you to directly integrate what you are reading as you are doing it. Like, I can't recommend it enough. The only downside is that it's slow compared to what people are used to and want to do, especially in the work environment.
Aeolun
> And most importantly you have to write. A lot.
I find this to still be true with AI assisted coding. Especially when I still have to build a map of the domain.
Davidzheng
I disagree with this take. When exploring new math problems, it's often possible to explore the potential solution paths at a lower technical level in your mind first, before writing anything down and going into the details of an approach. I don't think not writing is that limiting if all of your approaches already fail before you get to the details, which is often the case in the early stages of math research.
hamdouni
I can also explore by writing. Writing drafts can help structure my thinking.
dr_dshiv
Prompting involves a non-trivial amount of writing.
delusional
But it is not at all the same _type_ of writing. Most of the prompts I've seen and written are shorter, less organized, and most importantly not actually considered a piece of writing. When you are writing a prompt you are considering how the machine will "interpret" it and what it will spit back; you're not constructing an argument. Vagueness or dialectics in a prompt will often just confuse the machine.
Hitting the keys is not always writing.
this_steve_j
The terms “Cognitive decline” or “brain rot” may have sounded too sensational, and to be fair the authors note the limitations of the small sample size.
Indeed, the paper doesn’t provide a reference or citation for the term “cognitive debt”, so it is a strange choice of title. Maybe a last-minute swap.
Fascinating research out of MIT. Like all psychology studies it deserves healthy scrutiny and independent verification. Bit of a kitchen sink with the imaging and psychometric assessments, but who doesn’t love a picture of “this is your brain on LLMs” amirite?
teekert
I would call it cognitive debt. Have you ever tried writing a large report with an LLM?
It's very tempting to let it write a lot, let it structure things, let it make arguments and visuals. It's easy to let it do more and more... And then you end up with something that is very much... Not yours.
But your name is on it; you are asked to explain it, to understand it even better than it is written down. Surely the report is just a "2D projection" of some "high dimensional reality" that you have in your head... right? Normally it is, but when you spit out a report in 1/10th of the time, it isn't. You struggle to explain concepts, even though they look nice on paper.
I found that I just really have to do the work, to develop the mental models, to articulate and to re-articulate and re-articulate again. For different audiences in different ways.
I like the term cognitive debt as a description of the gap between what mental models one would have to develop pre-LLMs to get a report out, and how little you may need with an LLM.
In the end it is your name on that report/paper; what can we expect of you, the author? Maybe that will start slipping, and we will start expecting less over time? Maybe we can start skipping authors altogether and rely on the LLM's "mental" model when we have in-depth questions about a report/paper... Who knows. But different models (like LLMs) may have different "models" (predictive algorithms) of the underlying truth/reality. What allows for the most accurate predictions? One needs a certain "depth of understanding". Writing while relying too much on LLMs will not give it to you.
Over time this may indeed lead to a population-wide "cognitive decline, or loss of cognitive skills." I don't dare to say that. Book printing didn't do that, although it was expected at the time by the religious elite, who worried that normal humans would not be able to interpret texts correctly.
As remarked here in this thread before, I really do think that "Writing is thinking" (but perhaps there is something better than writing which we haven't invented yet). And thinking is: Developing a detailed mental model that allows you to predict the future with a probability better than chance. Our survival depends on it, in fact it is what evolution is in terms of information theory [0]. "Nothing in biology makes sense except in the light of ... information."
pilif
> The brain does not retain information that it does not need.
Why do I still know how to optimize free conventional memory in DOS by configuring config.sys and autoexec.bat?
I haven’t done this in 2 decades and I’m reasonably sure I never again will
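For anyone who never had to do it, the incantation looked something like this (a sketch from memory, so exact driver paths and options varied by machine):

    REM CONFIG.SYS: load DOS high and free up conventional memory
    DEVICE=C:\DOS\HIMEM.SYS
    DEVICE=C:\DOS\EMM386.EXE NOEMS
    DOS=HIGH,UMB
    FILES=30
    BUFFERS=20

    REM AUTOEXEC.BAT: push TSRs into upper memory blocks
    LH C:\DOS\SMARTDRV.EXE
    LH C:\DOS\MOUSE.COM

HIMEM.SYS enables extended memory, EMM386 with NOEMS exposes upper memory blocks without emulating EMS, and DOS=HIGH,UMB moves DOS itself out of the first 640 KB so LH could tuck drivers up there too.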
fennecfoxy
The last fast food place you went to, what does the ceiling look like? The exact colour/pattern?
The last phone conversation you had with a utility company, how did they greet you exactly?
There's lots that we do remember, sometimes odd things like your example, though I'm sure you must have repeated it a few times as well. But there's so much detail that we don't remember at all, and even our childhood memories just become memories of memories - we remember some event, but we slowly forget the exact details, they become fuzzy.
dotancohen
Probably because you learned it during that brief period in your development in which humans are most impressionable.
Now think about the effect on those humans currently using LLMs at that stage of their development.
nottorp
To nitpick, your subconscious is aware that computers have memory constraints even now, and you write better code because of it, even if you do javascript...
rusk
Because these are core memories that provide stepping stones to later knowledge. It is a part of the story of you. It is very hard to integrate all knowledge in this way.
flomo
Probably because there was some reward that you felt at the time was important (most likely playing a DOS game).
I did this for a living at a large corp where I was the 'thinkpad guy', and I barely remember any of the tricks (and only some of the IBM stuff). Then Windows NT and 95 came out and like whoo cares... This was always dogshit. Because I was always an Apple/Unix guy and that was just a job.
lelele
Agreed. We remember many things that don't serve us anymore.
15123123
I think it's because some experiences are so profound to your brain (first impressions, moments that you are proud of) that you just replay them over and over again.
eru
> The brain does not retain information that it does not need.
Sounds very plausible, though how does that square with the common experience that certain skills, famously 'riding a bike', never go away once learned?
wahern
Closer to the truth is that the brain never completely forgets something, in the sense that there are always vestiges left over, even after the ability to recall or instantly draw upon it is long gone. Studies show, for example, that after one has "forgotten" a language, they're quicker to pick it up again later on compared to someone without that prior experience; how much quicker depends on the elapsed time, but quicker nonetheless.
OTOH, IME the quickest way to truly forget something is to overwrite it. Photographs are a notorious example: looking at photographs can overwrite your own personal episodic memory of an event. I don't know how much research exists exploring this phenomenon, though AFAIU there are studies at least showing that the mere act of recalling can reshape memories. So, ironically, perhaps the best way not to forget is to not remember.
Left unstated in the above is that we can categorize different types of memory--episodic, semantic, implicit, etc--based on how they seem to operate. Generalizations (like the above ;) can be misleading.
gwd
I think a better way to say it is that the brain doesn't commit to long term memory things that it doesn't need.
I remember hearing about some research they'd done on "binge watching" -- basically, if you have two groups:
1. One group watches the entire series over the course of a week
2. A second group watches a series one episode per week
Then some time later (maybe 6 months), ask them questions about the show, and the people in group 2 will remember significantly more.
Anecdotally, I've found the same thing with Scottish Country Dancing. In SCD, you typically walk through a dance that has 16 or so "figures", then for the next 10 minutes you need to remember the figures over and over again from different perspectives (as 1st couple, 2nd couple, 3rd couple etc). Fairly quickly, my brain realized that it only needed to remember the figures for 10 minutes; and even the next morning if you'd asked me what the figures were for a dance the night before I couldn't have told you.
I can totally believe it's the same thing with writing with an LLM (or having an assistant write a speech / report for you) -- if you're just skimming over things to make sure it looks right, your brain quickly figures out that it doesn't need to retain this information.
Contrast this to riding a bike, where you almost certainly used the skill repeatedly over the course of at least a year.
pempem
Such a good question - I hope someone answers with more than an anecdote (which is all I can provide). I've found that the skills that don't leave you - riding a bike, swimming, cooking - are all physical skills. Tangible.
The skills that leave - arguments, analysis, language, creativity - often seem abstract, and primarily if not exclusively sourced in our minds.
hn_throwaway_99
Google "procedural memory". Procedural memory is more resistant to forgetting than other types of memory.
rusk
Riding a bike is a skill rather than what we would call a “memory” per se. It’s a skill that develops a new neural pathway throughout your extended nervous system bringing together the lesser senses of proprioception and balance. Once you bring these things together you then go on to use them for other things. You “know” (grok), rather than “understand” how a bike stays upright on a very deep physical level.
eru
Sure. But speaking a language is also (at least partially) a skill, ain't it?
jancsika
> And also DUH. If you stop speaking a language you forget it. The brain does not retain information that it does not need.
Except when it does -- for example, in the abstract it is written that Brain-to-LLM users "exhibited higher memory recall" than LLM and LLM-to-Brain users.
niemandhier
AI is the anti-Zettelkasten.
Rather than getting ever-deeper insight into a subject by actively working on it, you iterate fast but shallow over a corpus of AI-generated content.
Example: I wanted to understand the situation in the Middle East better, so I wrote a 10-page essay on the genesis of Hamas and Hezbollah using OpenAI as a co-writer.
I remember nothing. Worse, of the things I do remember, I don’t know whether they were hallucinations I fixed or actual facts.
energy123
I'm on the optimistic side with how useful LLMs are, but I have to agree. You cultivate the instinct for how to steer the models and reduce hallucinations, but you're not building articulable knowledge or engaging in challenging thinking. It's more learning muscle-memory reactions to certain forms of LLM output that lean you towards trusting the output more, trying another prompting strategy, clearing context or not, and so on.
To the extent we can call it a skill, it's probably going to be made redundant in a few years as the models get better. It gives me a kind of listlessness that assembly-line workers would feel.
namaria
Maybe, much like we invented gyms to exercise after civilization made most physical labor redundant (at least in developed countries), we will see a rise of 'creative writing gyms' of some sort in the future.
nottorp
You tend to remember trouble more than things going smoothly, so I'd say you remember the parts you had to fix manually.
atoav
Most intelligent people are aware that writing is about thinking as much as it is about producing the written text.
LLMs can be great sparring partners for this, if you don't use them as a tool that writes for you, but as a tool that finds mistakes, points out gaps and errors (which you may or may not ignore), and helps in researching general questions about the world around you (always with caution and sources).
tkgally
The results are not surprising to me personally. When I have used AI to help with my own writing and translation tasks, I do not feel as mentally engaged with the writing or translation process as I would be if I were doing it all on my own.
But I have found using AI in other ways to be incredibly mentally engaging in its own way. For the past two weeks, I’ve been experimenting with Claude Code to see how well it can fully automate the brainstorming, researching, and writing of essays and research papers. I have been as deeply engaged with the process as I have ever been with writing or translating by myself. But the engagement is of a different form.
The results of my experiments, by the way, are pretty good so far. That is, the output essays and papers are often interesting for me to read even though I know an AI agent wrote them. And, no, I do not plan to publish them or share them.
SchemaLoad
I use AI tools for amusement and asking random questions, but for actual work, I basically don't use them at all. I wonder if I'll be part of the increasingly rare group who is actually able to do anything while the rest become progressively more incompetent.
barrenko
My nickel - we are in the primary stages of being given something like the famed "bicycle for the mind", an exoskeleton for the brain. At first when someone gives you a mech, you're like "woah, cool", let's see what it can do. And then you zip around, smash rocks, buildings, go try to lift the Eiffel.
After a while you get bored of it (duh), and go back to doing what you usually do, utilizing the "bicycle" for the kind of stuff you actually like doing, if it's needed, because while exploration is fun, work is deeply personal and meaningful and does not sustain too much exploration for too long.
(highly personal perspective)
audunw
The “bicycle for the mind” analogy is actually really good here, since bicycles and other transportation technology have made us increasingly weak, which has a negative impact on physical health. It has gotten so critical that people now take seriously the fact that we need physical exercise to be in good health. My company recently introduced 60 minutes a week of activity during work hours. It’s probably a good investment, since physical health affects performance and mental health.
Coming back to AI, maybe in the future we will need to take mental exercise as seriously as we take physical exercise now. Perhaps people will go to mental gyms. (That’s just a school, you may say, but I think the focus could be different: not having a goal to complete a class and then finish, but continuous mental exercise...)
Todd
This is called cognitive offloading. Anyone who’s spent enough time working with coding assistants will recognize it.
esafak
Or working as an engineering manager.
It's the inevitable consequence of working at a different level of abstraction. It's not the end of the world. My assembly is rusty too...
15123123
I don't think not using assembly is going to affect my brain / my life quality in any significant way, but not speaking / chatting with someone is.
tankenmate
But this is a strawman argument, it's not what the research is talking about.
nothrabannosir
If LLMs were as reliable as compilers, we wouldn’t be checking in their output, and I’d be happy to forget all programming lore.
The “skill domain” with compilers is the “input”: that’s what I need to grok, maintain, and understand. With LLMs it’s the “output”.
Until that changes, you’re playing a dangerous game letting those skills atrophy.
jameson
> The LLM undeniably reduced the friction involved in answering participants' questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or ”opinions” (probabilistic answers based on the training datasets). This highlights a concerning evolution of the 'echo chamber' effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content. What is ranked as “top” is ultimately influenced by the priorities of the LLM's shareholders [123, 125].
eru
> What is ranked as “top” is ultimately influenced by the priorities of the LLM's shareholders [123, 125].
As if that's anything new. There's the adage that's older than electronics, that freedom of the press is freedom for those who can afford to own a printing press.
> However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or ”opinions” (probabilistic answers based on the training datasets).
Reminds me of Plato's concern about reading and writing dulling your mind. (I think he had his sock puppet Socrates express the concern. But I could be wrong.)
dotancohen
Plato's sock puppet Socrates? I think that you and I have read different history books, or at least different books regarding the history of philosophy. That said, I would love to hear your perspective on this.
eru
> Plato's sock puppet Socrates?
See https://en.wikipedia.org/wiki/Socratic_problem
> Socrates was the main character in most of Plato's dialogues and was a genuine historical figure. It is widely understood that in later dialogues, Plato used the character Socrates to give voice to views that were his own.
However, have a look at the Wikipedia article itself for a more nuanced view. We also have some other writers with accounts of Socrates.
Sharlin
I presume they refer to the fact that Socrates is basically used as a rhetorical device in Plato’s writings, and it’s not entirely clear how much of the dialogues were Socrates’s thoughts and how much was Plato’s own.
namaria
> Reminds me of Plato's concern about reading and writing dulling your mind. (I think he had his sock puppet Socrates express the concern. But I could be wrong.)
Nope.
Read the dialogue (Phaedrus). It's about rhetoric and writing down political discourses. Writing had existed for millennia. And the bit about writing being detrimental is from a mythical Egyptian king talking to a god, just a throwaway story used in the dialogue to make a tiny point.
In fact the conclusion of that bit of the dialogue is that merely having access to text may give an illusion of understanding. Quite relevant and on point I'd say.
Kiyo-Lynn
When I write with AI, it feels smooth in the moment, but I’m not really thinking through the ideas. The writing sounds fine, but when I look back later, I often can’t remember why I phrased things that way.
Now I try to write my own draft first, then use AI to help polish it. It takes a bit more effort upfront, but I feel like I learn more and remember things better.
energy123
The rule of thumb "LLMs are good at reducing text, not expanding it" is a good one here.
a_bonobo
I guess: not only does AI reduce the number of entry-level workers, this now shows that the entry-level workers who remain won't learn anything from their use of AI and will remain entry-level forever if they're not careful.
Noelia-
After using ChatGPT a lot, I’ve definitely noticed myself skipping the thinking part and just waiting for it to give me something. This article on cognitive debt really hit home. Now I try to write an outline first before bringing in the AI. I do not want to give up all the control.
seanmcdirmid
My handwriting has suffered since I’ve relied heavily on keyboards for the last few decades. I can’t even produce a consistent signature anymore. My stick-shift skills also suffered after I used an automatic for so long (and now that I have an EV, I’m forgetting what gears are at all).
Rather than lament that the machine has gotten better than us at producing what were always mostly vacuous essays anyway, we should look at more pointed writing tasks and practice those instead. Actually, I never really learned how to write until I hit grad school and had messages I actually wanted to communicate. Whatever I was doing before really wasn’t that helpful; it was missing focus. Having ChatGPT write an essay I don’t really care about only seems slightly worse than writing it myself.
xorokongo
Will we end up with a world where the only experts are LLM companies, holding a monopoly on thinking? Will future humans ever be as smart as us, or are we the peak of human intelligence? And can AI make progress without smart humans to provide training data, gain new insights, and increase its intelligence?
rgoulter
From the summary:
"""Going forward, a balanced approach is advisable, one that might leverage AI for routine assistance but still challenges individuals to perform core cognitive operations themselves. In doing so, we can harness potential benefits of AI support without impairing the natural development of the brain's writing-related networks.
"""It would be important to explore hybrid strategies in which AI handles routine aspects of writing composition, while core cognitive processes, idea generation, organization, and critical revision, remain user‑driven. During the early learning phases, full neural engagement seems to be essential for developing robust writing networks; by contrast, in later practice phases, selective AI support could reduce extraneous cognitive load and thereby enhance efficiency without undermining those established networks."""
Frummy
I can’t believe riding a horse and carriage wouldn’t make you better at riding a horse. Sure a horserider wouldn’t want to practice the wrong way, but anyone else just wants to get somewhere
OhNotAPaper
> I can’t believe riding a horse and carriage wouldn’t make you better at riding a horse.
Surely you mean "would"? Because riding a horse and carriage doesn't imply any ability at riding a horse, but the reverse relation would actually make sense, as you already have historical, experiential, intimate knowledge of a horse despite no contemporaneous, immediate physical contact.
Similarly, already knowing what you want to write would make you more proficient at operating a chatbot to produce what you want to write faster—but telling a chatbot a vague sense of the meaning you want to communicate wouldn't make you better at communicating. How would you communicate with the chatbot what you want if you never developed the ability to articulate what you want by learning to write?
EDIT: I sort of understand what you might be getting at—you can learn to write by using a chatbot if you mimic the chatbot like the chatbot mimics humans—but I'd still prefer humans learn directly from humans rather than rephrased by some corporate middle-man with unknown quality and zero liability.
wcoenen
The first sentence in the comment you are responding to is sarcasm. Just replace "I can't believe" with "Of course".
OhNotAPaper
> The first sentence in the comment you are responding to is sarcasm. Just replace "I can't believe" with "Of course".
Do you have any evidence of this?
apsurd
i didn't read the article, but come on, riding a horse to get to a destination is not remotely similar to writing an essay.
if you say it's a means to an end - to what, a good grade? - then we've lost the plot long ago.
writing is for thinking.
adeon
The task of riding a horse can be almost entirely outsourced to professional horse riders. If they take your carriage from point A to point B, sure, you care about just getting somewhere.
Taking the article's task of essay writing: someone presumably is supposed to read them. It's not a carriage task from point A to point B anymore. If the LLM-assisted writers begin to not even understand their own work (quoting from abstract "LLM users also struggled to accurately quote their own work.") how do they know they are not putting out nonsense?
eru
> If the LLM-assisted writers begin to not even understand their own work (quoting from abstract "LLM users also struggled to accurately quote their own work.") how do they know they are not putting out nonsense?
They are trained (amongst other things) on human essays. They just need to mimic them well enough to pass the class.
> Taking the article's task of essay writing: someone presumably is supposed to read them.
Soon enough, that someone is gonna be another LLM more often than not.
bakugo
You know the AI-induced cognitive decline is already well under way when people start comparing writing an essay to riding a horse.
namaria
Horse riding was invented much later than carriages, and it revolutionized warfare.
gnabgib
Can you point at some references? Horse riding started around 3500 BC [0], while horse carriages started around 100 BC [1], and oxen/buffalo-drawn devices around 3000 BC [1].
namaria
From the article [0] you linked:
"However, the most unequivocal early archaeological evidence of equines put to working use was of horses being driven. Chariot burials about 2500 BC present the most direct hard evidence of horses used as working animals. In ancient times chariot warfare was followed by the use of war horses as light and heavy cavalry."
Long discussion in History Exchange about dating the cave paintings mentioned in the wikipedia article above:
https://history.stackexchange.com/questions/68935/when-did-h...
eesmith
I think you are reading "carriage" too specifically, when I suspect it's meant as a wider term for any horse-drawn wheeled vehicle.
Your [0] says "Chariot burials about 2500 BC present the most direct hard evidence of horses used as working animals. In ancient times chariot warfare was followed by the use of war horses as light and heavy cavalry.", just after "the most unequivocal early archaeological evidence of equines put to working use was of horses being driven."
That suggests the evidence is stronger for cart use before riding.
If you follow your [1] link to "bullock cart" at https://en.wikipedia.org/wiki/Bullock_cart you'll see: "The first indications of the use of a wagon (cart tracks, incisions, model wheels) are dated to around 4400 BC[citation needed]. The oldest wooden wheels usable for transport were found in southern Russia and dated to 3325 ± 125 BC.[1]"
That is older than 3000 BC.
I tried but failed to find something more definite. I did learn from "Wheeled Vehicles and Their Development in Ancient Egypt – Technical Innovations and Their (Non-) Acceptance in Pharaonic Times" (2021) that:
> The earliest depiction of a rider on horseback in Egypt belongs to the reign of Thutmose III. Therefore, in ancient Egypt the horse is attested for pulling chariots before it was used as a riding animal, which is only rarely shown throughout Pharaonic times.
I also found "The prehistoric origins of the domestic horse and horseback riding" (2023) referring to this as the "cart before the horse" vs. "horse before the cart" debate, with the position that there's "strong support for the “horse before the cart” view by finding diagnostic traits associated with habitual horseback riding in human skeletons that considerably pre-date the earliest wheeled vehicles pulled by horses." https://journals.openedition.org/bmsap/11881
On the other hand, "Tracing horseback riding and transport in the human skeleton" (2024) points out "the methodological hurdles and analytical risks of using this approach in the absence of valid comparative datasets", and also mentions how "the expansion of biomolecular tools over the past two decades has undercut many of the core assumptions of the kurgan hypothesis and has destabilized consensus belief in the Botai model." https://www.science.org/doi/pdf/10.1126/sciadv.ado9774
Quite a fascinating topic. It's no wonder that Wikipedia can't give a definite answer!
I wouldn't call it "accumulation of cognitive debt"; just call it cognitive decline, or loss of cognitive skills.
And also DUH. If you stop speaking a language you forget it. The brain does not retain information that it does not need. Anybody remember the couple of studies on the use of Google Maps for navigation? One was "Habitual use of GPS negatively impacts spatial memory during self-guided navigation"; another reported a reduction in gray matter among Maps users.
Moreover, anyone who has developed expertise in a science field knows that coming to understand something requires pondering it, exploring how each idea relates to other things, etc. You can't just skim a math textbook and know all the math. You have to stop and think. IMO it is the act of thinking which establishes the objects in our mind such that they can be useful to our thinking later on.