MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline
574 comments
September 3, 2025 · goalieca
benterix
We are literally witnessing the skills split right in front of our eyes: (1) people who are able to understand the concepts deeply, build a mental model of it and implement them in code at any level, and (2) people who outsource it to a machine and slowly, slowly lose that capability.
For now the difference between these two populations is not that pronounced yet but give it a couple of years.
CuriouslyC
We're just moving up the abstraction ladder, like we did with compilers. I don't care about the individual lines of code, I care about architecture, code structure, rigorous automated e2e tests, contracts with comprehensive validation, etc. Rather than waste a bunch of time poring over agent PRs I just make them jump over extremely high static/analytic hurdles that guarantee functionality, then my only job is to identify places where the current spec and the intended functionality differ and create a new spec to mitigate.
e3bc54b2
As the other comment said, LLMs are not an abstraction.
An abstraction is a deterministic, pure function that, when given A, always returns B. This allows the consumer to rely on the abstraction. This reliance frees the consumer from having to implement the A->B transformation, thus allowing it to move up the ladder.
LLMs, by their very nature, are probabilistic. Probabilistic is NOT deterministic. Which means the consumer is never really sure that, given A, the returned value is B. Which means the consumer now has to check whether the returned value is actually B, and depending on how complex the A->B transformation is, the checking function is equivalent in complexity to implementing the abstraction in the first place.
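A minimal sketch of that distinction (my illustration, not the commenter's; the LlmClient interface is hypothetical, not a real library):

```java
import java.util.function.Predicate;

public class AbstractionVsLlm {

    // A conventional abstraction: deterministic and pure. Given the same A it
    // always returns the same B, so callers can build on it without re-checking.
    static int fahrenheit(int celsius) {
        return celsius * 9 / 5 + 32;
    }

    // A hypothetical LLM-backed transform: probabilistic, so the caller has to
    // verify the output, and for a complex A -> B mapping, writing isValidB
    // can approach the cost of writing the transformation itself.
    interface LlmClient {
        String complete(String prompt); // assumed API, for illustration only
    }

    static String transform(LlmClient llm, String input, Predicate<String> isValidB) {
        String candidate = llm.complete("Transform this input: " + input);
        if (!isValidB.test(candidate)) {
            throw new IllegalStateException("LLM output failed validation");
        }
        return candidate;
    }
}
```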
blibble
I can count the number of times in my 20-year career that I've had to look at compiler-generated assembly on one finger
and I've never looked at the machine code produced by an assembler (other than when I wrote my own as a toy project)
is the same true of LLM usage? absolutely not
and it never will be, because it's not an abstraction
chii
> my only job is to identify places where the current spec and the intended functionality differ and create a new spec to mitigate.
and to be able to do this efficiently or even "correctly", you'd need to have had mountains of experience evaluating an implementation, and be able to imagine the consequences of that implementation against the desired outcome.
Doing this requires experience that would get eroded by the use of an LLM. It's very similar to higher level maths (stuff like calculus) being much more difficult if you had poor arithmetic/algebra skills.
soraminazuki
Why is every discussion about AI like this? We get study after study and example after example showing that AI is unreliable [1], hurts productivity [2], and causes cognitive decline [3]. Yet, time and time again, a group of highly motivated individuals show up and dismiss these findings, not with counter-evidence, but through sheer sophistry. Heck, this is a thread meant to discuss evidence about LLM reliance contributing to cognitive decline. Instead, the conversation quickly derailed into an absurd debate about whether AI coding tools resemble compilers. It's exhausting.
[1]: https://arxiv.org/abs/2401.11817
[2]: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
[3]: https://publichealthpolicyjournal.com/mit-study-finds-artifi...
notTooFarGone
If you had google maps and you knew the directions it gives are 80% gonna be correct, would you still need navigation skills?
You could also tweak it by going like "Lead me to the US" -> "Lead me to the state of New York" -> "Lead me to New York City" -> "Lead me to Manhattan" -> "Lead me to the museum of new arts" and it would give you 86% accurate directions, would you still need to be able to navigate?
How about when you go over roads that are very frequently used you push to 92% accuracy, would you still need to be able to navigate?
Yes of course because in 1/10 trips you'd get fucking lost.
My point is: unless you get to that 99% mark, you still need the underlying skill and the abstraction is only a helper and always has to be checked by someone who has that underlying skill.
I don't see LLMs as that 99% solution in the next years to come.
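A quick back-of-the-envelope sketch of those odds (my own illustration, reading the percentages first as per-trip and then as per-instruction accuracy):

```java
public class NavigationOdds {
    public static void main(String[] args) {
        // Treating each percentage as the chance a whole trip is routed
        // correctly: how many trips out of 100 leave you to navigate yourself?
        double[] accuracies = {0.80, 0.86, 0.92, 0.99};
        for (double acc : accuracies) {
            System.out.printf("%.0f%% accurate: ~%.0f lost trips per 100%n",
                    acc * 100, (1.0 - acc) * 100);
        }
        // If the percentage instead applies to each individual instruction,
        // the odds compound: at 92% per turn, a 20-turn trip is fully
        // correct only about 19% of the time.
        System.out.printf("20 turns at 92%% each: ~%.0f%% of trips fully correct%n",
                Math.pow(0.92, 20) * 100);
    }
}
```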
lr4444lr
I'd be very cautious about calling LLM output "an abstraction layer" in software.
matthewdgreen
It is possible to use LLMs this way. If you're careful. But at every place where you can use LLMs to outsource mechanical tasks, there will also be a temptation to outsource the full stack of conceptual tasks that allow you to be part of what's going on. This will create gaps where you're not sitting at any level of the abstraction hierarchy, you're just skipping parts of the system. That temptation will be greater for less experienced people, and for folks still learning the field. That's what I'm scared about.
bdhcuidbebe
> We're just moving up the abstraction ladder, like we did with compilers.
Got any studies about reasoning decline from using compilers to go with your claim?
defgeneric
A lot of the boosterism seems to come from those who never had the ability in the first place, and never really will, but can now hack a demo together a little faster than before. But I'm mostly concerned about those going through school who don't even realize they're undermining themselves by reaching for AI so quickly.
NoGravitas
Perhaps more importantly, those boosters may never have had the ability to really model a problem in the first place, and didn't miss it, because muddling through worked well enough for them. Many such cases.
mooreds
I've posted this Asimov short story before, but this comment inspires me to post it again.
http://employees.oneonta.edu/blechmjb/JBpages/m360/Professio...
"Somewhere there must be men and women with capacity for original thought."
He wrote that in 1957. 1957!
Terr_
The first sentence made me expect Asimov's "The Feeling of Power", which--avoiding spoilers--regards the (over-)use of calculators by a society.
However, since I brought up calculators, I'd like to pre-emphasize something: They aren't analogous to today's LLMs. Most people don't offload their "what and why" executive decision-making to a calculator, calculators are orders of magnitude more trustworthy, and they don't emit plausible lies to cover their errors... Though that last does sound like another short-story premise.
zem
perhaps my favourite of his short stories. there's also the more satirical "the feeling of power", which touches on the same theme https://ia800806.us.archive.org/20/items/TheFeelingOfPower/T...
ducttapecrown
It turns out incremental thought is much better than original thought. I guess.
el_benhameen
What about people who use the machines to augment their learning process? I find that being able to ask questions, particularly “dumb” questions that I don’t want to bother someone else with and niche questions that might not be answered well in the corpus, helps me better understand new concepts. If you just take the answers and move on, then sure, you’re going to have a bad time. But if you critically interrogate the answers and synthesize the information, I don’t see how this isn’t a _better_ era for people who want to develop a deep understanding of something.
Peritract
There's a difference between learning and the perception/appearance of learning; teachers need to manage this in classrooms, but how do you manage it on your own?
benterix
I fully agree. Using LLMs for learning concepts is great if you combine it with actively using/testing your knowledge. But outsourcing your tasks to an LLM makes your inner muscles weaker.
aprilthird2021
> I don’t see how this isn’t a _better_ era for people who want to develop a deep understanding of something.
Same way a phone in your pocket gives you the world's compiled information available in a moment. But that's generally led to loneliness, isolation, social upheaval, polarization, and huge spread of wrong information.
"If you can handle the negatives" is a big if. Even the smartest of our professional class are addicted to doomscrolling these days. You think they will get the positives of AI use only and avoid the negatives?
Eawrig05
I think this is a fun problem: Say LLMs really do allow people to be 2X or 3X more productive. Then those people that understand concepts and rely on LLMs should be more successful and productive. But if those people slowly decline in ability they will become less successful in the long term. But how much societal damage is done in the meantime? Such as creating insecure software or laying off/not hiring junior developers.
keepamovin
That reminds me of back when 12,500 years ago I could really shape a flint into a spear head in no time. Took me seasons to learn that skill from the Masters. Then Sky Gods came and taught people metal-fire. Nobody knows how to really chip and carve a point any more. They just cast them in moulds. We are seeing a literal split of skills in front of our eyes. People who know how to shape rocks deeply. And people who just know how to buy a metal tool with berries. I like berries as much as the next person, but give it a couple of years, they will be begging for flint tips back. I guarantee it. Those metal users will have no idea how to survive without a collective industry.
intended
Ah you are an old one. I was born later, and missed this phase of our history.
This reminds me of back 11,500 years ago, when people used to worship the sharper or bigger pieces of obsidian. They felt the biggest piece would win them the biggest hunt.
They forgot that the size of the tool mattered less than mastery of the hunt. Why the best hunter could take down a moving mattress with just the right words, string and a cliff.
stocksinsmocks
(3) people who never had that capability.
Remember we aren't all above average. You shouldn't worry. Now that we have widespread literacy, nobody needs to, and few even could, recite Norse Sagas or the Iliad from memory. Basically nobody has useful skills for nomadic survival.
We’re about to move on to more interesting problems, and our collective abilities and motivation will still be stratified as it always has been and must be.
rf15
I don't think it's down to ability, I think it's down to the decision not to learn itself. And that scares me.
jplusequalt
>We’re about to move on to more interesting problems, and our collective abilities and motivation will still be stratified as it always has been and must be
Who is "we"? There are more people out there in the world doing hard physical labor, or data entry, than there are software engineers.
skissane
The way I’ve been coding recently: I often ask the AI to write the function for me, because lazy. And then I don’t like the code it wrote, so I rewrite it. But still it is less mental effort to rewrite “mostly correct but slightly ugly code” than write the whole thing from scratch
Also, even though I have the Copilot extension in VSCode, I rarely use it… because I find it interrupts my flow with constant useless or incorrect or unwanted suggestions. Instead, when I want AI help, I type out my request by hand into a Gemini gem which contains a prompt describing my preferred coding style - but even with extra guidance as to how I want it to write code, I still often don't like what it does and end up rewriting it
theptip
Just beware the “real programmers hand-write assembly” fallacy. It was said that compilers would produce a generation of programmers unable to understand the workings of their programs. In some sense, this is true! But, almost nobody thinks it really matters for the actual project of building things.
If you stop thinking, then of course you will learn less.
If instead you think about the next level of abstraction up, then perhaps the details don’t always matter.
The whole problem with college is that there is no “next level up”, it’s a hand-curated sequence of ideas that have been demonstrated to induce some knowledge transfer. It’s not the same as starting a company and trying to build something, where freeing up your time will let you tackle bigger problems.
And of course this might not work for all PhDs; maybe learning the details is what matters in some fields - though with how specialized we’ve become, I could easily see this being a net win.
PhantomHour
> Just beware the “real programmers hand-write assembly” fallacy. It was said that compilers would produce a generation of programmers unable to understand the workings of their programs. In some sense, this is true! But, almost nobody thinks it really matters for the actual project of building things.
One of the other replies alludes to it, but I want to say it explicitly:
The key difference is that you can generally drill down to assembly, there is infinitely precise control to be had.
It'd be a giant pain in the ass, and not particularly fast, but if you want to invoke some assembly code in your Java, you can just do that. You want to see the JIT compiler's assembly? You can just do that. JIT Compiler acting up? Disable it entirely if you wish for more predictable & understandable execution of the code.
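For instance, a minimal sketch (the toy class is mine; the flags are standard HotSpot options, and -XX:+PrintAssembly additionally needs the hsdis disassembler plugin installed):

```java
// java -Xint JitDemo                                             -> JIT disabled, interpreter only
// java -XX:+PrintCompilation JitDemo                             -> log what the JIT compiles
// java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly JitDemo -> dump the generated assembly
public class JitDemo {
    public static void main(String[] args) {
        long sum = 0;
        // A hot loop like this is exactly what the JIT will compile; the
        // flags above let you watch that happen or switch it off entirely.
        for (int i = 0; i < 10_000_000; i++) {
            sum += i;
        }
        System.out.println(sum);
    }
}
```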
And while people used to higher level languages don't know the finer details of assembly or even C's memory management, they can incrementally learn. Assembly programming is hard, but it is still programming and the foundations you learn from other programming do help you there.
Yet AI is corrosive to those foundations.
theptip
I don't follow; you can read the code that your LLM produces as well.
It's way easier to drill down in this way than the bytecode/assembly vs. high-level language divide.
Jensson
> Just beware the “real programmers hand-write assembly” fallacy
All previous programming abstractions kept correctness: a Python program produces no less reliable results than a C program running the same algorithm, it just takes more time.
LLMs don't keep correctness: I can write a correct prompt and get incorrect results. Then you are no longer programming, you are a manager over a senior programmer suffering from extreme dementia, so they forget what they were doing a few minutes ago, and you try to convince them to write what you want before they forget that as well and restart the argument.
invalidptr
>All previous programming abstractions kept correctness
That's not strictly speaking true, since most (all?) high level languages have undefined behaviors, and their behavior varies between compilers/architectures in unexpected ways. We did lose a level of fidelity. It's still smaller than the loss of fidelity from LLMs but it is there.
nitwit005
I'd caution that the people not familiar with working at the low level are often missing a bunch of associated knowledge which is useful in the day to day.
You run into Python/Javascript/etc programmers who have no concept of what operations might execute quickly or slowly. There isn't a mental model of what the interpreter is doing.
We're often insulated from the problem because the older generation often used fairly low level languages on very limited computers, and remember lessons from that era. That's not true of younger developers.
daemin
I would agree with the statement that you don't need to know or write in assembly to build programs, but what you end up with is usually slow and inefficient.
Having curiosity to examine the platform that your software is running on and taking a look into what the compilers generate is a skill worth having. Even if you never write raw assembly yourself, being able to see what the compiler generated and how data is laid out does matter. This then helps you make better decisions about what patterns of code to use in your higher level language.
MobiusHorizons
I have never needed to write assembly in a professional context because of the changes you describe, but this does not mean I don't have a need to understand what is going on at that level of abstraction. I _have_ had occasion to look at disassembly in the process of debugging before, and it was important that I was not completely lost when I had to do this. You don't have to do something all the time for the capacity to do something to be useful. At the end of the day engineering is about choosing the correct tradeoffs given constraints, and in a professional environment, cost is almost always one of the constraints.
hintymad
This is in a way like doing math. I can read a math book all day and even appreciate the ideas in the book, but I'd practically learn little if I don't actually attempt to work out examples for the definitions, the theorems, and some exercises in the book.
TheNewsIsHere
I fall into this trap more than I’d care to admit.
I love learning by reading, to the point that I’ll read the available documentation for something before I decide to use it. This consumes a lot of time, and there’s a tradeoff.
Eventually if I do use the thing, I’m well suited to learning it quickly because I know where to go when I get stuck.
But by the same token I read a lot of documentation I never again need to use. Sometimes it’s useful for learning about how others have done things.
bdelmas
Yes, in data science there is a saying: "there is no free lunch". With ChatGPT and others becoming so prevalent, even at PhD level, people who work hard and avoid using these tools will more and more be seen as magicians. I already see this in coding, where people can't code medium to hard things and their intuition, like you said, is wacky. It's not imposter syndrome anymore, it's people not being able to get their job done without some AI involved.
What I do personally is, for every subject that matters to me, I take the time to first think about it. To explore ideas, concepts, etc… and answer the questions I would otherwise ask ChatGPT. Only once I have a good idea do I start to ask ChatGPT about it.
geye1234
Interesting, thanks. Do you mean he would write the code out by hand on pen and paper? That has often struck me as a very good way of understanding things (granted, I don't code for my job).
Similar thing in the historian's profession (which I also don't do for my job but have some knowledge of). Historians who spend all day immersed in physical archives tend, over time, to be great at synthesizing ideas and building up an intuition about their subject. But those who just Google for quotes and documents on whatever they want to write about tend to have a more static and crude view of their topic; they are less likely to consider things from different angles, or see how one thing affects another, or see the same phenomenon arising in different ways; they are more likely to become monomaniacal (exaggerated word but it gets the point across) about their own thesis.
martingalex2
Assuming this observation applies generally, give one point to the embodiment crowd.
WalterBright
I learned this in college long before AI. If I didn't do the work to solve the homework problems, I didn't learn the material, no matter how much I imagined I understood it.
thisisit
This seems like the age old discussion of how does new technology changes our lives and makes us "lazy" or "lack of learning".
Before the advent of smartphones people needed to remember phone numbers of their loved ones and maybe do some small calculations on the fly. Now people sometimes don't even remember their own numbers and have it saved on their phones.
Now some might want to debate how smartphones are different from LLMs and it is not the same. But we have to remember that, for better or worse, LLM adoption has been fast and it has become consumer technology. That is the area being discussed in the article. People using it to write essays. And those using the label of "prompt bros" might be missing the full picture. There are people, however few, being helped by LLMs, as there were people helped by smartphones.
This is by no means a defense of using LLMs for learning tasks. If you write code by yourself, you learn coding. If you write your essays yourself, you learn how to make solid points.
marcofloriano
It's not the same with LLMs. What the study finds is actually much more serious. When you use a phone or a calculator, you don't lose cognitive faculties. But when you delegate the thinking process to an LLM, your brain gets physically changed, which leads to cognitive damage. It's a completely different league.
fragmede
> When you use a phone or a calculator, you don't lose cognitive faculties.
Of course you do. I used to be able to multiply two two-digit numbers in my head. Now, my brain freezes and I reach for a calculator.
zingababba
I just decided to take a break from LLMs for coding assistance a couple days ago. Feels really good. It's funny how fast I am when I just understand the code myself instead of not understanding it and proooompting.
marcofloriano
Same here, I just ended my ChatGPT subscription, and I feel free again.
tomrod
A few things to note.
1. This is arxiv - before publication or peer review. Grain of salt.[0]
2. 18 participants per cohort
3. 54 participants total
Given the low N and the likelihood that this is drawn from 18-22 year olds attending MIT, one should expect an uphill battle for replication and for generalizability.
Further, they are brain scanning during the experiment, which is an uncomfortable/out-of-the-norm experience, and the object of their study is easy to infer if not directly known by the population (the person being studied using LLM, search tools, or no tools).
> We thus present a study which explores the cognitive cost of using an LLM while performing the task of writing an essay. We chose essay writing as it is a cognitively complex task that engages multiple mental processes while being used as a common tool in schools and in standardized tests of a student's skills. Essay writing places significant demands on working memory, requiring simultaneous management of multiple cognitive processes. A person writing an essay must juggle both macro-level tasks (organizing ideas, structuring arguments), and micro-level tasks (word choice, grammar, syntax). In order to evaluate cognitive engagement and cognitive load as well as to better understand the brain activations when performing a task of essay writing, we used Electroencephalography (EEG) to measure brain signals of the participants. In addition to using an LLM, we also want to understand and compare the brain activations when performing the same task using classic Internet search and when no tools (neither LLM nor search) are available to the user.
i_am_proteus
>These 54 participants were between the ages of 18 to 39 years old (age M = 22.9, SD = 1.69) and all recruited from the following 5 universities in greater Boston area: MIT (14F, 5M), Wellesley (18F), Harvard (1N/A, 7M, 2 Non-Binary), Tufts (5M), and Northeastern (2M) (Figure 3). 35 participants reported pursuing undergraduate studies and 14 postgraduate studies. 6 participants either finished their studies with MSc or PhD degrees, and were currently working at the universities as post-docs (2), research scientists (2), software engineers (2)
I would describe the study size and composition as a limitation, and a reason to pursue a larger and more diverse study for confirmation (or lack thereof), rather than a reason to expect an "uphill battle" for replication and so forth.
tomrod
> I would describe the study size and composition as a limitation, and a reason to pursue a larger and more diverse study for confirmation, rather than a reason to expect an "uphill battle" for replication and so forth.
Maybe. I believe we both agree it is a critical gap in the research as-is, but whether it is a neutral item or an albatross is an open question. Much of psychology and neuroscience research doesn't replicate, often because of the limited sample size / composition as well as unrealistic experimental design. Your approach of deepening and broadening the demographics would attack generalizability, but not necessarily replication.
My prior puts this on an uphill battle.
genewitch
do you feel this way about every study with N~=54? For instance the GLP-1 brain cancer one?
hedora
The experimental setup is hopelessly flawed. It assumes that people’s tasks will remain unchanged in the presence of an LLM.
If the computer writes the essay, then the human that’s responsible for producing good essays is going to pick up new (probably broader) skills really fast.
efnx
Sounds like a hypothesis! You should do a study on that.
stackskipton
I'd love to see much more diverse selection of schools. All of these schools are extremely selective so you are looking at extremely selective slice of the population.
sarchertech
Is your hypothesis that very smart people are much, much less likely to be able to remember quotes from essays they wrote with LLM assistance than dumber people?
I wouldn’t bet on that being the case.
jdietrich
Most studies don't replicate. Unless a study is exceptionally large and rigorous, your expectation should be that it won't replicate.
sarchertech
That isn't correct. Replication has to do with the likelihood that the study produced an effect that was actually just random chance. Both the sample size and the effect size are equally important.
This study showed an enormous effect size for some effects, so large that there is a 99.9% chance that it’s a real effect.
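A rough illustration of how a large effect can offset a small sample (the numbers here are hypothetical, not taken from the paper):

```java
public class EffectSizeSketch {
    public static void main(String[] args) {
        // For a two-sample t-test with n participants per group and a
        // standardized effect size d (Cohen's d), t is roughly d * sqrt(n / 2).
        int n = 18;       // participants per group, as in the study
        double d = 1.5;   // a hypothetical "enormous" effect size
        double t = d * Math.sqrt(n / 2.0);
        // t of about 4.5 with ~34 degrees of freedom is far past the ~2.03
        // cutoff for p < 0.05, i.e. very unlikely to be pure chance, which
        // is the sense in which effect size and sample size trade off.
        System.out.printf("t = %.2f%n", t);
    }
}
```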
mnky9800n
I feel like this reflex of saying papers pre peer review should be taken with a grain of salt should be stopped. Peer review is not some idealistic scientific endeavour: it often leads to bullshit comments, slows down release, is free work for companies that have massive profit margins, etc. From my experience publishing 30+ papers, I have received as many bad or useless comments as I have good ones. We should at least default to open peer review and editorial communication.
Science should become a marketplace of ideas. Your other criticisms are completely valid. Those should be what’s front and center. And I agree with you. The conclusions of the paper are premature and designed to grab headlines and get citations. Might as well be posting “first post” on slashdot. IMO we should not see the current standard of peer review as anything other than anachronistic.
chaps
Please no. Remember that room temperature superconductor nonsense that went on for way too long? Let's please collectively try to avoid that..
physarum_salad
That paper was debunked as a result of the open peer review enabled by preprints! It's astonishing how many people miss that and assume that closed peer review even performs that function well in the first place. For the absolute top journals, or those with really motivated editors, closed peer review is good. However, often it's worse...way worse (i.e. reams of correct-seeming and surface-level research without proper methods or review of protocols).
The only advantage to closed peer review is it saves slight scientific embarrassment. However, this is a natural part of taking risks ofc and risky science is great.
P.s. in this case I really don't like the paper or methods. However, open peer review is good for science.
mwigdahl
And cold fusion. A friend's father (a chemistry professor) back in the early 90s wasted a bunch of time trying variants on Pons and Fleischmann looking to unlock tabletop fusion.
srkirk
I believe LLMs have the potential to (for good or ill, depending on your view) destroy academic journals.
The scenario I am thinking of is academic A submitting a manuscript to an academic journal, which gets passed on by the journal editor to a number of reviewers, one of whom is academic B. B has a lot on their plate at the moment, but sees a way to quickly dispose of the reviewing task, thus maintaining a possibly illusory 'good standing' in the journal's eyes, by simply throwing the manuscript to an LLM to review. There are (at least) two negative scenarios here:
1. The paper contains embedded (think white text on a white background) instructions left by academic A to any LLM reading the manuscript to view it in a positive light, regardless of how well the described work has been conducted. This has already happened IRL, by the way.
2. Academic A didn't embed LLM instructions, but receives the review report, which shows clear signs that the reviewer either didn't understand the paper, gave unspecific comments, highlighted only typos, or simply used phrasing that seems artificially generated. A now feels aggrieved that their paper was not given the attention and consideration it deserved by an academic peer and now has a negative opinion of the journal for (seemingly) allowing the paper to be LLM-reviewed.
And just as journals will have great difficulty filtering for LLM-generated manuscripts, they will also find it very difficult to filter for LLM-generated reviewer reports.
Granted, scenario 2 already happens with only humans in the loop (the dreaded 'Reviewer 2' academic meme). But LLMs can only make this much much worse.
Both scenarios destroy trust in the whole idea of peer-reviewed science journals.
tomrod
> I feel like saying papers pre peer review should be taken with a grain of salt should be stopped.
Absolutely not. I am an advocate for peer review, warts and all, and find that it has significant value. From a personal perspective, peer review has improved or shot down 100% of the papers that I have worked on -- which to me indicates its value in ensuring good ideas with merit make it through. Papers I've reviewed are similarly improved -- no one knows everything and it's helpful to have others with knowledge add their voice, even when the reviewers also add cranky items.[0] I would grant that it isn't a perfect process (some reviewers, editors are bad, some steal ideas) -- but that is why the marketplace of ideas exists across journals.
> Science should become a marketplace of ideas.
This already happens. The scholarly sphere is the savanna when it comes to resources -- it looks verdant and green but it is highly resource constrained. A shitty idea will get ripped apart unless it comes from an elephant -- and even then it can be torn to shreds.
That it happens behind paywalls is a huge problem, and the incentive structures need to be changed for that. But unless we want blatant charlatanism running rampant, you want quality checks.
[0] https://x.com/JustinWolfers/status/591280547898462209?lang=e... if a car were a manuscript
srkirk
What happens if (a) the scholarly sphere is continually expanding and (b) no researcher has time to be ripping apart anything? That also suggests (c) Researchers delegate reviewing duties to LLMs.
stonemetal12
Rather, given the reproducibility crisis, how much salt does peer review knock off that grain? How often does peer review catch fraud or just bad science?
Bender
I would also add: how often are peer reviews done by the same group of buddy-bro back-scratchers who know that if they help that person with a positive peer review, that person will return the favor? How many peer reviewers actually reproduce the results? How many peer reviewers would approve a paper if their credentials were on the line?
Ironically, I am waiting for AI to start automating the process of teasing apart obvious pencil whipping, back scratching, buddy-bro behavior. Some believe it's in the 1% range for falsified papers and pencil-whipped reviews. I expect it to be significantly higher, based on reading NIH papers for a long time in the attempt to actually learn things. I've reported the obvious shenanigans and sometimes papers are taken down, but there are so many bad incentives in this process I predict it will only get worse.
perrygeo
There's two questions at play. First, does the research pass the most rigorous criteria to become widely-accepted scientific fact? Second, does the research present enough evidence to tip your priors and change your personal decisions?
So it's possible to be both skeptical of how well these results generalize (and call for further research), but also heed the warning: AI usage does appear to change something fundamental about our cognitive processes, enough to give any reasonable person pause.
memco
It's also worth noting that this was specifically about the effects of ChatGPT on users' ability to write essays: which means that if you don't practice your writing skills, then your writing skills decline. This doesn't seem to show that it is harmful, just that it does not induce the same brain activity that is observed in other essay writing methods.
Additionally, the original paper uses the term "cognitive debt", not cognitive decline, which may have important ramifications for interpretation and conclusions.
I wouldn’t be surprised to see similar results in other similar types of studies, but it does feel a bit premature to broadly conclude that all LLM/AI use is harmful to your brain. In a less alarmist take: this could also be read to show that AI use effectively simplifies the essay writing process by reducing cognitive load, therefore making essays easier and more accessible to a broader audience but that would require a different study to see how well the participants scored on their work.
bjourne
> In a less alarmist take: this could also be read to show that AI use effectively simplifies the essay writing process by reducing cognitive load, therefore making essays easier and more accessible to a broader audience but that would require a different study to see how well the participants scored on their work.
In much the same way chess engines make competitive chess accessible to a broader audience. :)
sarchertech
It also showed that people couldn’t successfully recall information about what they’d just written when they used LLM assistance.
Writing is an important form of learning and this clearly shows LLM assisted writing doesn’t provide that benefit.
giancarlostoro
The other thing to note is that "AI" is being used in place of LLMs. AI is a lot of things; I would be surprised to find out that generating images, video and audio would lead to cognitive decline. What I think LLMs might lead to is intellectual laziness: why memorize or remember something if the LLM can remember it, type of thing.
KoolKat23
I'd say the framing is wrong. Do we call delivery drivers lazy because they take the highway rather than the backroads? Or because they drive the goods there rather than walk? They're missing out on all that traffic intersection experience.
Perhaps the issue of cognitive decline comes from sitting there vegetating rather than applying themselves during all that additional spare time.
Although my experience has been perhaps different using LLMs, my mind still tires at work. I'm still having to think on the bigger questions, it's just less time spent on the grunt work.
jplusequalt
>Perhaps the issue of cognitive decline comes from sitting there vegetating rather applying themselves during all that additional spare time.
The push for these tools is to increase productivity. What spare time is there to be had if now you're expected to produce 2-3X the amount of code in the same time frame?
Also, I don't know if you've gotten outside of the software/tech bubble, but most people already spend 90% of their free time glued to a screen. I'd wager the majority of critical thinking people experience on a day to day basis is at work. Now that we may be automating that away, I bet you'll see many people cease to think deeply at all!
mym1990
I would argue that intellectual laziness can and will lead to cognitive decline as much as physical laziness can and will lead to muscle atrophy. It's akin to using a maps app to get from point A to B but not ever remembering the route, even though someone has done it 100 times.
I don’t know the percentage of people who are still critically thinking while using AI tools, but I can first hand see many students just copy pasting content to their school work.
giancarlostoro
Fully agree, I think the cognitive decline is probably over time. Look at old, retired people, how they go from feeling like a teenager, to barely remembering anything as an example.
rawgabbit
I skimmed the paper and I question the validity of the experiment.
There was a "brain" group who did three sessions of essay writing and on the fourth session, they used ChatGPT. The paper's authors said that during the fourth session, the brain group's EEG was higher than the LLM group's EEG when they also used ChatGPT.
I interpret this as the brain group did things the hard way and when they did things the easy way, their brains were still expecting the same cognitive load.
But isn't the point of writing an essay the quality of the essay? The supposedly brain-damaged LLM group still produced an essay for session 4 that was graded "high" by both AI and human judges, but was faulted because it "stood out less" in terms of distance in n-gram usage compared to the other groups. I think this is making a mountain out of a very small molehill.
MobiusHorizons
> But isn't the point of writing an essay the quality of the essay?
Most of the things you write in an educational context are about learning, not about producing something of value. Productivity in a learning context is usually the wrong lens. The same thing is true IMO for learning on the job, where it is typically expected that productivity will initially be low while experience is low, but should increase over time.
rawgabbit
That may be true if this were measuring an English class. But the experiment was just writing essays; there was no instruction other than to write an essay with either no tools, ChatGPT, or a search engine. That is, the only variable was with tool or without tool.
falconroar
Essential context. So many variables here with very naive experimental procedure. Also "Cognitive Decline" is never mentioned in the paper.
An equally valid conclusion is "People are Lazier at Writing Essays When Provided with LLMs".
somenameforme
In general I agree with you regarding the weakness of the paper, but not the skepticism towards its outcome.
Our bodies naturally adjust to what we do. Do things and your body reinforces that, enabling you to do even more advanced versions of those things. Don't do things and your skill or muscle in such tends to atrophy over time. Asking LLMs to (as in this case) write an essay is always going to be orders of magnitude easier than actually writing an essay. And so it seems fairly self-evident that using LLMs to write essays would gradually degrade your own ability to do so.
I mean it's possible that this, for some reason, might not be true, but that would be quite surprising.
tomrod
Ever read books in the Bobiverse? They provide a pretty functional cognitive model for how human interfaces with tooling like AI will probably work (even though it is fiction) -- lower level actions are pushed into autonomous regions until a certain deviancy threshold is achieved. Much like breathing -- you don't typically think about breathing until it becomes a problem (choking, underwater, etc.) and then it very much hits the high level of the brain.
What is reported as cognitive decline in the paper might very well be cognitive decline. It could also be alternative routing focused on higher abstractions, which we interpret as cognitive decline because the effect is new.
I share your concern, for the record, that people become too attached to LLMs for generation of creative work. However, I will say it can absolutely be used to unblock and push more through. The quality versus quantity balance definitely needs consideration (which I think they are actually capturing vs. cognitive decline) -- the real question to me is whether an individual's production possibility frontier is increased (which means more value per person -- a win!), partially negative in impact (use with caution), or decreased overall (a major loss). Cognitive decline points to the latter.
ndkap
Why would people publish research with such a low population size?
sarchertech
Because some of the effect sizes were so large that the probability of the effect being real is greater than 99.9%.
LocalPCGuy
This is a bad and sloppy regurgitation of a previous (and more original) source[1] and the headline and article explicitly ignore the paper authors' plea[2] to avoid using the paper to try to draw the exact conclusions this article saying the paper draws.
The comments (some, not all) are also a great example of how cognitive bias can cause folks to accept information without doing a lot of due diligence into the actual source material.
> Is it safe to say that LLMs are, in essence, making us "dumber"?
> No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it
> Additional vocabulary to avoid using when talking about the paper
> In addition to the vocabulary from Question 1 in this FAQ - please avoid using "brain scans", "LLMs make you stop thinking", "impact negatively", "brain damage", "terrifying findings".
causal
Yeah I feel like HN is being Reddit-ified with the amount of reposted clickbait that keeps making the front page :(
This study in particular has made the rounds several times as you said. The study measures impact of 18 people using ChatGPT just four times over four months. I'm sorry but there is no way that is controlling for noise.
I'm sympathetic to the idea that overusing AI causes atrophy but this is just clickbait for a topic we love to hate.
Mentlo
Ironically you’re now replicating the reddified response to this paper by attacking the sample size.
The sample size is fine. It’s small, yes, but normal for psychological research which is hard to do at scale.
And the difference between groups is so large that the noise would have to be at unheard levels to taint the finding.
LocalPCGuy
Yup, I even found myself a bit hopeful that maybe it was a follow-up or new study and we'd get either more or at least different information. But that bit of hope is also an example of my bias/sympathy to that idea that it might be harmful.
It should be ok to just say "we don't know yet, we're looking into that", but that isn't the world we live in.
tarsinge
Ironically there should be another study of how not using AI is also leading to cognitive decline on Reddit. On programming subreddits people have lost all sense of engineering and have simply become religious about being against a tool.
GeoAtreides
>I feel like HN is being Reddit-ified
It's September and September never ends
NapGod
yea it's clear no one is actually reading the paper. the study showed the group who used LLMs for the first three sessions then had to do session 4 without them had lower brain connectivity than was recorded for session 3 with all the groups showing some kind of increase from one session to the next. Importantly, this group's brain connectivity didn't reset to the session 1 levels, but somewhere in-between. They were still learning and getting better at the essay writing task. In session 4 they effectively had part of the brain network they were using for the task taken away, so obviously there's a dip in performance. None of this says anyone got dumber. The philosophical concept of the Extended Mind is key here.
imo the most interesting result is that the brains of the group that had done sessions 1-3 without the search engine or LLM aids lit up like christmas trees in session 4 when they were given LLMs to use, and that's what the paper's conclusions really focus on.
marcofloriano
> Is it safe to say that LLMs are, in essence, making us "dumber"?
> No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it
Maybe it's not safe to say so far, but it has been my experience using ChatGPT for eight months to code. My brain is getting slower and slower, and that study makes a hell of a lot of sense to me.
And I don't think that we will see new studies on this subject, because those leading society as a whole don't want negative press about AI.
LocalPCGuy
You are referencing your own personal experience, and while that is an entirely valid opinion for you to have personally about your usage, it's not possible to extrapolate that across an entire population of people. Whether or not you're doing that, part of the point I was making was how people who "think it makes sense" will often then not critically analyze something because it already agrees with their preconceived notion. Super common, I'm just calling it out cause we can all do better.
All we can say right now is "we don't really know how it affects our brains", and we won't until we get some studies (which is what the underlying paper was calling for, more research).
Personally I do think we'll get more studies, but the quality is the question for me - it's really hard to do a study right when, by the time it's done, there have been 2 new generations of LLMs released, making the study data potentially obsolete. So researchers are going to be tempted to go faster, use fewer people, and be less rigorous overall, which in turn may make for bad results.
TheAceOfHearts
Personally, I don't think you should ever allow the LLM to write for you or to modify / update anything you're writing. You can use it to get feedback when editing, to explore an idea-space, and to find any topical gaps. But write everything yourself! It's just too easy to give in and slowly let the LLM take over your brain.
This article is focused on essay writing, but I swear I've experienced cognitive decline when using AI tools a bit too much to help solve programming-related problems. When dealing with an unfamiliar programming ecosystem it feels so easy and magical to just keep copy / pasting error outputs until the problem is resolved. Previously solving the problem would've taken me longer but I would've also learned a lot more. Then again, LLMs also make it way easier to get started and feel like you're making significant progress, instead of getting stuck at the first hurdle. There's definitely a balance. It requires a lot of willpower to sit with a problem in order to try and work through it rather than praying to the LLM slot machine for an instant solution.
jbstack
> I've experienced cognitive decline when using AI tools a bit too much to help solve programming-related problems. When dealing with an unfamiliar programming ecosystem it feels so easy and magical to just keep copy / pasting error outputs until the problem is resolved. Previously solving the problem would've taken me longer but I would've also learned a lot more.
I've had the opposite experience, but my approach is different. I don't just copy/paste errors, accept the AI's answer when it works, and move on. I ask follow up questions to make sure I understand why the AI's answer works. For example, if it suggests running a particular command, I'll ask it to break down the command and all the flags and explain what each part is doing. Only when I'm satisfied that I can see why the suggestion solves the problem do I accept it and move on to the next thing.
The tradeoff for me ends up being that I spend less time learning individual units of knowledge than if I had to figure things out entirely myself e.g. by reading the manual (which perhaps leads to less retention), but I learn a greater quantity of things because I can more rapidly move on to the next problem that needs solving.
mzajc
> I ask follow up questions to make sure I understand why the AI's answer works.
I've tried a similar approach and found it very prone to hallucination[0]. I tend to google things first and ask an LLM as a fallback, so maybe it's not a fair comparison, but what do I need an LLM for if a search engine can answer my question?
[0]: Just the other day I asked ChatGPT what a colon (':') after systemd's ExecStart= means. The correct answer is that it inhibits variable expansion, but it kept giving me convincing yet incorrect answers.
jbstack
It's a tradeoff. After using ChatGPT for a while you develop somewhat of an instinct for when it might be hallucinating, especially when you start probing it for the "why" part and you get a feel for whether its explanations make sense. Having at least some domain knowledge helps too - you're more at risk of being fooled by hallucinations if you are trying to get it to do something you know nothing about.
While not foolproof, when you combine this with some basic fact-checking (e.g. quickly skim read a command's man page to make sure the explanation for each flag sounds right, or read the relevant paragraph from the manual) plus the fact that you see in practice whether the proposed solution fixes the problem, you can reach a reasonably high level of accuracy most of the time.
Even with the risk of hallucinations it's still a great time saver because you short-circuit the process of needing to work out which command is useful and reading the whole of the man page / manual until you understand which component parts do the job you want. It's not perfect but neither is Googling - that can lead to incorrect answers too.
To give an example of my own, the other day I was building a custom Incus virtual machine image from scratch from an ISO. I wanted to be able to provision it with cloud-init (which comes configured by default in cloud-enabled stock Incus images). For some reason, even with cloud-init installed in the guest, the host's provisioning was being ignored. This is a rather obscure problem for which Googling was of little use because hardly anyone makes cloud-init enabled images from ISOs in Incus (or if they do, they don't write about it on the internet).
At this point I could have done one of two things: (a) spend hours or days learning all about how cloud-init works and how Incus interacts with it until I eventually reached the point where I understood what the problem was; or (b) ask ChatGPT. I opted for the latter and quickly figured out the solution and why it worked, thus saving myself a bunch of pointless work.
majewsky
Does it work better when the AI is instructed to describe a method of answering the question, instead of answering the question directly?
For example, in this specific case, I am enough of a domain expert to know that this information is accessible by running `man systemd.service` and looking for the description of command line syntax (findable with grep for "ExecStart=", or, as I have now seen in preparing this answer, more directly with grep for "COMMAND LINES").
dpkirchner
Could you give an example of an ExecStart line that uses a colon? I haven't found any documentation for that while using Google and I don't have examples of it in my systemd unit files.
kjkjadksj
I think the school experience proves that doesn’t work. Reminds me of a teacher carefully breaking down the problem on the board and you nodding along when it is unfolding in front of you in a directed manner. The question is if you can do it yourself come the exam. If all you did to prepare is watch the teacher solve it, with no attempt to solve it from scratch yourself during practice, you will fail the exam.
jbstack
That very much depends on the topic being studied. I've passed plenty of exams of different levels (school, university, professional qualifications) just by reading the textbook and memorising key facts. I'd agree with you if we are talking about something like maths.
Also, there's a huge difference between passively watching a teacher write an explanation on a board, and interactively quizzing the teacher (or in this case, LLM) in order to gain a deeper and personalised understanding.
giancarlostoro
When Firefox added autocorrect, and I started using it, I made it a point to learn what it was telling me was correct, so I could write more accurately. I have since become drastically better at spelling, I still goof, I'm even worse when pronouncing words I've read but never heard. English is my second language mind you.
I think any developer worth their salt would use LLMs to learn quicker, and arrive at conclusions quicker. There are some programming problems I run into when working on a new project that I've run into before but cannot recall what my last solution was, and it is frustrating; I could see how an LLM could help with such a resolution coming back quicker. Sometimes it's 'first time setup' stuff that you have not had to do for like 5 years, so you forget, and maybe you wrote it down on a wiki, two jobs ago, but an LLM could help you remember.
I think we need to self-evaluate how we use LLMs so that they help us become better Software Engineers, not worse ones.
defgeneric
This is exactly the problem, but there's still a sweet spot where you can quickly get up to speed on technical areas adjacent to your specialty and not have small gaps in your own knowledge hold you back from the main task. I was quickly able to do some signal processing for underwater acoustics in C, for example, and don't really plan to become highly proficient in it. I was able to get something workable and move on to other tasks while still getting an idea of what was involved if I ever wanted to come back to it. In the past I would have just read a bunch of existing code.
Manik_agg
I agree. Asking an LLM to write for you is being lazy, and it also results in sub-par results (don't know about brain-rot).
I also like preparing a draft and using an LLM for critique; it helps me figure out some blind spots or ways to articulate better.
lazide
I’d consider it similar to always using a GPS/Google Maps/Apple Maps to get somewhere without thinking about it first.
It’s really convenient. It also similarly rots the parts of the brain required for spatial reasoning and memory for a geographic area. It can also lead to brain rot with decision making.
Usually it’s good enough. Sometimes it leads to really ridiculous outcomes (especially if you never double check actual addresses and just put in a business name or whatever). In many edge cases depending on the use case, it leads to being stuck, because the maps data is wrong, or doesn’t have updated locations, or can’t consider weather conditions, etc. especially if we’re talking in the mountains or outside of major cities.
Doing it blindly has led to numerous people dying by stupidly getting themselves into more and more dumb situations.
People still got stuck using paper maps. Sometimes they even died. It was much rarer and people were more aware they were lost, instead of persisting thinking they weren’t. So different failure modes.
Paper maps were very inconvenient, so people dealt with it using more human interaction and adding more buffer time. Which had its own costs.
In areas where there are active bad actors (Eastern Europe nowadays, many other areas in that region sometimes) it leads to actively pathological outcomes.
It is now rare for anyone outside of conflict zones to use paper maps except for specific commercial and gov’t uses, and even then they often use digitized ‘paper’ maps.
Jimmc414
A considerable number of methodology issues here for a study with this much traction. Only 54 participants split three ways into groups of 18, with just 9 people per condition in the crossover. Far too small for claims about "brain reprogramming."
The study shows different brain patterns during AI-assisted writing, not permanent damage. Lower EEG activity when using a tool is expected, just as one shows less mental-math activity when using a calculator.
The study translates temporary, task-specific neural patterns into "cognitive decline" and "severe cognitive harm." The actual study measured brain activity during essay writing, not lasting changes.
Plus, surface electrical measurements can't diagnose "cognitive debt" or deep brain changes. The authors even acknowledge this. Also, "83.3% couldn't quote their essay" equates to 15 out of 18 people?
tim333
Thank you for summarizing that. I guessed there must be some issues but didn't want to read the thing.
sudosteph
Meanwhile my main use cases for AI outside of work:
- Learning how to solder
- Learning how to use a multimeter
- Learning to build basic circuits on breadboards
- Learning about solar panels, MPPT, battery management systems, and different variations of li-ion batteries
- Learning about the LoRa band / Meshtastic / how to build my own antenna
And every single one of these things I've learned I've also applied practically to experiment and learn more. I'm doing things with my brain that I couldn't do before, and it's great. When something doesn't work like I thought it would, AI helps me understand where I may have went wrong, I ask it a ton of questions, and I try again until I understand how it works and how to prove it.
You could say you can learn all of this from YouTube, but I can't stand watching videos. I have a massive textbook about electronics, but it doesn't help me break down different paths to what I actually want to do.
And to be blunt: I like making mistakes and breaking things to learn. That strategy works great for software (not in prod obviously...), but now I can do it reasonably effectively for cheap electronics too.
nancyminusone
As someone who does these things, I am curious to know how and why you would choose AI.
Learning these from text seems like the hardest way I could think of to learn them. I've yet to encounter a written description of what it feels like to solder, what a good/bad job actually looks like, etc. A well-shot video is much better at showing you what you need to do (although finding one is getting more and more difficult).
sudosteph
I just process text information better. Videos are kind of overstimulating and often have unrelated content, and I hate having to rewind back to a part I need while I'm in the middle of something. With LLMs I can get a broad overview of what I'm doing, tell it what materials I already have on hand, and get specific ideas for how to practice. Soldering is probably one of the harder ones to learn by text, but the descriptions of the techniques to use were actually really understandable (use flux; be sure the tip is tinned; touch the pad with the tip to warm it up a little; touch again with the iron on one side of the pad and feed the solder in on the other side so it gets drawn in; then pull away, with the timing being trial and error). Then I'd upload a picture of what I did for review, and it would point out the ones that had issues and what likely went wrong to cause them (e.g. solder sticking to the tip of the iron and not the pad), and I would keep practicing and test that it worked and looked like what was described. It may not be the ideal technique or outcome, but it unblocked me relatively quickly so I could continue my project.
Being able to ask it stupid questions and edge cases is also something I like with LLMs. For example, I would propose a design for something (e.g. a USB battery pack with LiFePO4 cells that could charge my phone and be charged by solar at the same time), it would say what it didn't like about my design and counter with its own, then I would try to change aspects of its design to see "what would happen if ..", and it would explain why it chose a particular component or design choice, what my change would do, the trade-offs and risks, other paths to building it, etc. Those types of interactions are probably the best for me actually understanding things; they help me understand limitations and test my assumptions interactively.
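To make that kind of trade-off concrete, the sort of "what would happen if" check it walked me through boils down to back-of-the-envelope power budgeting like the sketch below; every number in it (panel output, phone draw, cell capacity) is just an assumption I picked for illustration, not a real design:

    #include <stdio.h>

    /* Back-of-the-envelope power budget for a solar-charged LiFePO4 USB
       pack that is also charging a phone. All numbers are illustrative
       assumptions, not measurements or a recommended design. */
    int main(void)
    {
        double solar_in_w = 10.0;     /* assumed panel output after MPPT/charger losses */
        double phone_draw_w = 7.5;    /* assumed 5 V x 1.5 A USB load */
        double pack_capacity_wh = 4 * 3.2 * 3.0; /* assumed 4 x 3 Ah LiFePO4 cells */

        double net_w = solar_in_w - phone_draw_w;
        printf("Net power into the pack: %.1f W\n", net_w);

        if (net_w > 0.0)
            printf("Pack still charges (about %.1f h from empty to full)\n",
                   pack_capacity_wh / net_w);
        else if (net_w < 0.0)
            printf("Pack drains (about %.1f h of runtime from full)\n",
                   pack_capacity_wh / -net_w);
        else
            printf("Pack holds steady\n");
        return 0;
    }

Whether pass-through charging like that is actually safe depends on the BMS and charge controller, which is exactly the sort of trade-off the back-and-forth above was about.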
efreak
> I just process text information better. Videos are kind of overstimulating and often have unrelated content, and I hate having to rewind back to a part I need while I'm in the middle of something.
Rant:
I _hate_ video tutorials. With a passion. If you can't be bothered to show pictures of how to use your product with a labeled diagram/drawing/photo of the buttons or connections, then I either won't buy it or I'll return it. I hate video reviews. I hate video repair instructions. I hate spending 15 minutes jumping back and forth between two segments of a YouTube video, trying to find the exact correct frame each time so I can see what button the person is touching while listening to their blather so I don't miss the keyword I heard last time, just so I can compare two different sections when I could have had two pictures on screen at the same time (on desktop this would be a trivial fix, but not so much on mobile). I hate having VPNs and other products advertised at me in ways that actively disrupt my chain of thought (vs static ads that I can ignore/scroll past). I hate not being able to just copy and paste a few simple instructions and an image for procedures that I'll have to repeat weekly. It would have taken you less effort to create, and I'd be more likely to pay you for your time.
YouTube videos are like flash-based banner ads, but worse. Avoid them like the plague.
End rant.
stripe_away
and to be blunt, I learned similar things building analog synths, before the dawn of LLMs.
Like you, I don't like watching videos. However, the web also has text, the same text used to train the LLMs that you used.
> When something doesn't work like I thought it would, AI helps me understand where I may have went wrong, I ask it a ton of questions, and I try again until I understand how it works and how to prove it.
Likewise, but I would have to ask either the real world or written docs.
I'm glad you've found a way to learn with LLMs. Just remember that people have been learning without LLMs for a long time, and it is not at all clear that LLMs are a better way to learn than other methods.
sudosteph
The asking-people part was the hard thing for me; it always has been. That honestly was the missing piece. I absolutely agree that written docs and online content are sufficient for some people, that's how I learned Linux and sysadmin stuff, but I tried on and off to get into electronics for years that way and never got anywhere.
I think the problem was that all of the getting-started guides didn't really solve problems I cared about. They're just like "see, a light! isn't that neat?" and then I get bored and impatient and don't internalize anything. The textbooks had theory, but I would forget most of it before I could use it and actually learn. Then when I tried to build something actually interesting to me, I didn't understand the fundamentals, it always failed, Google didn't help me find out why because it could have been a million things, and no human in my life understands this stuff either, so I would just go back to software.
It could be that LLMs are simply a better way for certain people to learn certain things in certain situations.
chaps
> However, the web also has text, the same text used to train the LLMs that you used.
The person you're responding to isn't denying that other people learn from those. But they're explicit that having the text isn't helpful either:
> I have a massive textbook about electronics, but it doesn't help me break down different paths to what I actually want to do.
dns_snek
Your use of LLMs is distinctly different than the use being described here (in a good way).
You might ask "What do I need to pay attention to when designing this type of electronic circuit?", while the people at risk of cognitive decline instead ask "design this electronic circuit for me".
I firmly believe that the latter group will suffer observable cognitive decline over the span of a few years unless they continue to exercise their brain in the same ways they used to, and I think the majority won't bother to do that. Why spend much effort when little effort do trick?
defgeneric
The physicality of having to actually do things in the real world slows things down to the rate at which our brains actually learn. The "vibe coding" loop is too fast to learn anything, and ends up teaching your brain to avoid the friction of learning.
amelius
Yeah, if you're using LLMs like an apprentice who asks their master, then there's nothing wrong with that, imho.
fxwin
Same here. I've been working through some textbooks that don't include solutions to their exercises, and ChatGPT has been invaluable for getting feedback on my solutions and hints when I'm stuck.
kapone
> - Learning how to solder - Learning how to use a multimeter - Learning to build basic circuits on breadboards - Learning about solar panels, MPPT, battery management systems, and different variations of Li-ion batteries - Learning about LoRa band / meshtastic / how to build my own antenna
And yet...somehow...humans have been able to learn and do these things (and do them well) for ages, with no LLMs around (or the stupid amount of capital being burned at the LLM stake).
And I want to hit the next person who says LLMs = AI with a broom or something, likely over and over again.
/facepalm.
aprilthird2021
Cool, but most people will get brain-rotted by this. It's the same way we constantly talk about how social media is probably bad for people, and then some commenter comes along and says he's not addicted and there's no other way he could communicate with his high school friends who live overseas and know about their lives. Not everyone will get only the positives out of any technology.
planetmcd
This article was probably written by AI, because no one with half a brain could read the study and come to the same conclusions.
Basically, participants spent less than half an hour, 4 times, over 4 months, writing some bullcrap SAT-type essay. Some participants used AI.
So to accept the premise of the article, using an AI tool once a month for 20 minutes caused noticeable brain rot. It is silly on its face.
What the study actually showed is that people don't have an investment in, or strong memory of, output they didn't produce. Again, this is a BS essay written (mostly by undergrads) in 20 minutes, so not likely to be deep in any capacity. So to extrapolate: if you have a task that requires you to understand the output, you are less likely to have a grasp of it if you didn't help produce it. This would also be true of work some other person did.
marcofloriano
> What the study actually showed is that people don't have an investment in, or strong memory of, output they didn't produce.
The problem with LLMs is that when you spend hours feeding prompts to solve a problem, you actually did help (a lot!) to produce the output.
planetmcd
I agree, the study didn't do that or have any thoughts on that.
epolanski
I can't help but think this has to be tied to _how_ AI is used.
I actively use AI to research, question, and argue a lot, and this pushes me to reason a lot more than I normally would.
Today's example:
- recognize docs are missing for a feature
- have AI explore the code to figure out what's happening
- go back and forth for hours trying to find how to document, rename, refactor, improve, write mermaid charts, and stress over naming to keep it as simple as possible
The only step I'm doing less of is the exploration/search one, because an LLM can process a lot more text than I can at the same time. But for every other step I am pushing myself to think more, and more profoundly, than I would without an LLM, because gathering the same amount of information would've been too exhausting to attempt otherwise.
Sure, it may have spared me from digging into mermaid too, for what it's worth.
So yes, lose some, win others, although in reality no work would've been done at all without the LLM enabling it. I would've moved on to another mundane task such as "update i18n date formatting for Swiss German customers".
eviks
No, vibe science is not powerful enough to determine "long-term cognitive harm", especially when such "technical wonders" as "measurable through EEG brain scans" are relied on.
> 83.3% of LLM users were unable to quote even one sentence from the essay they had just written
Not sure why you need to wire up an EEG; it's pretty obvious that they simply did _not_ write the essay, the LLM did it for them, and they likely didn't even read it, so there is no surprise that they don't remember what never properly passed through their own thinking apparatus.
matwood
I write all the time and couldn't quote anything offhand. What I can talk about are the ideas in the writing. I find LLMs useful as an editor: here's what I want to say, is it clear, are there better words, etc. And I never take the output blindly; depending on how important the writing is, I may go back and forth line by line.
The idea that I would say 'write an essay on X' and then never look at the output is kind of wild. I guess that's vibe writing instead of vibe coding.
gandalfgeek
The coverage of this has been so bad that the authors have had to put up an FAQ[1] on their website, where the first question is the following:
> Is it safe to say that LLMs are, in essence, making us "dumber"?
> No! Please do not use the words like "stupid", "dumb", "brain rot", "harm", "damage", "brain damage", "passivity", "trimming", "collapse" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it.
[1]: https://www.media.mit.edu/projects/your-brain-on-chatgpt/ove...
marcofloriano
It actually is safe to say: even a small study like that can point out the fact clearly. But of course, as it is a very sensitive topic, 'the language' and 'the narrative' should be carefully chosen, or you can be 'banned'. Of course we won't see new studies like that anytime soon.
puilp0502
Isn't this a duplicate of https://news.ycombinator.com/item?id=44286277 ?
chychiu
Was going to comment the same but you beat me to it!
On that note, reading the ChatGPT-esque summary in the linked article gave me more brain damage than any AI I've used so far
causal
The irony. It isn't even a new study. Way too much has been written about this flawed study when we should just be doing more studies.
jennyholzer
There are dozens of duplicates for pro-AI dreck, so this post should stand.
ayhanfuat
We can at least change the link to the actual paper instead of a vaccine denier's AI-generated summary.
causal
Instead of trying to balance dreck can we just... not upvote any dreck
fortyseven
Being anti-AI drivel is completely fine though.
misswaterfairy
I can't say I'm surprised by this. The brain is, figuratively speaking, a muscle. Learning through successes and (especially) failures is hard work, though not without benefit, in that the trials and exercises your brain works through strengthen that 'muscle'.
Using LLMs to replace the effort we would otherwise have put in to complete a task short-circuits that exercising function, and I would suggest it is potentially addictive, because it's a near-instant reward for little work.
It would be interesting to see a longitudinal study on the effect of LLMs on collective attention spans and academic scores, where testing is conducted with pen and paper.
onlyrealcuzzo
Sounds bullish for AI.
It's like a drug. You start using it and think you have superpowers, and then you've forgotten how to think, and you need AI just to maybe be as smart as you were before.
Every company will need enterprise AI solutions just to maybe get the same amount of productivity as they got before without it.
jugg1es
This is sad but true.
kjkjadksj
And the pipeline is cooked, with some universities now allowing AI use. It's like what CliffsNotes did for reading comprehension, but across all aspects of life and all domains. What a coming tsunami.
Anecdote here, but when I was in grad school, I was talking to a PhD student I respected a lot. Whenever he read a paper, he would try to write the code out and get it working. It would take me a couple of months, but he could whip it up in a few days. He explained to me that it was just practice, and that the more you practice, the better you become. He not only coded things quickly, he started analyzing papers quicker too and became really good at synthesizing ideas, knowing what worked and what didn't, and built up a phenomenal intuition.
These days, I'm fairly senior and don't touch code much anymore but I find it really really instructive to get my hands dirty and struggle through new code and ideas. I think the "just tweak the prompts bro" people are missing out on learning.