Your Brain on ChatGPT

106 comments

·June 18, 2025

Davidzheng

This was discussed only two days ago: https://news.ycombinator.com/item?id=44286277

relaxing

Why did the posting two days ago omit the first part of the title?

jwblackwell

One slightly unexpected side effect of using AI to do most of my coding now is that I find myself a lot less tired and can focus for longer periods. It's enabled me to get work done while faced with other distractions. Essentially, offloading some mental capacity to AI frees up capacity elsewhere.

delegate

I find the opposite to be true. I am a lot more productive, so I work on more things in parallel, which makes me extremely tired by the end of the day, as if my brain worked at 100% capacity.

jwblackwell

Yeah, I do feel the pressure to run multiple instances of Claude Code now. I haven't really managed to find a good workflow; I just get too distracted swapping between tasks and probably end up working slower than if I had stayed in one IDE instance.

xiphias2

Codex is the perfect workflow for me: instead of swapping, I just accept / reject CLs / refine tasks.

michelsedgh

Yeah, and after a few days of this, I find I can't do anything and stop all the side projects for a few days until I'm recharged again and can get back to it.

benterix

I bet we will all need a new type of therapy for that at some point in the future.

BoorishBears

On one hand, I've found that it reduces acute fatigue, but on the other I've found there's also an inflection point where it can encourage more fatigue over longer time horizons if you're not careful.

In the past, something like an unexpected error or a look through some docs would act as a "speed bump" and let me breathe; typically from there I'd acknowledge how tired I was and stop for the moment.

With AI those speed bumps still exist, but there's sometimes just a bit of extra momentum that keeps me from slowing down enough to have that moment of reflection on how exhausted I am.

And the AI doesn't even have to be right for that to happen: sometimes just reading a suggestion that's specific to the current situation can trigger your own train of thought that's hard to rein back in.

DocTomoe

I like to think of AI as cars:

You can go to the Walmart outside town on foot and carry your stuff back. But it is much faster - and less exhausting - to use the car. Which means you can spend more quality time on things you enjoy.

chneu

There are detriments to this as well.

Exercise is good.

Being outside is good.

New experiences happen when you're on foot.

You see more things on foot.

Etc etc. We make our lives way too efficient and we atrophy basic skills. There are benefits to doing things manually. Hustle culture is quite bad for us.

Going by foot or bicycle is so healthy for us for a myriad of reasons.

ricardobeat

While this is absolutely true, “walking to Walmart” is a terrible example due to the lack of pedestrian infrastructure and distances involved :)

Etheryte

I think in a way this is a good analogy, because it also includes the downside. If you always drive everywhere and do everything by car, your health will suffer due to lack of physical activity.

xnorswap

And you'll randomly kill people on your way there.

( Of course dear reader, YOU won't randomly kill people because you're a "good driver". )

lm28469

You got it backwards, there wouldn't be a walmart outside of town if there were no cars, you'd walk to the local butcher/baker/whatever in <10min.

xixixao

Where you’d be able to afford much less (not judging the trade-off, but that’s the primary reason why the US is headed in the opposite direction).

DocTomoe

Yes, but there are cars. That genie has already escaped its bottle.

And you pay higher prices at small local stores - which leads more people, even in small towns with local butchers and bakers, to get into their ride and go to the Lidl or Aldi on the outskirts.

Much like companies will realise LLM-using devs are more efficient by some random metric (do I hear: Story points and feature counts?), and will require LLM use from their employees.

KronisLV

That’s a nice analogy! Though one might argue that the walk in and of itself would be good for your health (as evidenced by me putting on some weight after replacing my 30 minute daily walk to the office with working remotely).

One could also do the drive (use AI) and then get some fresh air afterwards (personal projects, code golf, solving interesting problems), but I don't think everyone has the willpower, or the desire, to do that.

lucideer

Oh man. I really hope AI doesn't do as much harm to us as cars have.

gherkinnn

At the price of a sluggish, atrophied body.

chii

which is why people now pay $100 a month for a gym membership.

dr_dshiv

That’s why we need an AI infrastructure like Amsterdam's, where you can bike everywhere. It’s faster and more convenient than a car for most trips and keeps everyone fit and happy.

laurentiurad

This analogy is flawed to its core. The car doesn't make you forget how to walk, because you are still forced to walk in certain circumstances. Delegating learning to an LLM will increase your reliance on it and will eventually affect the way you learn. A better analogy is GPS: if you use it continuously, you become dependent on it to get anywhere and lose the capacity to find places on your own.

pcwelder

Back when GANs were popular, I'd train generator-discriminator models for image generation.

I thought a lot about it and realised discriminating is much easier than generating.

I can discriminate good vs bad UI for example, but I can't generate a good UI to save my life. I immediately know when a movie is good, but writing a decent short story is an arduous task.

I can determine the degree of realism in a painting, but I can't paint a simple bicycle to convince a single soul.

We can determine whether an LLM generation is good or bad in a lot of cases. As a crude strategy, then, we can discard the bad cases and keep generating until we achieve our task. LLMs are useful only because of this disparity between discrimination and generation.

These two skills are separate. Generation skills are hard to acquire and very valuable. They will atrophy if you don't keep exercising those.
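The generate-and-filter strategy described above can be sketched as a tiny rejection-sampling loop. This is a toy illustration, not any real LLM API: `generate` and `is_good` are hypothetical stand-ins for an expensive generator and a cheap discriminator.

```python
import random

def generate():
    # Stand-in for an expensive generator (e.g. an LLM call).
    # Here it just returns a random "draft quality" score in [0, 1).
    return random.random()

def is_good(draft, threshold=0.5):
    # Stand-in for the cheap discriminator: judging is easier than producing.
    return draft >= threshold

def generate_until_good(max_tries=1000, threshold=0.5):
    """Keep generating, discard bad drafts, stop at the first one that passes."""
    for _ in range(max_tries):
        draft = generate()
        if is_good(draft, threshold):
            return draft
    return None  # gave up after max_tries attempts

result = generate_until_good()
```

The whole scheme only pays off when, as the comment argues, the discriminator is much cheaper than the generator; if checking a draft costs as much as producing one, the loop buys you nothing.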

tasn

I think this is true for the very simple cases, for example an obviously bad picture vs. a good one.

I don't think this is necessarily true for more complex tasks, especially not in areas that require deep evaluation. For example, reviewing 5 non-trivial PRs is probably harder and more time-consuming than writing them yourself.

The reason it works well for images and short stories is that the filter you are applying is "I like it vs. I don't like it", rather than "it's good vs. it's not good".

keithwhor

I think it's likely we learn to develop healthier relationships with these technologies. The timeframe? I'm not sure. May take generations. May happen quicker than we think.

It's clear to me that language models are a net accelerant. But if they make the average person more "loquacious" (first word that came to mind, but also lol) then the signal for raw intellect will change over time.

Nobody wants to be in a relationship with a language model. But language models may be able to help people who aren't otherwise equipped to handle major life changes and setbacks! So it's a tool - if you know how to use it.

Let's use a real-life example: relationship advice. Over time I would imagine that "ChatGPT-guided relationships" will fall into two categories: "copy-and-pasters", who are just adding a layer of complexity to communication that was subpar to begin with ("I just copied what ChatGPT said"), and "accelerators", who use ChatGPT to analyze their own and their partner's motivations to find better solutions to common problems.

It still requires a brain and empathy to make the correct decisions about the latter. The former will always end in heartbreak. I have faith that people will figure this out.

falcor84

>Nobody wants to be in a relationship with a language model.

I'm not sure about that. I don't have first- or second-hand experience with this, but I've been hearing about a lot of cases of people really getting into a sort of relationship with an AI, and I can understand a bit of the appeal. You can "have someone" who's entirely unjudgemental, who's always there when you want to chat about your stuff, and who isn't ever making demands of you. It's definitely nothing close to a real relationship, but I do think it's objectively better than the worst of human relationships, and is probably better for your psyche than being lonely.

For better or for worse, I imagine that we'll see rapid growth in human-AI relationships over the coming decade, driven by improvements in memory and long-term planning (and possibly robotic bodies) on the one hand, and a growth of the loneliness epidemic on the other.

santiagobasulto

Wasn't THE SAME said when Google came out? That we weren't remembering things anymore and were relying on Google? And the same with cellphones before that (even the big dummy brickphones): that we weren't remembering phone numbers anymore.

gamerDude

And this is exactly what this study showed too.

"Brain connectivity systematically scaled down with the amount of external support: the Brain‑only group exhibited the strongest, widest‑ranging networks, Search Engine group showed intermediate engagement, and LLM assistance elicited the weakest overall coupling."

ansc

Yes, that was true though, wasn't it? If this is also true, what does that imply?

nottorp

Yes but your cell phone contacts don't have a chance to call a completely different number out of thin air once in a while.

At least for now, while Apple and Google haven't put "AI" in the contacts list. Can't guarantee tomorrow.

falcor84

That would actually be an amazing feature. Like in those movie meet-cutes where the person you were supposed to meet doesn't show up, and instead you make a connection with a random person.

nottorp

Those services are available already, but the random person at the other end is "AI" generated :)

bodge5000

A comment on another similar thread pointed out that it goes as far back as Socrates, who said that writing things down means you're not exercising your brain. So you're right, this is the same old argument we've heard many times before.

The question is, were they wrong? I'm not sure I could continue doing my job as a SWE if I lost access to search engines, and I certainly don't remember phone numbers anymore. As for Socrates, we found that the ability to forget about something (while still maintaining some record of it) was actually a benefit of writing, not a flaw. I think in all these cases we found that to some extent they were right, but either the benefits outweighed the cost of reliance, or the cost was the benefit.

I'm sure each one had its worst-case scenario where we'd all turn into brainless slugs offloading all our critical thinking to the computer or the phone or a piece of paper, and that obviously didn't happen, so it might not here either. But there's a good chance we will lose something as a result of this, and it's whether the benefits still outweigh the costs.

FranzFerdiNaN

Plato was already worried that the written word caused people to forget things (although his main complaint was that words can't answer back the way a person in a dialogue can).

dyauspitr

Google was like a faster library. ChatGPT just does most of the work for you.

AnthonyMouse

It's the doing the work for you which is the trouble.

Suppose you want to know how some git command works. If you have to read the manual to find out, you end up reading about four other features you didn't know existed before you get to the thing you set out to look for to begin with, and then you have those things in your brain when you need them later.

If you can just type it into a search box and it spits back a command to paste into the terminal, it's "faster" -- this time -- but then you never actually learn how it works, so what happens when you get to a question the search box can't answer?

alganet

Their results support this. The study has three groups: LLM users, Search Engine users and Brain only.

In terms of connections made, Brain Only beats Search User, Search User beats LLM User.

So, yes. If those measured connections mean something, it's the same but worse.

hkon

I don't remember phone numbers.

I remember where I can get information on the internet, not the information itself. I rely on google for many things, but find myself increasingly using AI instead since the signal/noise ratio on google is getting worse.

fercircularbuf

As the proliferation of the smartphone eroded our ability to locate and orient ourselves and to remember routes to places, it's no surprise that a tool like this - used to outsource a task our own brains would otherwise do - would result in a decline in the skills that would be trained if we were performing that task ourselves.

arethuza

The only two times I have made bad navigation mistakes in mountains were in the weeks after I started using my phone and a mapping app - the realisation that using my phone was making me worse at navigation was quite a shock at the time.

khazhoux

But you didn't become worse at navigation. Sounds like you trusted a tool, and it failed you.

arethuza

No - on both occasions it was the same scenario: descending from a peak in bad weather and picking the wrong ridge to descend. I was confident I "knew" which was the right ridge, and with the app I use, the bearings for the right route are pretty difficult to distinguish - so completely my fault.

I'm now aware of that problem and haven't had that problem since but I was pretty shocked in retrospect that I confidently headed off in the wrong direction when the tool I was using was by any objective measure much better.

I agree with this:

"the key to navigating successfully is being able to read and understand a map and how it relates to your surroundings"

https://www.mountaineering.scot/safety-and-skills/essential-...

jajko

This is splitting hairs. In the end his navigation skills (him + whatever tool he used) were not OK and could result in dangerous situations. (Been there so many times in the mountains, although it was mostly "went a bit too far in the wrong direction and don't want to backtrack that far, I'm sure I'll find a way to that already-close point..." and 10 minutes later scrambling on all fours on some slippery wet rock with no room for error.)

tehnub

Navigation is a narrow task. For many intents and purposes, LLMs are generally intelligent.

khazhoux

> As the proliferation of the smart phone eroded our ability to locate and orient ourselves and remember routes to places

Can you point to a study to back this up? Otherwise, it's anecdata.

ineedaj0b

I really tire of people always asking for studies for obvious things.

Have sword skills declined since the introduction of guns? Surely people still have hands and understand how to move swords, and they use knives to cut food. The skill level is the same...

But we know that, in aggregate, most people have switched to relying on a technological advancement. There's not the same culture for swords as in the past, by sheer numbers, despite there being more self-proclaimed 'experts'.

Take 100 Gen Z vs. 100 Gen X and you'll likely find a smidgen more of one group than the other able to find a location without a phone.

khazhoux

> I really tire of people always asking for studies for obvious things.

I actually agree with you on this!

But... I have very very good directional sense, and as far as I can tell it's innate. My whole life I've been able to remember pathing and maintain proper orientation. I don't think this has anything to do with lack of navigation aids (online or otherwise) during formative years.

But I'm talking about geospatial sense within the brain. If your point is that people no longer learn and improve the skill of map-reading then yes that should be self-evident.

fercircularbuf

https://www.sciencedirect.com/science/article/pii/S027249442...

The first paragraph of the conclusions section is also stimulating and I think aptly applies to this discussion of using AI as a tool.

> it is important to mention the bidirectionality of the relationship between GPS use and navigation abilities: Individuals with poorer ability to learn spatial information and form environmental knowledge tend to use assisted navigation systems more frequently in daily life, thus weakening their navigation abilities. This intriguing link might suggest that individuals who have a weaker “internal” ability to use spatial knowledge to navigate their surroundings are also more prone to rely on “external” devices or systems to navigate successfully. Therefore, other psychological factors (e.g., self-efficacy; Miola et al., 2023) might moderate this bidirectional relationship, and researchers need to further elucidate it.

noname120

@dang Can the unwanted editorialization of this title be removed? Nowhere does the title or article contain the gutter press statement “AI is eating our brains”.

tehnub

I sometimes used to think about things. Now I just ask ChatGPT and it tells me.

empiko

I wonder to what extent this is caused by the writing style LLMs have. They just love beating around the bush, repeating themselves, using fillers, etc. I often find it hard to find the signal in the noise, but I guess that's inevitable given the way they work. I can easily imagine my brain shutting down when I have to parse this sort of output.

pepa65

It also depends on the LLM.

solumunus

Instruct it to be concise.

unsupp0rted

Also, ever since we invented the written word, it has been eating our brains by killing our memory.

dig1

Quite the opposite: reading has been shown to improve memory and cognitive abilities for children [1] and older adults [2].

[1] https://www.cam.ac.uk/research/news/reading-for-pleasure-ear...

[2] https://pmc.ncbi.nlm.nih.gov/articles/PMC8482376

readthenotes1

How does that compare to the population of people who memorize the Old Testament or the Quran?

I remember hearing that the entire epics of the Iliad and the Odyssey were passed down purely via memorization and speech... How do you think those poets' memories compared to a child who reads Bob the Builder books?

rokkamokka

I watched so many reruns of Community I could recite the episodes by heart. I don't think that made the rest of my memory any better.

elric

For those who don't get the reference, Plato thought that the written word was not a good tool for teaching/learning, because it outsources some of the thinking.

Similarly (IIRC), Socrates thought the written word wasn't great for communicating, because it lacks the nuance of face-to-face communication.

I wonder if they ever realised that it could also be a giant knowledge amplifier.

moffkalast

They probably did, but still preferred their old way since it took more skill.

I remember some old quote about how people used to ask their parents and grandparents questions, got answers that were just as likely to be bullshit and then believed that for the rest of their life because they had no alternative info to go on. You had to invest so much time to turn a library upside down and search through books to find what you needed, if they even had the right book.

Search engines solved that part, but you still needed to know what to search for and study the subject a little first. LLMs solve the final hurdle of going from the dumbest possible wrongly posed question to directly knowing exactly what to search for in seconds. If this doesn't result in a knowledge explosion I don't know what will.

camillomiller

Such a comment from an AI apologist definitely helps to confirm the findings of the study.

risyachka

Not really. You have to memorise much more in today’s world to be able to do any kind of work.

lostlogin

I’m not sure of my retelling of events from the same day.

Aboriginal storytelling is claimed to pass on events from 7k+ years ago.

https://www.tandfonline.com/doi/abs/10.1080/00049182.2015.10...

chongli

We already know how oral cultures work: they use technologies such as rhyme, meter, music, stock characters, memory palaces, and more. If you want a good example of how powerful this stuff is, think about the last time you had a song stuck in your head.

dyauspitr

Yeah, I’ve used ChatGPT as a starting point for so much documentation that I dread having to write a product brief from scratch now.