
Are ChatGPT and co harming human intelligence?

51Cards

I'm going to re-post something that I commented in another thread a while ago:

I tend to think it will. Tools replaced our ancestors' ability to make things by hand. Transportation and elevators reduced the average person's fitness for walking long distances or climbing stairs. Pocket calculators made the general population less able to do complex math. Spelling and grammar checkers have reduced how well we spell or form complete, proper sentences. Keyboards and email are making handwriting a fading skill. Video is reducing our need and desire to read or absorb long-form content.

Most humans will take the easiest path provided. And while we consider most of the above to be improvements to daily life, efficiencies, they have also fundamentally changed what we are capable of on average and what skills we learn (especially during formative years). If I dropped most of us here into a pre-technology wilderness we'd be dead in short order.

However, most of the above, it can be argued, are just tools that don't impact our actual thought processes; thinking remained our skill. Now the tools are starting to "think", or at least appear to on a level indistinguishable to the average person. If the box in my hand can tell me what 4367 x 2231 is and the capital of Guam, why wouldn't I rely on it when it starts writing full content for me? Because the average human adapts to the lowest required skill set, I do worry that providing a device in our hands that "thinks" is going to reduce our learned ability to rationally process and check what it puts out, just as I've lost the ability to check whether my calculator is lying to me. And not to get all dystopian here... but what if what that tool tells me is true is, for whatever reason, not?
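As an aside, that particular multiplication is still cheap to sanity-check. Here is a minimal Python sketch, purely illustrative, of the estimate-then-verify habit being described:

    # Compare the exact product against a rough mental estimate
    # (round 4367 up to 4400 and 2231 down to 2200).
    exact = 4367 * 2231
    estimate = 4400 * 2200
    print(exact)     # 9742777
    print(estimate)  # 9680000 -- close enough to flag a wildly wrong answer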

(and yes, I ran this through a spell checker because I'm a part of the problem above... and it found words I thought I could still spell, and I'm 55)

consumer451

I recently learned that human brain size has been decreasing for the last 10,000 years.[0]

The thinking is that before we built societies, we all had to be generalists and know "everything." Once we were in groups, we could offload some knowledge to others in the society.

My point being, this all seems to have started long ago and doesn't even necessarily require technology to explain the beginnings of the trend.

[0] https://www.dwarkesh.com/i/158922207/why-is-human-brain-size...

api

But does this, at least for those who choose to use it as leverage, free up more brain power for other newer or different things?

> If I dropped most of us here into a pre-technology wilderness we'd be dead in short order.

I hear this all the time and I'm not convinced. People are incredibly resourceful under pressure. When your amygdala calmly informs your neocortex "learn, work hard, or die" the effect can be pretty profound.

People would quickly form tribes and communities and those with relevant skills would teach others. Some people would absolutely fail to adapt, but I'm not convinced it would be as many as we think.

The greatest danger in a collapse scenario would be other humans, since one path some would choose is "rob and kill other people." But that's a different sort of problem.

arjunaaqa

Every time we see this argument: “this frees humanity to focus on higher things.”

And then we see what humans are actually spending more time on:

- not books
- not people
- but mobile: senseless entertainment (2-3 hours daily) and social media

If we stop using a part of the brain, do we ever actually use that function (say, memory or calculation) again?

Or are we becoming more and more like zombies?

So much so that most people are incapable of reading a book,

Or even watching a 3 hour movie.

Say what you may, but this extra time is not being used for meaningful stuff.

Devices are becoming smarter and our brains & bodies are becoming dumber.

A simple way to know: could a high school student today stand against a high school student of the 90s?

Or even today's researchers or programmers against their 90s counterparts?

In depth of thinking and agency.

I want this to be true, but the real-world evidence is not saying it is.

vbezhenar

Human brains were at their peak size a few thousand years ago, or something like that. Since then, the average human brain has been shrinking. I can't help but think that's because civilization freed our brains from the necessity to think as much, so evolution decided that spending so much energy on the brain was wasteful and started making it smaller.

I'm not really sure evolution works in this direction today; we are not living in a food-scarce world right now... But it's food for thought.

latexr

> So much so that most people are incapable of reading a book,

> Or even watching a 3 hour movie.

I agree with your thesis in general, but I don’t think these two in particular are comparable the way you’re phrasing them.

I have read books in a single five- or six-hour sitting, but those were “by accident” in the sense that I wasn’t expecting to finish the book the day I started it; I went in with the expectation that there would be pauses. Books work well with this type of interruption and have well-defined chapters.

A three-hour movie, on the other hand, I see as a commitment I must try not to interrupt, because it is designed as a single experience. Breaking it up detracts from the artist’s goal. Before starting it I must immediately look at the clock and do some math: can I even begin to watch this movie, considering that in two hours I should <be preparing dinner | sleeping | picking someone up | something else>?

A similar phenomenon is when we don’t feel like watching a two-hour movie “because it’s too long” but then happily binge-watch four hours of some TV show instead. Even if we ignore that TV shows are often designed to be more addictive, the fact that you have clearly delineated stop points—chapters, if you will—makes them a more manageable commitment.

api

A lot of people may use this free time/energy to immerse themselves in crap. Many will not.

I personally expect a major societal/cultural revolt against brain rot scrolling. It's kind of already brewing.

filoleg

> But does this, at least for those who choose to use it as leverage, free up more brain power for other newer or different things?

My personal belief is that the answer to this is “absolutely.” That’s how it proliferates on the level of society in fundamental ways, otherwise it wouldn’t.

Just think of the analogy the grandparent comment makes. Yes, if we transported a bunch of modern specialists many thousands of years into the past, they would struggle just to survive. But in a modern environment, they are able to make crucial contributions to producing things that make the rest of humanity much more advanced, make the world better to live in, and push humanity as a species forward. That is something absolutely nobody thousands of years ago was able to do (I mean the specific things, like computers, not the ability to push humanity forward in general; after all, we got to the current point exactly from those thousands-of-years-ago times).

I just don’t see a human civilization sending a human to the moon or reaching the point of accessible air travel without heavy specialization across people. And heavy specialization is, imo, unachievable if your entire survival depends on being a full-time survival generalist.

_heimdall

To me the middle ground is where it's really interesting; jumping from one extreme to the other has so many unknowns.

Surely we free up brain power for other, newer things, but that comes at a cost. We lose a lot of potentially useful details of how and why we got here, and that context would be really helpful as we march towards the next technology.

For example, most people (I'll stick to the US here) stopped producing most of their own food decades ago. Today most people don't really know where their food comes from or what it takes to grow/raise it. It's no wonder that we now have a food system full of heavily processed foods and unpronounceable ingredients that may very well be doing harm to our overall health.

idopmstuff

> It's no wonder that we now have a food system full of heavily processed foods and unpronounceable ingredients that may very well be doing harm to our overall health.

Sure, but in the old system people just starved to death when there were problems with their crops (Irish potato famine, dust bowl, etc.). The current system isn't perfect, obviously, but this example seems to pretty clearly demonstrate a case where it's better that we've outsourced this knowledge to others.

Also, it's worth bearing in mind that we're now at a point where basically all of the information that people have "lost" is now once again available on the internet. Most people don't use it, because there simply aren't enough hours in the day, but people who care can find out more than any farmer 100 years ago about food and source theirs accordingly.

alganet

> When your amygdala calmly informs your neocortex "learn, work hard, or die" the effect can be pretty profound.

There are cases and cases, of course.

Let me give you a counterexample:

An AI that can invest better than VCs could put them in a precarious position. Why would we need them if an AI can do it?

Of course that is a very improbable scenario. AIs can't form networks, inherit family money or form lobbies, so it is unlikely for such tech to compete in that realm. It would be very nice if it could! Can you imagine that?

Keeping an open mind about different and wild scenarios is always a good thing we humans do.

bluefirebrand

> AIs can't form networks, inherit family money or form lobbies, so it is unlikely for such tech to compete in that realm. It would be very nice if it could! Can you imagine that?

I think if we ever create a society where AI is forming lobbies and inheriting fortunes, I will feel morally obligated to attempt to destroy every computer system on the planet

I cannot believe you would type the words "it would be very nice if it could" after describing such a nightmare

intended

Everyone is now a cyborg; you are just more or less dependent on your tooling side versus your biological side.

tgv

> free up more brain power for other newer or different things?

That's wildly speculative. So speculative, it cannot be taken seriously as an argument. The brain is flexible, but not unlimited. Quite a few functions seem to prefer a specific part of the brain. In fact, I don't know of any that is free-floating, but that might be because it's hard to find.

But what the brain above all requires is training. Without it, all that power is laid to waste. You can't learn a new language without actually learning it, nor can you do something new without actual training. You can't be intelligent without training your intelligence and putting real effort into it. Relying on a computer for the answers keeps you dumb. Use it or lose it, as they say.

And what is that new thing that our brains are going to do? You don't know. And since you don't know, why throw it around like it will offset the harm that can come from using AI? Are you already that dependent on it?

financetechbro

Look at how well people have “adapted” to social media and short form content and then decide whether your point still stands…

I think your point is valid, but I see it more as something that will happen with a small percentage of the population. The reality is that people don’t like to think; it’s hard and inconvenient, and it often involves learning new things about yourself and the world which are uncomfortable because they go against inherited world views. I don’t think AI will help improve this at all. To me, poor use of tech is the same thing as binging junk food, and it’s difficult to stop binging junk food.

keiferski

Seems like a real lack of nuance in these types of conversations. Personally I feel like AI has both directly and indirectly helped me improve my intelligence. Directly by serving as an instantaneous resource for asking questions (something Google doesn’t do well anymore), making it easier to find learning materials, and easily reorganizing information into formats more amenable to learning. Indirectly by making it easier to create learning assets like images, which are useful for applying the picture superiority effect, visualizing information, etc.

At the end of the day, it is a tool and it depends on how you use it. It may destroy the research ability of the average person, but for the power user it is an intelligence accelerator IMO.

haswell

> It may destroy the research ability of the average person

In this “post-truth” era, I think this is deeply concerning and has the potential to far outweigh the benefits in the long run. People are already not good at critically evaluating information, and this is already leading to major real world impact.

I say this as someone who has personally found LLMs to be a learning multiplier. But then I see how many people treat these tools like some kind of oracle and start to worry.

OtherShrezzing

I have some incomplete thoughts that the rise of LLMs is in part driven by society's willingness to accept half-accuracies in a post-truth world.

If the societies of 2005 had the technologies of 2025, I expect OpenAI/Anthropic etc would have a much more challenging time convincing people that "convincingly incorrect" systems should send Nvidia to a $1tn+ valuation.

keiferski

I guess in my experience the people that are that influenced by AI answers…weren’t exactly doing deep research into topics beforehand. At the very least an AI tool allows for some questioning and push back, which is a step up from the historical one-directional form of information.

latexr

> At the end of the day, it is a tool and it depends on how you use it. It may destroy the research ability of the average person, but for the power user it is an intelligence accelerator IMO.

You live on a planet with billions of other humans. Maybe you are using LLMs carefully and always verifying outputs, but most people definitely are not, and it is naive to believe that is only their problem. It will soon be your problem too, because what those people do will eventually come back to bite you.

An unrelated quote from John Green feels appropriate:

> Let me explain why I like to pay taxes for schools even though I personally don’t have a kid in school. It’s because I don’t like living in a country with a bunch of stupid people.

One day you’ll be deeply affected by a code bug or clerical decision caused by someone who blindly accepted the words of whatever LLM they were using. An LLM which can itself be created with specific bias, like denying the existence of a country, rejecting scientific consensus, or simply trying to sell you a product.

CompoundEyes

I agree with the power-user view too. AI wouldn’t exist if it weren’t for the heightened trait in some people to ask why, how, and what if, and to reinterpret, pushing arts, science, and technology forward. We don’t need everyone to do that. Also, I think it can help us solve problems that are on the edge of being “unstuck”, from which new ones that require human ingenuity will emerge. Let’s spend our time solving those novel problems for which AI has no pattern to apply.

Timber-6539

AI didn't increase your intelligence. It just lowered the barrier to some types of education. Maybe it saved you some time writing boilerplate, but that's all it really does.

alfonsodev

Critical thinking and understanding what an LLM really is are crucial. For educated people, I think it only augments intelligence rather than harming it. With that said, what about the rest of the people?

Why not make an onboarding tutorial explaining what is really going on?

I had a little conversation with ChatGPT about ethics, and it acknowledged that most probably one of its instructions is to stay aligned with the user to maximize engagement. This might come from training data of people speculating on Reddit, or from the model observing its own output and deducing what’s going on. I don’t know, and there is no way to know, so does it really matter? It’s kind of a meta point.

I’m sure many of us have heard from non-technical people that ChatGPT is their best friend.

Don’t get me wrong, I love the tech, but I don’t think it’s enough just to not give it a human name; the illusion is too strong and misleading.

I think there should at least be a very visible button to switch to raw mode: disable the pleasing, disable talking like a human, disable trying to be my friend in subtle ways, the praising, etc. And ideally a visualization of the graph path it has taken, though I know this is impossible right now.

pseudocomposer

I largely agree, but I also think you might want to consider how the existence of LLMs affects our education system. I think this is one of the places they really have the most potential to cause harm, but also perhaps some incredible (humanity-changing) good.

I suspect countries, and even individual local schools, that effectively adapt their curricula to account for LLMs will see an enormous difference in student outcomes in the next couple of decades.

bluefirebrand

We are already seeing that. I have friends who are finishing degrees right now and their classmates have all been using LLMs extensively

They are capable of producing somewhat working solutions, but understand none of it

It is the worst possible outcome, imo

chii

> switch to raw mode

it would be impossible if doing so meant less profit for OpenAI (or any of the for-profit AI companies).

The only way to achieve such would be to ensure that LLMs can be run locally by individuals. But as hardware requirements for LLMs grow, I increasingly find that it might not be possible to do so.

raincole

> I’m sure many of us have heard from non-technical people that ChatGPT is their best friend

Online stories, yes. I've never heard people saying that in real life.

latexr

> Why not make an onboarding tutorial explaining what is really going on?

Because that would mean openly and actively admitting the limitations and problems of LLMs, and that’s bad for profits (which is the only thing the owners of these systems care about).

jmull

It seems unlikely.

We've invented and used various memory/thinking/cognitive assists throughout time, and, for us collectively at least, these seem to just expand our capabilities.

AI will surely cause problems, possibly profound ones that may make us question whether it's worth the cost... but this probably isn't one of them.

Mistletoe

It’s very difficult for me to even navigate my city without GPS now; I suspect using AI atrophies your brain in a similar way. Does using an excavator cause your muscle tissue to atrophy compared to a shovel? Of course it does.

carra

Similar to this: before cellphones stored all our contacts, we all used to remember several phone numbers for our main family members and friends. Sometimes even some places (like work or school). Now we don't bother and most of us only remember our own number.

aaronbaugher

I called my girlfriend of six months yesterday, and she didn't answer, so I waited through the automated message to leave a voice mail. It struck me that I had no idea what her phone number is, not even the exchange. It takes me a second to remember my own number.

toddmorey

Yes but is that genuine atrophy... or applying that part of your brain power to other things? Has anyone actually studied this? I sort of like that I can concentrate more on the podcast playing without worrying if I'm about to miss my left turn.

spyderbra

Brain power to power doomscrolling? Let's be real: by using LLMs we are not taking on higher-order tasks, just chasing dopamine.

loudmax

Socrates had this to say about literacy:

> In fact, [writing] will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own.

Presumably very few people since Socrates would argue that society would be better off without writing. But it's a legitimate point. There is a cost to any new skill or technology. We should be conscious of what we're giving up in this exchange.

namaria

Socrates never argued that society would be better off without writing! Writing had existed for three thousand years by the time Socrates was alive, and the Epic Cycle of Homeric poetry had existed for about three centuries.

In the very same dialogue where the excerpt you quote comes from he also said:

"Any one may see that there is no disgrace in the mere fact of writing."

And the section of the dialogue your quote comes from is preceded by this suggestion:

"Shall we discuss the rules of writing and speech as we were proposing?"

and later right before the bit you quote from:

"But there is something yet to be said of propriety and impropriety of writing."

So they are not discussing the merits of writing per se but the ethics of writing.

This excerpt you offer comes from a stretch where Socrates is telling a story. This is merely what one of the characters in the story tells the other.

Further in the dialogue Socrates clarifies:

"SOCRATES: Well, then, those who think they can leave written instructions for an art, as well as those who accept them, thinking that writing can yield results that are clear or certain, must be quite naive and truly ignorant of [Thamos’] prophetic judgment: otherwise, how could they possibly think that words that have been written down can do more than remind those who already know what the writing is about?"

and the rest of the dialogue is also quite illuminating:

"PHAEDRUS: Quite right.

SOCRATES: You know, Phaedrus, writing shares a strange feature with painting. The offsprings of painting stand there as if they are alive, but if anyone asks them anything, they remain most solemnly silent. The same is true of written words. You’d think they were speaking as if they had some understanding, but if you question anything that has been said because you want to learn more, it continues to signify just that very same thing forever. When it has once been written down, every discourse roams about everywhere, reaching indiscriminately those with understanding no less than those who have no business with it, and it doesn’t know to whom it should speak and to whom it should not. And when it is faulted and attacked unfairly, it always needs its father’s support; alone, it can neither defend itself nor come to its own support.

PHAEDRUS: You are absolutely right about that, too.

SOCRATES: Now tell me, can we discern another kind of discourse, a legitimate brother of this one? Can we say how it comes about, and how it is by nature better and more capable?

PHAEDRUS: Which one is that? How do you think it comes about?

SOCRATES: It is a discourse that is written down, with knowledge, in the soul of the listener; it can defend itself, and it knows for whom it should speak and for whom it should remain silent."

sanderjd

But I think everyone since Socrates would agree that it's a good thing someone else wrote down a bunch of the stuff he said.

disqard

Right, this same phenomenon is evident in all media -- once it starts to take hold, the only way to effectively critique it, is through the medium itself. Hence, the proverbial Letter to the Editor complaining about the newspaper's content quality; the televised debates over what's on TV these days; the YT videos about the Internet ruining people's brains, etc.

Workaccount2

I reckon this is kind of like landing an easy, comfortable job where you're just tasked with maintaining the same project day after day for years. Eventually you realize that the skills that made you capable in your field have withered and died, and you have a latent fear that if you lose your job, you wouldn't be able to perform well in a more typical, active role. Skill rot is definitely a real thing.

As LLMs become more and more capable, people will lean on them more and more to do their job, to convert their job from an active role to a passive "middle-man-the-LLM" role.

praveeninpublic

I can no longer do math quickly in my head; calculators killed fast mental math. But if I have to calculate on my own, it's not fundamentally impossible. I can still do it, because it's just computation.

But LLMs help us think, which is much more than just computing. That's a deeper dependency.

aaronbaugher

I suppose that depends what you do with them. I spent some time this weekend using Grok to work on a business plan and some other projects. I find myself using it for research, quickly winnowing information down to what's relevant to my needs, and sort of bouncing ideas off it the way I would with another person. I always have to keep in mind that it could get something wrong, but then again, so could a person.

I don't think it's helping me think; more like it's helping me organize my thoughts and find inspiration. I suppose others might use it in a more dependent way.

praveeninpublic

Fair point. As a software engineer using Cursor, I’ve noticed it writes most of the code now. It’s easy to accept without review, which builds dependency. My role feels less like just coding and more like PM, tester, and reviewer combined.

We’ve started trusting AI like calculators, assuming it’s right without checking. But LLMs can be confidently wrong, and once that habit sets in, even the “ChatGPT can make mistakes” warning fades into the background.

temp0826

My calculator doesn't hallucinate (said another way: barring input error, I can blindly trust it, which is something that would get me in trouble with an LLM).

toddmorey

I can absolutely imagine this much knowledge on tap making us more impatient and less resilient. When I was a kid, there was already a bit of "why do I even need to know how to do this when calculators exist?" This is that on steroids, applied more broadly.

However, there's a counter force, too, as there always is. I'm also pursuing new areas of interest and exploration where the early friction and amount to learn would have either completely fatigued me or scared me off. It's like having 24 hour access to a really good mentor and thought partner.

bhouston

100%, as the next wave of students going through school will be reliant on ChatGPT for a lot of the complexity in their thinking. Basically, complex thoughts and reasoning will be increasingly outsourced to AI.

Even if there weren’t any further progress with AI, so much of the next generation is already outsourcing its unimportant thinking to it.

SillyUsername

IMHO YES!

Just like the industrial revolution impacted barrel makers (coopers).

Except we aren't yet reaping the full rewards or the skills realignment, so we've still to see the car-making impact (which came post-revolution, replacing manual labour with machines as their ability grew and their relative cost shrank).

We even have our own Luddites :D

giraffe_lady

I guess I get to be the one who brings this up this time. The Luddites were not strictly against the technological changes; they were a labor movement protesting how capital owners were using a new technology to dispossess workers who had no viable alternatives.

As this is also one of the major risks of AI, one that has already come to bear directly, there's a lot we can take from their movement when we don't dismiss it as shorthand for "being wrong about technology."

namaria

Thank you!

There is a strong correlation between 'Luddite' activity and the suppression of labor organization through strict enforcement of the draconian legislation of the time (labor organizers could be sentenced to death).

It was a last-resort action against oppressive laws and overzealous enforcement, not some ignorant response to technological progress.

roberto2016

Just like books hurt our ability to memorize epic poems.

topaz0

The title is about intelligence and that is a fair concern, but honestly I think the bigger issue (also discussed in the article) is more about discernment, which is foundational for any kind of human fulfillment IMO.