
The Impact of Generative AI on Critical Thinking [pdf]

Maro

A good model for understanding what happens to people as they delegate tasks to AI is to think about what happens to managers who delegate tasks to their subordinates. Sure, there are some managers who can remain sharp, hands-on and relevant, but many gradually lose their connection to the area they're managing and become pure process/project/people managers and politicians.

I.e., most managers can't help their team find a hard bug that is causing a massive outage.

Note: I'm a manager, and I spend a lot of time pondering how to spend my time, how to be useful, how to remain relevant, especially in this age of AI.

BeetleB

Indeed. I started using Sonnet for coding only about a month ago. It's been great in that I've finally written scripts I had floating around my brain for years, and very rapidly.

But the feeling of skill atrophy is very real. The other day I needed to write a function that recursively traverses a Python dictionary, making modifications along the way. It's the perfect task to give an LLM. I can easily see that if I always give this task to an LLM, a few years down the road I'll be really slow in writing it on my own, and will fail most coding interviews.
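For concreteness, a minimal sketch of the kind of function I mean (the names and the transform are illustrative, not my actual code):

  # Recursively walk a nested dict/list, applying `transform` to every leaf value.
  def walk(node, transform):
      if isinstance(node, dict):
          return {k: walk(v, transform) for k, v in node.items()}
      if isinstance(node, list):
          return [walk(item, transform) for item in node]
      return transform(node)

  # Example: uppercase every string leaf in a nested config.
  config = {"name": "app", "env": {"mode": "dev", "ports": [80, 443]}}
  walk(config, lambda v: v.upper() if isinstance(v, str) else v)
  # -> {'name': 'APP', 'env': {'mode': 'DEV', 'ports': [80, 443]}}

It's a five-minute exercise if you do it regularly, and exactly the kind of muscle that goes slack if you never do.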

Also, while there is a high in producing a lot in a short amount of time, there is no feeling of satisfaction that you get from programming. And debugging its bugs is as much fun as debugging your coworkers' bugs - except at least with the coworkers you can have a more intelligent conversation on what they were trying to do.

hosh

This is exactly why I still play Go, practice martial arts, archery and use the command line for my dev workflow. Those are all arguably less efficient and obsolete: the AlphaGo series can defeat the strongest human Go players, firearms are more effective than unarmed martial arts and archery, and a GUI is easier for most people than the command line. Yet I practice all of these to develop my mind and body. I don't have to be a world-class Go player to benefit from learning to play Go.

This happens a lot in natural ecosystems, too. For example, many people plant trees and add a drip system. The trees grow to depend on the drip system and never stretch and develop their roots -- or the relationship they have with the soil microbiome. That leaves the trees prone to being knocked down when an unusually strong gust of wind comes through.

BeetleB

> This is exactly why I still play Go, practice martial arts, archery and use the command line for my dev workflow.

And this is what worries me. Pre-LLM, I'd get all my practice done during work hours. I fear that with LLMs, I'll need to do all this practice on my free time, effectively reducing my truly "free" time.

MrMcCall

You might have my favorite profile page here, with the exception of DonHopkins.

Just beautiful concepts, inspiring, and well-represented in this comment.

Peace be with you, my friend.

CamperBob2

I started using a C compiler for coding about 30 years ago. It's been great, but the feeling of skill atrophy is very real. I probably couldn't write useful code in x86 assembly any more, at least not without refreshing my memory first.

And you know what? That's just peachy keen. I don't need to write x86 assembly anymore. In fact, these days I do a lot of coding for ARM platforms, but I never learned ARM assembly at all. So it would take more than just a refresher course to bring me up to speed in that. I don't anticipate any such need, fortunately.

So... if I still need to write C in 10 years, why in the world would I consider that a good thing? Things in this business are supposed to progress toward higher and higher levels of abstraction.

BeetleB

I have extremely high faith that the C compiler will produce very reliable x86 assembly code.

I am extremely pessimistic that LLMs will ever reach that level of reliability. Or even close. They are a great helper, but that's all they'll be.

MoonGhost

Coding is actually a hard skill which requires practice. Regular leetcoding for the sake of it should help. The problem is that it takes the whole brain and breaks other chains of thought. I'm thinking about dedicating full days to small tasks like this.

BeetleB

> Regular leetcoding for the sake of it should help.

I got where I am without regular leetcoding. For me, regular leetcoding will only marginally improve my skills. And I definitely don't want to do regular leetcoding just to maintain my skills.

You know how many people love jogging outdoors but hate treadmills? Leetcoding is like treadmills. You may have to do it, but it sucks. Yes, some people love treadmills, but they're in the minority.

medhir

The flip side to that “high” that comes with working super quickly has, for me, been a crash: a sinking feeling that I've outsourced too much.

mattgreenrocks

I think this is the primary reason I have difficulty with management: my brain simply doesn't get a workout (in the fitness sense) in the cognitive way it's used to. Instead it has to track all sorts of problems, many of which are emotional/political and thus not really something I can solve by thinking real hard about them.

farts_mckensy

That's what the money's for!

causal

I think this is a really good analogy. Delegating problems to others is nothing new to human experience.

Perhaps the biggest difference is that AI lacks the feedback a human can give: a subordinate can say so if they feel their manager is being too hands-off. AI never questions management style.

natebc

It's also like trying to manage the most prolific bullshit artist the world has ever produced.

w10-1

This is a good analogy, and not just because of skills atrophy.

Managers grow the skills needed for their organization. Their team affects them.

A process-oriented team with quality/validation mindset has replaceable roles; the action is in the process. An expert team has people with tremendous skills and discretion doing what needs doing; the action is in selection and incentives. Managers adapt to their team requirements, in ways positive and negative: becoming rule-bound, privileging favorites, etc.

With AI this might be a positive insofar as it forces people to state the issues clearly, identifying relevant context, constraints, objectives, etc.

Agile benefitted software development by changing the granularity of delivery and planning -- essentially, helping people avoid getting lost in planning fantasies.

Similarly, I believe that the winner of the AI-for-development race (copilot et al) will not just produce good code, but build good developers by driving them to state requirements clearly and simply. A good measure here is the number of iterations to complete code.

An anti-pattern here, as with agile, is where planning devolves into exploring and exploring into incremental changes for no real benefit - polishing turds. Again, a good measure is the number of sessions to completion; too many and you know you don't know what you're doing, or the AI cannot grasp the trees for the forest.

mullingitover

> I.e., most managers can't help their team find a hard bug that is causing a massive outage.

Strategy vs. tactics. Managers aren't there to teach their reports skills; they hire people who already have them. They're there to set priorities and overall direction.

BeetleB

He's not disputing that. The difference, though, is that if you're using LLMs as code assistants, your skills will atrophy the way the manager's skills will, but you are still a SW engineer.

While the manager doesn't need those skills, you still do.

mullingitover

LLMs and agents are going to make labels like 'SW engineer', 'QA tester', 'marketer', 'CFO', and 'CEO' very squishy.

I think in the coming decade if you put yourself in a box and do a specialized task, and nothing more, you'll have a bad time. This is going to be an era where strategy is far more important than tactics.

farts_mckensy

What if you use it to handle boring BS work you don't care about and focus on what you actually want to do? I offload my work tasks to GPT and do other stuff during work hours. Play with my dog. Stretch. Paint. Work on other creative projects. I don't give a fuck about work as long as they keep sending me a paycheck. Oh no! My brain is going to atrophy from not manually synthesizing info from this report that no one reads anyway.

gotoeleven

Startup idea: Farts_Mckensyly

Do all the work Farts_Mckensy does at half the price using AI.

farts_mckensy

I assure you, it wouldn't be very lucrative.

balamatom

Fellow manager, it will do you good to realize that we are not in an "age of AI". Out there it's the age of disinformation.

vunderba

I've been calling this out since ChatGPT went mainstream.

The seductive promise of solving all your problems is the issue. By reaching for it at an almost instinctual level to solve any problem, you completely fail to cultivate an intrinsically valuable skill: critical reasoning.

That act of manipulating the problem in your head—critical thinking—is ultimately a craft. And the only way to become better at it is by practicing it in a deliberate, disciplined fashion.

This is why it's pretty baffling to me when I see attempts at comparing LLMs to the invention of the calculator. A calculator is still used IN SERVICE of a larger problem you are trying to solve.

turnsout

Yeah, but the calculator analogy is apt. In the past, anyone who went to grade school could answer 6 * 7 off the top of their head, and do basic mental math. We've pretty much lost that.

With that said, I do worry that losing the ability to craft sentences (or code) is more problematic than losing the ability to do mental math.

bluefirebrand

Losing the ability to do mental math is probably not actually a big deal

Losing the ability to do calculations by hand on a piece of paper with a pencil probably actually is a big deal

When I went to school we still had to do a lot of calculations by hand on paper. Thus, if I use a calculator to get an answer, I'm capable of reproducing the answer by hand if necessary

With math, at least when I was learning it, we seemed to understand that the calculator is a useful tool that doesn't replace the need to develop underlying skills

I'm seeing the exact opposite behavior and mentality from the AI crowd. "You don't need to learn how to do that correctly anymore, you can just have the AI do it"

"Vibe Coding", literally the attitude that you don't need to understand your code anymore. You just have the AI generate it and go off vibes to decide if it's right or not

Yeah, I don't know how my car engine works. But I trust that the people who engineered it do, and the mechanics that fix it when it breaks do. There's no room for "Vibe Bridge Building" in reality

Advocating for "vibe coding" is an admission that it doesn't actually matter whether the thing being built works or not

Unfortunately that seems to be a growing portion of software

rqtwteye

> Losing the ability to do mental math is probably not actually a big deal

I think it's a huge deal. I see a lot of people doing financing stuff with no idea what a 20% interest rate really means. So they just go ahead and do it, because taking out a calculator is too tedious. I find it pretty crazy how many people can't figure out what 20% or even 10% of something is. A lot of financial offerings take advantage of the fact that people can't do even basic math.
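To make that concrete (numbers purely illustrative): 20% interest on a $5,000 balance is 0.20 × $5,000 = $1,000 a year, roughly $83 a month, before any compounding. That's the one-step estimate being lost.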

throwway120385

Not to mention the absolute pile of "mathematics problems" that can't be solved except by pushing symbols around a page, which a calculator is absolutely useless at. Sure, I can have a calculator "calculate" an approximation for 4/3, but it can't help me manipulate the symbols around the improper fraction that I need to manipulate to calculate the radius given the surface area of a sphere. And it's of zero help in understanding the relevance of that "pattern" to whatever phenomenon I'm using the mathematics to reason about. That all requires human intelligence.
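For concreteness, the rearrangement in question, with only the final arithmetic left to the machine (a minimal sketch; the function name is mine):

  import math

  # Symbolically: A = 4*pi*r**2  =>  r**2 = A/(4*pi)  =>  r = sqrt(A/(4*pi)).
  # The rearrangement is the part a calculator can't do; this just evaluates it.
  def radius_from_surface_area(area):
      return math.sqrt(area / (4 * math.pi))

  radius_from_surface_area(4 * math.pi)  # -> 1.0 (a sphere of radius 1)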

There are a lot of calculators and other tools that can push the symbols around and many that can even apply some special high-level mathematical rules to the symbols. But whether or not those rules are relevant to the task at hand is entirely a matter for a human to decide.

marinmania

I wonder if the people who wrote code in assembly complained that people learning more modern languages didn't really know how the 0s and 1s worked.

I'm not sure where the line is, but there is a point where the abstraction works so well you really don't need to know how it works underneath.

I'm also not sure if a car mechanic needs to know how an engine works. I'm assuming almost none of them could design a car engine from scratch. They know just enough to know which parts need to be replaced.

turnsout

My feeling is… just give it 6-12 months. All the low-quality apps that were "vibe-coded" will start to break down, have massive security breaches, or generally fall apart as new features are added.

Brace yourself for a wave of think pieces a year from now, all wringing their hands about the death of vibe coding, and the return of artisanal handmade software.

BeetleB

> We've pretty much lost that.

In the US :-)

And those skills are entirely context-dependent. You're likely saying this from a SW engineer's point of view, whereas I've worked in teams with physicists and electrical engineers. When you're in a meeting with a technical discussion and everyone else can work out in their head the effect of integrating a well-known function and how it will impact the physical system, while you have to pull out a calculator/computer, you'll be totally lost.

You can argue that you could be as productive as the others if you were given the time (e.g. doing this on your own at your own cubicle), but no one will give you that time in meetings.

askonomm

Must be a place with a pretty bad education system if people can't answer 6 * 7 off the top of their head.

belter

Would love to see today's eighth-grade students try this test from 1912... I sense disaster.

https://www.reddit.com/r/interestingasfuck/comments/13jhckh/...

fwip

Why? I'm pretty sure my public school education prepared me for all of these questions by 8th grade, excepting some notation that we no longer use and some specific history questions that are now less relevant.

BeetleB

These would have been quite doable by my 8th grade education.

But I am older than many. :-)

wiseowise

> In the past, anyone who went to grade school could answer 6 * 7 off the top of their head, and do basic mental math. We've pretty much lost that.

Source?

turnsout

Observing people around me

turnsout

PS, if you're downvoting this, can you explain why?

jonahx

For those who read only the headline or article:

> In this paper, we aim to address this gap by conducting a survey of a professionally diverse set of knowledge workers (n = 319), eliciting detailed real-world examples of tasks (936) for which they use GenAI, and directly measuring their perceptions of critical thinking during these tasks

So, they asked people to remember times they used AI, and then asked them about their own perceptions about their critical thinking when they did.

How are we even pretending there is serious scientific discussion to be had about these "results"?

Tossrock

"The Impact Of Taking A Survey About AI On Answers To A Survey About AI" doesn't have the same ring to it.

oneofyourtoys

The year is 2035, the age of mental labor automation. People subscribe to memberships for "brain gyms", places that offer various means of mental stimulation to train cognitive skills like critical thinking and memory retention.

Common activities provided by these gyms include fixing misconfigured printers, telling a virtual support customer to turn their PC off and back on again, and troubleshooting mysterious NVIDIA driver issues (the company went bankrupt five years ago, but its hardware is still in great demand for frustration-tolerance training).

sitkack

Thanks, the paper is very readable.

> Abstract The rise of Generative AI (GenAI) in knowledge workflows raises questions about its impact on critical thinking skills and practices. We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using GenAI, and 2) when and why GenAI affects their effort to do so. Participants shared 936 first-hand examples of using GenAI in work tasks. Quantitatively, when considering both task- and user-specific factors, a user’s task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks. Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.

It is to be presented at the CHI Conference: https://chi2025.acm.org/

https://en.wikipedia.org/wiki/Conference_on_Human_Factors_in...

lenerdenator

So it does what Google searching did: it made retaining information an optional cognitive burden, and optional cognitive burdens are usually jettisoned.

Fortunately, my ADHD-addled brain doesn't need some fancy AI to make its cognition "Atrophied and Unprepared"; I can do that all on my own, thank you very much.

JohnMakin

It really doesn't though. Even when google was at its best, and showed you relevant non-spammy results, a degree of critical thinking was required when sifting through the results, evaluating the credibility of the author, website, etc. Do you fact check everything the AI spits out? The ability for people to critically think is basically gone. That's been trending since before AI, but it's really clear to me at this moment in time how bad it has gotten. It's a laziness of thinking that I don't think was the same with Google.

lenerdenator

> It really doesn't though. Even when google was at its best, and showed you relevant non-spammy results, a degree of critical thinking was required when sifting through the results, evaluating the credibility of the author, website, etc. Do you fact check everything the AI spits out? That's been trending since before AI, but it's really clear to me at this moment in time how bad it has gotten. It's a laziness of thinking that I don't think was the same with Google.

Nah, it was already at zero before ChatGPT came to public attention.

drbojingle

It may not have been to the same degree, but reducing cognitive burden trends in the same direction. This might be bad, but it might be very good. Is the AI competing with you for a promotion, out to replace you? Is it going to knowingly lie to you because it doesn't like you?

risyachka

No, not even close.

Google helps you find things that you process later on with your brain.

With AI your brain shuts off as you offload all thinking to asking questions. And asking questions is not thinking. Answering them is.

lenerdenator

> Google helps you find things that you process later on with your brain.

I'm willing to bet that there were a lot of Google searches, pre-ChatGPT, that effectively were questions. Lots of "huh, I wonder" during conversations and the first result was taken as "the truth".

risyachka

Sure, and google gave you info that possibly contains your answer. You had to read it at least and analyze.

Now you are being spoon-fed.

greybox

Microsoft Study Finds AI Makes Human Cognition “Atrophied and Unprepared”

“[A] key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise,” the researchers wrote.

jldugger

Well, that's just a summary of a much, much older paper. Still a relevant paper, but somewhat disingenuous to attribute it to MS researchers.

[1]: https://en.wikipedia.org/wiki/Ironies_of_Automation

nukem222

Certainly a common enough concern in people critiquing use of ChatGPT here. I'm more worried about "softer" problems, though—morality, values, persuasion, including deciding which of two arguments is more convincing and why.

But these have always been issues that humans commonly struggle with so idk.

pseudocomposer

Something like the annual or otherwise periodic credential exams that medical and legal professionals take might make sense in fields where AI is very usable.

Basically, we might need to standardize spending 10-20% of work time "keeping up" automatable skills that once took 80+% of work time, in fields where AI-based automation is making things more efficient.

This could even be done within automation platforms themselves, and sold to their customers as an additional feature. I suspect/hope that most employers do not want to see these automatable skills atrophy in their employees, for the sake of long-term efficiency, even if that means a small reduction in short-term efficiency gains from automation.

bluefirebrand

> suspect/hope that most employers do not want to see these automatable skills atrophy in their employees, for the sake of long-term efficiency, even if that means a small reduction in short-term efficiency gains from automation.

I wish you were right, but I don't think any industry is realistically trending towards thinking about long term efficiency or sustainability.

Maybe it's just me, but I see the opposite, constantly. Everything is focused on the next quarter, always. Companies want massive short term gains and will trade almost anything for that.

And the whole system is set up to support this behavior, because if you can squeeze enough money to retire out of a company in as short a time as possible, you can be long gone before it implodes

nopelynopington

I feel like my critical thinking has taken a nosedive recently. I changed jobs, and the work in the new job is monotonous and relies on automation like Copilot. Most of my day is figuring out why the AI's code didn't work this time rather than solving actual problems. It feels like we're a year away from the "me" part being obsolete.

I've also turned to AI in side projects, and it's allowed me to create some very fast MVPs, but the code is worse than spaghetti - it's spaghetti mixed with the hair from the shower drain.

None of the things I've built are beyond my understanding, but I'm lazy and it doesn't seem worth the effort to use my brain to code.

Probably the most use my brain gets every day is wordle

bentt

Is this any different from saying that most people in the USA nowadays are physically weaker and less able to work on a farm than their predecessors? Sure, it's not optimal through certain lenses, but through other lenses it is an improvement. We are, by any measure, dependent on new systems to procure food, and food is even more fundamental than preserving particular types of human cognition.

aithrowawaycomm

This would be a valid POV if there were any solid evidence that LLMs truly increase worker productivity or reliability - at best it's a mixed bag. To stretch the food analogy, LLMs could be pure corn syrup, without any of the disease-resistant fruits and unnaturally plump chickens that actually make modern agriculture worthwhile.

Or, since LLMs seem to be addictive, it's like getting rid of the spinach farms and replacing them with opium poppies. (I really hate this tech.)

coffeefirst

People pay a fortune and spend endless hours to replace the basic physical activity that used to be a default part of the human experience. And a huge chunk of the population that doesn't do so suffers from life-altering metabolic disorders.

Let's... not do that for brainrot.

MrMcCall

> Is this any different than saying that nowadays most people in the USA are physically weaker and less able to work on a farm than their predecessors?

Yes, far different, because we can still go to the gym and throw medicine balls around or swing kettle bells and do dead lifts and squats if we want to stay fit.

There is no substitute for exercising our ability to logically construct deterministic, hardened, efficient data flow networks that process specific inputs in specific environments to produce specific changes and outputs.

Maybe I'm the only one who understood the most important point the eminent Leslie Lamport explained in grisly detail the other day: namely, that logical thinking is both irreplaceable and essential. I'll add that that nerdiest of skillsets is also withering on the vine.

"Enjoy." --Daniel Tosh

orangecat

> There is no substitute for exercising our ability to logically construct deterministic, hardened, efficient data flow networks that process specific inputs in specific environments to produce specific changes and outputs.

Factorio?

MrMcCall

Of course.

And every single microprocessor and their encompassing support systems and the systems they host and execute.

Every single system, even analog ones, because it's all just information flowing through switched systems, even if it's solely measured in something involving coulombs.

Also, fundamentally, living cells and the organisms that encompass them, because they all have a logically variable information flow both within them and between them, measured in molecules and energy.

They're extraordinary and beautiful.

sollewitt

One thing I've tried using Gemini for, and been really impressed with, is practicing languages. I find Duolingo doesn't really translate to fluency, because it doesn't really get you to struggle to express yourself - the topics are constrained.

Whereas, you can ask an LLM to speak to you in e.g. Spanish, about whatever topic you're interested in, and be able to stop and ask it to explain any idioms or vocabulary or grammar in English at any time.
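For example, a starter prompt (the wording here is just one illustration of the idea):

  Let's have a conversation in Spanish about hiking. Keep your replies short.
  If I use a word or construction incorrectly, pause and explain the correction
  in English before we continue.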

I found this to be more like a "cognitive gym". Maybe we're just not using the tools beneficially.

kokanee

I remain perplexed that everyone is so focused on using LLMs to automate software engineering, when there are language-based professions (like Spanish tutor, in your example) that seem more directly threatened by language models. The only explanation I've heard is that the industry is so excited about reducing spend on software engineering salaries that they're trying to fit a square peg into a round hole, and largely ignoring the square holes.

thewebguyd

> The only explanation I've heard is that the industry is so excited about reducing spend on software engineering salaries that they're trying to fit a square peg into a round hole, and largely ignoring the square holes.

I think that's really just it, and I agree with you. There are many other areas where LLMs could, and should, be more useful, with effort put toward both assisting and automating.

Instead, the industry is focusing on creative arts and software development because human talent for those is both limited and expensive, with a third factor: humans can generally refuse to do morally questionable work (e.g., what if hiring for weapons-systems software becomes increasingly difficult due to a lack of willingness to work on those projects, and likewise for increasingly invasive surveillance tech?).

We're rushing into basically the opposite of what AI should do for us. Automation should work to free us up to focus more on the arts and sciences, not take it away from us.

Greed at its finest.

zzbzq

I think it's because software engineers are the only group that can universally operate LLMs effectively and build them into larger systems. They'll automate their own jobs first, then move on to building the toolkits to automate everyone else's.

adelie

Language-based professions like translation have been dying for years, and no one has cared; people aren't about to start caring now that the final nail has been put in the coffin.

rahimnathwani

  the topics are constrained

Is this true even if you have Duolingo Max and use the video calling feature?

divtiwari

As part of Gen Z, I feel that with regard to critical thinking skills, our generation got obliterated twice: first by social media (made worse by affordable data plans), then by GenAI tools. You truly need monk-level mind control to come out unscathed from their impact.