Are we living in a golden age of stupidity?
62 comments
October 18, 2025 · zkmon
raincole
I know it might be an offensive way to put it, but I honestly believe that if AI ends up making people no longer need to use their brains as much, that's a great thing.
Think about it: would we rather live in a world where heavy labor is a necessity to make a living, or a world where we go to the gym to maintain our physique?
If mental labor isn't (as) necessary and people just play Scrabble or build weird Rube Goldberg machines in Minecraft to keep their minds somewhat fit, is this future really that bleak?
ofrzeta
I'd say it is a constant fight against laziness. Sure, it is convenient to drive everywhere by car, but at some point you might understand that it makes more sense to walk somewhere once in a while, or regularly. Sure, escalators are convenient, but better to take the stairs so you don't need to go to the gym and can save some money. If you ask me, we should all do more physical labor, and the same goes for mental labor. If we give that up as well, the future really is bleak, to answer your question.
everdrive
The analogy here is probably physical exercise. Lack of exertion sounds great until your body falls apart and destroys itself without frequent exercise.
raincole
> where we go to gym to maintain our physique
latexr
It is paramount to not ignore the state of the world. Poverty, wars, inequality in the distribution of resources, accelerated natural disasters, political instability… Those aren’t going to be solved by a machine thoughtlessly regurgitating words from random text sources.
Even if a world where people don’t use their brains were desirable (that’s a humungous if), the present is definitely not the time to start. If anything, we’re in dire need of the exact opposite: people using their brains to not be conned by all the bullshit being constantly streamed into our eyes and ears.
And in your world, what happens when a natural disaster which wasn’t predicted takes out the AI and no one knows how to fix it? Or when the AI is blatantly and dangerously wrong but no one questions it?
raincole
I said "if." In case you're not aware, the statement following "if" is usually a hypothetical situation that might not actually happen in reality.
Froztnova
> We might get some gym and sports for the mind too to give it some fake activity.
The Factorio devs are ahead of the curve on that front I guess.
chrisjj
> an AI waste-sorting assistant named Oscar can tell you where to put your used coffee cup.
A printed sign can do the same.
Try harder, A"I".
JKCalhoun
I get your point, but I confess I have sometimes had to pause for quite a while to decide whether I was holding a recyclable, a compostable, or landfill trash — looking at the little pictures, in fact, hoping I can find the thing I am holding.
Yeah, but otherwise, the whole MIT Media Lab thing increasingly tastes a little bitter; it's not the glamorous, enviable place it seemed in decades past.
Rather than looking for the next internet-connected wearable, for some reason, increasingly, I keep thinking about Bruce Dern's character in the film Silent Running.
doix
It's much worse in South Korea; I think there were at least 5 different bins with different signs. Most things you bought had a label on them, and you could try to match the letters to what was on the bin. Except the labels weren't perfectly matched up, and some bins had signs that didn't correspond to anything written on the package.
I eventually gave up and only ate out to avoid having to deal with it.
loloquwowndueo
Yes, but… as an example, in some cities the signs specifying whether parking is allowed can be impossible to decipher. Sometimes it feels like an AI would be needed to tell you “can I park this particular vehicle here right now, and for how long?”
Not that I’d trust an AI to get it right - but people already don’t.
chrisjj
> the signs specifying whether parking is allowed can be impossible to decipher.
In UK, works as designed... to maximise penalty earnings.
raincole
If figuring out which used item belongs to which category isn't easy, it means your community/city/state/country is doing recycling wrong.
CaptainOfCoit
Until you realize there are more objects that people aren't able to sort correctly than there is space on the signs next to the bins.
chrisjj
Recycling labels are a thing.
CaptainOfCoit
Not everything people throw away has those labels on it. Now what?
chrisjj
> Gerlich recently conducted a study, involving 666 people of various ages, and found those who used AI more frequently scored lower on critical thinking. (As he notes, to date his work only provides evidence for a correlation between the two: it’s possible that people with lower critical thinking abilities are more likely to trust AI, for example.)
Key point. The top use case for "Artificial Intelligence" is lack of natural intelligence.
PS Cute choice of sample size.
keybored
There are many on HN that claim that only programmers that are already really good can leverage AI. And then the sky’s the limit basically.
Maybe both are correct, because most people are not using AI to generate their next SaaS passive-income whatever.
chrisjj
> There are many on HN that claim that only programmers that are already really good can leverage AI.
So all we need is a ban on every other programmer's employment of it.
I'll wait :)
fuzzfactor
More than just both can be correct.
>top use case for "Artificial Intelligence" is lack of natural intelligence.
Also true if you think about a situation where there is just not enough natural intelligence to accomplish something within its scope.
Maybe there never was enough natural intelligence for something or other, or maybe not enough any more.
It could be a lot more acceptable to settle for artificial in those cases, more so than average, especially if there is a dire need.
But first you have to admit the dire lack of natural intelligence :\
chrisjj
> It could be a lot more acceptable to settle for artificial in those cases more so than average, especially if there is a dire need...
...from the lacking intelligence, sure.
But from anyone else?
bamboozled
It's initially true but I think there is a human tendency to outsource, that's where the dragons lie.
tsoukase
Until recently, every technological advancement replaced manual work, as in agriculture, transportation, and industry. Even the tiniest car amenity, like electric windows, hydraulic brakes, or touch-screen entertainment, aims to replace a limb movement. With AI, for the first time, technology offloads cognitive tasks directly, leading inevitably to mental atrophy. The hopeful scenario is to repurpose the brain for new activities instead of letting it rot, just as replacing manual labor created the opportunity for sports instead of getting fat.
softwaredoug
I find when I do a lot of AI coding, or any "blocking" task with AI, I inevitably end up scrolling social media. And I feel dumber.
I'm left wondering whether I should have just hand-coded what I was doing a bit slower, but kept my attention focused on the task.
skylurk
Waiting for the LLM is the best time to do the deeper review of the last output.
I like to fire the model off to do exploratory implementations as I refine the existing work.
alright2565
This sounds nice, but what I've run into is that the model fails to write changes if the code has changed under it. A better tool, where it takes a snapshot at the start of each non-interactive segment, and then resolves merge conflicts with my manual changes automatically, would make this much easier.
skylurk
I run the model against its own dev branch and either cherry-pick commits or do merges the old-fashioned way.
I'm using Aider though, which makes this easy: it's just another tab in the terminal.
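For anyone unfamiliar with the workflow, here is a minimal sketch of it in plain git. The branch name `ai-dev`, the commit messages, and the throwaway repo are all made up for illustration; Aider manages its own commits, but the review-then-cherry-pick step looks like this:

```shell
# Sketch of the branch-per-model workflow described above.
# "ai-dev" is a hypothetical branch name; a fresh throwaway
# repo stands in for a real project.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "init"

# The assistant commits to its own branch, leaving main untouched.
git checkout -q -b ai-dev
echo "model-generated change" > feature.txt
git add feature.txt
git -c user.email=me@example.com -c user.name=me \
    commit -q -m "ai: add feature.txt"

# Review on main, then pull over only the commits that pass muster
# (or do a full merge the old-fashioned way).
git checkout -q main
git -c user.email=me@example.com -c user.name=me cherry-pick ai-dev
```

The point of the separate branch is that your own edits and the model's never collide in the working tree, which sidesteps the "code changed under it" problem mentioned above.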
raincole
Do (did) you feel the same when waiting for code to compile? Or waiting for CI/CD?
The earlier days of programming had more "blocking", since compilation was quite slow. So the issue obviously isn't the "blocking"; it's social media.
softwaredoug
I used to work on a project that could take 30 mins+ to compile the entire project.
Nearly every time, problems were detected _early_ in the process. And because incremental build systems exist, builds don't take 30 minutes on average: they recompile only what's changed, so you see problems almost instantly.
It's _WAY_ more efficient for human attentional flow than waiting for AI to reason about some change while I tap my fingers.
vdupras
That is both a sweeping generalization and plainly wrong. The "much earlier" days of programming had blazing fast compilers, like Turbo Pascal. The "earlier" days had C compiler that were plenty fast. Only languages like C++ had this kind of problem.
Worst offenders like Rust are "today", not "earlier".
JPLeRouzic
I am old enough to remember a time when compilation could last minutes or even one hour, depending on what you compiled, and it was in the late 1980s.
prerok
In C we had to resort to tricks like precompiled headers to get any sort of sensible compilation times, and it still took a minute for a decent-sized library.
C++ was/is even worse, what with the generation of all the templated code and then through-the-roof link times while the linker sorts out all the duplicate template instantiations (OK, Solaris had a different approach, but I guess that's a nitpick).
I have not worked on any large project in Pascal, but friends worked with Delphi and I remember them complaining how slow it was.
So, in my experience, it really was slow.
Scene_Cast2
If you need an attention sink, try chess! Pick a time control if it's over 2 minutes of waiting, and do puzzles if it's under. I find that there's not much of a context switch when I get back to work.
everdrive
I'm having the same problem. LLMs really take me out of the task mentally. It feels like studying as a kid. I need to really make a concerted effort; the task is no longer engaging on its own.
softwaredoug
As someone with focus problems, I find it more productive to have a conversation with ChatGPT (or Claude) about code. And avoid letting it make major changes. And hand code a lot with Copilot.
bamboozled
yet you kind of feel "too slow" if you don't use one?
everdrive
It depends. For a task I know well the LLM is often much worse. If I'm being asked to do something brand new, the LLM does speed me up quite a bit and let me build something I might have gotten stuck on otherwise. The problem is that although I did "build the thing," it's not clear I really gained any meaningful skills. It feels analogous to watching a documentary vs. reading a book. You learned _something_, but it's honestly pretty superficial.
Larrikin
How slowly the AIs respond provides some opportunity to work on two tasks at once: investigate a bug, think through the implementation of something larger, or edit code where experience tells you it would take just as much or more typing to have the LLM do it.
It's less cool than having a future robot do it for you while you relax, but if you enjoy programming it brings some of the joy back.
softwaredoug
Human attention doesn't work this way. This type of task switching makes both tasks fairly inefficient.
"What was I doing again!?" is a big problem
basisword
They're not that slow! You want me to believe we've gone from programmers being so fragile that disturbing their 'flow state' will lose them hours of productivity, to programmers being the ultimate multitaskers who can think and code while their LLM takes 10 seconds to respond? /s
loloquwowndueo
You could do other work while the AI does one task :)
That’ll likely degenerate into “I want my AI to do dishes and laundry so I can code, not code so I can do my dishes and laundry”
softwaredoug
That's even worse, because I'm task switching between two intensive tasks
loloquwowndueo
Sorry no, nothing is worse than doomscrolling social media.
bamboozled
The worst part is when you find out your vibe coded stuff didn't actually work properly in production and you introduced a bug while being lazy. It's really easy to do.
terminalshort
Judging by the comments on any HN post that has to do with finance, yes.
zvmaz
Insulting title. Sorry, but no thanks.
DavidPiper
Genuine question: What do you find insulting about the title?
While it's not presenting anything new, the article does cover a number of important talking points in an accessible way.
zvmaz
> What do you find insulting about the title?
The title itself. Without reading the article, I can sense the "we are living in a stupid age" arrogant trope characteristic of the "winning" social classes.
jmclnx
I do not think so when compared to past ages. The only difference now is we get to see "stupid" due to the internet.
In the past, the majority of people who could be heard by the "masses" tended to be educated and wealthy. Now, everyone gets a voice.
But seems the article is more about AI and how it may make us stupider. Which I have no opinion on.
zkmon
The article sounds like a cliché. The progression was always happening; nothing is sudden. Just like the continuous movement of tectonic plates and earthquakes: when tension between the plates reaches a threshold, rupture happens. But it is not the rupture causing the tectonic movement; it is the opposite.
Things like electricity, computers, internet, smartphones and AI are those earthquakes caused by the tectonic movement towards dominance of the machine.
The goal of human progress was to make everything easier. Tools came up to augment human abilities, both physical and mental, so that humans can free themselves from all hard work of physical labor and thinking.
We do gym and sports as the body needs some fake activity to fool it into believing that we still need all that muscle strength. We might get some gym and sports for the mind too to give it some fake activity.
Remember, the goal is to preserve ourselves as physical beings while not really doing any hard work.