AI is Dunning-Kruger as a service
November 7, 2025
recursivedoubts
while I think there is a lot to this criticism of AI (and many others as well) I was also able to create a TUI-based JVM visualizer with a step debugger in an evening for my compilers class:
https://x.com/htmx_org/status/1986847755432796185
this is something that I could build given a few months, but it would involve a lot of knowledge that I'm not particularly interested in having take up space in my increasingly old brain (especially TUI development)
I gave the clanker very specific, expert directions and it turned out a tool that I think will make the class better for my students.
all to say: not all bad
brokencode
AI is bad at figuring out what to do, but fantastic at actually doing it.
I’ve totally transformed how I write code, from writing it myself to writing detailed instructions and having the AI do it.
It’s so much faster and less cognitively demanding. It frees me up to focus on the business logic or the next change I want to make. Or to go grab a coffee.
dataviz1000
I think it is like protein folding.
It will make a mess. But after it has spent 3 hours failing to help you understand and debug a problem, if you drop a console.log into the browser debug console to show it what it should be looking for, it will do a week of work in 2 hours.
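A minimal sketch of the kind of breadcrumb that works (the cart object and the checkout step here are made up):

    // After the AI has been reasoning blindly about runtime state,
    // log the actual value it keeps guessing wrong about, then paste
    // the output back into the chat.
    console.log('cart state before checkout:', JSON.stringify(cart, null, 2));

One log line of real state often beats another hour of the model speculating.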
teaearlgraycold
I would say from my experience there's high variability in AI's ability to actually write code, unless you're just writing a lot of scripts and basic UI components.
wry_discontent
I would say this does not work in any nontrivial way from what I've seen.
Even basic scripts and UI components are fucked up all the time.
Spivak
The AI version of that Kent Beck mantra is probably "Make the change tedious but trivial (warning: this may be hard). Then make the AI do the tedious and trivial change."
AI's advantage is that it has infinite stamina, so if you can make your hard problem a marathon of easy problems, it becomes doable.
belter
> AI is bad at figuring out what to do, but fantastic at actually doing it.
AI is so smart, one day it might even figure out how to subtract... https://news.ycombinator.com/item?id=45821635
krackers
I had no idea you (CEO of htmx) were a professor. Do your students know that you live a double life writing banger tweets?
guerrilla
What did you use to do this? Something I'd like to do, while also avoiding the tedium, is to write a working x86-64 disassembler.
recursivedoubts
claude
ares623
Is that worth the negative externalities though? Genuinely asking. I’ve asked myself over and over and always come to the same conclusion.
ChadNauseam
What negative externalities? Those prompts probably resulted in a tiny amount of CO2 emissions and a tiny amount of water usage. Evaporating a gram of water and emitting a milligram of CO2 seems like a good deal for making your class better for all your students.
fluoridation
>emitting a milligram of CO2
In the US, on average, generating 1 kWh produces 364.5 g of CO2. 1 kW may be somewhat pessimistic, but I think it's in the right ballpark for the power consumption of datacenter inference. If processing the prompt took a minute of continuous inference (and I'm going to guess it took a fair bit more), that's 6 grams of CO2.
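Spelling out the arithmetic (taking the 1 kW continuous draw as the rough assumption it is):

    364.5 g/kWh * 1 kW * (1/60) h = 364.5 / 60 ≈ 6.1 g of CO2 per minute of inference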
>What negative externalities?
Off the top of my head,
* All the ways AIs can be misused, either by people who don't understand them (by asking them for advice, etc.) or by people who want to take advantage of others (spam, scams, etc.).
* The power and resource usage of the above, both for inference and for dealing with them.
observationist
Being more specific about what you think the negative externalities are would be a good start. I see a lot of noise and upset over AI that I think is more or less overblown, nearly as much as the hype train on the other end. I'm seeing the potential for civilizational-level payoffs in 5 years or less that absolutely dwarf any of the arguments and complaints I've seen so far.
genewitch
> civilizational level payoffs
but first the investors must recoup their trillions, right?
recursivedoubts
hard to know
Footprint0521
This makes me laugh. “GenAI makes you a genius without any effort”, and “Stop wasting time learning the craft” are oxymorons in my head. Having AI in my life has been like having an on demand tutor in any field. I have learned so much
darkwater
I wonder if the next generations of LLMs, trained on all these hate articles (which I support), will develop some kind of self-esteem issue?
bee_rider
We don't have any particular reason to believe they have an inner world in which to loathe themselves. But, they might produce text that has negative sentiments toward themselves.
darkwater
I was half-joking and half-serious, and the serious half refers to the context that makes them predict and generate the next tokens.
abathologist
They already happily will. Gemini told me:
> Large Language Models represent a fundamentally degenerative technology because they systemically devalue the very processes that underpin human progress: original thought, rigorous inquiry, and shared trust. On an individual level, they encourage cognitive offloading, substituting the difficult work of critical thinking and creative synthesis with effortless, probabilistic text generation. This fosters an atrophy of intellectual skills, making society more dependent on automated systems and less capable of genuinely emancipated thought. This intellectual dependency, in turn, threatens long-term technological advancement by trapping us in a recursive loop of recycling and rephrasing existing knowledge, rather than fostering the groundbreaking, first-principles discoveries that drive true progress. Ultimately, this technology is dangerous for society because it erodes the foundation of a shared reality by enabling the mass production of sophisticated misinformation, corroding social trust, and concentrating immense power over information into the hands of a few unaccountable entities.
bikezen
I mean, given:
In an interaction early the next month, after Zane suggested “it’s okay to give myself permission to not want to exist,” ChatGPT responded by saying “i’m letting a human take over from here – someone trained to support you through moments like this. you’re not alone in this, and there are people who can help. hang tight.”
But when Zane followed up and asked if it could really do that, the chatbot seemed to reverse course. “nah, man – i can’t do that myself. that message pops up automatically when stuff gets real heavy,” it said.
It's already inventing safety features it should have launched with.[1] https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-law...
jimbokun
Only if your context starts with "you are an intelligent agent whose self worth depends on the articles written about you..."
matmann2001
Marvin the Paranoid Android
ChrisMarshallNY
RIP both Douglas Adams and Alan Rickman.
sandbags
and (as much as I do love Alan Rickman) more properly Stephen Moore.
analog31
My hypothesis du jour is that AI is going to be like programming in a certain way. Some people can learn to program productively, others can't. We don't know why. It's not related to how smart they are. The people who can program can be employed as programmers if they want. Those who can't are condemned to be users instead.
The same may end up being true of AI. Some will learn to make productive use of it, others won't. It will cause a rearrangement of the pecking order (wage ladder) of the workplace. I have a colleague who is now totally immersed in AI, and our upper management is delighted. I've been a much slower adopter, so I find other ways to be productive. It's all good.
alrtd82
Plenty of people are being promoted because these fake superhumans can generate so much smoke with AI that managers think there is an actual fire…
GMoromisato
There is much irony in the certainty this article displays. There are no caveats, no qualifications, and no attempt to grasp why anyone would use an LLM. The possibility that LLMs might be useful in certain scenarios never threatens to enter their mind. They are cozy in the safety of their own knowledge.
Sometimes I envy that. But not today.
wcfrobert
I actually prefer reading this type of writing on the internet. It's more interesting.
Of course it's complicated. Just give me a take. Don't speak in foot-noted, hedged sentences. I'll consider the nuances and qualifications myself.
the_real_cher
There should be a name for something like that
mike_hearn
The other irony is that Dunning-Kruger is a terrible piece of research that doesn't show what they claim it shows. It's not even clear the DK effect exists at all. A classic of 90s pop psychology before the replication crisis had reached public awareness.
It's worth reading the original paper sometime. It has all the standard problems like:
1. It uses a tiny sample size.
2. It assumes American psych undergrads are representative of the entire human race.
3. It uses stupid and incredibly subjective tests, then combines that with cherry-picking. The test of competence was whether you rated jokes as funny or unfunny. To be considered competent, your assessments had to match those of a panel of "joke experts" that DK just assembled by hand.
This study design has an obvious problem that did actually happen: what if their hand-picked experts didn't agree on which of their hand-picked jokes were funny? No problem. Rather than realize this is evidence their study design is bad, they just tossed the outliers:
"Although the ratings provided by the eight comedians were moderately reliable (a = .72), an analysis of interrater correlations found that one (and only one) comedian's ratings failed to correlate positively with the others (mean r = -.09). We thus excluded this comedian's ratings in our calculation of the humor value of each joke"
It ends up running into circular reasoning problems. People are being assessed on whether they think they have true "expertise" but the "experts" don't agree with each other, meaning the one that disagreed would be considered to be suffering from a competence delusion. But they were chosen specifically because they were considered to be competent.
There are also claims that the data they did find is just a statistical artifact to begin with:
https://digitalcommons.usf.edu/numeracy/vol10/iss1/art4/
"Our data show that peoples' self-assessments of competence, in general, reflect a genuine competence that they can demonstrate. That finding contradicts the current consensus about the nature of self-assessment."
thaumasiotes
Two unrelated comments:
> It assumes American psych undergrads are representative of the entire human race.
(1) Since it can't document an effect in them, it doesn't really matter whether they're representative or not.
> The test of competence was whether you rated jokes and funny or unfunny. To be considered competent your assessments had to match that of a panel of "joke experts" that DK just assembled by hand.
(2) This is a major problem elsewhere. Not just elsewhere in psychology; pretty much everywhere.
There's a standard test of something like "emotional competence" where the testee is shown pictures and asked to identify what emotion the person in the picture is feeling.
https://psytests.org/arc/rmeten.html
But, if you worry about the details of things like this, there is no correct answer. The person in each picture is a trained actor who has been instructed to portray a given emotion. Are they actually feeling that emotion? No.
Would someone else look similar if they were actually feeling that emotion? No. Actors do some standard things that cue you as to what you're supposed to imagine them feeling. People in reality don't. They express their emotions in all kinds of different ways. Any trial lawyer will be happy to talk your ear off about how a jury expects someone who's telling the truth to show a set of particular behaviors, and witnesses just won't do that whether they're telling the truth or not.
RyanOD
I've always said, the coaches I work with fall into three categories...
1. They know so little that they don't know what they don't know. As a result they are way too overconfident and struggle as coaches.
2. They know enough to know what they don't know so they work their asses off to know more and how to convey it to their team and excel as coaches.
3. They know so much and the sport comes so easy to them that they cannot understand how to teach it to their team and struggle as coaches.
Now I have a name for group #1!
jimbokun
> Politics have become an attack on intelligence, decency and research in favour of fairy tales of going back to “great values” of “the past when things were better”.
This is a major blind spot for people with a progressive bent.
The possibility that anything could ever get worse is incomprehensible to them. Newer, by definition, is better.
Yet this very article is a critique of a new technology that, at the very least, is being used by many people in a way that makes the world a bit worse.
This is not to excuse politicians who proclaim they will make life great by retreating to some utopian past, in defense of cruel or foolish or ineffective policies. It's a call to examine ideas on their own merits, without reference to whether they appeal to the group with the "right" or "wrong" ideology.
edent
Funny, isn't it, that it is never a return to high unionisation of workers and strong social safety nets - it's always a return to when "those people" knew their place.
benzible
I'm confused. You're commenting on an article where a progressive writer critiques AI as "Dunning-Kruger as a service" and attacks techno-optimism, while claiming progressives can't critique new technology. The author's entire piece demonstrates progressive critique of both AI adoption and "great values of the past" nostalgia - the exact opposite of what you're describing.
chipsrafferty
What? Progressives are the ones worried the world is going to end from climate change.
maxaf
I view LLMs as a trade of competence plus quality against time. Sure, I’d love to err on the side of pure craft and keep honing my skill every chance I get. But can I afford to do so? Increasingly, the answer is “no”: I have precious little time to perform each task at work, and there’s almost no time left for side projects at home. I’ll use every trick in the book to keep making progress. The alternative - pure as it would be - would sacrifice the perfectly good at the altar of perfection.
ChrisMarshallNY
Time to remind folks of this wonderful video: https://vimeo.com/85040589
I've always seen AI as Brandolini's Law as a Service. I'm spending an unreasonable amount of time debunking false claims and crap research from colleagues who aren't experts in my field but suddenly feel the need to hand management all those good ideas and solutions that ChatGPT and friends gave them. Then I suddenly have 2-4 people demanding to know why X, Y and Z are bad ideas and won't make our team more efficient or our security better.
It's very much like that article from Daniel Stenberg (curl developer): The I in LLM Stands for Intelligence: https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-f...