ChatGPT Is a Gimmick
115 comments
· May 22, 2025

keiferski
lm28469
> These “AI is a gimmick that does nothing” articles mostly just communicate to me that most people lack imagination.
Either that, or different people have different views on life, tech, &c. If you're not going through life like some sort of min-max RPG, not using an LLM to "optimise" every single aspect of your life is perfectly fine. I don't need an LLM to summarise an article; I want to read it during my 15 min coffee time in the morning. I don't need an LLM to tell me how my text should be rewritten to look like the statistical average of a good text...
keiferski
That’s perfectly fine, but the article is making a broad statement, not an individual opinion.
lm28469
For the vast majority of people, LLMs are deep in gimmick territory: the funny thing you use to generate your Ghibli-style profile image or the HR email you can't be bothered writing.
If you're not part of a very small subset of tech enthusiasts, or of companies directly profiting from it, it really isn't that big of a deal.
nsteel
> These “AI is a gimmick that does nothing” articles
I don't think that's an accurate summary of this article. Are you basing that just on the title, or do you fundamentally disagree with the author here?
> We call something a gimmick, the literary scholar Sianne Ngai points out, when it seems to be simultaneously working too hard and not hard enough. It appears both to save labor and to inflate it, like a fanciful Rube Goldberg device that allows you to sharpen a pencil merely by raising the sash on a window, which only initiates a chain of causation involving strings, pulleys, weights, levers, fire, flora, and fauna, including an opossum. The apparatus of a large language model really is remarkable. It takes in billions of pages of writing and figures out the configuration of words that will delight me just enough to feed it another prompt. There’s nothing else like it.
keiferski
Not sure how that definition of a gimmick applies to what I wrote. Labeling AI tools as gimmicks would imply that they both save labor and inflate it and therefore offer no real fundamental improvements or value.
In my own experience, that is absolute nonsense, and I have gotten immense amounts of value from it. The critical arguments (like the link) are almost always from people who use them as basic chatbots, without any deeper understanding or exploration of the tools.
-__---____-ZXyw
Another commenter on here talked about AI's ability to "impress an idiot". I see lots of this. Your usage sounds decidedly unidiotic, and I'm not saying you are an idiot, but it sounds like your view of the criticism is that everyone who isn't using it as cleverly as you is essentially an idiot who simply hasn't realised how to get to a "deeper understanding" in the "exploration" of these tools.
Please consider that there are some very clever people out there. I can respond to your point about languages personally - I speak three, and have lived and operated for extended periods in two others which I wouldn't call myself "fluent" in as it's been a number of years. I would not use an LLM to generate images for each word, as I have methods that I like already that work for me, and I would consider that a wasteful use of resources. I am into permacomputing, minimising resources, etc.
When I see you put the idea forward, I think, oh, neat, but surely it'd be much more effective if you did a 30s sketch for each word, and improved your drawing as you went.
In summary - do read the article, it's very good! You're responding to an imagined argument based on a headline, ignoring a nuanced and serious argument, by saying: "yeah, but I use it well, so?! It's not a gimmick then, for me!"
kumarvvr
> value out of AI (specifically ChatGPT and Midjourney)
The one area where I would agree that AI and ML tools have been surprisingly good is art generation.
But then I see the flood of AI-generated pictures and, overall, feel it has made an already troublesome world even more troublesome. I am starting to see "the picture is AI-made, or AI-modified" excuses entering the mainstream.
A picture, now, has lost all meaning.
> be useful for “thinking” or analyzing a piece of writing
This I am highly skeptical of. If you train an LLM on text saying "trains can fly", then it spits that out. They may be good as summarizing or search tools, but to claim they are "thinking" and "analyzing", nah.
keiferski
The fact that most AI art is generic garbage just reflects the lack of imagination most people have when making it. Sad but true. The actual tools themselves are incredible.
And I meant myself thinking and analyzing a piece of writing with the help of ChatGPT, not ChatGPT itself "thinking." (Although I frankly think whether the machine is "thinking" is somewhat of an irrelevant point.) Because I have absolutely gained tons of new insights and knowledge by asking ChatGPT to analyze an idea and suggest similar concepts.
namaria
> Because I have absolutely gained tons of new insights and knowledge by asking ChatGPT to analyze an idea and suggest similar concepts.
Are you going to test them by building something or using these concepts in conversation with specialists?
professor_v
Your examples are both quite gimmicky and not a fundamental value shift.
danlitt
It is refreshing to see I am not the only person who cannot get LLMs to say anything valuable. I have tried several times, but the cycle "You're right to question this. I actually didn't do anything you asked for. Here is some more garbage!" gets really old really fast.
It makes me wonder whether everyone else is kidding themselves, or if I'm just holding it wrong.
loveparade
I use LLMs to check solutions for graduate-level math and physics problems I'm working on. Can I 100% trust their final output? Of course not, but I know enough about the domain to tell whether they discovered mistakes in my solutions or not. And they do a pretty good job, and have found mistakes in my reasoning many times.
I also use them for various coding tasks and they, together with agent frameworks, regularly do refactoring or small feature implementations in 1-2 minutes that would've taken me 10-20 minutes. They've probably increased my developer productivity by 2-3x overall, and by a lot more when I'm working with technology stacks that I'm not so familiar with or haven't worked with for a while. And I've been an engineer for almost 30 years.
So yea, I think you're just using them wrong.
bsaul
i could have written all of this myself. I use it for exactly the same purposes (except i don't do undergrad physics, just maths) and with the same outcome.
It's also pretty useful for brainstorming: talking to AI helps you refine your thoughts. It probably won't give you any innovative ideas, only a survey of mainstream ones, but it's a pretty good start for thinking about a problem.
alkonaut
I think this is the key. If you have a problem where it's slow to produce a plausible answer but quick to check whether an answer is correct (writing a shell script, solving an equation, making up a verse for a song), then you have a good tool. It's the prime-factorization category of problems. Recognizing when you have one, and going to an LLM when you do, is key.
But what if you _don't_ have that kind of problem? Yes, LLMs can be useful for the above. But for many problems, you ask for a solution and what you get is a suggested solution that takes a long time to verify. Meaning: unless you are somewhat sure it will solve the problem, you don't want to do it. You need some estimate of confidence, and LLMs are useless for this. As a developer I find my problems are very rarely in the first category and far more often in the second.
Yes, it's "using them wrong". It's making them do what they struggle with. But it's also what I struggle with. It's hard to stop yourself when you have a difficult problem and you are weighing googling it for an hour against chatgpt-ing it for an hour. But I often regret going the ChatGPT route after several hours.
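The asymmetry alkonaut describes is the same one behind prime factorization: finding the factors is slow, but checking a *claimed* factorization is nearly instant. A toy sketch of both directions (purely illustrative, nothing LLM-specific):

```python
from math import isqrt

def factorize(n):
    """Slow direction: find the prime factorization by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, isqrt(n) + 1))

def check_factorization(n, claimed):
    """Fast direction: verify a claimed factorization in one pass.
    This is the position you want to be in with an LLM answer."""
    product = 1
    for p in claimed:
        product *= p
    return product == n and all(is_prime(p) for p in claimed)
```

A shell script or an equation solution has the same shape: cheap to test, so an LLM's unreliability costs little. A subtle architectural suggestion does not, and that is where the hours disappear.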
1a527dd5
I think it's starting to change.
I'm an AI sceptic (and generally disregard most AI announcements). I don't think it's going to replace SWE at all.
I've been chucking the same questions at both Gemini and GPT, and I'd say until about 8 months ago they were both as bad as each other and basically useless.
However, recently Gemini has gotten noticeably better and has never hallucinated.
I don't let it write any code for me. Instead I treat Gemini as an engineer with 10+ YoE in {{subject}}.
Working as a platform engineer, my subjects are broad, so it's very useful to have a rubber duck ready to go on almost any topic.
I don't use copilot or any other AI. So I can't compare it to those.
-__---____-ZXyw
YoE means "Years of Experience", for anyone interested. I had to look it up, and perhaps I can save a different me some time.
badmintonbaseba
I mostly use it as a replacement for a search engine and for exploration, mostly on subjects that I'm learning from scratch, where I don't yet have a good grasp of the official documentation or the right keywords. It competes with searching for guides in traditional search engines, but that's easy to beat on today's SEO-infested web.
Its quality seems to vary wildly between subjects, but annoyingly it presents itself with uniform confidence.
mnky9800n
This is why I like how Perplexity forces citations. I use it more as if I'm googling than caring about what the LLM writes. The LLM simply acts as a sometimes unreasonable interface to the search engine. So really, I'm more focused on whether the embeddings the LLM is trained on found some correlations between different documents, etc., that were not obvious to a different kind of search engine.
wazoox
Perplexity often quotes references that simply don't exist. Recent examples provided by Perplexity:
Google Cloud. (2024). "Broadcast Transformation with Google Cloud." https://cloud.google.com/solutions/media-entertainment/broad...
Microsoft Azure. (2024). "Azure for Media and Entertainment." https://azure.microsoft.com/en-us/solutions/media-entertainm...
IBC365. (2023). "The Future of Broadcast Engineering: Skills and Training." https://www.ibc.org/tech-advances/the-future-of-broadcast-en...
Broadcast Bridge. (2023). "Cloud Skills for Broadcast Engineers." https://www.thebroadcastbridge.com/content/entry/18744/cloud...
SVG Europe. (2023). "OTT and Cloud: The New Normal for Broadcast." https://www.svgeurope.org/blog/headlines/ott-and-cloud-the-n...
None of these exist, neither at the provided URLs nor elsewhere.
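One low-effort habit that catches this: pull every URL out of the answer before trusting any of it. A minimal sketch (the regex is deliberately crude, and the sample text reuses two of the truncated citations above; note that a URL ending in "..." can't even be opened, which is itself a red flag):

```python
import re

# Sample citation block, as pasted from an LLM answer.
answer = """
Google Cloud. (2024). "Broadcast Transformation with Google Cloud." https://cloud.google.com/solutions/media-entertainment/broad...
IBC365. (2023). "The Future of Broadcast Engineering: Skills and Training." https://www.ibc.org/tech-advances/the-future-of-broadcast-en...
"""

URL_RE = re.compile(r"https?://[^\s\"']+")

def cited_urls(text):
    """Extract every URL an LLM cites, so each one can be checked by
    hand (or fetched with urllib) instead of taken on faith."""
    return URL_RE.findall(text)

for url in cited_urls(answer):
    print(url)
```

Actually fetching each URL and checking for a 404 is the obvious next step, but even the manual list makes fabricated references hard to miss.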
pishpash
You're over-representing the usefulness here. On topics where traditional search reaches a dead end, you will find the AI citations to be the same ones you might have found, except that upon checking, they were clearly misread or misrepresented. Dangerous and a waste of time.
It's much more helpful on popular topics where summarization itself is already high quality and sufficient.
mnky9800n
I dunno. I think of it like the recommendation engine on Netflix. I don't like everything Netflix tells me to watch. Same with Perplexity: I don't agree with everything it suggests to me. People need to stop expecting the computer to think for them and instead see it as a tool to amplify their own thinking.
terhechte
Can you give some examples where it didn't work for you? I'm curious because I derive a lot of value from it and my guess is that we're trying very different things with it.
wazoox
Not OP, but yesterday I was working on NFS server tuning on Linux, a typically quite difficult thing to find relevant info about through search engines. I asked Claude 3.5 to suggest some kernel settings or compile-time tweaks, and it provided me with entirely made-up answers about kernel variables that don't exist and makefile options that don't exist.
So maybe another LLM would have fared better, but still, so far it's mostly been wasted time. It works quite well for summarising texts and creating filler images, but overall I still find them not reliable enough outside of these two limited use cases.
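For the specific case of invented kernel variables, there is a cheap sanity check: real sysctls appear as files under /proc/sys, so an LLM's suggestions can be vetted before any time is spent on them. A sketch (the `base` parameter exists only so the function can be exercised off a real Linux box; `sunrpc.tcp_slot_table_entries` is a real NFS-related tunable):

```python
from pathlib import Path

def sysctl_exists(name, base="/proc/sys"):
    """Check that a kernel tunable actually exists, e.g.
    'sunrpc.tcp_slot_table_entries' -> /proc/sys/sunrpc/tcp_slot_table_entries."""
    return (Path(base) / name.replace(".", "/")).is_file()

def vet_suggestions(names, base="/proc/sys"):
    """Partition LLM-suggested sysctl names into real and made-up."""
    real = [n for n in names if sysctl_exists(n, base)]
    fake = [n for n in names if not sysctl_exists(n, base)]
    return real, fake
```

It obviously doesn't tell you whether a setting is a *good* idea, only whether it exists, but that alone filters out the pure hallucinations.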
Yiin
I mean, you answered your own question about why it didn't work: if there is no useful data in its training corpus, it would be a miracle if it could correctly guess unknown information.
exe34
From my experience so far, most "AI skeptics" seem to be trying to catch the LLM in an error of reasoning or asking it to turn a vague description into a polished product in one shot. To make the latter worse, they often try to add context after the first wrong answer, which tends to make the LLM continue to be wrong - stop thinking about the pink elephant. No, I said don't think about the pink elephant! Why do you keep mentioning the pink elephant? I said I don't want a pink elephant in the text!
sausagefeet
I've had the same feeling for a while. I tried to articulate it last night, actually, I don't know with how much success: https://pid1.dev/posts/ai-skeptic/
guappa
I use them to troll. Like when I want to make an annoying coworker angry, I tell chatgpt to write an overly long and very AI-sounding reply saying what I need to say.
otabdeveloper4
It works better if you treat it like a compressed database of Google queries. (Which it kind of actually is.)
Ask it something where the Google SERP is full of trash and you might have a more sane result from the LLM.
ddxv
I personally feel like some of the AI hype is driven by its ability to create flashy demos which become dead-end projects.
It's so easy to spin up an example "write me a sample chat app" or whatever and be amazed how quickly and fully it realizes this idea, but it does kinda beg the question, now what?
I think in the same way that image generation is akin to clipart (wildly useful, but lacking in depth and meaning) the AI code generation projects are akin to webpage templates. They can help get you started, and take you further than you could on your own, but ultimately you have to decide "now what" after you take that first (AI) step.
cess11
"It's so easy to spin up an example "write me a sample chat app" or whatever and be amazed how quickly and fully it realizes this idea, but it does kinda beg the question, now what?"
Which we already had; it's just a 'git clone https://github.com/whatevs/huh' away, or one of millions of tutorials on whatever topic. Pretty much everyone who can build something out of Elixir/Phoenix has a chat app, an e-commerce store, and a scraping platform just lying around.
th0ma5
The demos I see all make compromises, in order to work, that hobble you from hardening them, or otherwise lock you into very specific conceptualizations that you simply wouldn't have if you were building from the smallest low-level building blocks, or even starting from a super-high-level state-machine placeholder. In my experience, no matter how hard I try, it is guided by the weights of the total generated output towards something that doesn't understand the value of compartmentalization, and will add tokens that make its probabilities work internally above all.
wiseowise
AI is a gimmick, smartphones are a gimmick, computers are a gimmick, automation is a gimmick, books are a gimmick; only %MY_ENLIGHTENMENT% is not.
Seriously, I understand saying something like this about crypto or whatever meme of the day, but even current LLMs are literal magic. Instead of reading 10 pages of empty water and wasting my time, ChatGPT can summarize this as
> Malesic argues that AI hype—especially in education—is a shallow gimmick: it overpromises revolutionary change but delivers banal, low-value outputs. True teaching thrives on slow, sacrificial human labor and deep discussion, which no AI shortcut can replicate.
Hardly any revolutionary thought.
AndrewDucker
Turns out that AI is not good at summarising things:
Gud
Ok, I disagree.
Out of curiosity, I used ChatGPT to make a summary of "FreeBSD vs Linux comparison", and it came out as extremely fair and to the point, in my opinion.
wiseowise
It was good enough at summarizing this empty rant.
johnb231
As usual the paper is dead on arrival. They tested with obsolete models and non-reasoning models.
Try again with any SOTA reasoning model (GPT-o3, Gemini 2.5 Pro, Grok 3).
lm28469
> Instead of reading 10 pages of empty water and wasting my time, ChatGPT can summarize this as
Definitely worth investing billions and wasting insane amounts of energy... idk how people hold both "this is a revolution!" and "it kinda summed up a 10-page pdf that I couldn't be bothered to read in the first place" without noticing the insane amount of mental gymnastics you have to go through to reconcile these two ideas.
Not even mentioning the millions of new LLM-generated pages that are now polluting the web.
mort96
Where did they say smartphones and computers are gimmicks?
wiseowise
https://www.theatlantic.com/technology/archive/2015/03/when-...
https://www.today.com/money/are-smartphones-making-us-lazy-t...
Etc., etc.
lm28469
I don't think you can "fear" something you consider a gimmick. Also, good fucking luck arguing smartphones don't have massive negative effects.
If LLMs were even 50% as good as they're pretending to be, we'd see huge productivity increases across the board. We simply don't, and it's been almost 3 years since ChatGPT was released. Where is the productivity increase? Where is the extra wealth generated?
elric
> “Human interaction is not as important to today’s students,” Latham claims
Goodness that's depressing. Is this going to crank individualism up to 11?
I remember hating having to do group projects in school. Most often, 3/5 of the group would contribute jack shit, while the remaining people had to pick up the slack. But even with lazy gits, the interactions were what made it valuable.
Maybe human-AI cooperation is an important skill for people to learn, but it shouldn't come at the cost of losing even more human-human cooperation and interaction.
DaSHacka
> Most often, 3/5 of the group would contribute jack shit, while the remaining people had to pick up the slack.
Never fear, nowadays 3/5 do squat with the 4th sending you largely-incoherent GPT sludge, before dropping off the face of the earth until 11:30PM on the night the assignment's due.
I've seen it said college is supposed to teach you the skills to navigate working with others more so than your specific field of study. Glad to see they've still got it.
BlindEyeHalo
For me the usefulness of LLMs is proportional to how shitty Google has become. When searching for something you get a bunch of blog spam or other SEO-optimised shit results, pages that open dozens of popups asking you to subscribe or make an account. ChatGPT gives you the answer immediately, and I must say I find it helpful 90% of the time.
For simple coding questions it is also very good because it takes your current context into account. It is basically a smarter "copy paste from stack overflow".
At least for now LLMs do not replace any meaningful work for me, but they replace google more and more.
lexandstuff
One of the realisations I've had recently is that the AI hype feels like another level from what's come before because AI itself is creating the "hype" content fed to me (and my bosses and colleagues) all over social media.
The FOMO tech people are having with AI is out of control - everyone assumes that everyone else is having way more success with it than they are.
namaria
A product that hypes itself. What a world. That does explain a lot of the cognitive dissonance going around.
pzo
I used AI to summarize this whole article and give me takeaways; it already saved me like 0.5h of reading something I would have disagreed with in the end, since the article is IMHO too harsh on AI.
I find AI extremely useful and an easy sell at $20/m, even when not used professionally for coding, and I'm a person who avoids any type of subscription like the plague.
Even in the educational setting this article mostly focuses on, it can be super useful. Not everyone has access to mentors and scholars. I saved a lot of time helping family with typical tech questions and troubleshooting by teaching them how to use it and to try to solve their tech problems themselves.
alkonaut
I always found myself to be very good at googling/searching, or at asking: like emailing an expert or colleague. I'm good at condensing what I'm trying to ask and good at knowing what they could be misunderstanding, or what follow-up questions they might have, to save some back-and-forth. The corresponding thing on Google is predicting what I might see, and adding negative search terms for it.
BUT, and this is I think why some of us feel ChatGPT is poor: asking in this way, which guides a human or a search engine, makes ChatGPT produce worse answers(!).
If you ask "What can be wrong with X? I'm pretty sure it's not Y or Z, which I ruled out; could it be Q or perhaps W?", then ChatGPT and other language models quickly reinforce your beliefs instead of challenging them. They would rather give you an incorrect reason why you are right than present an additional problem or challenge your assumptions. If LLMs could get over the bullshit problem, they would be so much better. Having confidence, and being able to express it, is invaluable. But somehow I doubt it's possible; if it were, they would be doing it already, as it's a killer feature. So I fear that it's somehow not achievable with LLMs? In which case the title is correct.
blixt
I think it's in human nature to force any topic to be all "good" or "bad". I agree with most criticisms this author has about the performance of AI -- it _is_ very bad at writing essays, and dare I say most things (including code), based on a single prompt. But to say it is a gimmick and compare it with technologies that died or are dying seems to me like a visceral response, perhaps after experiencing the overflow of AI-generated homework (a use of AI that ultimately just wastes everyone's time).
I think most people in here know at least a few ways they can use AI that is genuinely useful to them. I suppose if you're _very_ positive about AI, then it's good to have a polarized negative article to make us remember all the ways AI is being overpromised. I'm definitely very excited about finding new ways to apply AI, and that explorative phase can come off as trying to sell snake oil. We have to be realistic and acknowledge this is a technology that can produce content faster than we can consume it. Content that takes effort to distinguish useful vs. not.
All that said I disagree with the idea that the only way "to help students break out of their prisons, at least for an hour, so they can see and enhance the beauty of their own minds" is via teaching and not via technologies such as AI. The education system certainly failed me and I found a lot of joy in technology instead. For me it was the start of the internet, but I can only imagine for many today it will be the start of AI.
mort96
> I think most people in here know at least a few ways they can use AI that is genuinely useful to them
The only thing that really comes to mind is making something in a domain where I have almost no prior expertise.
But then ChatGPT is so frequently wrong, and so frequently repeatedly wrong when it tries to "correct" problems that are pointed out, that even then I always have to go and read the relevant documentation and rewrite the thing regardless. Maybe there's some slight usefulness here in giving me a starting point, but it's marginal.
blixt
My list of uses of AI includes:
- Turning a lot of data into a small amount of data, such as extracting facts from a text, translating and querying a PDF, cleaning up a data dump such as getting a clean Markdown table from a copy/pasted HTML source of a web page etc (IMO it often goes wrong when you go the other way and try to turn a small prompt into a lot of data)
- Creating illustrations representing ephemeral data (eg my daily weather report illustration which I enjoy looking at every day even if the data it produces is not super useful: https://github.com/blixt/sol-mate-eink)
- Using Cursor to perform coding tasks that are tedious but I know what the end result should look like (so I can spend low effort verifying it) -- it has an 80% success rate and I deem it to save time but it's not perfect
- Exploration of a topic I'm not familiar with (I've used o3 extensively while double checking facts, learning about laws, answering random questions that would be too difficult to Google, etc etc) -- o3 is good at giving sources so I can double check important things
Beyond this, AI is also a form of entertainment for me, like using realtime voice chat, or video/image generation to explore random ideas and seeing what comes out. Or turning my ugly sketches into nicer drawings, and so forth.
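The first item on blixt's list, turning a pasted HTML fragment into a clean Markdown table, is also doable deterministically for simple cases; a minimal stdlib sketch (real pages are much messier, which is exactly where an LLM earns its keep):

```python
from html.parser import HTMLParser

class TableToMarkdown(HTMLParser):
    """Collect the cells of one simple <table> as it is parsed."""
    def __init__(self):
        super().__init__()
        self.rows, self.row, self.cell, self.in_cell = [], [], "", False

    def handle_starttag(self, tag, attrs):
        if tag in ("td", "th"):
            self.in_cell, self.cell = True, ""

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self.in_cell = False
            self.row.append(self.cell.strip())
        elif tag == "tr" and self.row:
            self.rows.append(self.row)
            self.row = []

    def handle_data(self, data):
        if self.in_cell:
            self.cell += data

def html_table_to_markdown(html):
    """Render the first header row plus body rows as a Markdown table."""
    p = TableToMarkdown()
    p.feed(html)
    header, *body = p.rows
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(r) + " |" for r in body]
    return "\n".join(lines)
```

The contrast is the point: for a well-formed table this script is exact and free, while the LLM shines on the ragged real-world input where no twenty-line parser survives.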
isaacfrond
> The claim of inevitability is crucial to technology hype cycles, from the railroad to television to AI.
Well. You know. We still have plenty of railroads, and television has had a pretty good run too. So if those are the models to compare AI to, then I have bad news about how much of a 'hype cycle' AI is going to turn out to be.
bushbaba
About 60% of my job is writing. Writing slack, writing code, writing design docs, writing strategies, writing calibrations.
ChatGPT has allowed me to write 50%+ faster with 50%+ better quality. It’s been one of the largest productivity boosts in the last 10+ years.
windowshopping
One of?? Please tell me what other tools have been more impactful for you, I want to use them.
keiferski
These “AI is a gimmick that does nothing” articles mostly just communicate to me that most people lack imagination. I have gotten so much value out of AI (specifically ChatGPT and Midjourney) that it’s hard to imagine that a few years ago this was not even remotely possible.
The difference, it seems, is that I’ve been looking at these tools and thinking how I can use them in creative ways to accomplish a goal - and not just treating it like a magic button that solves all problems without fine-tuning.
To give you a few examples:
- There is something called the Picture Superiority Effect, which states that humans remember images better than merely words. I have been interested in applying this to language learning – imagine a unique image for each word you’re learning in German, for example. A few years ago I was about to hire an illustrator to make these images for me, but now with Midjourney or other image creators, I can functionally make unlimited unique images for $30 a month. This is a massive new development that wasn’t possible before.
- I have been working on a list of AI tools that would be useful for “thinking” or analyzing a piece of writing. Things like: analyze the assumptions in this piece; find related concepts with genealogical links; check if this idea is original or not; rephrase this argument as a series of Socratic dialogues. And so on. This kind of thing has been immensely helpful in evaluating my own personal essays and ideas, and prior to AI tools it, again, was not really possible unless I hired someone to critique my work.
The key for both of these example use cases is that I have absolutely no expectation of perfection. I don’t expect the AI images or text to be free of errors. The point is to use them as messy, creative tools that open up possibilities and unconsidered angles, not to do all the work for you.