"ChatGPT said this" Is Lazy
65 comments · October 24, 2025 · lrvick
beached_whale
As with any reference or any other person, one needs to question whether the ideas fit into one's mental models and verify them anyway. One could never just trust that something is true without at least a quick mental test. AI is no different from other sources here. As was drilled into us in high school: use multiple sources and verify them.
sanswork
I do not use Jr developers for engineering work and never will, because doing the work of a Jr.....
You don't have to outsource your thinking to find value in AI tools; you just have to find the right tasks for them, the same as you would with any developer junior to you.
I'm not going to use AI to engineer some new complex feature of my system, but you can bet I'm going to use it to help with refactoring, test writing, or a second opinion on possible problems with a module.
> unlikely to have a future in this industry as they are so easily replaceable.
The reality is that you will be unlikely to compete with people who use these tools effectively. It's the same as the productivity difference between a developer with a good LSP, a good IDE, or a good search engine and one without.
When I was a kid I had a text editor and a book, and it worked. But now that better tools are around, I'm certainly going to make use of them.
markfeathers
I do not use books for engineering work and never will, because doing the work of thinking for myself is how I maintain the neural capacity for forming my own original thoughts and ideas no writer has seen before.
If anyone gives me an opinion from a book, they disrespect me and themselves to a point they are dead to me in an engineering capacity. Once someone outsources their brain they are unlikely to keep learning or evolving from that point, and are unlikely to have a future in this industry as they are so easily replaceable.
If this pisses you off, ask yourself why.
(You can replace AI with any resource and it sounds just as silly :P)
theamk
Yes, if you find a book that is as bad as AI advice, you should definitely throw it away and never read it. If someone is quoting a known-bad book, you should ignore their advice (and as a courtesy, tell them their book is bad).
It's so strange that pro-AI people don't see this obvious fact and keep trying to compare AI with things that are actually correct.
simonw
It's so strange that anti-AI people think the advice and information you can get from a good model (if you know how to operate it well) is less valuable than the advice and information you can get from a book.
throwaway-0001
So if anyone with a below-120 IQ gives you their opinion, is that disrespectful because they are stupid?
---
It’s interesting that we have to respect human “stupid” opinions but anything from AI is discarded immediately.
I’d advocate for respecting any opinion, and considering it good, or at least well-intentioned.
bgwalter
What is this new breed of interactive books that give you half-baked opinions and incorrect facts in response to a prompt?
conartist6
Yeah except it's not quite the same thing, is it?
The fact that you're presenting this as a comically absurd comparison tells me that you know well that it's an absurd comparison.
throwaway-0001
At least you can counter with an argument. You just seem to agree both are absurd.
rileymat2
For me, I am not sure it has eliminated thinking.
I have recently started to use codex on the command line. Before I put the prompt in, I get an idea in my head of what should happen.
Then I give it the instructions, sometimes clarifying my own thoughts while doing it. These are high level instructions, not "change this file". Then it bumps away for minutes at a time, after which I diff the results and consider if it matches up to what I would expect. At that point lower level instructions if appropriate.
I consider whether it was a better solution or not, then ask questions around the edges that I thought were wrong.
It turns my work from typing in code to pretty much code design and review. These are the hard tasks.
GaryBluto
Is copy-pasting from Wikipedia an "opinion" from Wikipedia?
XorNot
No, but it's equally not a useful contribution. If Wikipedia says something, then I'm going to link the article and give a quick summary of what in the article relates to whatever my point is.
Not write "Wikipedia says..." and paste the entire article verbatim.
lrvick
Even that annoys me because who knows how accurate that is at any moment. Wikipedia is great for getting a general intro to a thing, but it is not a source.
I would rather people go find the actual whitepaper or source in the footnotes and give me that, and/or give me their own opinion on it.
ebb_earl_co
I’m in the same boat, and what tipped me there is the ethical non-starter that OpenAI and Anthropic represent. They strip-mined the Web and ripped off copyrighted works in meatspace, admitting that going through the proper channels was a waste of business resources.
They believe that the entirety of human ingenuity should be theirs at no cost, and then they have the audacity to SELL their ill-gotten collation of that knowledge back to you? All the while persuading world governments that their technology is the new operating system of the 21st century.
Give me a dystopian break, honestly.
paulcole
> If this pisses you off, ask yourself why.
Why would it piss me off that you’re so closed minded about an incredible technology?
lrvick
Using an AI to think for me would be like going to a gym and paying a robot to lift weights for me.
Like, sure, it is cool that that is possible, but if I do not do the work myself I will not get stronger.
Our brains are the same way.
I also do not use GPS, because there are literally studies with MRI scans showing that it makes an entire section of our brain go dark, compared to London taxi drivers who are required by law to navigate with their brains.
I also navigate life without a smartphone at all, and it has given me what feels like focus super powers compared to those around me, when in reality probably most people had that level of focus before smartphones were a thing.
All said AI is super interesting when doing specialized work at scale no human has time for, like identifying cancer by training on massive datasets.
All tools have uses and abuses.
paulcole
Sounds fun!
einsteinx2
I’ve noticed this trend in comments across the internet. Someone will ask or say something, then someone else will reply with “I asked ChatGPT and it says…” or “According to AI…”
ChatGPT is free and available to everyone, and so are a dozen other LLMs. If the person making the comment wanted to know what ChatGPT had to say, they could just ask it themselves. I guess people feel like they’re being helpful, but I just don’t get it.
Though with that said, I’m happy when they at least say it’s from an LLM. At least then I know I can ignore it. Worse is replying as if it’s their own answer, when really it’s just copy-pasted from an LLM. Those are more insidious.
Leherenn
Isn't it the modern equivalent of "let me Google that for you"?
My experience is that the vast majority of people do 0 research (AI assisted or not) before asking questions online. Questions that could have usually been answered in a few seconds if they had tried.
If someone prefaces a question by saying they've done their research but would like validation, then yes, it's in incredibly poor taste.
nitwit005
There's seemingly a difference in motive. The people sharing AI responses seem to be fascinated by AI generally and want to share the response.
The "let me Google that for you" was more trying to get people to look up trivial things on their own, rather than query some forum repeatedly.
thousand_nights
exactly, the "i asked chatgpt" people give off 'im helping' vibes but in reality they are just annoying and clogging up the internet with spam that nobody asked for
they're more clueless than condescending
plorkyeran
It is the modern equivalent of "let me Google that for you", except that most of the people doing it don't seem to realize that they're telling the person to fuck off, while that absolutely was the intent with lmfgtfy.
kbelder
>Isn't it the modern equivalent of "let me Google that for you"?
Which was just as irritating.
einsteinx2
> Isn't it the modern equivalent of "let me Google that for you"?
When you put it that way I guess it kind of is.
> If someone prefaces a question by saying they've done their research but would like validation, then yes, it's in incredibly poor taste.
100% agree with you there
pessimizer
Let me google that for you was when a person e.g. asked "what's a tomato?", and you'd paste in the link http://www.google.com/search?q=what's+a+tomato
That's not like pasting in a screenshot or a copy/paste of an AI answer, it's being intentionally dismissive. You weren't actually doing the "work" for them, you were calling them lazy.
The way I usually see the AI paste being used is from people trying to refute something somebody said, but about a subject that they don't know anything about.
noir_lord
To modify a Hitchens-ism:
> What can be asserted without evidence can also be dismissed without evidence.
Becomes
> That which can be asserted without thought can be dismissed without thought.
Since no current AI thinks but humans do, I’m just going to dismiss anything an AI says out of hand, because you are pushing the cost of parsing what it said onto me and off of you, and nah, I ain’t accepting that.
greazy
That's a wonderfully succinct argument.
JumpCrisscross
> someone else will reply with “I asked ChatGPT and it says…” or “According to AI…”
I had a consultant I’m working with have an employee do that to me. I immediately insisted that every hour billed under that person’s name be refunded.
minimaxir
The irony is that the disclosure of “I asked ChatGPT and it says…” is done as a courtesy, to let the reader be informed. Given the increasing backlash against that disclosure, people will just stop disclosing, which is worse for everyone.
The only workaround is to just take the text as-is and call it out when it's wrong/bad, AI-generated or otherwise, as we did before 2023.
StrandedKitty
I think it's fine to not disclose it. Like, don't you find "Sent from my iPhone" that iPhones automatically add to emails annoying? Technicalities like that don't bring anything to the conversation.
I think typically, the reason people are disclosing their usage of LLMs is that they want to offload responsibility. To me it's important to see them taking responsibility for their words. You wouldn't blame Google for bad search results, would you? You can only blame the entity that you can actually influence.
einsteinx2
That’s true. Unfortunately the ideal takeaway from that sentiment should be “don’t reply with copy-pasted LLM answers”, but I know that what you’re saying will happen instead.
XorNot
Except it isn't. It's a disclosure to say "If I'm wrong, it's not my fault".
Because if they'd actually read the output, then cross-checked it and developed some confidence in the opinion, they wouldn't put what they perceive as the most important part up front ("I used ChatGPT") - they'd put the conclusion.
tonyspiff
Indeed. On the other hand, there's a difference between "I one-prompted some mini LLM" and "A deep-thinking LLM aided me through research with fact-checking, agents, tools and lots of input from me." While both can be phrased with “I asked ChatGPT and it says…” or “According to AI…”, the latter would not annoy me.
globular-toast
It must be the randomness built into LLMs that makes people think it's something worth sharing. I guess it's no different from sharing a cool Minecraft map with your friends or something. The difference is Minecraft is fun, reading LLM content is not.
ottah
Relying heavily on information supplied by LLMs is a problem, but so is this toxic negativity towards technology. It's a tool, sometimes useful and other times crap. Critical thinking and literacy are the key skills that help you tell the difference, and a blanket rejection (just like absolute reliance) is the opposite of critical thinking.
spot5010
The scenario the author describes is bound to happen more and more frequently, and IMO the way to address it is by evolving the culture and best practices for code reviews.
A simple solution would be to mandate that while posting conversations with AI in PR comments is fine, all actions and suggested changes should be human-generated.
The human-generated actions can't be a lazy “Please look at the AI suggestion and incorporate as appropriate” or “What do you think about this AI suggestion?”
Acceptable comments could be:
- I agree with the AI for xyz reasons, please fix.
- I thought about the AI's suggestions, and here are the pros and cons. Based on that, I feel we should make xyz changes for abc reasons.
If these best practices are documented, and the reviewer does not follow them, the PR author can simply link to the best practices and kindly ask the reviewer to re-review.
insin
I'm starting to run into the other end of this as a reviewer, and I hate it.
Stories full of nonsensical, clearly LLM-generated acceptance requirements containing implementation details which are completely unrelated to how the feature actually needs to work in our product. Fine, I didn't need them anyway.
PRs with those useless, uniformly-formatted LLM-generated descriptions which don't do what a PR description should do, with a half-arsed LLM attempt at a summary of the code changes and links to the files in the PR description. It would have been nice if you had told me what your PR is for and what your intent as the author is, and maybe called out things relevant to the implementation that I might have "why?" questions about. But fine, I guess; being able to read, understand and evaluate the code is part of my job as a reviewer.
---- < the line
PRs littered with obvious LLM comments you didn't care enough to take out, where something minor and harmless, but _completely pointless_ has been added (as in if you'd read and understood what this code does, you'd have removed it), with an LLM comment left in above it AND at the end of the line, where it feels like I'm the first person to have tried to read and understand the code, and I feel like asking open-ended questions like "Why was this line added?" to get you to actually read and think about what's supposed to be your code, rather than a review comment explaining why it's not needed acting as a direct conduit from me to your LLM's "You're absolutely right!" response.
627467
We - humans - are getting ready for A"G"I
pavel_lishin
I've come down pretty hard on friends who, when I ask for advice about something, come back with a ChatGPT snippet (mostly D&D-related, not work-related).
I know ChatGPT exists. I could have fucking copied-and-pasted my question myself. I'm not asking you to be the interface between me and it. I'm asking you, what you think, what your thoughts and opinions are.
globular-toast
It's kinda hilarious to watch people make themselves redundant. Like you're essentially saying "you don't need me, you could have just asked ChatGPT for a review".
I wrote before about just sending me the prompt[0], but if your prompt is literally my code then I don't need you at all.
blitzar
"Google said this" ... "Wikipedia said this" ... "Encyclopedia Britannica said this"
ahofmann
It is not the same. It needs some searching, reading and comprehension to cite Google etc. Copying a LLM output "costs" almost no energy.
FlameRobot
It is similar enough. People would just find the first thing in a disagreement that had a headline corroborating their opinion; this was often Wikipedia or the summary on Google.
People did this with code as well. DDG used to show you the first Stack Overflow post that was close to what you searched. However, sometimes this was obviously wrong, and people would just copy and paste it wholesale.
Groxx
well. "Google said this" is pretty close nowadays.
the other two are still incomparably better in practice though.
KalMann
I think the difference is people use those as citations for specific facts, not to logically analyze your code. If you're asked how a technical detail of C++ works, then simply citing Google is acceptable. If you're asked about broader details that depend on certain technicalities specific to your codebase, Googling would be silly.
uberman
This is an honest question. Did you try pasting your PR and the ChatGPT feedback into Claude and asking it for an analysis of the code and feedback?
pavel_lishin
Does that particularly matter in the context of this post? Either way, it sounds like OP was handed homework by the responder, and farming that out to yet another LLM seems kind of pointless, when OP could just ask the LLM for its opinion directly.
uberman
While LLM code feedback might be wordy and dubious, I have personally found that asking Claude to review a PR and the related feedback provides some value. From my perspective anyway, Claude seems able to cut through the BS and say whether a recommendation is worth the squeeze, or in what contexts the feedback has merit or is just pedantic. Of course, your mileage may vary, as they say.
pavel_lishin
Sure. But again, that's not what OP's post is about.
verdverm
Careful with this idea: I had someone take a thread we were engaged in and feed it to an LLM, asking it to confirm his feelings about the conversation, only to post it back to the group thread. It was used to attack me personally in a public space.
Fortunately
1. The person was transparent about it, even posting a link to the chat session
2. They had to use a follow-on prompt to really engage the sycophancy
3. The forum admins stepped in to speak to this individual even before I was aware of it
I actually did what you suggested and fed everything back into another LLM, but did so with various prompts to test things out. The responses were... interesting; the positive prompt did return something quite good. A (paraphrased) quote from it:
"LLMs are a powerful rhetorical tool. Bringing one to a online discussion is like bringing a gun to a knife fight."
That being said, how you prompt will get you wildly different responses from the same (other) inputs. I was able to get it to sycophant my (not actually) hurt feelings.
stickfigure
Counterpoint: "Chatgpt said this" is an entirely legitimate approach in many contexts and this attitude is toxic.
One example: Code reviews are inherently asymmetrical. You may have spent days building up context, experimenting, and refactoring to make a PR. Then the reviewer is expected to have meaningful insight in (generously) an hour? AI code reviews help bring balance; it may notice stuff a human wouldn't, and it's ok for the human reviewer to say "hey, chatgpt says this is an issue but I'm not sure - what do you think?"
We run all our PRs through automated (claude) reviews automatically, and it helps a LOT.
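For illustration only, here's a rough sketch of one way an automated review step like that could be wired up; it's not our actual setup. It assumes the Anthropic Python SDK, an ANTHROPIC_API_KEY in the environment, and a checked-out branch; the model name, base branch, and prompt are placeholders.

    # review_pr.py - hypothetical sketch: ask Claude to review the current branch's diff
    import subprocess
    import anthropic

    # Diff against the base branch (placeholder: origin/main)
    diff = subprocess.run(
        ["git", "diff", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": "Review this diff. Flag likely bugs, risky changes, and "
                       "missing tests. Be concise and say when you are unsure.\n\n" + diff,
        }],
    )
    print(response.content[0].text)  # post this as a PR comment via your CI of choice

The human reviewer still reads what it flags and decides what's worth raising.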
Another example: Lots of times we have several people debugging an issue and nobody has full context. Folks are looking at code, folks are running LLM prompts, folks are searching slack, etc. Sometimes the LLMs come up with good ideas but nobody is sure, because none of us have all the context we need. "Chatgpt says..." is a way of bringing it to everyone's attention.
I think this can be generalized to forum posts. "Chatgpt says" is similar to "Wikipedia says". It's not the end of the conversation, but it helps get everyone on the same page, especially when nobody is an expert.