ChatGPT can't kill anything worth preserving

simonw

The original story here was published on December 11th 2022, just 12 days after the launch of ChatGPT (running on GPT-3.5).

I feel most of this document (the bit that evaluates examples of ChatGPT output) is more of historic interest than a useful critique of the state of LLMs in 2025.

jamil7

I feel very differently about the article. What I took away from it was the author's long-held critique of how high school literature and writing are taught and evaluated, and of how teaching in general has become increasingly outcome-focused rather than process-driven. ChatGPT is used to illustrate this, but I feel like the article could have been written using another example, like plagiarism, paying someone to write for you, or any current LLM.

ludicrousdispla

This document is not a critique on the state of LLMs.

simonw

It includes some very prominent examples of 2022 ChatGPT output.

I am not saying that the rest of the document does not include valuable content, just providing a warning that some of it uses 2+ year old examples.

thomashop

"I cannot emphasize this enough: ChatGPT is not generating meaning. It is arranging word patterns. I could tell GPT to add in an anomaly for the 1970s - like the girl looking at Billy’s Instagram - and it would introduce it into the text without a comment about being anomalous."

I asked ChatGPT to introduce the girl looking at Billy's instagram. The response:

"Instagram didn't exist in the 1970s. Do you want to keep the setting authentic to the '70s or update the story to a contemporary timeframe where Instagram fits naturally?"

LeoPanthera

To be fair, the very first version of ChatGPT - which is what this post was about when it was written two years ago - probably would not have questioned it.

moffkalast

Needs (2022) then.

knowknow

It’s always baffling how people take a technology that wasn’t even thought feasible a decade ago and try to dismiss it as trivial and stagnant. It’s pretty clear that LLMs have improved rapidly and have become better writers than the majority of people alive. To try to present this as just random pattern matching seems like a way to assuage fears of being replaced.

It’s also amusing that people minimize it by just calling it pattern matching, as if human reasoning isn’t built upon recognizing patterns in past experiences.

wruza

I’m conflicted about your comment. On one hand, I agree, useless reductions are boring. But on the other, we are living in the “overselling all up in your ears” epoch, which is known to sell pffts as badabooms. So it isn’t baffling to me that a new tech gets old quickly, because it’s not really what was advertised. Our decades-old ideas of AI weren’t feasible a decade ago, but neither are these now. Those who believe in that too much become hu.ma.ne founders and similar self-bullshitters.

TeMPOraL

You're right about "living in the “overselling all up in your ears” epoch", but a good first defense against "being sold pffts as badabooms" is to blanket distrust all the marketing copy and whatever the salespeople say, and rely on your own understanding or experience. You may lose out on becoming an early adopter of some good things, but you'll also be spared wasting money on most garbage.

With that in mind, I still don't get the dismissal. LLMs are broadly accessible - ever since the first ChatGPT, anyone has been able to get access to a SOTA LLM and evaluate it for free; even the limited number of requests on free tiers was, and still is, sufficient to throw your own personal and professional problems at the models and see how they do. Everyone can see for themselves that this is not hot air - this is an unexpected technological breakthrough that's already overturning the way people approach work, research, and living, and it's not slowing down.

I'd say: ignore what the companies are selling you - especially those who are just building products on top of LLMs and promising pie in the sky. At this point in time, they aren't doing anything you couldn't do for yourself with ChatGPT or Claude access[0]. We are also only beginning to map out the possibilities - two years since the field exploded is very little time. So in short, anything a business does, you could hack up yourself - and for any speculative AI application you can imagine, there's likely some research team working on it too. The field is moving both absurdly fast and absurdly slowly[1]. So your own personal experience applying LLMs to your own problems, and watching people around you do the same, is really all you need to tell whether LLMs are hot air or not.

My own perspective from doing that: it's not hot air. The layer of hype is thin, and in some areas the hype is downplaying the impact.

--

[0] - Yes, obviously a bunch of full-time professionals are doing much more work than you or me over a couple of evenings of playing with ChatGPT. But they're building a marketable product, and 99% of the work that goes into that is something you do not need to do if you just want to replicate the core functionality for yourself.

[1] - I mean, Anthropic just published a report on how exposing a "thinking" capability to the model in the form of a tool call leads to improved performance. On the one hand, kudos to them for testing this properly and publishing. On the other hand, that this was worth doing was stupidly obvious ever since 1) OpenAI introduced function calling and 2) people figured out that "Let's think step by step" improves model performance - which was back in 2022[2]. It's as clear an example as any that both hype and productization lag behind what anyone paying attention can do themselves at home.

[2] - https://arxiv.org/abs/2205.11916
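
For illustration, a minimal sketch of footnote [1]'s idea - exposing a do-nothing "think" tool so the model gets a scratchpad turn before answering - assuming the OpenAI Python SDK; the tool schema, model name, and prompt are my own assumptions, not Anthropic's published setup:

    # Minimal sketch: give the model a "think" tool it can call to reason
    # before answering. The tool does nothing; it only buys the model a
    # scratchpad turn. Schema, model name, and prompt are assumptions.
    from openai import OpenAI

    client = OpenAI()
    tools = [{
        "type": "function",
        "function": {
            "name": "think",
            "description": "Write out step-by-step reasoning before "
                           "answering. The output is not shown to the user.",
            "parameters": {
                "type": "object",
                "properties": {"thought": {"type": "string"}},
                "required": ["thought"],
            },
        },
    }]

    messages = [{"role": "user", "content": "What is 17 * 24 - 13?"}]
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=tools)

    # If the model called "think", acknowledge each call and let it continue.
    while response.choices[0].message.tool_calls:
        messages.append(response.choices[0].message)
        for call in response.choices[0].message.tool_calls:
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": "OK, continue."})
        response = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages, tools=tools)

    print(response.choices[0].message.content)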

wruza

Idk, I find their output mediocre and sometimes misleading. But that's not the worst part, and it's often still cheaper than doing things yourself.

The worst part is https://news.ycombinator.com/item?id=43314958 . We may be still blind to this, but new generations may find themselves on the other side of the fence, so to say.

mrweasel

> It’s pretty clear that LLMs have improved rapidly and have successfully become better writers than the majority of people alive

But that doesn't help much, because for the majority of our writing that's not the issue. LLMs might outdo professional writers some day, but without imagination and the desire to send a message, I doubt it will happen any time soon.

Where LLMs are useful is in professional communication, because you're right: an LLM can technically write better than most. The issue is that you need to tell the LLM what message you want to convey, and that's where most people fall down. You also need to be able to critically read back your text and place yourself in the position of its recipient, LLM-generated or not. Most people, even those working in professional communication, can't do that.

I believe the author is on to something: ChatGPT (and others) are perfectly good tools and can replace most, if not all, of the shitty communication and writing present in the world, including journalism. What they can't do is replace the few really good communicators. Communication is hard, not a lot of people can do it, and no LLM is going to help you if you can't formulate your message clearly on your own.

There is a darker side to this argument. If something can be killed off by an LLM, is it even worth having? This argument throws a great number of "professionals" under the bus, instantly reducing the value of their jobs to zero. I have family members who do communication for the city; they spend an exorbitant amount of time on flyers, newsletters, Facebook posts and so on, detailing city work, future plans, past events, all that stuff. I doubt anyone really reads it, and most of it could probably be written by an LLM, because it always excludes the "why are we doing this", "why should you care" or even "this is expected to impact you in this way". I get a million updates from the school about restructuring, hiring of administrators, org-reshuffling... it could all be done by an LLM, but the fact is: no one gives a shit. They just want regular updates from the teachers, and an LLM can't do that.

Communication is hard, and LLMs don't change that; they can only help you write out your message. You still have to come up with the message yourself, and that's the hard part.

bulatb

You can't interrogate your qualia. A lot of people think (or really, feel) that makes them magic.

It doesn't.

But if you feel they're magic, you'll never believe that a "random" mechanical process, which you think you can interrogate, could ever really have that spark.

wolvesechoes

Welcome to Hacker News, where thousand-year-old philosophical problems are solved through plain assertions!

KeplerBoy

Turns out we have technologies and experiences with technology which weren't possible until very recently. Some things just look very different in hindsight.

bulatb

Qualia might still be magic. Maybe we have souls or something, I don't know.

I'm comfortable with saying that this way of answering the question doesn't work, because the argument is simply, "How can it be false if I believe it's true?"

baq

I subscribe to the ‘qualia is atoms in a trenchcoat’ school of philosophy personally, but I understand that it might be hard to accept and even harder to not be depressed about it.

amanaplanacanal

Well sure. As best we can tell the whole universe is a mechanical cause and effect chain, and the illusion of free will is just that, an illusion.

Or possibly... there is something else going on, and we just haven't figured out what it is yet. I'm not betting either way at this point.

eru

Well, it doesn't matter whether the AI has qualia, as long as it produces the right output.

CarRamrod

Define 'magic'.

pizza

Hang on a sec... the qualia research institute might take umbrage at that first bit.

dingnuts

LMAO really? Who said anything about qualia?

LLMs lack the capacity for invention and metacognition. We're a long way from needing to talk about qualia; these things are NOT conscious and there is no question.

This website is absurd. "I don't think LLMs are all they're cracked up to be" "YOU'RE JUST MAD BECAUSE QUALIA AND YOU WANT TO BE MAGIC"

no the "magic" text generator just writes bad code, my dude

Only the people on r/singularity care about qualia in this context, and let us remember that this is a mechanism without even a memory.

bulatb

Nobody is thinking about qualia. That's only how I framed it. They just know the machine will never replace them. They're a person, doing person things, and it's a machine.

So every time it proves it can, they move the goalposts and invent a truer Scotsman who can never be replaced by a machine. Because they know it can't do person things. It's a machine.

dist-epoch

> amusing that people minimize it by just calling it pattern matching

Even funnier when the typical IQ test, Raven's Progressive Matrices, is 100% about pattern matching.

wruza

I once tried myself on an official Mensa test, which was very similar to that. It got very boring after I realized they test for mundane & | ^ (AND, OR, XOR) in varying ways. I dropped it halfway through and passed, lol. But I guess you have to be pretty smart to detect logical ops without a hacker background.
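
To make the & | ^ point concrete, a minimal sketch of a Raven-style item reduced to a bitwise rule; the 3x3 dot patterns, encoded here as 9-bit integers, are invented for illustration:

    # Sketch: a Raven-style matrix item as a bitwise rule. Each cell is a
    # 3x3 dot pattern packed into a 9-bit integer; in every row the third
    # cell is the XOR of the first two. All patterns are made up.
    rows = [
        (0b101000101, 0b010000010, 0b111000111),
        (0b100010001, 0b001010100, 0b101000101),
    ]
    for a, b, c in rows:
        assert a ^ b == c  # the hidden rule holds for the given rows

    # "Solving" the incomplete third row means picking the answer option
    # equal to a ^ b.
    a, b = 0b111101111, 0b000010000
    print(bin(a ^ b))  # 0b111111111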

globnomulous

> To try and present this as just random pattern matching seems as just a way to assuage fears of being replaced.

LLMs are token predictors. That's all they do. This certainly isn't "random," but insisting that that's what the technology does isn't some sad psychological defense mechanism. It's a statement of fact.
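
A minimal sketch of what "token predictor" means mechanically, assuming the Hugging Face transformers library with GPT-2 as a stand-in model; the prompt is arbitrary:

    # Next-token prediction, greedily: score every vocabulary token,
    # take the highest-scoring one, append it, and repeat.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    input_ids = tokenizer("The cat sat on the", return_tensors="pt").input_ids
    for _ in range(10):
        logits = model(input_ids).logits  # scores over the whole vocabulary
        next_id = torch.argmax(logits[:, -1, :], dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_id], dim=-1)

    print(tokenizer.decode(input_ids[0]))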

> as if human reasoning isn’t built upon recognizing patterns in past experiences.

Everybody will be relieved to know you've finally solved this philosophical problem and answered the related scientific questions. Please make sure you cc psychology, behavioral economics, and neuroscience when you share the news with philosophy.

raducu

> LLMs are token predictors.

And neural nets are "just" matrix multiplications; human brains are "just" chemical reactions.

globnomulous

I don't know why you're quoting, or stressing, a word I didn't use.

I once got into an argument with a date about the existence of God and the soul. She asked whether I really think we're "just" physical stuff. I told her, "no, I'm not saying we're 'just' physical stuff. I'm saying we're physical stuff." That 'just' is simply a statement of how she feels about the idea, not a criticism of what I was claiming. I don't accept that there's anything missing if we're atoms rather than atoms plus magic, because I don't feel any need for there to be magic.

Your brain is indeed physical stuff. You also have a mind. You experience your own mind and perceive yourself as an intending being with a guiding intelligence. How that relates to or arises from the physical stuff is not well understood, but everything we know about the properties and capabilities of the mind tells us that it is inextricably related to the body that gives rise to it.

Neural nets are indeed matrix multiplication. If you're implying that there is a guiding intelligence, just as there is in the mind, I think you're off in La La Land.

throw310822

> LLMs are token predictors. That's all they do.

You certainly understand that if they can successfully predict the tokens that a great poet, a great scientist, or a great philosopher would write, then everything changes - starting with our status as the sole, and rare, generators of intelligent thoughts and clever artifacts.

globnomulous

Congratulations, you've successfully solved the Chinese Room problem by paving over and ignoring it.

IanCal

Prediction is not the same as pattern matching.

grumbel

> LLMs are token predictors. That's all they do.

That's a disingenuous statement, since it implies there is a limit to what LLMs can do, when in reality an LLM is just a form of Universal Turing Machine[1] that can compute everything that is computable. The "all they do" is literally everything we know to be doable.

[1] Memory limits do apply, as with any other form of real-world computation.

globnomulous

I'll ignore the silly claim that I'm somehow dishonest or insincere.

I like the way Pon-a put it elsewhere in this thread:

> LLMs are a language calculator, yes, but don't share much with their analog. Natural language isn't a translation from input to output, it's a manifestation of thought.

LLMs translate input to output. They are, indeed, calculators. If you don't already see that that's different from having a thought and expressing it in language, I don't think I'm going to convince you otherwise here.

pona-a

> Giving 10 month old children permanent crutches may harm the development of walking, but since the crutches make unassisted walking obsolete, maybe it wasn't worth it anyway?

LLMs are a language calculator, yes, but don't share much with their analog. Natural language isn't a translation from input to output, it's a manifestation of thought.

If deaf children who are not taught sign language suffer from non-language learning disabilities, it's not much of a stretch to say that failing to practice quality writing will have a similar effect. But even with direct translation, if a school "translated" Shakespeare to a lower reading level to account for the class's lower literacy, it may still affect their development. If you had ChatGPT do every exercise in CS, you wouldn't have learned much from just the explanations.

OgsyedIE

If this argument is true, it implies an unintended but very bleak corollary about what things aren't really worth preserving, once you start to think about what isn't thriving in the economy under competition from LLMs: trusted phone communication, visual art, rule of law, academic peer review, the news, search engines, question-and-answer platforms, etc.

XorNot

How is ChatGPT or anything else killing visual art? The entire zeitgeist at the moment is dismissive and bored of AI generated material already.

OgsyedIE

The zeitgeist of trendy under-40s and their cultural opinions is all well and good, but it has almost nothing to do with hiring.

__loam

I think this is an astute observation. It feels like everyone except the people with the money has realized this stuff is garbage.

briandear

Academic peer review? That system can go away. I care about facts and proof of those facts; I don't care about some academic's opinion of another academic's paper. We haven't solved for the alternative yet, but peer review has led us to some ridiculous unscientific dishonesty, a reproducibility crisis, and other adventures, because we assume the peers have an incentive for truth.

trash_cat

> The reason the appearance of this tech is so shocking is because it forces us to confront what we value, rather than letting the status quo churn along unexamined.

I think this is the most valuable part of the article. It's the writing process itself that isn't valued in schools.

zedascouves

> I could tell GPT to add in an anomaly for the 1970s - like the girl looking at Billy’s Instagram - and it would introduce it into the text without a comment about being anomalous

Wrong. I tried it. It wrote it as "instagram". Then I asked it to explain this "instagram":

In Timmy’s mind, it was simple. First, he’d snap a Polaroid of some passing dog or a field of sunflowers. Next, he’d run home, sit at his rickety desk, and carefully slip the Polaroid into a school notebook he called his “feed.” He’d scribble a title at the top—something he swore was called a “caption”—and pretend he was beaming the image across some invisible network into the hands of friends he’d never actually met.

Awesome to me

drpossum

"Worth" doing a lot of heavy lifting here.

ctxc

To note - 2022

drpossum

Not sure why that is in any way relevant, but thanks.

pests

It was 12 days after 3.5 was released. A lot has changed since then.

The point being: take anything in with a view of the times, just as if I were to show you quotes from people 60 years ago saying computers are useless and no one will use them.

bryanrasmussen

can it damage though?

fragmede

(2022)

milesrout

The author is a poor writer for someone who is supposed to be an expert on writing. Those who can't do, teach? Prolix, excessive commas, takes too long to get to the point, and talks about himself too much.