
GPT might be an information virus (2023)

dang

Discussed (a bit) at the time:

GPT Might Be an Information Virus - https://news.ycombinator.com/item?id=36675335 - July 2023 (31 comments)

GPT might be an information virus - https://news.ycombinator.com/item?id=35218078 - March 2023 (1 comment)

andy99

I think it is an information virus, but differently - it's homogenized everything, and made people dumber and lazier. It's poisoned public and professional discourse by reducing writing and thinking from the richness of humanity to one narrow style with a tiny latent space, and simultaneously convinced people that this is what good writing looks like. And it's erased thought from broad classes of endeavor. This virus is much worse than the relatively benign symptoms described in the article.

majormajor

https://www.media.mit.edu/publications/your-brain-on-chatgpt... This seems relevant here in a "the results agree" way.

A4ET8a8uTh0_v2

Like most progress, it made some things easier ( and some things worse as a result ). What I do find particularly fascinating is that it is doing that even in professions that should know better ( lawyers, doctors ). That my boss uses it is no surprise to me though. I always suspected he never really read my emails.

kldg

I've definitely been surprised by how it's being used; it's replacing people in places I don't think (even as a closet AI/LLM enthusiast) AI should ever be used: elder care, customer support (even on phone lines), and homework grading. But I shouldn't have been so surprised, because some were already using robots for these tasks (or maybe not robots explicitly, but making CSRs/similar stick to scripts); my daughter was taking college placement tests recently -- even the essay questions were graded by software, and she was watched by software as she wrote them. These things still seem to me like jobs which fundamentally require a human touch -- it's been especially amazing to me that teachers are using AI to detect AI; you can't determine whether or not a robot wrote it, but you can assign a grade to it? Huh??

I have a very vocally anti-AI friend, but there is one thing he always goes on about that confuses me to no end: hates AI, strongly wants an AI sexbot, is constantly linking things trying to figure out how to get one, and asking me and the other nerds in our group about how the tech would work. No compromises anywhere except for one of the most human experiences possible. :shrug:

sho_hn

I think to me the weirdest and most unexpected (not so much in retrospect) AI use is that people will use it all day long to navigate chat conversations with their boyfriends/girlfriends, having it suggest romantic replies, etc.

I expect people to be lazy, but that we'd outsource feelings was surprising.

A4ET8a8uTh0_v2

It made me chuckle, because I absolutely buy the anecdotal anti-AI friend. On the other hand, if he applied himself, maybe he could figure it out. I honestly can't say I am not intrigued by the possibility.

doctorpangloss

People want AI lawyers, and what they really invented is AI judges.

arthurcolle

Usually judges start out as lawyers

sho_hn

What's weird is that so many people shrug this off with "eh, it's what they said about the calculator".

Which to me is roughly as bad a take as "LLMs are just fancy auto-complete" was.

I feel it's worth reminding ourselves that evolution on this planet has rarely opted for human-level intelligence, and that we possess it might just be a quirk we shouldn't take for granted; it may well be that we could accidentally habituate and eventually breed ourselves dumber and subsist fine (perhaps in different numbers), never realizing what we willingly gave up.

kordlessagain

People have always tended toward taking shortcuts. It's human nature. So saying "this technology makes people dumber or lazier" is tricky, because you first need a baseline: exactly how dumb or lazy were people before?

To quantify it, you'd need measurable changes. For example, if you showed that after widespread LLM adoption, standardized test scores dropped, people's vocabulary shrank significantly, or critical thinking abilities (measured through controlled tests) degraded, you'd have concrete evidence of increased "dumbness."

But here's the thing: tools, even the simplest ones, like college research papers, always have value depending on context. A student rewriting existing knowledge into clearer language has utility because they improve comprehension or provide easier access. It's still useful work.

Yes, by default, many LLM outputs sound similar because they're trained to optimize broad consensus of human writing. But it's trivially easy to give an LLM a distinct personality or style. You can have it write like Hemingway or Hunter S. Thompson. You can make it sound academic, folksy, sarcastic, or anything else you like. These traits demonstrably alter output style, information handling, and even the kind of logic or emotional nuance applied.

Thus, the argument that all LLM writing is homogeneous doesn't hold up. Rather, what's happening is people tend to use default or generic prompts, and therefore receive default or generic results. That's user choice, not a technological constraint.

In short: people were never uniformly smart or hardworking, so blaming LLMs entirely for declining intellectual rigor is oversimplified. The style complaint? Also overstated: LLMs can easily provide rich diversity if prompted correctly. It's all about how they're used, just like any other powerful tool in history, and just like my comment here.

majormajor

We could wait for further studies, but some already exist: https://www.media.mit.edu/publications/your-brain-on-chatgpt...

You say it's human nature to take shortcuts, so the danger of things that provide easy homogenizing shortcuts should be obvious. It reduces the chance of future innovation by making it easier for more people to have their perspectives silently narrowed.

Personally I don't need to see more anecdotal examples matching that study to have a pretty strong "this is becoming a problem" leaning. If you learn and expand your mind by doing the work, and now you aren't doing the work, what happens? It's not just "the AI told me this, it can't be wrong" for the uneducated, it's the equivalent of "google maps told me to drive into the pond" for the white-collar crowd that always had those lazy impulses but overcame them through their desire to make a comfortable living.

3willows

Perhaps that is the real danger. Everyone except a small elite who (rightly) feel they understand how LLMs work would simply give up serious thinking and accept whatever "majority" opinion is in their little social media bubble. We wouldn't have the patience to really engage with genuinely different viewpoints any more.

I recall some Chinese language discussion about the experience of studying abroad in the Anglophone world in the early 20th century and the early 21st century. Paradoxically, even if you are a university student, it may now be harder to break out of the bubble and make friends with non-Chinese/East Asians than before. In the early 20th century, you'd probably be one of the few non-White students and had to break out of your comfort zone. Now if you are Chinese, there'd be people from a similar background virtually anywhere you study in the West, and it is almost unnatural to make a deliberate effort to break out of that.

3willows

The point being: when you find someone who is tailoring all his/her/its attention to you and you alone, why bother talking to anyone else?

Hupriene

That's some real obsessive stalker logic there.

crimsoneer

This is how the church felt about the printing press.

nerevarthelame

While the church feared people interpreting information on their own, with LLMs it's the opposite: we fear that most interpretation of information will be done through a singular bland AI extruder. Tech companies running LLMs become the pre-press churches, with individuals depending on them to analyze and interpret information on their behalf.

majormajor

The church would've LOVED everyone asking the same one-to-four sources everything. ChatGPT is literally a controllable oracle. Quite the opposite of the printing press.

"Running your own models on your own hardware" is an irrelevant rounding error here compared to the big-company models.

toofy

this would be the opposite. the llm situation may be heading back towards something similar to the church age.

the church did all of the reading and understanding for us. owners of the church gobbled up as much information as they could (encouraging confessions) and then the church owners decided when, how, where and which of that information flowed to us.

mensetmanusman

This analogy is going places.

XorNot

Who is "the Church" in this analogy?

rolph

refers to the gutenberg press, and mass production of printed works, threatening the siloed, ivory towers of knowledge at the time.

https://en.wikipedia.org/wiki/Printing_press#Gutenberg.27s_p...

if everyone has a bible, then who needs the church to tell you what it says.

brookst

People who consider themselves exceptionally smart, who are well educated and write well, who only ever need to communicate in their native tongue, and who have the luxury of investing time in developing a personal writing style.

It is a good analogy. There is great concern that the unwashed masses won’t know how to handle this tool and will produce information today’s curators would not approve of.

makk

It hasn't homogenized everything. It's further exposed humans for who they are. Humans are the virus.

jvm___

Agent Smith had it right when he was interviewing Morpheus in the Matrix.

goatlover

And then ironically Smith became a virus threatening both humans and machines. However, Agent Smith was the Oracle's tool to force a treaty between the machines and humans. As the Architect said at the end of Revolutions, she played a dangerous game.

But it was the only way forward to a new equilibrium.

iluvlawyering

And what if intelligence as measured by computation complexity reaches a natural limit inevitably marked by a detached compassionate disposition? Does the cure then become the virus?

jcalx

Alternatively stated:

> The only explanation is that something has coded nonsense in a way that poses as a useful message; only after wasting time and effort does the deception become apparent. The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus.

> Viruses do not arise from kin, symbionts, or other allies.

> The signal is an attack.

―Blindsight, by Peter Watts

ayaros

We're going to have to go in the opposite direction and rely on directories or lists of verified human-made/accurate content. It will be like the old days of yahoo and web-indexes all over again.

DaveZale

A few years ago, some talk briefly circulated about local internet efforts, possibly run by public libraries.

Local news coverage has really suffered these past several years. Wouldn't it be great to see relevant local news emerge again, written by humans for humans?

That approach might be a good start. Use a cloud service that forbids AI bot scraping to protect copyright?
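On the anti-scraping point, the usual first step is a robots.txt asking known AI crawlers to stay away. A minimal sketch, using the publicly documented user-agent strings for OpenAI's and Common Crawl's crawlers (note this is advisory only; compliance is voluntary):

```
# robots.txt - ask known AI training crawlers not to fetch anything
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Actually enforcing it would take server-side blocking on top of this.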

ceejayoz

> Wouldn't it be great to see relevant local news emerge again, written by humans for humans?

That sounds a lot like Nextdoor. With all the horrors that come with it.

ayaros

This doesn't seem to be structured differently than a standard-fare social media app. All the same issues with human verification on those apps would apply to this too.

Unless you mean a platform only for vetted local journalists...

righthand

Tie the account to the Library Card and then you can open it up to anyone.

Footprint0521

I feel like SEO trash has made this a must have for me for the past few years already. If it’s not stack overflow, Reddit, or stack exchange, I’m wasting my time

ayaros

Or MDN, which is yet another site that seems to be constantly ripped off by parasitic AI-generated SEO sites...

ayaros

(To clarify, I'm not suggesting this is necessarily a bad thing)

MPSimmons

I had the thought the other day that one of the most valuable things a human-driven website could offer would be a webring linking to other human-driven websites

JKCalhoun

I'm a fan of bringing back Web Rings.

Perhaps a site could kick off where people proposed sites for Web Rings, edited them. The sites in question could somehow adopt them — perhaps by directly pulling from the Web Ring site.

And while we're at it, no reason for the Web "Ring" not to occasionally branch, bifurcate, and even rejoin threads from time to time. It need not be a simple linked list whose tail points back to its head.

Happy to mock something up if someone smarter than me can fill in the details.

Pick a topic: Risograph printers? 6502 Assembly? What are some sites that would be in the loop? Would a 6502 Assembly ring have "orthogonal branches" to the KIM-1 computer (ring)? How about a "roulette" button that jumps you to somewhere at random in the ring? (So not linear.) Is it a tree or a ring? If a tree, can you traverse in reverse?
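The branching ring described above is really just a small directed graph. A toy sketch, with made-up site names, of "next", "reverse", and the "roulette" button:

```python
import random

# A branching "web ring" as a directed graph: each site may link to
# several next sites, so the ring can bifurcate and rejoin.
RING = {
    "6502.org":           ["kim1.example", "asm-tricks.example"],
    "kim1.example":       ["asm-tricks.example"],
    "asm-tricks.example": ["6502.org"],  # rejoins the loop
}

def next_site(current: str) -> str:
    """Follow one outgoing edge (first branch by default)."""
    return RING[current][0]

def prev_sites(current: str) -> list:
    """Traverse in reverse by inverting the edges."""
    return [site for site, nexts in RING.items() if current in nexts]

def roulette() -> str:
    """The 'roulette' button: jump anywhere in the ring at random."""
    return random.choice(list(RING))
```

Since edges can fan out and rejoin, it's a graph rather than a ring or a tree, and reverse traversal falls out of just inverting the edge list.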

ayaros

Web rings are a thing I've been thinking about a bit. Anyone know any good ones? There are a couple I reached out to for one of the projects I'm working on, to get my site on them, but I never got responses. There are also some webrings I've come across that have died or been retired. :(

johnnienaked

It takes me back to the Simpsons episode where the Itchy and Scratchy writers go on strike. What followed was a beautiful scene of children rubbing their eyes in unfamiliar sunlight as they're forced to go outside, making up games, playing on playgrounds, all while Beethoven's Pastorale hums in the background.

I'm all for it. Let big tech destroy their cash cow, then maybe we can rebuild it in OUR interest.

gfody

I think we should prefer content-farm content to be replaced w/generated content. ultimately it'll compress back to the prompt that generated it and that'll be easier to filter out.

androng

i heard from an artist that Pinterest is full of AI-generated stuff now so artists looking for references have to go back to physical books for art references

t1234s

Is there any way these LLM tools watermark their output in a way that keeps them from re-training on output generated from the same LLM?

konfusinomicon

strategic em-dash placement is my guess but only the machine knows the code
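More seriously, there is at least one published scheme along these lines: Kirchenbauer et al.'s "green list" watermark, where generation is biased toward a pseudo-random half of the vocabulary keyed on the previous token, and detection is a simple statistical test. A toy sketch of the detection side (vocabulary and threshold invented for illustration, not any vendor's actual scheme):

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary
GREEN_FRACTION = 0.5

def green_set(prev_token: str) -> set:
    """Seed an RNG with a hash of the previous token and pick a
    pseudo-random 'green' half of the vocabulary."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def green_count(tokens: list) -> int:
    """Count tokens that fall in their predecessor's green set."""
    return sum(1 for prev, cur in zip(tokens, tokens[1:])
               if cur in green_set(prev))

def z_score(tokens: list) -> float:
    """z-score of the green count against the null hypothesis that
    each token is green with probability GREEN_FRACTION."""
    n = len(tokens) - 1
    mean = n * GREEN_FRACTION
    var = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (green_count(tokens) - mean) / var ** 0.5
```

A watermarking sampler would nudge its logits toward the green set, so its own output scores a high z and can be filtered out of training data; ordinary human text hovers near zero. The catch for the retraining question is that light paraphrasing degrades the signal.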

StarlaAtNight

Nice try, AI!

Havoc

It’ll certainly get more noisy but I don’t quite buy the total communication collapse implied here.

Humans still have an inherent need to be heard and hear others. Even in a pretty extreme scenario I think bubbles of organic discussion will continue

null

[deleted]

mwkaufma

"Outside of the fate of the web, I see GPT as a monumental force of good." [Citation Needed]

thomashop

The rest was fine without citations, but the part you disagree with needs them?

null

[deleted]