Isaac Asimov describes how AI will liberate humans and their creativity (1992)

slibhb

LLMs are statistical models trained on human-generated text. They aren't the perfectly logical "machine brains" that Asimov and others imagined.

The upshot of this is that LLMs are quite good at the stuff that he thinks only humans will be able to do. What they aren't so good at (yet) is really rigorous reasoning, exactly the opposite of what 20th century people assumed.
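
To make "statistical model" concrete, here is a toy sketch (illustrative only): a bigram model that predicts the next word from counts over a tiny made-up corpus. Real LLMs learn a neural network over tokens rather than a count table, but the next-token-prediction framing is the same.

    # Toy "statistical model of text": predict the next word from bigram
    # counts. Real LLMs learn a neural network over tokens instead of a
    # count table, but both emit a probability distribution over the next
    # token given context.
    import random
    from collections import Counter, defaultdict

    corpus = "the robot read the book and the robot wrote a book".split()

    following = defaultdict(Counter)  # word -> counts of what follows it
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def sample_next(word, rng):
        # Sample the next word in proportion to how often it followed.
        words, weights = zip(*following[word].items())
        return rng.choices(words, weights=weights)[0]

    rng = random.Random(0)
    word, out = "the", ["the"]
    for _ in range(6):
        word = sample_next(word, rng)
        out.append(word)
    print(" ".join(out))  # e.g. "the robot wrote a book and the"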

beloch

What we used to think of as "AI" at one point in time becomes a mere "algorithm" or "automation" by another point in time. A lot of what Asimov predicted has come to pass, very much in the way he saw it. We just no longer think of it as "AI".

LLMs are just the latest form of "AI" that, for a change, doesn't quite fit Asimov's mold. Perhaps it's because they're being designed to replace humans in creative tasks rather than liberate humans to pursue them.

israrkhan

Exactly... as someone said, "I need AI to do my laundry and dishes, while I focus on art and creative stuff." But AI is doing the exact opposite, i.e. creative stuff (drawing, poetry, coding, document creation, etc.), while we are left to do the dishes and laundry.

TheOtherHobbes

As someone else said - maybe you haven't noticed but there's a machine washing your clothes, and there's a good chance it has at least some very basic AI in it.

It's been quite a while since anyone in the developed world has had to wash clothes by slapping them against a rock while standing in a river.

Obviously this is really wishing for domestic robots, not AI, and robots are at least a couple of levels of complexity beyond today's text/image/video GenAI.

There were already huge issues with corporatisation of creativity as "content" long before AI arrived. In fact one of our biggest problems is the complete collapse of the public's ability to imagine anything at all outside of corporate content channels.

AI can reinforce that. But - ironically - it can also be very good at subverting it.

bad_user

I have yet to enjoy any of the "creative" slop coming out of LLMs.

Maybe some day I will, but I find it hard to believe, given an LLM just copies its training material. All the creativity comes from the human input, and even though people can now cheaply copy the style of actual artists, that doesn't mean they can make it work.

Art is interesting because it is created by humans, not despite it. For example, poetry is interesting because it makes you think about what the author meant. With LLMs there is no author, which makes those generated poems garbage.

I'm not saying that it can't work at all, it can, but not in the way people think. I subscribe to George Orwell's dystopian view from 1984; he already imagined the "versificator".

schwartzworld

We thought machines were gonna do the work so we could pursue art and music. Instead, machines get to make the art and music, while humans work in the Amazon warehouses.

__MatrixMan__

We ended up here because we have a propensity to share our creative outputs, and keep our laundry habits to ourselves. If we instead went around bragging about how efficiently we can fold a shirt, complete with mocap datasets of how it's done, we'd have gotten the other kind of AI first.

protocolture

The bottom line from Kasparov's book on AI was that AI researchers want to build AGI, but every decade they are forced to release something to generate revenue, and it's branded as AI until the next time.

And often they get so caught up supporting the latest fake AI craze that they don't get to research AGI.

Lerc

"LLMs are statistical models"

I see this referenced over and over again to trivialise AI, as if it were a fait accompli.

I'm not entirely sure why invoking statistics feels like a rebuttal to me. Putting aside the fact that LLMs are not purely statistics, even if they were, what proof is there that you cannot make a statistical intelligent machine? It would not at all surprise me to learn that someone has made a purely statistical Turing-complete model. To then argue that it couldn't think is to say that computers can never think, and by that, and the fact that we think, you are invoking a soul, God, or Penrose.

lelandbatey

In this one case it's not meant to trivialize, it's meant to point out that LLMs don't behave the way we thought that AI would behave. We thought we'd have 100% logically-sound thinking machines because we built them on top of digital logic. We thought they'd be obtuse, we thought they'd be "book smart but not wise". LLMs are just different from that; hallucinations, the whole "fancy words and great sentences but no substance to a paragraph", all that is different from the rigid but perfect brains we thought AI would bring. That's what "statistical machine" seems to be trying to point out.

It was assumed that if you asked the same AI the same question, you'd get the same answer every time. But that's not how LLMs work (I know you can seed them the same every time and get the same output, but in practice we don't do that, so how we experience them is different).
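
To make the seeding point concrete, a toy sketch of temperature sampling (illustrative only, assuming nothing about any particular vendor's API): with a fixed seed, or temperature near zero, decoding is reproducible; unseeded sampling, the default user experience, is not.

    # Sketch: whether "same question -> same answer" holds is a decoding
    # choice. Greedy/argmax (temperature -> 0) or a fixed seed gives
    # identical output every run; unseeded sampling does not.
    import math
    import random

    def sample_token(logits, temperature, rng):
        # Softmax-sample one token; low temperature approaches argmax.
        if temperature <= 1e-6:
            return max(logits, key=logits.get)  # deterministic greedy pick
        weights = {t: math.exp(v / temperature) for t, v in logits.items()}
        r = rng.random() * sum(weights.values())
        for token, w in weights.items():
            r -= w
            if r <= 0:
                return token
        return token  # fallback for float rounding

    logits = {"yes": 2.0, "no": 1.5, "maybe": 0.5}  # made-up scores

    rng1, rng2 = random.Random(42), random.Random(42)
    run1 = [sample_token(logits, 1.0, rng1) for _ in range(5)]
    run2 = [sample_token(logits, 1.0, rng2) for _ in range(5)]
    assert run1 == run2  # same seed: the same "answer" every time

    rng3 = random.Random()  # no seed: output varies between runs
    print(run1, [sample_token(logits, 1.0, rng3) for _ in range(5)])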

Lerc

That's a very archaic view of AI, like 70's era symbolic AI.

vacuity

Personally, I have a negative opinion of LLMs, but I agree completely. Many people are motivated to reject LLMs solely because they see them as "soulless machines". Judge based on the facts of the matter, and make your values clear if you must bring them into it, but don't pretend you're not applying values when you are. You can do worse: kneejerk emotional reactions are just pointless.

slibhb

I did not in any way "trivialise AI". LLMs are amazing and a massive accomplishment. I just wanted to contrast them to Asimov's conception of AI.

> I'm not entirely sure why invoking statistics feels like a rebuttal to me. Putting aside the fact that LLMs are not purely statistics, even if they were, what proof is there that you cannot make a statistical intelligent machine? It would not at all surprise me to learn that someone has made a purely statistical Turing-complete model. To then argue that it couldn't think is to say that computers can never think, and by that, and the fact that we think, you are invoking a soul, God, or Penrose.

I don't follow this. I don't believe that LLMs are capable of thinking. I don't believe that computers, as they exist now, are capable of thinking (regardless of the program they run). I do believe that it is possible to build machines that can think -- we just don't know how.

To me, the strange move you're making is assuming that we will "accidentally" create thinking machines while doing AI research. On the contrary, I think we'll build thinking, conscious machines after understanding our own consciousness, or at least the consciousness of other animals, and not before.

Lerc

>I did not in any way "trivialise AI". LLMs are amazing and a massive accomplishment. I just wanted to contrast them to Asimov's conception of AI.

Point taken. As lelandbatey said, your comment seems to be the one case where it's not meant to trivialise.

>I don't believe that LLMs are capable of thinking. I don't believe that computers, as they exist now, are capable of thinking (regardless of the program they run). I do believe that it is possible to build machines that can think -- we just don't know how.

The "(regardless of the program they run)" suggests you think that AI cannot be achieved by algorithmic means. That runs a little counter to the belief that it is possible to build thinking machines, unless you think those future machines will have some non-algorithmic enhancement that takes them beyond machines.

I do not assume we will "accidentally" create thinking machines, but I certainly think it's not impossible.

On the other hand I suspect the best chance we have of understanding consciousness will be by attempting to build one.

BeetleB

Reminds me of an old math professor I had. Before word processors, he'd write up the exam on paper, and the department secretary would type it up.

Then when word processors came around, it was expected that faculty members would type it up themselves.

I don't know if there were fewer secretaries as a result, but professors' lives got much worse.

He misses the old days.

zusammen

To be truthful, though, that’s only like 0.01 percent of the “academia was stolen from us and being a professor (if you ever get there at all) is worse” problem.

jhbadger

This wasn't just an "academia" thing, though. All business executives (even low-level ones) had secretaries in the 1980s and earlier too. Typing wasn't something most people could do, and it was seen as a waste of time for them to learn. So people dictated letters to secretaries who typed them. After the popularity of personal computers, it just became part of everyone's job to type their correspondence themselves, and secretaries (greatly reduced in number and rebranded as "assistants" who deal more with planning meetings and such) became limited only to upper management.

n4r9

I've only read the first Foundation novel by Asimov. But what you write applies equally well to many other Golden Age authors, e.g. Heinlein and Bradbury, plus slightly later writers like Clarke. I doubt there was much in the way of autism awareness or diagnosis at the time, but it wouldn't be surprising if any of these authors landed somewhere on the spectrum.

Alfred Bester's "The Stars My Destination" stands out as a shining counterpoint in this era. You don't get much character development like that in other works until the sixties imo.

throwanem

Heinlein doesn't develop his characters? Oh, come on. You can't have read him at all!

wubrr

> LLMs are statistical models trained on human-generated text.

I mean, not only human-generated text. Also, human brains are arguably statistical models trained on human-generated/collected data as well...

slibhb

> Also, human brains are arguably statistical models trained on human-generated/collected data as well...

I'd say no, human brains are "trained" on billions of years of sensory data. A very small amount of that is human-generated.

wubrr

Almost everything we learn in schools, universities, most jobs, history, news, hackernews, etc. is literally human-generated text. Our brains have an efficient structure for learning language, which has evolved over time, but the process of actually learning a language happens after you are born, based on human-generated text/voice. Things like balance/walking, motion control, speaking (physical voice control), and other physical skills are trained on sensory data, but there's no reason LLMs/AIs can't be trained on similar data (and in many cases they already are).

827a

Maybe; at some level, are dogs' brains also simple sensory-collecting statistical models? A human baby and a dog are born on the same day; that dog never leaves the baby's side for 20 years. It sees everything the baby sees, it hears everything the baby hears, and it is given the opportunity to interact with its environment in roughly the same way the human baby does, to the degree to which they are both physically capable. The intelligence differential after that time will still be extraordinary.

My point in bringing up that metaphor is to focus the analogy: when people say "we're just statistical models trained on sensory data", we tend to focus way too much on the "sensory data" part, which has led, for example, to AI manufacturers investing billions of dollars into slurping up as much human intellectual output as possible to train "smarter" models.

The focus on the sensory input inherently devalues our quality of being; it implies that who we are is predominantly explicable by the world around us.

However: we should be focusing on the "statistical model" part. Even if it is accurate to holistically describe the human brain as a statistical model trained on sensory data (which I have doubts about, but those are fine to leave to the side), it's very clear that the fundamental statistical model itself is simply so far superior in human brains that comparing it to an LLM is like comparing us to a dog.

It should also be a focal point for AI manufacturers and researchers. If you are on the hunt for something along the spectrum of human-level intelligence, and during this hunt you are providing it ten thousand lifetimes of sensory data to produce something that, maybe, if you ask it right, behaves similarly to a human who has trained in the domain for only years: you're barking up the wrong tree. What you're producing isn't even on the same spectrum; that doesn't mean it isn't useful, but it's not human-like intelligence.

wubrr

Well the dog brain and human brain are very different statistical models, and I don't think we have any objective way of comparing/quantifying LLMs (as an architecture) vs human brains at this point. I think it's likely LLMs are currently not as good as human brains for human tasks, but I also think we can't say with any confidence that LLMs/NNs can't be better than human brains.

janalsncm

> Isaac Asimov describes artificial intelligence as “a phrase that we use for any device that does things which, in the past, we have associated only with human intelligence.”

This is a pretty good definition, honestly. It explains the AI Effect quite well: calculators aren’t “AI” because it’s been a while since humans were the only ones who could do arithmetic. At one point they were, though.

azinman2

Although calculators can now do things almost no human can do, or at least not in any reasonable time. But most (now) wouldn't call it AI. It's a tool, with a very limited domain.

janalsncm

That’s my point, it’s not AI now. It used to be.

hinkley

Similarly, we esteem performance optimizations so aggressively that a lot of things that used to be called performance work are now called architecture, good design. We just keep moving the goal posts to make things more comfortable.

saalweachter

I mean, at one point "calculator" was a job title.

timewizard

The abacus has existed for thousands of years. Those who had the job of "calculator" also used pencil and paper to manage larger calculations which they would have struggled to do without any tools.

That's humanity. We're tool users above anything else. This gets lost.

musicale

And "computer".

aszantu

Funny thing about Asimov was how he came up with the laws of robotics and then cases where they don't work. There are a few that I remember, one where a robot was lying because a bug in its brain gave it empathy and it didn't want to hurt humans.

nitwit005

I was always a bit surprised other sci fi authors liked the "three laws" idea, as it seems like a technological variation of other stories about instructions or wishes going wrong.

buzzy_hacker

Same here. A main point of I, Robot was to show why the three laws don't work.

cogman10

I may be misremembering, but I thought the main point of the I, Robot series was that regardless of the law, incomplete information can still end up getting someone killed.

In all the cases of killing, the robots were innocent. It was either a human that tricked the robot or didn't tell the robot what they were doing.

For example, a lady killed her husband by asking a robot to detach its arm and give it to her. Once she got it, she beat the husband to death, and the robot didn't have the capability to stop her (since it gave her its arm). That caused the robot to effectively self-destruct.

Giskard, I believe, was the only one that killed people. He ultimately ended up self-destructing as a result (the fate of robots that violate the laws).

aszantu

The story from I, Robot is one of Asimov's stories, and it works exactly as intended: the AI figured that to keep humans safe you have to put them in cages. Humans will always fight over something.

nthingtohide

Narratives build on top of each other so that complex narratives can be built. This is also the reason why Family Guy can speedrun through all the narrative arcs developed by culture in a 30-second clip.

Family Guy Nasty Wolf Pack

https://youtu.be/5oW9mNbMbmY

The perfect wish to outsmart a genie | Chris & Jack

https://youtu.be/lM0teS7PFMo

pfisch

I mean, now we call the three laws "alignment", but it honestly seems inevitable that it will go wrong eventually.

That of course isn't stopping us from marching forwards though in the name of progress.

nix-zarathustra

>he came up with the laws of robotics and then cases where they don't work. There are a few that I remember, one where a robot was lying because a bug in its brain gave it empathy and it didn't want to hurt humans.

IIRC, none of the robots broke the laws of robotics; rather, they ostensibly broke the laws, but on investigation the robots turned out to have been following them because of some quirk.

hinkley

And one that was sacrificing a few for the good of the species. You can save more future humans by killing a few humans today that are causing trouble.

pfisch

Isn't that the plot of westworld season 3?

hinkley

I think better than half the writers on Westworld were not born yet when the OG Foundation books were written.

kagakuninja

In the Foundation books, he revealed that robots were involved behind the scenes, and were operating outside of the strict 3 laws after developing the concept of the 0th law.

>A robot may not harm humanity, or, by inaction, allow humanity to come to harm

Therefore a robot could allow some humans to die, if the 0th law took precedence.

creer

A good conceit or theme by an author on which to base a series of books that will sell? Not everything is an engineering or math project.

soulofmischief

That is still one of my favorite stories of all time. It really sticks with you. It's part of the I, Robot anthology.

chuckadams

It certainly is liberating all our creative works from our possession...

vonneumannstan

Intellectual Property is a questionable idea to begin with...

chuckadams

It's not the loss of ownership I'm lamenting, it's the loss of production by humans in the first place.

vonneumannstan

People made the same argument about Cameras vs Painting. "Humans are no longer creating the art!"

But I doubt most people would subscribe to that view now and would say Photography is an entirely new art form.

Philpax

Humans will always produce; it's just that those productions may not be financially viable, and may not have an audience. Grim, but also not too far off from the status quo today.

immibis

If we're abolishing it, we have to really abolish it, both ways - not abolishing companies' responsibilities while keeping their rights, and individuals' rights while keeping their responsibilities.

pera

It's for sure less questionable than the current proposition of letting a handful of billionaires exploit the effort of millions of workers, without permission and completely disregarding the law, just for the sake of accumulating more power and more billions.

Sure, patent trolls suck, and so does the MAFIAA, but a world where creators have no means to subsist, where everything you do will be captured by AI corps without your permission, just to be regurgitated into a model for profit, sucks way, way more.

adamsilkey

How so? Even in a perfectly egalitarian world, where no one had to compete for food or resources, in art, there would still be a competition for attention and time.

lupusreal

There is the general principle of legal apparatus to facilitate artists getting paid. And then there is the reality of our extant system, which retroactively extends copyright terms so corporations who bought corporations who bought corporations... ...who bought the rights to an artistic work a century ago can continue to collect rent on that today. Whatever you think of the idealistic premise, the reality is absurd.

palmotea

> Intellectual Property is a questionable idea to begin with...

I know! It's totally and completely immoral to give the little guy rights against the powerful. It infringes on the privileges and advantages of the powerful. It is the Amazons, the Googles, the Facebooks of the world who should capture all the economic value available. Everyone else must be content to be paid in exposure for their creativity.

mrdependable

Why do you say that?

justonceokay

If we are headed to a star-trek future of luxury communism, there will definitely be growing pains as the things we value become valueless within our current economic system. Even though the book itself is so-so IMO, Down and Out in the Magic Kingdom provides a look at a future economy where there is an infinite supply of physical goods so the only economy is that of reputation. People compete for recognition instead of money.

This is all theoretical, I don’t know if I believe that we as humans can overcome our desire to hoard and fight over our possessions.

lannisterstark

>star-trek future of luxury communism,

Banks' Culture Communism/Anarchism > Star Trek, any day imho.

robertlagrant

You're saying something exactly backwards from reality. Star Trek is communism (except it's not) because there's no scarcity. It's not selfishness that's the problem. It's the ever-increasing number of things invented inside capitalism we deem essential once invented.

Detrytus

I always say this: we are headed to a star-trek future, but we will not be the Federation, we will become Borg. Between social media platforms, smartphones and "wokeness" the inevitable result is that everybody will be forced into compliance, no originality or divergent thinking will be tolerated.

behringer

7 years, or maybe 14 - that's all anybody needs. Anything else is greed and stops human progress.

Philpax

I appreciate someone named "behringer" posting this sentiment. (https://en.wikipedia.org/wiki/Behringer#Controversies)

Philpax

I'm glad we're seeing the death of the concept of owning an idea. I just hope the people who were relying on owning a slice of the noosphere can find some other way to sustain themselves.

theF00l

Copyright law protects the expression of ideas, not the ideas themselves. My favourite case law reinforcing this was between David Bowie and the Gallagher brothers.

I would argue patents are closer to protecting ideas, and those are alive and well.

I do agree copyright law is terribly outdated but I also feel the pain of the creatives.

01HNNWZ0MV43FF

I just wish it was not, as usual, the people with the most money benefiting first and most

robertlagrant

Did we previously have the concept of owning an idea?

observationist

Lawyers and people with lots of money figured out how to make even bigger piles of money for lawyers and people with lots of money from people who could make things like art, music, and literature.

They occasionally allowed the people who actually make things to become wealthy in order to incentivize other people who make things to continue making things, but mostly it's just the people with lots of money (and the lawyers) who make most of the money.

Studios and publishers and platforms somehow convinced everyone that the "service" and "marketing" they provided was worth a vast majority of the revenue creative works created.

This system should be burned to the ground and reset, and any indirect parties should be legally limited to at most 15% of the total revenues generated by a creative work. We're about to see Hollywood-quality AI video - the cost of movie studios, music, literature, and images is becoming nominal. There are already creative AI series and ongoing works that beat 90's-level visual effects and storyboarding, created and delivered via various platforms for free (although the exposure gets them ad revenue).

We better figure this stuff out, fast, or it's just going to be endless rentseeking by rich people and drama from luddites.

sorokod

Keeping technology secret or forbidden is as old as humanity itself.

dingnuts

patents and copyrights allow ownership of ideas and of the specific expression of ideas

gmuslera

What we are labeling as AI today is different from what it was thought to be in the 90s, or when Asimov wrote most of his stories about robots and other forms of AI.

That said, a variant of the Susan Calvin role could prove to be useful today.

throw_m239339

> What we are labeling as AI today is different from what it was thought to be in the 90s, or when Asimov wrote most of his stories about robots and other forms of AI.

Multivac in "the last question"?

bpodgursky

AI is far closer to Asimov's vision of AI than anyone else's. The "Positronic Brain" is very close to what we ended up with.

The three laws of robotics seemed ridiculous until 2021, when it became clear that you could just give AI general firm guidelines and let them work out the details (and ways to evade the rules) from there.

empath75

Not sure that I agree with that. People have been imagining human-like AI since before computers were even a thing. The Star Trek computer from TNG is basically an LLM, really.

AI _researchers_ had a different idea of what AI would be like, as they were working on symbolic AI, but in the popular imagination, "AI" was a computer that acted and thought like a human.

NoTeslaThrow

> The Star Trek computer from TNG is basically an LLM, really.

The Star Trek computer is not like LLMs: a) it provides reliable answers, b) it is capable of reasoning, c) it is capable of actually interacting with its environment in a rational manner, d) it is infallible unless someone messes with it. Each one of these points is far in the future of LLMs.

lcnPylGDnU4H9OF

Their point is that it seems to function like an LLM even if it's more advanced. The points raised in this comment don't refute that, per the assertion that each of them is in the future of LLMs.

sgt

Yet when you ask it to dim the lights, it dims either way too little or way too much. Poor Geordi.

whilenot-dev

> The Star Trek computer from TNG is basically an LLM, really.

Watched all seasons recently for the first time. While some things are "just" vector search with a voice interface, there are also goodies like "Computer, extrapolate from theoretical database!", or "Create dance partner, female!" :D

For anyone curious: https://www.youtube.com/watch?v=6CDhEwhOm44

palmotea

> The Star Trek computer from TNG is basically an LLM, really.

No. The Star Trek computer is a fictional character, really. It's not a technology any more than Jean-Luc Picard is. It does whatever the writers needed it to do to further the plot.

It reminds me: J. Michael Straczynski (of Babylon 5 fame) was once asked "How fast do Starfuries travel?" and he replied "At the speed of plot."

palmotea

I wouldn't put too much stock in this. Asimov was a fantasy writer telling fictional stories about the future. He was good at it, which is why you listen and why it's enjoyable, but it's still all a fantasy.

triceratops

> Asimov was a fantasy writer

Asimov was mostly not a fantasy writer. He was a science writer and professor of biochemistry. He published over 500 books. I didn't feel like counting but half or more of them are about science. Maybe 20% are science fiction and fantasy.

https://en.wikipedia.org/wiki/Isaac_Asimov_bibliography_(cat...

staticman2

Asimov was not savvy with computers and found it difficult to learn to use a word processor.

MetaWhirledPeas

> I wouldn't put too much stock in this. Asimov was a fantasy writer telling fictional stories about the future.

Why not? Who is this technology expert with flawless predictions? Talking about the future is inherently an exercise of the imagination, which is also what fiction writing is.

And nothing he's saying here contradicts our observations of AI up to this point. AI artwork has gotten good at copying the styles of humans, but it hasn't created any new styles that are at all compelling. So leave that to the humans. The same with writing; AI does a good job at mimicking existing writing styles, but has yet to demonstrate the ability to write anything that dazzles us with its originality. So his prediction is exactly right: AI does work that is really an insult to the complex human brain.

timewizard

There's also Frank Herbert, who saw AI as ruinous to humanity and its evolution, and saw a future where humanity had to fight a war against it, resulting in it being banished from the entire universe.

palmotea

> There's also Frank Herbert, who saw AI as ruinous to humanity and its evolution, and saw a future where humanity had to fight a war against it, resulting in it being banished from the entire universe.

Did he though? Or was the Butlerian Jihad a backstory whose function was to allow him to believably center human characters in his stories, given the sci-fi expectations of the time?

I like Herbert's work, but ultimately he (and Asimov) were producers of stories to entertain people, so entertainment always would take priority over truth (and then there's the entirely different problem of accurately predicting the future).

tehjoker

I think this is kind of misunderstanding scifi a bit. You're right it was designed to be entertaining, but the kernel of it is that they take some existing trend and extrapolate it into the future. Do that enough times, and some of the stories will start to be meaningful looking backwards and the people who made those predictions still deserve credit even if they weren't entirely useful in the forward direction.

triceratops

I always thought the Butlerian Jihad was a convenient way to remove AI as a plot element. Same thing with shields and explosions; it made swordfighting a plausible way to fight in a universe with faster-than-light travel.

calmbell

A fantasy writer telling fictional stories about the future is more credible than many so-called serious people (think Marc Andreessen) who promote any technology as the bee's knees if it can make them money.

palmotea

> A fantasy writer telling fictional stories about the future is more credible than many so-called serious people (think Marc Andreessen) who promote any technology as the bee's knees if it can make them money.

But that's more a knock on people like Marc Andreessen than a reason you should put stock in Asimov.

Jgrubb

> humanity in general will be freed from all kinds of work that’s really an insult to the human brain.

He can only be referring to these Jira tickets I need to write.

BeetleB

There is a Jira MCP server...

fragmede

oh woah https://glama.ai/mcp/servers/@CamdenClark/jira-mcp

and MCP can work with deepseek running locally. hmm...
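
For anyone curious what wiring that up involves, here is a minimal sketch of an MCP tool server using the official Python SDK's FastMCP helper; the Jira lookup below is a hypothetical stub, not the linked jira-mcp server's actual code.

    # Minimal MCP tool server sketch (assumes the official Python SDK:
    # pip install mcp). An MCP-capable client -- including one driving a
    # locally hosted model -- attaches over stdio and calls the tool.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("jira-sketch")

    @mcp.tool()
    def jira_summary(issue_key: str) -> str:
        """Return a one-line summary for a Jira issue (stubbed here)."""
        # A real server would call the Jira REST API with credentials.
        return f"{issue_key}: (stub) summary would come from Jira"

    if __name__ == "__main__":
        mcp.run()  # serve over stdio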

m463

flashback to Tron:

"MCP is highly intelligent and yet ruthless. It apparently wants to get rid of humans and especially users."

https://disney.fandom.com/wiki/Master_Control_Program

icecap12

As someone who just got done putting a bullet in some long-used instances, I both appreciated and needed this laugh. Thanks!

eliaspro

Back then, we also believed that access to every imaginable piece of information through the internet, and the ability to communicate across the globe, would lead to universal wisdom, world peace, and an unimaginable utopia where common sense, based on science and knowledge, prevails.

Oh boy, how foolish we've been!

Fin_Code

I'm just hoping it brings out an explosion of new thought and not less thought. Will likely be both.

shortrounddev2

I have found there to be less diversity in thought on the internet in the last 10 years. I used to find lots of wild ideas and theories out there on obscure sites. Now it seems like every website is the same, talking about the same things

behringer

They say the web is dead, but I think we just have bad search engines.

tim333

If you go on twitter/x you will find a lot of wild ideas, many completely contradictory with other groups on x and/or reality. It can be scary how polarized it is. If you open a new account and follow/like a few people with some odd viewpoint, soon your feed will be filled with that viewpoint, whatever it is.

20after4

Two words: Endless September.

TimorousBestie

I find this difficult to understand. There was a great explosion of conspiracy theories in the last ten years, so you should be seeing more of it.

shortrounddev2

Even the conspiracy theory community has become like this. What used to be a community of passionate skeptics, ufologists, and rabid anti-statists has turned into the most overtly bootlicking right-wing apologists, who apply an incredible amount of mental energy to justifying the actions of what is transparently and blatantly the most corrupt government in American history, so long as that government is weaponized against whatever identity and cultural groups they hate.

immibis

Maybe they're all the same conspiracy theories. All the current conspiracy theories are that immigrants are invading the country and Biden's in on it. Where is the next Time Cube or TempleOS?

bdhcuidbebe

What Asimov calls AI is not the same as what Sam Altman and the other charlatans call AI.

It's usually called AGI these days.

hoseyor

I have a genuine question I can't find or come up with a viable answer to, a matter of said "unpleasantness", as he puts it: how do people make money or otherwise sustain themselves in this AI scenario we are facing?

Has anyone heard a viable solution, or even has one themselves?

I don't hear anything about UBI anymore. Could that be because of the roughly 60+ million alien people who have flooded into western countries from countries with populations so large they are effectively endless? What do we do about that? Will that snuff out any kind of advancement in the west when roughly 6 billion people all want to be in the west, where everyone gets UBI and it is the land of milk and honey?

So what do we do then? We can't all be tech industry people with 6-figure-plus salaries and vested ownership, and most people aren't multi-millionaires who can live far away from the consequences while demanding others subject themselves to them.

Which way?

GeoAtreides

>how do people make money or otherwise sustain themselves in this AI scenario we are facing?

1% of the labour force works in agriculture:

https://ourworldindata.org/grapher/share-of-the-labor-force-...

1%

let that number sink in; think about what it really means.

And what it means is that at least basic food (unprocessed, no meat) could be completely free. It may take some smart logistics, but it's doable. All of our food is already one step, one small step, away from becoming free for everyone.

This applies to clothes and basic tools as well.

slfnflctd

I've always thought there should be a 'minimum viable existence' option for those who are willing to forego most luxuries in exchange for not being required to do anything specific other than abide by reasonable laws.

It would be very interesting to see the percentage breakdowns of how such people chose to spend their time. In my opinion, there would be enough benefit to society at large to make it worthwhile. For a large group (if not the majority), I'm certain the situation would turn out to be completely temporary-- they would have the option to prepare themselves for some type of work they're better adapted to perform and/or enjoy, ultimately enhancing the culture and economy. Most of the rest could be useful as research subjects, if they were willing of course.

Obviously this is a bit of a utopian fantasy, but what can I say, Star Trek primed me to hope for such a future.

nthingtohide

There will be relative scarcity. Consider a scenario where the iPhone 50 is manufactured in a dark factory, but there is still a waiting period to get access to it. This is because of resource bottlenecks.

janalsncm

I have soured on UBI because it tries to use a market solution to deal with problems that I don’t think markets can fix.

I want everyone to have food, housing, healthcare, education, etc. in a post scarcity world. That should be possible. I don’t think giving people cash is the best way to accomplish that. If you want people to have housing, give them housing. If you want people to have food, give them food.

Cash doesn’t solve the supply problem, as we can see with housing now. You would think a rise in the cost of housing would lead to more supply, but the cost of real estate also increases the cost of building.

kogus

I think we need to consider what the end goal of technology is at a very broad level.

Asimov says in this that there are things computers will be good at, and things humans will be good at. By embracing that complementary relationship, we can advance as a society and be free to do the things that only humans can do.

That is definitely how I wish things were going. But it's becoming clear that within a few more years, computers will be far better at absolutely everything than human beings could ever be. We are not far even now from a prompt accepting a request such as "Write another volume of the Foundation series, in the style of Isaac Asimov", and getting a complete novel that does not need editing, does not need review, and is equal to or better than the quality of the original novels.

When that goal is achieved, what then are humans "for"? Humans need purpose, and we are going to be in a position where we don't serve any purpose. I am worried about what will become of us after we have made ourselves obsolete.

mperham

> When that goal is achieved, what then are humans "for"? Humans need purpose, and we are going to be in a position where we don't serve any purpose. I am worried about what will become of us after we have made ourselves obsolete.

Read some philosophy. People have been wrestling with this question forever.

https://en.wikipedia.org/wiki/Philosophy

In the end, all we have is each other. Volunteer, help others.

quxbar

It depends on what you are trying to get out of a novel. If you merely require repetitions on a theme in a comfortable format, Lester Dent style 'crank it out' writing has been dominant in the marketplace for >100 years already (https://myweb.uiowa.edu/jwolcott/Doc/pulp_plot.htm).

Can an AI novel add something new to the conversation of literature? That's less clear to me because it is so hard to get any model I work with to truly stand by its convictions.

lm28469

You could have said the same thing when we invented the steam engine, mechanized looms, &c. As long as the driving force of the economy/technology is "make numbers bigger" there is no end in sight, there will never be enough, there is no goal to achieve.

We already live lives which are artificial in almost every way. People used to die of physical exhaustion and malnutrition; now they die of lack of exercise and gluttony. Surely we could have stopped somewhere in the middle. It's not a resource or technology problem at that point, it's societal/political.

charlie0

It's the human scaling problem. What systems can be used to scale humans to billions while providing the best possible outcomes for everyone? Capitalism? Communism?

Another possibility is to not let us scale. I thought Logan's Run was a very interesting take on this.

jillesvangurp

Evolution is not about being better / winning but about adapting. People will adapt and co-exist. Some better than others.

AIs aren't really part of the whole evolutionary race for survival so far. We create them. And we allow them to run. And then we shut them down. Maybe there will be some AI-enhanced people who start doing better. And maybe the people bit becomes optional at some point. At that point you might argue we've just morphed/evolved into whatever that is.

dominicrose

> I think we need to consider what the end goal of technology is at a very broad level.

"we" don't control ourselves. If humans can't find enough energy sources in 2200 it doesn't mean they won't do it in 1950.

It would be pretty bad to lose access to energy after having it, worse than never having it IMO.

The amount of new technologies discovered in the past 100 years (which is a tiny amount of time) is insane and we haven't adapted to it, not in a stable way.

norir

This is undeniably true. The consequences of a technological collapse at this scale would be far greater than having never had the technology in the first place. For this reason, the people in power (in both industry and government) have more destructive potential than at any time in human history by far. And they act like they have little to no awareness of the enormous responsibility they shoulder.

empath75

> But it's becoming clear that within a few more years, computers will be far better at absolutely everything than human beings could ever be.

Comparative advantage. Even if that's true, AI can't possibly do _everything_. China is better at manufacturing pretty much anything than most countries on earth, but that doesn't mean China is the only country in the world that does manufacturing.

Philpax

> AI can't possibly do _everything_

Why not? There's the human bias of wanting to consume things created by humans - that's fine, I'm not questioning that - but objectively, if we get to human-threshold AGI and continue scaling, there's no reason why it couldn't do everything, and better.

seadan83

Why not? IMO you perhaps underestimate human complexity. There was a Guardian article where researchers created a map of one cubic millimeter of a mouse's brain. It contains 45 km of neurons and billions of synapses. IMO the AGI crowd are suffering from expert-beginner syndrome.

belter

- Despite the flood of benchmark-tuned LLMs, we remain nowhere close to engineering a machine intelligence rivaling that of a cat or a dog, let alone within the next 5 to 10 years.

- The world already hosts millions of organic AI (Actual Intelligence). Many statistically at genius-level IQ. Does their existence make you obsolete?

Philpax

> Despite the flood of benchmark-tuned LLMs, we remain nowhere close to engineering a machine intelligence rivaling that of a cat or a dog, let alone within the next 5 to 10 years.

Depends on your definition of "intelligence." No, they can't reliably navigate the physical world or have long-term memories like cats or dogs do. Yes, they can outperform them on intellectual work in the written domain.

> Does their existence make you obsolete?

Imagine if for everything you tried to do, there was someone else who could do it better, no matter what domain, no matter where you were, and no matter how hard you tried. You are not an economically viable member of society. Some could deal with that level of demoralisation, but many won't.

foobarian

> what then are humans "for"?

Folding laundry

giraffe_lady

Here's a passage from a children's book I've been carrying around in my heart for a few decades:

“I don't like cleaning or dusting or cooking or doing dishes, or any of those things," I explained to her. "And I don't usually do it. I find it boring, you see."

"Everyone has to do those things," she said.

"Rich people don't," I pointed out.

Juniper laughed, as she often did at things I said in those early days, but at once became quite serious.

"They miss a lot of fun," she said. "But quite apart from that--keeping yourself clean, preparing the food you are going to eat, clearing it away afterward--that's what life's about, Wise Child. When people forget that, or lose touch with it, then they lose touch with other important things as well."

"Men don't do those things."

"Exactly. Also, as you clean the house up, it gives you time to tidy yourself up inside--you'll see.”

rqtwteye

A while ago I saw a video of a robot doing exactly that. Seems there is nothing left for us to do.