
Why my p(doom) has risen, dramatically

Animats

Armies of killer robots are real now. Ukraine makes about 4 million military drones a year. Russia makes slightly fewer. Control is becoming more automated, because there are not enough people to control all those drones. Drones have to operate despite jamming, so they need to be fully autonomous on the battlefield.

And yes, there are AI killbot startups.[1] "At SECL Group, our team has vast experience in developing various drone systems, including those related to drone swarms. If you are interested in implementing AI-based target recognition capabilities in your UAVs, feel free to get in touch with us to discuss the details."

[1] https://seclgroup.com/adding-ai-and-ml-to-military-drones-fo...

OgsyedIE

Don't forget the autonomous insect and bird-sized drones for surveillance, artillery spotting and anti-infantry/anti-civilian kill missions.

OgsyedIE

Everybody looks at AI 2027 and nobody looks at Gradual Disempowerment. They came out at around the same time and both portend tremendous volumes of doom with a litany of citations, but the preexisting skynet/paperclipper memes mean that nobody talks at length about the scenario of a few hundred human owners using AI servants to eliminate everybody else, via an incremental slippery slope of good business sense.

https://gradual-disempowerment.ai/

thaumaturgy

I think this comes close to the probable near-future outcome, but is missing an important element.

Over the last several decades we've seen an enormous transfer and consolidation of wealth from many people into a much smaller number of people. I think AI is going to dramatically accelerate this.

Fully open source, locally-hosted AI models are currently lagging far behind their commercial counterparts (in adoption if not capability). The web had a free-for-all period run by free software where it was able to gestate and gain traction before the walled garden era took over. Application development likewise had a long period of time where independent developers were able to build and distribute software (for free or otherwise) before the app store model took over; now, a significant portion of software development needs to be authorized by a third party before it can be distributed to potential users.

We did not get that same gestational period for AI. It went from theory to commercial product astonishingly fast. There are daily threads on HN now comparing how much everyone is "happy" to pay for their favorite commercial AI each month.

Developers are paying companies for the privilege of writing software.

The developers that, for whatever reason, refuse to get on board this train are going to be quickly outcompeted by the rest. Maybe they have been already.

It's likely that by 2028 or thereabouts, we will see a landscape where just a few commercial entities have captured the process of software development. If you want to make money making software, you will have to pay one of them to do it.

OgsyedIE

The people in those commercial entities might end up wholly the property (with human rights refactored/disrupted), or indentured servants, of the members of the alignment team of whichever AI company hits recursive self-improvement first, who will live like new emperors.

vouaobrasil

Gary Marcus poses a dichotomy between extinction and bad actors. I feel a third possibility is much more likely: a world of extreme specialization where AI reigns supreme, and where humans are mainly button-pushers. Those at the top who still enjoy life will be the techies who are good at and enjoy making things with AI, which will be approximately 1% of the population. The other 99% will be administrators, button-pushers, and those on UBI with a fairly meaningless existence without much dignity.

Because once 99% of the population loses the opportunity to learn a skill they are good at, one (importantly!) for which they have some aptitude over others, and to apply it, they will face an existence of very little meaning. Like it or not, people want to be distinguishable from others, and if everyone can do everything with AI, that disappears.

Techies don't like to admit it because they are at the top, but through AI, they are creating their own bubble world with their little toys that will act through the immense power of AI as an oligarchy that rules the listless and depressed masses.

A rather contemptible existence, in my opinion.

ryanmcgarvey

Your description of "little toys" and "immense power of AI" seem to be at odds in your argument. Which is it?

vouaobrasil

"Little toys" was meant to imply the juvenile nature of AI.

edwardbernays

Essentially, turning people into pets vs. enslaving them.

michael_j_x

This is such a utilitarian point of view. Not everyone is defined by their work, not everyone cares about how distinguishable they are from others, or even what others think of them, and I would even argue that very few oligarchs/billionaires etc. belong to the group of people who truly enjoy life.

eisvogel

I don't think Elon has any malice towards his own species. I tend to agree with the first comment on the article re: p(dystopia), which is where almost all LLM services are currently in play, mostly due to the actions of third-party clients.

vouaobrasil

Why not? I think a lot of people with traits like Musk's would make us completely subservient, without any freedom, if they had the technology and opportunity to do so.

margalabargala

The article is talking about actual, literal, species-level extinction. Like, no humans left. That's the sort of "malice towards the species" meant here.

> a lot of people with traits like Musk would make us completely subservient without any freedom

That would be the "dystopia" the person you replied to mentioned.

OgsyedIE

Total death-cult misanthropy is a surprisingly common feature of repeat divorcees of any sex. Broken hearts, the really broken hearts, do that.

Unfortunately, recovering from that state and returning to normality (and gaining or regaining the ability to face oneself seriously and make amends when necessary) usually requires having multiple supportive friends, which Elon hasn't been in a position to get for years.

hiddencost

Given he's engaged in mass murder that's severely destabilizing a significant chunk of the planet (through his unconstitutional culling of USAID among other things), it doesn't seem unreasonable to attribute his behavior to malice.

corpusdeli

Lolwhat?

He’s literally called for jailing and worse of people with progressive ideas. Either he’s a junky who is so high he doesn’t know what he’s saying, or he’s an evil piece of white shit. (I consider “edgelord doing it for the lulz” to be the second category.)

Henchman21

Hey now, don't shoot so low. It's entirely possible for him to be both! :)

archon1410

Seems like all the p(doom) is coming from Elon himself, and not the AI. An "empowered" but unintelligent "stochastic parrot", which seems to be his view of LLMs, is more likely to hinder than help with one's plan for world domination and annihilation.

GuB-42

I thought that p(doom) was the probability of a device running Doom.

Instead it is about how Elon Musk and his AI will somehow end the world... Disappointed.

RagnarD

Elon Musk, pfft. Mark Zuckerberg is a much more plausible concern.

os2warpman

Zuckerberg is incompetent.

He got lucky with Facebook, and that's all, and he has coasted on its momentum ever since.

Every single "play" after that has failed.

Not only have they failed, he's even fallen into the dictator's trap: wealthy men who surround themselves with multiple layers of overlapping and competing staff and hierarchy, which completely insulate and isolate them from reality, start or fund outlandish, impossible vanity projects.

I don't know why these guys don't take the money and run and spend the rest of their lives scuba diving and fixing vintage sports cars in their garage-- but I'm not a sociopath.

Musk is less incompetent.

LeifCarrotson

I assume the simpler explanation to be that many billionaires' and dictators' dream hobbies turn out to be starting or funding outlandish, impossible vanity projects while completely insulated from consequences and isolated from reality.

Yeah, scuba is cool, but I've got a handful of pet engineering projects I'd love to tinker on if I won the lottery (while farming off any drudge work that comes up). I assume all the businesspeople and politicians I also observe to fall into this trap just do the same with their particular area of expertise.

dlivingston

Incompetent people don't stumble their way into a global empire. Nor do simply lucky people. What a strange comment.

os2warpman

Success is almost entirely familial positioning and timing.

The Great Man is a myth that needs to die.

If you or any other person had wealthy parents who sent you to Phillips Exeter and then Harvard, and had the friends and roommates Zuckerberg had, you would be a wealthy banker, a professional, or, with luck, a Facebook founder.

My best friend in the Army was a Phillips Exeter cat who rebelled against his parents and joined the military, married a stripper, drank and fucked his way across a couple of countries, divorced the stripper, got out of the military, went back home with his hat in his hand, got his mom and dad to pay for Marquette, and now he's the CEO of a medical device manufacturing company his parents started.

Nice guy.

Dumb as hell, but nice.

lijok

So the founders of Juicero, Theranos, and WeWork must have been geniuses building sustainable empires?

LeifCarrotson

A modicum of competence, a willingness to engage in cut-throat business tactics to take more than most would think their contributions are worth, a good deal of runway/connections, a little risk-taking...and a huge, huge pile of luck are the only things that could have propelled Zuck to his empire.

Do you really think it's just competence?

A lot of other people also started businesses in ~2004. I worked with one yesterday, he's now a freelancing consultant who is well worth the $180/hour he charges for software development skills in the niche he's built in the last 20 years. He's technically brilliant, brings himself rapidly up to date on new tools, works hard and fast, and is also extremely professional when it comes to communication, finances, and scheduling - a rare combination. I haven't asked about his personal net worth, but he's talked about retirement in a few years, just guessing he's worth maybe a couple million?

Competence cannot be in any way correlated to empire and net worth if Zuck is worth 100,000 times more than my friend. From what I've seen and read from Zuckerberg, he's maybe 0.8x? 0.7? as capable or as intelligent. Maybe, if it's all a front and he's actually the most incredible human being ever, he's 2x. Or even imagine he's a so-called 10x worker. But there's no way that any human's contributions can be imagined to be worth 100,000x more than even an average person's labor.

I'll grant that maybe there's a floor on competence required to reach that level of income. Maybe Zuck isn't truly incompetent. But it's definitely not just competence.

weare138

I'm sorry, but the whole Grok “mechahitler” thing was a feature, not a bug. The timing between that and xAI announcing its AI services for governments, and subsequently being awarded a government contract, was not coincidental. Elon was demonstrating that Grok 4 comes with a state-propaganda switch governments can flip whenever they want.

lijok

p(doom) is 100% and it's silly to think it's anything less. As soon as anyone figures out how to build superintelligence, all it takes is one person to run off with the secret sauce and build a malevolent version of it.

pasquinelli

i won't use the term p(doom), but i agree that the human race is "doomed" in the same way all things that exist are "doomed."

phillipcarter

Cool. Meanwhile, in the real world that isn't governed by science fiction, we're dealing with a billionaire using LLMs to wage information warfare toward explicitly white-supremacist ends right now. Let's focus on that instead?

anonymoushn

please post about my pet issue instead of whatever the comment section is nominally about.

lijok

Replace LLMs with any other source of labor and we've had that for centuries. What's changed?

Henchman21

Smartphones are a true opiate of the masses. If you weren't aware, people on opium are generally really easy to manipulate to whatever ends, especially if you offer more opium.

redeeman

could you provide more details on this?

LeifCarrotson

I believe the parent is talking about Grok, the Nazi Twitter AI from Elon Musk:

https://www.theguardian.com/technology/2025/jul/14/elon-musk...

Gothmog69

lol I wish that were true

OgsyedIE

Eh, AI-driven p(doom) is only 100% if there is no intelligence ceiling around the human level.

But consider, in the alternate universe where we never invent AI, economic p(doom) is also 100%. Humans are fragile and short-lived.

https://youtu.be/b5z5R6xqEG0

lijok

That's interesting - first time I'm hearing the idea there might be an intelligence ceiling around the human level. Who talks about this more?

OgsyedIE

It is an open question, and typing "human intelligence ceiling ai" into your search engine of choice is a good way to see that, but it is basically wishful thinking. The likes of David Deutsch, Judea Pearl, and Noam Chomsky all argue for it at differing levels of academic rigor (IMO Steven Byrnes has produced the most rigorous work on the English internet), but there is a simple objection.

If you have a single AI that is capped at an intelligence ceiling equivalent to human levels of intellect, what is stopping you from running fifty of them, in parallel, at fifty times faster than our normal human clock speed, as a team?