stego-tech
wruza
No one's gonna solve anything. "Our" world is based on greedy morons concentrating power through the hands of just morons who are happy to hit you with a stick. This system doesn't think about what "we" should or are allowed to do, and no one here is on the reasonable side of it either.
> lest we run the very real risk of societal collapse or species extinction
Our part is here. To be replaced with machines if this AI thing isn't just a fart advertised as mining equipment, which it likely is. We run this risk, not they. People worked on their wealth, people can go f themselves now. They are fine with all that. Money (=more power) piles in either way.
No encouraging conclusion.
jrvarela56
wruza
Thanks for the read. One could think that the answer is to simply stop being a part of it, but then again you're from the genus that outcompeted everyone else in staying alive. Nature is such a shitty joke by design, not sure how one is supposed to look at the hypothetical designer with warmth in their heart.
YetAnotherNick
> very real risk of societal collapse or species extinction
News like this is the reason many people have stopped listening to climate warnings: the warnings are false.
No, there is no risk of species extinction in the near future due to climate change.
bsenftner
Whatever the future is, it is not American, not the United States. The US's cultural individualism has been capitalistically weaponized, and the educational foundation to take the country forward is not there. The US is kaput, and we are merely observing the ugly demise. The future is Asia, with all of Western culture going down. Yes, it is not pretty: the failed experiment of American self-rule.
nroets
I fail to see how corporations are responsible for the climate crisis: Politicians won't tax gas because they'll get voted out.
We know that Trump is not captured by corporations because his trade policies are terrible.
If anything, social media is the evil that's destroying the political center: Americans are no longer reading mainstream newspapers or watching mainstream TV news.
The EU is saying the election in Romania was manipulated through TikTok accounts and media.
baq
If you put a knife in someone’s heart, you’re the one who did it and ultimately you’re responsible. If someone told you to do it and you were just following orders… you still did it. If you say there were no rules against putting knives in other people’s hearts, you still did it and you’re still responsible.
If it’s somehow different for corporations, please enlighten me how.
nroets
The oil companies are saying their product is vital to the economy, and they are not wrong. How else will we get food from the farms to the store? Ambulances to the hospitals? And many, many other things.
Taxes are the best way to change behaviour (smaller cars, driving less, less flying, etc). So the government and the people who vote for it are to blame.
netsharc
> Politicians won't tax gas because they'll get voted out.
I wonder if that's corporations' fault after all: shitty working conditions and shitty wages, so that Bezos can afford to send penises into space. What poor person would agree to higher tax on gas? And the corps are the ones backing politicians who'll propagandize that "Unions? That's communism! Do you want to be Chaina?!" (and spread by those dickheads on the corporate-owned TV and newspaper, drunk dickheads who end up becoming defense secretary)
nroets
When people have more money, they tend to buy larger cars that they drive further. Flying is also a luxury.
So corporations are involved in the sense that they pay people more than a living wage.
sofixa
> Politicians won't tax gas because they'll get voted out.
Have you seen gas tax rates in the EU?
> We know that Trump is not captured by corporations because his trade policies are terrible.
Unless you think it's a long con for some rich people to be able to time the market by getting him to crash it.
> The EU is saying the elections in Romania was manipulated through manipulation of TikTok accounts and media.
More importantly, Romanian courts say that too. And it was all out in the open, so not exactly a secret.
lucianbr
Romanian courts say all kinds of things, many of them patently false. It's absurd to claim that because Romanian courts say something, it must be true. It's absurd in principle, because there's nothing in the concept of a court that makes it infallible, and it's absurd in this precise case, because we are corrupt as hell.
I'm pretty sure the election was manipulated, but the court only said so because it benefits the incumbents, which control the courts and would lose their power.
It's a struggle between local thieves and putin, that's all. The local thieves will keep us in the EU, which is much better than the alternative, but come on. "More importantly, Romanian courts say so"? Really?
ivraatiems
Though I think it is probably mostly science-fiction, this is one of the more chillingly thorough descriptions of potential AGI takeoff scenarios that I've seen. I think part of the problem is that the world you get if you go with the "Slowdown"/somewhat more aligned world is still pretty rough for humans: What's the point of our existence if we have no way to meaningfully contribute to our own world?
I hope we're wrong about a lot of this, and AGI turns out to either be impossible, or much less useful than we think it will be. I hope we end up in a world where humans' value increases, instead of decreasing. At a minimum, if AGI is possible, I hope we can imbue it with ethics that allow it to make decisions that value other sentient life.
Do I think this will actually happen in two years, let alone five or ten or fifty? Not really. I think it is wildly optimistic to assume we can get there from here - where "here" is LLM technology, mostly. But five years ago, I thought the idea of LLMs themselves working as well as they do at speaking conversational English was essentially fiction - so really, anything is possible, or at least worth considering.
"May you live in interesting times" is a curse for a reason.
zdragnar
> What's the point of our existence if we have no way to meaningfully contribute to our own world?
You may find this to be insightful: https://meltingasphalt.com/a-nihilists-guide-to-meaning/
In short, "meaning" is a contextual perception, not a discrete quality, though the author suggests it can be quantified based on the number of contextual connections to other things with meaning. The more densely connected something is, the more meaningful it is; my wedding is meaningful to me because my family and my partners family are all celebrating it with me, but it was an entirely meaningless event to you.
Thus, the meaningfulness of our contributions remains unchanged, as the meaning behind them is not dependent upon the perspective of an external observer.
ionwake
Please don't be offended by my opinion; I mean it in good humour, to share some strong disagreements. I'm going to give my take after reading your comment and the article, both of which seem completely OTT (context-wise, regarding my opinions).
>meaning behind them is not dependent upon the perspective of an external observer.
(Yes brother like cmon)
Regarding the author, I get the impression he grew up without a strong father figure? This isn't ad hominem; I just get the feeling of someone who is so confused and lost in life that he is severely depressed, possibly related to his directionless life. He seems so confused he doesn't even take seriously the fact that most humans find their own meaning in life, and says he's not even going to consider this, finding it futile (he states this near the top of the article).
I believe his rejection of a simple basic core idea ends up in a verbal blurb which itself is directionless.
My opinion (which, yes, may be more flawed than anyone's) is to deal with Maslow's hierarchy, and then the prime directive for a living organism after survival, which is reproduction. Only after this has been achieved can you then work towards your family, community and nation.
This may seem trite, but I do believe that this is natural for someone with a relatively normal childhood.
My aim is not to disparage; it's to give my honest opinion of why I disagree and possible reasons for it. If you disagree with anything I have said, please correct me.
Thanks for sharing the article though it was a good read - and I did struggle myself with meaning sometimes.
lm28469
> "Slowdown"/somewhat more aligned world is still pretty rough for humans: What's the point of our existence if we have no way to meaningfully contribute to our own world?
We spend the best 40 years of our lives working 40-50 hours a week to enrich the top 0.1% while living in completely artificial cities. People should wonder what the point of our current system is, instead of worrying about a Terminator-tier sci-fi system that may or may not come sometime in the next 5 to 200 years.
joshdavham
> I hope we're wrong about a lot of this, and AGI turns out to either be impossible, or much less useful than we think it will be.
For me personally, I hope that we do get AGI. I just don't want it by 2027. That feels way too fast to me. But AGI 2070 or 2100? That sounds much more preferable.
abraxas
I think, LLM or no LLM, the emergence of intelligence appears to be closely related to the number of synapses in a network, whether biological or digital. If my hypothesis is roughly true, it means we are several orders of magnitude away from AGI. At least the kind of AGI that can be embodied in a fully functional robot with a sensory apparatus that rivals the human body. Building circuits of this density is likely to take decades. Most probably a transistor-based, silicon substrate can't be pushed that far.
joshjob42
I think generally the expectation is that there are around 100T synapses in the brain, and of course it's probably not a 1:1 correspondence with neural networks, but it doesn't seem infeasible at all to me that a dense-equivalent 100T parameter model would be able to rival the best humans if trained properly.
If it's basically a transformer, that means it needs ~200T flops per token at inference time. The paper assumes humans "think" at ~15 tokens/second, which is about 10 words, similar to the reading speed of a college graduate. So that would be ~3 petaflops of sustained compute.
Assuming that's fp8, an H100 could do ~4 petaflops, and the authors of AI 2027 guesstimate that purpose wafer scale inference chips circa late 2027 should be able to do ~400petaflops for inference, ~100 H100s worth, for ~$600k each for fabrication and installation into a datacenter.
Rounding that basically means ~$6k would buy you the compute to "think" at 10 words/second. Generally speaking that'd probably work out to maybe $3k/yr after depreciation and electricity costs, or ~30-50¢/hr of "human thought equivalent" 10 words/second. Running an AI at 50x human speed 24/7 would cost ~$23k/yr, so 1 OpenBrain researcher's salary could give them a team of ~10-20 such AIs running flat out all the time. Even if you think the AI would need an "extra" 10 or even 100x in terms of tokens/second to match humans, that still puts you at genius level AIs in principle runnable at human speed for 0.1 to 1x the median US income.
There's an open question whether training such a model is feasible in a few years, but the raw compute capability at the chip level to plausibly run a model that large at enormous speed at low cost is already existent (at the street price of B200's it'd cost ~$2-4/hr-human-equivalent).
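The arithmetic chain above can be sanity-checked in a few lines. Every input here is one of the comment's own assumptions (synapse count, tokens/second, hypothetical chip specs), not an established fact:

```python
# Back-of-the-napkin check of the inference-cost chain sketched above.
# All inputs are the comment's assumptions, not measurements.

params = 100e12                # assumed dense-equivalent parameters (~synapse count)
flops_per_token = 2 * params   # ~2 FLOPs per parameter per token for a transformer
tokens_per_sec = 15            # assumed "human thinking speed" (~10 words/s)

flops_needed = flops_per_token * tokens_per_sec    # sustained FLOP/s per stream
print(f"{flops_needed / 1e15:.0f} petaFLOP/s")     # → 3 petaFLOP/s

chip_flops = 400e15            # hypothetical late-2027 wafer-scale inference chip
chip_cost = 600_000            # assumed installed cost, USD
streams_per_chip = chip_flops / flops_needed       # ~133 human-speed streams
cost_per_stream = chip_cost / streams_per_chip
print(f"${cost_per_stream:,.0f} per human-speed stream")  # → $4,500
```

The ~$4.5k figure is consistent with the comment's "~$6k" after its rounding; the amortized-cost and speedup numbers downstream depend on further depreciation and electricity assumptions not checked here.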
nopinsight
If by “several” orders of magnitude, you mean 3-5, then we might be there by 2030 or earlier.
baq
Exponential growth means the first order of magnitude comes slowly and the last one runs past you unexpectedly.
Palmik
Exponential growth generally means that the time between each order of magnitude is roughly the same.
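Both framings are consistent, which a toy calculation makes concrete (the one-year doubling time is an arbitrary illustrative assumption): the time per order of magnitude is constant, but the absolute gains concentrate at the end.

```python
import math

# With a fixed doubling time, every 10x takes the same amount of time...
doubling_time_years = 1.0  # arbitrary assumption for illustration
years_per_oom = doubling_time_years * math.log2(10)
print(f"{years_per_oom:.2f} years per order of magnitude")  # → 3.32 years

# ...yet in absolute terms the final order of magnitude dwarfs the rest.
total = 10_000                        # after 4 orders of magnitude of growth
last_oom_gain = total - total / 10    # amount gained in the final 10x alone
print(f"{last_oom_gain / total:.0%} of the total arrives in the last 10x")  # → 90%
```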
ivraatiems
I think there is a good chance you are roughly right. I also think that the "secret sauce" of sapience is probably not something that can be replicated easily with the technology we have now, like LLMs. They're missing contextual awareness and processing which is absolutely necessary for real reasoning.
But even so, solving that problem feels much more attainable than it used to be.
throwup238
I think the missing secret sauce is an equivalent to neuroplasticity. Human brains are constantly being rewired and optimized at every level: synapses and their channels undergo long term potentiation and depression, new connections are formed and useless ones pruned, and the whole system can sometimes remap functions to different parts of the brain when another suffers catastrophic damage. I don’t know enough about the matrix multiplication operations that power LLMs, but it’s hard to imagine how that kind of organic reorganization would be possible with GPUs matmul. It’d require some sort of advanced “self aware” profile guided optimization and not just trial and error noodling with Torch ops or CUDA kernels.
I assume that thanks to the universal approximation theorem it's theoretically possible to emulate the physical mechanism, but at what hardware and training cost? I've done back-of-the-napkin math on this before [1] and the number of "parameters" in the brain is at least 2-4 orders of magnitude more than state-of-the-art models. But that's just the current weights; what about the history that actually enables the plasticity? Channel threshold potentials are also continuous rather than discrete, and emulating them might require full fp64, so I'm not sure how we're even going to get to the memory requirements in the next decade, let alone whether any architecture on the horizon can emulate neuroplasticity.
Then there’s the whole problem of a true physical feedback loop with which the AI can run experiments to learn against external reward functions and the core survival reward function at the core of evolution might itself be critical but that’s getting deep into the research and philosophy on the nature of intelligence.
narenm16
i agree. it feels like scaling up these large models is such an inefficient route that seems to be warranting new ideas (test-time compute, etc).
we'll likely reach a point where it's infeasible for deep learning to completely encompass human-level reasoning, and we'll need neuroscience discoveries to continue progress. altman seems to be hyping up "bigger is better," not just for model parameters but openai's valuation.
TheDong
> What's the point of our existence if we have no way to meaningfully contribute to our own world?
For a sizable number of humans, we're already there. The vast majority of hacker news users are spending their time trying to make advertisements tempt people into spending money on stuff they don't need. That's an active societal harm. It doesn't contribute in any positive way to the world.
And yet, people are fine to do that, and get their dopamine hits off instagram or arguing online on this cursed site, or watching TV.
More people will have bullshit jobs in this SF story, but a huge number of people already have bullshit jobs, and manage to find a point in their existence just fine.
I, for one, would be happy to simply read books, eat, and die.
john_texas
Targeted advertising is about determining and giving people exactly what they need. If successful, this increases consumption and grows the productivity of the economy. It's an extremely meaningful job as it allows for precise, effective distribution of resources.
baron816
My vision for an ASI future involves humans living in simulations that are optimized for the human experience. That doesn't mean we just live in a paradise and are happy all the time. We'd experience dread and loss and fear, but it would ultimately lead to a deeply satisfying outcome. And we'd be able to choose to forget things, including whether we're in a simulation, so that it feels completely indistinguishable from base reality. You'd live indefinitely, experiencing trillions of lifespans where you get to explore the multiverse inside and out.
My solution to the alignment problem is that an ASI could just stick us in tubes deep in the Earth's crust; it just needs to hijack our nervous systems to input signals from the simulation. The ASI could have the whole rest of the planet, or it could move us to some far-off moon in the outer solar system. I don't care. It just needs to do two things for its creators: preserve lives and optimize for long-term human experience.
arisAlexis
Do you really think that AGI is impossible after all that has happened up to today? How is this possible?
KaiserPro
> AI has started to take jobs, but has also created new ones.
Yeah nah, there's a key thing missing here: the number of jobs created needs to be more than the ones destroyed, they need to be better paying, and they need to appear in time.
History says that when this actually happens, an entire generation is yeeted onto the streets (see powered looms, the Jacquard machine, steam-powered machine tools). All of that cheap labour needed to power the new towns and cities was created by the automation of agriculture and artisan jobs.
Dark satanic mills were fed the descendants of once reasonably prosperous craftspeople.
AI as presented here will kneecap the wages of a good proportion of the decent paying jobs we have now. This will cause huge economic disparities, and probably revolution. There is a reason why the royalty of Europe all disappeared when they did...
So no, the stock market will not be growing because of AI, it will be in spite of it.
Plus, China knows that unless it can occupy most of its population with some sort of work, it is finished. AI and decent robot automation are an existential threat to the CCP, as much as to whatever remains of the "West".
baq
> So no, the stock market will not be growing because of AI, it will be in spite of it.
The stock market will be one of the very few ways you will be able to own some of that AI… assuming it won’t be nationalized.
kypro
> and probably revolution
I theorise that revolution would be near-impossible in a post-AGI world. If people consider where power comes from, it's relatively obvious that people will likely suffer and die en masse if we ever create AGI.
Historically the general public have held the vast majority of power in society. 100+ years ago this would have been physical power: the state had to keep you happy or the public would come for them with pitchforks. But in an age of modern weaponry, the public today would pose little physical threat to the state.
Instead, in today's democracy, power comes from the public's collective labour and purchasing power. A government can't risk upsetting people too much because a government's power today is not a product of its standing army, but of its economic strength. A government needs workers to create businesses and produce goods, and therefore the goals of government generally align with the goals of the public.
But in a post-AGI world, neither businesses nor the state need workers or consumers. In this world, if you wanted something, you wouldn't pay anyone for it or hire workers to produce it; instead you would just ask your fleet of AGIs to get you the resource.
In this world people become more like pests. They offer no economic value yet demand that AGI owners (whether publicly or privately owned) share resources with them. If people revolted, any AGI owner would be far better off just deploying a bioweapon to humanely kill the protestors rather than sharing resources with them.
Of course, this is assuming the AGI doesn't have its own goals and just sees the whole of humanity as a nuisance to be stepped over, in the same way humans will happily step over animals if they interfere with our goals.
Imo humanity has 10-20 years left max if we continue on this path. There can be no good outcome of AGI, because it wouldn't even make sense for the AGI, or those who control the AGI, to be aligned with the goals of humanity.
wkat4242
> I theorise that revolution would be near-impossible in a post-AGI world. If people consider where power comes from, it's relatively obvious that people will likely suffer and die en masse if we ever create AGI.
I agree but for a different reason. It's very hard to outsmart an entity with an IQ in the thousands and pervasive information gathering. For a revolution you need to coordinate. The Chinese know this very well and this is why they control communication so closely (and why they had Apple restrict AirDrop). But their security agencies are still beholden to people with average IQs and the inefficient communication between them.
An entity that can collect all this info on its own and have a huge IQ to spot patterns and not have to communicate it to convince other people in its organisation to take action, that will crush any fledgling rebellion. It will never be able to reach critical mass. We'll just be ants in an anthill and it will be the boot that crushes us when it feels like it.
robinhoode
> In this world people become more like pests. They offer no economic value yet demand that AGI owners (whether publicly or privately owned) share resources with them. If people revolted, any AGI owner would be far better off just deploying a bioweapon to humanely kill the protestors rather than sharing resources with them.
This is a very doomer take. The threats are real, and I'm certain some people feel this way, but eliminating large swaths of humanity is something dictatorships have tried in the past.
Waking up every morning means believing there are others who will cooperate with you.
Most of humanity has empathy for others. I would prefer to have hope that we will make it through, rather than drown in fear.
758597464
> This is a very doomer take. The threats are real, and I'm certain some people feel this way, but eliminating large swaths of humanity is something dictatorships have tried in the past.
Tried, and succeeded in. In times where people held more power than today. Not sure what point you're trying to make here.
> Most of humanity has empathy for others. I would prefer to have hope that we will make it through, rather than drown in fear.
I agree that most of humanity has empathy for others — but it's been shown that the prevalence of psychopaths increases as you climb the leadership ladder.
Fear or hope are the responses of the passive. There are other routes to take.
Centigonal
I think "resource curse" countries are a great surrogate for studying possible future AGI-induced economic and political phenomena. A country like the UAE (oil) or Botswana (diamonds) essentially has an economic equivalent to AGI: they control a small, extremely productive utility (an oilfield or a mine instead of a server farm), and the wealth generated by that utility is far in excess of what those countries' leaders need to maintain power. Sure, you hire foreign labor and trade for resources instead of having your AGI supply those things, but the end result is the same.
dovin
Dogs offer humans no economic value, but we haven't genocided them. There are a lot of ways that we could offer value that's not necessarily just in the form of watts and minerals. I'm not so sure that our future superintelligent summoned demons will be motivated purely by increasing their own power, resources, and leverage. Then again, maybe they will. Thus far, AI systems that we have created seem surprisingly goal-less. I'm more worried about how humans are going to use them than some sort of breakaway event but yeah, don't love that it's a real possible future.
chipsrafferty
A world in which most humans fill the role of "pets" of the ultra rich doesn't sound that great.
OgsyedIE
Unfortunately the current system is doing a bad job of finding replacements for dwindling crucial resources such as petroleum basins, new generations of workers, unoccupied orbital trajectories, fertile topsoil and copper ore deposits. Either the current system gets replaced with a new system or it doesn't.
pydry
>History says that actually when this happens, an entire generation is yeeted on to the streets
History hasn't had to contend with a birth rate of 0.7-1.6.
It's kind of interesting that the elite capitalist media (economist, bloomberg, forbes, etc) is projecting a future crisis of both not enough workers and not enough jobs simultaneously.
wkat4242
I don't really get the American preoccupation with birth rates. We're already way overpopulated for our planet, and this is showing in environmental issues, housing costs, overcrowded cities, etc.
It's totally a great thing if we start plateauing our population and even reduce it a bit. And no we're not going extinct. It'll just cause some temporary issues like an ageing population that has to be cared for but those issues are much more readily fixable than environmental destruction.
ahtihn
The planet is absolutely not over populated.
Overcrowded cities and housing costs aren't an overpopulation problem but a problem of concentrating economic activity in certain places.
NitpickLawyer
> I don't really get the American preoccupation with birth rates.
Japan is currently in the finding out phase of this problem.
yoyohello13
I think it's more of a "be fruitful and multiply" thing than an actual existential-threat thing. You can see that many of the loudest people talking about it either have religious undertones or want more peasants to work the factories.
Demographic shift will certainly upset the status quo, but we will figure out how to deal with it.
torlok
Don't try to reason with this population collapse nonsense. This has always been about racists fearing that "not enough" white westerners are being born, or about industrialists wanting infinite growth. For some prominent technocrats it's both.
alxjrvs
Racist fears of "replacement", mostly.
chipsrafferty
It's the only way to increase profits under capitalism in the long term once you've optimized the technology.
mattnewton
I think a good part of it is fear of a black planet.
torlok
Hayek's ideas have been pushed by US corporations so hard for so long that regular people treat the invisible hand of the market like it's gospel.
torginus
Much has been made in this article about autonomous agents' ability to do research by browsing the web; the web is 90% garbage by weight (including articles on certain specialist topics).
And it shows. When I used GPT's Deep Research to research the topic, it generated a shallow and largely incorrect summary of the issue, owing mostly to its inability to find quality material; instead it ended up going to places like Wikipedia and random infomercial listicles found on Google.
I have a trusty electronics textbook written in the 80s; I'm sure generating a similarly accurate, correct and deep analysis of circuit design using only Google would be 1000x harder than sitting down, working through that book and understanding it.
Aurornis
This story isn’t really about agents browsing the web. It’s a fiction about a company that consumes all of the web and all other written material into a model that doesn’t need to browse the web. The agents in this story supersede the web.
But your point hits on one of the first cracks to show in this story: We already have companies consuming much of the web and training models on all of our books, but the reports they produce are of mixed quality.
The article tries to get around this by imagining models and training runs a couple orders of magnitude larger will simply appear in the near future and the output of those models will yield breakthroughs that accelerate the next rounds even faster.
Yet here we are struggling to build as much infrastructure as possible to squeeze incremental improvements out of the next generation of models.
This entire story relies on AI advancement accelerating faster in a self-reinforcing way in the coming couple of years.
adastra22
There's an old adage in AI: garbage in, garbage out. Consuming and training on the whole internet doesn't make you smarter than the average intelligence of the internet.
drchaos
> Consuming and training on the whole internet doesn't make you smarter than the average intelligence of the internet.
This is only true as long as you are not able to weigh the quality of a source. Just like getting spam in your inbox may waste your time, but it doesn't make you dumber.
dimitri-vs
Interesting, I've had the exact opposite experience. For example, I was curious why in metal casting the top box is called the cope and the bottom the drag. It found very niche information and quotes from page 100 of a PDF on some random government website. The whole report was extremely detailed and verifiable if I followed its links.
That said, I suspect (and am already starting to see) increased use of anti-bot protection to combat browser-use agents.
somerandomness
Agreed. However, source curation and agents are two different parts of Deep Research. What if you provided that textbook to a reliable agent?
Plug: We built https://RadPod.ai to allow you to do that, i.e. Deep Research on your data.
preommr
So, once again, we're in the era of "There's an [AI] app for that".
skeeter2020
that might solve your sourcing problem, but now you need to have faith it will draw conclusions and parallels from the material accurately. That seems even harder than the original problem; I'll stick with decent search on quality source material.
somerandomness
The solution is a citation mechanism that points you directly where in the source material it comes from (which is what we tried to build). Easy verification is important for AI to have a net-benefit to productivity IMO.
demadog
RadPod - what models do you use to power it?
beklein
Older and related article from one of the authors, titled "What 2026 looks like", that is holding up very well over time. Written in mid-2021 (pre-ChatGPT).
https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-...
//edit: remove the referral tags from URL
samth
I think it's not holding up that well outside of predictions about AI research itself. In particular, he makes a lot of predictions about AI impact on persuasion, propaganda, the information environment, etc that have not happened.
LordDragonfang
Could you give some specific examples of things you feel definitely did not come to pass? Because I see a lot of people here talking about how the article missed the mark on propaganda; meanwhile I can tab over to twitter and see a substantial portion of the comment section of every high-engagement tweet being accused of being Russia-run LLM propaganda bots.
Aurornis
Agree. The base claims about LLMs getting bigger, more popular, and capturing people's imagination are right. Those claims are as easy as it gets, though.
Look into the specific claims and it's not as amazing. Like the claim that models will require an entire year to train, when in reality it's on the order of weeks.
The societal claims also fall apart quickly:
> Censorship is widespread and increasing, as it has for the last decade or two. Big neural nets read posts and view memes, scanning for toxicity and hate speech and a few other things. (More things keep getting added to the list.) Someone had the bright idea of making the newsfeed recommendation algorithm gently ‘nudge’ people towards spewing less hate speech; now a component of its reward function is minimizing the probability that the user will say something worthy of censorship in the next 48 hours.
This is a common trend in rationalist and "X-risk" writers: Write a big article with mostly safe claims (LLMs will get bigger and perform better!) and a lot of hedging, then people will always see the article as primarily correct. When you extract out the easy claims and look at the specifics, it's not as impressive.
This article also shows some major signs that the author is deeply embedded in specific online bubbles, like this:
> Most of America gets their news from Twitter, Reddit, etc.
Sites like Reddit and Twitter feel like the entire universe when you're embedded in them, but when you step back and look at the numbers only a fraction of the US population are active users.
madethisnow
something you can't know
elicksaur
This doesn’t seem like a great way to reason about the predictions.
For something like this, saying “There is no evidence showing it” is a good enough refutation.
Counterpointing that “Well, there could be a lot of this going on, but it is in secret.” - that could be a justification for any kooky theory out there. Bigfoot, UFOs, ghosts. Maybe AI has already replaced all of us and we’re Cylons. Something we couldn’t know.
The predictions are specific enough that they are falsifiable, so they should stand or fall based on the clear material evidence supporting or contradicting them.
motoxpro
It's incredible how much of it broadly aligns with what has happened, especially because it was written before ChatGPT.
FairlyInvolved
There's a pretty good summary of how well it has held up here, by the significance of each claim:
https://www.lesswrong.com/posts/u9Kr97di29CkMvjaj/evaluating...
reducesuffering
Will people finally wake up that the AGI X-Risk people have been right and we’re rapidly approaching a really fucking big deal?
This forum has been so behind for too long.
Sama has been saying this a decade now: “Development of Superhuman machine intelligence is probably the greatest threat to the continued existence of humanity” 2015 https://blog.samaltman.com/machine-intelligence-part-1
Hinton, Ilya, Dario Amodei, RLHF inventor, Deepmind founders. They all get it, which is why they’re the smart cookies in those positions.
First stage is denial, I get it, not easy to swallow the gravity of what’s coming.
ffsm8
People have been predicting the singularity to occur somewhere around 2030 to 2045 waaaay further back than 2015. And not just by enthusiasts; I dimly remember an interview with Richard Dawkins from back in the day...
Though that doesn't mean that the current version of language models will ever achieve AGI, and I sincerely doubt they will. They'll likely be a component in the AI, but likely not the thing that "drives" it.
pixl97
>This forum has been so behind for too long.
There is a strong financial incentive for a lot of people on this site to deny they are at risk from it, or to deny what they are building has risk and they should have culpability from that.
samr71
It's not something you need to worry about.
If we get the Singularity, it's overwhelmingly likely Jesus will return concurrently.
hn_throwaway_99
> Will people finally wake up that the AGI X-Risk people have been right and we’re rapidly approaching a really fucking big deal?
OK, say I totally believe this. What, pray tell, are we supposed to do about it?
Don't you see the irony of quoting Sama's dire warnings about the development of AI without at least mentioning that he is at the absolute forefront of the push to build the very technology that can destroy all of humanity? It's like he's saying "This potion can destroy all of humanity if we make it" as he works faster and faster to figure out how to make it.
I mean, I get it, "if we don't build it, someone else will", but all of the discussion around "alignment" seems just blatantly laughable to me. If on one hand your goal is to build "super intelligence", i.e. way smarter than any human or group of humans, how do you expect to control that super intelligence when you're just acting at the middling level of human intelligence?
While I'm skeptical on the timeline, if we do ever end up building super intelligence, the idea that we can control it is a pipe dream. We may not be toast (I mean, we're smarter than dogs, and we keep them around), but we won't be in control.
So if you truly believe super intelligent AI is coming, you may as well enjoy the view now, because there ain't nothing you or anyone else will be able to do to "save humanity" if or when it arrives.
goatlover
> "Development of Superhuman machine intelligence is probably the greatest threat to the continued existence of humanity”
If that's really true, why is there such a big push to rapidly improve AI? I'm guessing OpenAI, Google, Anthropic, Apple, Meta, Boston Dynamics don't really believe this. They believe AI will make them billions. What is OpenAI's definition of AGI? A model that makes $100 billion?
archagon
And why are Altman's words worth anything? Is he some sort of great thinker? Or a leading AI researcher, perhaps?
No. Altman is in his current position because he's highly effective at consolidating power and has friends in high places. That's it. Everything he says can be seen as marketing for the next power grab.
cavisne
This article was prescient enough that I had to check in wayback machine. Very cool.
torginus
I'm not seeing the prescience here - I don't wanna go through the specific points but the main gist here seems to be that chatbots will become very good at pretending to be human and influencing people to their own ends.
I don't think much has happened on these fronts (owing to a lack of interest, not technical difficulty). AI boyfriends/roleplaying etc. seem to have stayed a very niche interest, with models improving very little over GPT-3.5, and the actual products are seemingly absent.
It's very much a product of the culture-war era, where one of the scary scenarios shown off is a chatbot riling up a set of internet commenters, goading them into lashing out against modern leftist orthodoxy, and then getting them cancelled.
With all the strongholds of leftist orthodoxy falling into Trump's hands overnight, this view of the internet seems outdated.
Troll chatbots are still a minor weapon in information warfare. The 'opinion bubbles' and the manipulation of trending topics on social media (with the most influential content still written by humans), used to change the perception of what the popular consensus is, still seem to hold up as the primary tools of influence.
Nowadays, when most people are concerned about stuff like 'will the US go into a shooting war against NATO' or 'will they manage to crash the global economy', just to name a few of the dozen immediately pressing global issues, people are worried about different things.
At the same time, there's very little mention of 'AI will take our jobs and make us poor' in both the intellectual and physical realms, something that's driving most people's anxiety around AI nowadays.
It also presents the 'superintelligent unaligned AI will kill us all' argument so often made by alignment people as the primary threat, rather than the more plausible 'people controlling AI are the real danger'.
dkdcwashere
> The alignment community now starts another research agenda, to interrogate AIs about AI-safety-related topics. For example, they literally ask the models “so, are you aligned? If we made bigger versions of you, would they kill us? Why or why not?” (In Diplomacy, you can actually collect data on the analogue of this question, i.e. “will you betray me?” Alas, the models often lie about that. But it’s Diplomacy, they are literally trained to lie, so no one cares.)
…yeah?
botro
This is damn near prescient, I'm having a hard time believing it was written in 2021.
He did get this part wrong though, we ended up calling them 'Mixture of Experts' instead of 'AI bureaucracies'.
robotresearcher
We were calling them 'Mixture of Experts' ~30 years before that.
stavros
I think the bureaucracies part is referring more to Deep Research than to MoE.
smusamashah
How does it talk about GPT-1 or 3 if it was before ChatGPT?
dragonwriter
GPT-3 (and, naturally, all prior versions even farther back) was released ~2 years before ChatGPT (whose launch model was GPT-3.5)
The publication date on this article is about halfway between GPT-3 and ChatGPT releases.
Tenoke
GPT-2 for example came out in 2019. ChatGPT wasn't the start of GPT.
LordDragonfang
> (2025) Making models bigger is not what’s cool anymore. They are trillions of parameters big already. What’s cool is making them run longer, in bureaucracies of various designs, before giving their answers.
Holy shit. That's a hell of a called shot from 2021.
someothherguyy
It's vague and could have meant anything. Everyone knew parameters would grow, and it's reasonable to expect that things that grow have diminishing returns at some point. This happened in late 2023 and throughout 2024 as well.
zurfer
In the hope of improving this forecast, here is what I find implausible:
- One lab constantly racing ahead and increasing its margin over the others; the last 2 years have been filled with ever-closer model capabilities and constantly new leaders (OpenAI, Anthropic, Google; some would include xAI).
- Most of the compute budget going to R&D. As model capabilities increase and costs go down, demand will increase, and if the leading lab doesn't serve it, another lab will capture it and have more total dollars to channel back into R&D.
moab
> "OpenBrain (the leading US AI project) builds AI agents that are good enough to dramatically accelerate their research. The humans, who up until very recently had been the best AI researchers on the planet, sit back and watch the AIs do their jobs, making better and better AI systems."
I'm not sure what gives the authors the confidence to make such predictions. Wishful thinking? Worst-case paranoia? I agree that such an outcome is possible, but on 2-3 year timelines? This would imply that the approach everyone is taking right now is the right approach and that there are no hidden conceptual roadblocks to achieving AGI/superintelligence from DFS-ing down this path.
All of the predictions seem to ignore the possibility of such barriers, or at most acknowledge it but wave it away by appealing to the army of AI researchers and industry funding being allocated to this problem. IMO the onus is on the proposers of such timelines to argue why there are no such barriers and why we will see predictable scaling over the 2-3 year horizon.
throwawaylolllm
It's my belief (and I'm far from the only person who thinks this) that many AI optimists are motivated by an essentially religious belief that you could call Singularitarianism. So "wishful thinking" would be one answer. This document would then be the rough equivalent of a Christian fundamentalist outlining, on the basis of tangentially related news stories, how the Second Coming will come to pass in the next few years.
viccis
Crackpot millenarians have always been a thing. This crop of them is just particularly lame and hellbent on boiling the oceans to get their eschatological outcome.
ivm
Spot on, see the 2017 article "God in the machine: my strange journey into transhumanism" about that dynamic:
https://www.theguardian.com/technology/2017/apr/18/god-in-th...
pixl97
Eh, not sure if the second coming is a great analogy. That wholly depends on the whims of a fictional entity performing some unlikely actions.
Instead, think of them saying a crusade will occur in the next few years. When the group saying the crusade is coming is spending billions of dollars trying to make exactly that occur, you no longer have the ability to say it's not going to happen. You are now forced to examine the risks of their actions.
MrScruff
I would assume this comes from having faith in the overall exponential trend rather than getting that much into the weeds of how this will come about. I can sort of see why you might think that way - everyone was talking about hitting a wall with brute-force scaling, and then inference-time scaling came along to keep things progressing. I wouldn't be quite as confident personally, and as many have said before, a sigmoid looks like an exponential in its initial phase.
barbarr
It also ignores the possibility of plateau... maybe there's a maximum amount of intelligence that matter can support, and it doesn't scale up with copies or speed.
AlexandrB
Or scales sub-linearly with hardware. When you're in the rising portion of an S-curve[1] you can't tell how much longer it will go on before plateauing.
A lot of this resembles post-war futurism that assumed we would all be flying around in spaceships and personal flying cars within a decade. Unfortunately the rapid pace of transportation innovation slowed due to physical and cost constraints and we've made little progress (beyond cost optimization) since.
Tossrock
The fact that it scales sub-linearly with hardware is well known and in fact foundational to the scaling laws on which modern LLMs are built, i.e. performance scales remarkably closely with log(compute + data), over many orders of magnitude.
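To make that concrete, here's a sketch using the Chinchilla functional form (loss = E + A/N^alpha + B/D^beta); treat the coefficients below as ballpark illustration, not the published fit:

```python
# Chinchilla-style scaling law sketch: loss(N, D) = E + A/N^alpha + B/D^beta.
# Coefficients are illustrative approximations, not authoritative values.
def loss(params: float, tokens: float,
         E: float = 1.7, A: float = 400.0, B: float = 410.0,
         alpha: float = 0.34, beta: float = 0.28) -> float:
    return E + A / params**alpha + B / tokens**beta

# Each 10x jump in model size (with data scaled alongside, ~20 tokens per
# parameter) buys a smaller absolute loss drop than the previous jump:
# returns diminish roughly like log(compute).
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> loss {loss(n, 20 * n):.3f}")
```

Run it and you can see the diminishing-returns pattern directly: each decade of scale shaves off a smaller absolute chunk of loss than the one before.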
pixl97
Eh, the math still doesn't work out in humans' favor...
Let's say intelligence caps out at the smartest person who's ever lived. Well, the first thing we'd attempt is to build machines up to that limit, one that 99.99999 percent of us will never get close to. Moreover, the thinking part of a human is only around 2 pounds of mush inside our heads. On top of that, you don't have to grow machines for 18 years before they start outputting something useful. They won't need sleep. You can feed them with solar panels. And they won't be getting distracted by that super sleek server rack across the aisle.
We do know 'hive' or societal intelligence does scale over time especially with integration with tooling. The amount of knowledge we have and the means of which we can apply it simply dwarf previous generations.
ddp26
Check out the Timelines Forecast under "research". They model this very carefully.
(They could be wrong, but this isn't a guess, it's a well-researched forecast.)
IshKebab
This is hilariously over-optimistic on the timescales. Like on this timeline we'll have a Mars colony in 10 years, immortality drugs in 15 and Half Life 3 in 20.
danpalmer
These timelines always assume that things progress as quickly as they can be conceived of, likely because these timelines come from "Ideas Guys" whose involvement typically ends at that point.
Orbital mechanics begs to differ about a Mars colony in 10 years. Drug discovery has many steps that take time; even just the trials will take 5 years, let alone actually finding the drugs.
movpasd
It reminds me of this rather classic post: http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...
Science is not ideas: new conceptual schemes must be invented, confounding variables must be controlled, dead-ends explored. This process takes years.
Engineering is not science: kinks must be worked out, confounding variables incorporated. This process also takes years.
Technology is not engineering: the purely technical implementation must spread, become widespread and beat social inertia and its competition, network effects must be established. Investors and consumers must be convinced in the long term. It must survive social and political repercussions. This process takes yet more years.
wkat4242
Didn't covid significantly reduce trial times? I thought that was such a success that they continued on the same footing.
danpalmer
The other reply has better info on covid specifically, but also consider that this refers to "immortality drugs". How long do we have to test those to conclude that they do in fact provide "immortality"?
Now sure, they don't actually mean immortality, and we don't need to test forever to conclude they extend life, but we probably do have to test for years to get good data on whether a generic life extension drug is effective, because you're testing against illness, old age, etc, things that take literally decades to kill.
That's not to mention that any drug like that will be met with intense skepticism and likely need to overcome far more scrutiny than normal (rather than the potentially less scrutiny that covid drugs might have managed).
agos
trial times were very brief for Covid vaccines because 1) there was no shortage of volunteers, capital, and political alignment at every level 2) the virus was everywhere and so it was really, really easy to verify if it was working. Compare this with a vaccination for a very rare but deadly disease: it's really hard to know if it's working because you can't just expose your test subjects to the deadly disease!
pama
No it didn’t. At least not for new small molecule drugs. It did reduce times a bit for the first vaccines because there were many volunteers available, and it did allow some antibody drug candidates to be used before full testing was complete. The only approved small molecule drug for covid is paxlovid, with both components of its formulation tested on humans for the first time many years before covid. All the rest of the small molecule drugs are still in early parts of the pipeline or have been abandoned.
mchusma
I like that the "slowdown" scenario has by 2030 we have a robot economy, cure for aging, brain uploading, and are working on a Dyson Sphere.
Aurornis
The story is very clearly modeled to follow the exponential curve they show.
Like they drew the curve out into the shape they wanted, put some milestones on it, and then went to work imagining what would happen if it continued, with a heavy dose of X-risk doomerism to keep it spicy.
It conveniently ignores all of the physical constraints around things like manufacturing GPUs and scaling training networks.
joshjob42
https://ai-2027.com/research/compute-forecast
In section 4 they discuss their projections for model size, the state of inference chips in 2027, etc. It's largely in line with expectations in terms of capacity, and they project using only 10k of their latest-gen wafer-scale inference chips by late 2027, roughly 1M H100 equivalents. That doesn't seem at all impossible. They also discuss expectations for growth in chip efficiency and in spending, which is only ~10x over the next 2.5 years - not unreasonable in absolute terms given the many tens of billions of dollars flooding in.
So on the "can we train the AI" front, they mostly are just projecting 2.5 years of the growth in scale we've been seeing.
The reason they predict a fairly hard takeoff is they expect that distillation, some algorithmic improvements, and iterated creation of synthetic data, training, and then making more synthetic data will enable significant improvements in efficiency of the underlying models (something still largely in line with developments over the last 2 years). In particular they expect a 10T parameter model in early 2027 to be basically human equivalent, and they expect it to "think" at about the rate humans do, 10 words/second. That would require ~300 teraflops of compute per second to think at that rate, or ~0.1H100e. That means one of their inference chips could potentially run ~1000 copies (or fewer copies faster etc. etc.) and thus they have the capacity for millions of human equivalent researchers (or 100k 40x speed researchers) in early 2027.
They further expect distillation of such models etc. to squeeze the necessary size down / more expensive models overseeing much smaller but still good models squeezing the effective amount of compute necessary, down to just 2T parameters and ~60 teraflops each, or 5000 human-equivalents per inference chip, making for up to 50M human-equivalents by late 2027.
This is probably the biggest open question and the place where the most criticism seems to me to be warranted. Their hardware timelines are pretty reasonable, but one could easily expect needing 10-100x more compute or even perhaps 1000x than they describe to achieve Nobel-winner AGI or superintelligence.
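For what it's worth, the arithmetic above is internally consistent; here's a back-of-envelope restatement using only the numbers quoted (all of them the scenario's assumptions, not measurements):

```python
# Back-of-envelope check of the AI-2027 compute-forecast numbers above.
# Every constant here is one of the scenario's assumptions, not a measurement.
CHIPS = 10_000            # late-2027 wafer-scale inference chips
H100E_PER_CHIP = 100      # 10k chips ~= 1M H100-equivalents
H100E_TFLOPS = 3_000      # implied by "300 teraflops ~= 0.1 H100e"
RESEARCHER_TFLOPS = 300   # one human-equivalent "thinking" at ~10 words/sec
DISTILLED_TFLOPS = 60     # per copy after distillation squeezes the model

copies_per_chip = H100E_PER_CHIP * H100E_TFLOPS // RESEARCHER_TFLOPS
total = CHIPS * copies_per_chip                  # human-equivalents, early 2027
distilled = CHIPS * H100E_PER_CHIP * H100E_TFLOPS // DISTILLED_TFLOPS
print(copies_per_chip, total, distilled)         # 1000 10000000 50000000
```

That reproduces the figures in the comment: ~1,000 copies per chip, millions of human-equivalent researchers in early 2027, and up to 50M after distillation by late 2027.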
ctoth
Can you share your detailed projection of what you expect the future to look like so I can compare?
IshKebab
Sure
5 years: AI coding assistants are a lot better than they are now, but still can't actually replace junior engineers (at least ones that aren't shit). AI fraud is rampant, with faked audio commonplace. Some companies try replacing call centres with AI, but it doesn't really work and everyone hates it.
Tesla's robotaxi won't be available, but Waymo will be in most major US cities.
10 years: AI assistants are now useful enough that you can use them in the ways that Apple and Google really wanted you to use Siri/Google Assistant 5 years ago. "What have I got scheduled for today?" will give useful results, and you'll be able to have a natural conversation and take actions that you trust ("cancel my 10am meeting; tell them I'm sick").
AI coding assistants are now very good and everyone will use them. Junior devs will still exist. Vibe coding will actually work.
Most AI Startups will have gone bust, leaving only a few players.
Art-based AI will be very popular and artists will use it all the time. It will be part of their normal workflow.
Waymo will become available in Europe.
Some receptionists and PAs have been replaced by AI.
15 years: AI researchers finally discover how to do on-line learning.
Humanoid robots are robust and smart enough to survive in the real world and start to be deployed in controlled environments (e.g. factories) doing simple tasks.
Driverless cars are "normal" but not owned by individuals and driverful cars are still way more common.
Small, light computers become fast enough that autonomous slaughterbots become reality (i.e. drones that can do their own navigation, face recognition, etc.)
20 years: Valve confirms no Half Life 3.
FeepingCreature
It kind of sounds like you're saying "exactly everything we have today, we will have mildly more of."
Quarrelsome
you should add a bit where AI is pushed really hard in places where the subjects have low political power, like management of entry-level workers, care homes, or education, and super bad stuff happens.
Also we need a big legal event to happen where (for example) autonomous driving is part of a really big accident where lots of people die or someone brings a successful court case that an AI mortgage underwriter is discriminating based on race or caste. It won't matter if AI is actually genuinely responsible for this or not, what will matter is the push-back and the news cycle.
Maybe more events where people start successfully gaming deployed AI at scale in order to get mortgages they shouldn't or get A-grades when they shouldn't.
9dev
It’s soothing to read a realistic scenario amongst all of the ludicrous hype on here.
petesergeant
> Some companies try replacing call centres with AI, but it doesn't really work and everyone hates it.
I think this is much closer than you think, because there's a good percentage of call centers that are basically just humans with no power cosplaying as people who can help.
My fiber connection went to shit recently. I messaged the company, and got a human who told me they were going to reset the connection from their side, if I rebooted my router. 30m later with no progress, I got a human who told me that they'd reset my ports, which I was skeptical about, but put down to a language issue, and again reset my router. 30m later, the human gave me an even more outlandish technical explanation of what they'd do, at which point I stumbled across the magical term "complaint" ... an engineer phoned me 15m later, said there was something genuinely wrong with the physical connection, and they had a human show up a few hours later and fix it.
No part of the first-layer support experience there would have been degraded if replaced by AI, but the company would have saved some cash.
FairlyInvolved
We are going to scale up GPT4 by a factor of ~10,000 and that will result in getting an accurate summary of your daily schedule?
archagon
> Small, light computers become fast enough that autonomous slaughterbots become reality
This is the real scary bit. I'm not convinced that AI will ever be good enough to think independently and create novel things without some serious human supervision, but none of that matters when applied to machines that are destructive by design and already have expectations of collateral damage. Slaughterbots are going to be the new WMDs — and corporations are salivating at the prospect of being first movers. https://www.youtube.com/watch?v=UiiqiaUBAL8
Gud
Slightly slower web frameworks by 2026. By 2030, a lot slower.
Trumpion
We currently don't see any ceiling. If this continues at this speed, we will have cheaper, faster and better models every quarter.
Nothing has ever progressed this fast.
It would be very ignorant not to keep a very close eye on it.
There is still a chance that it will happen a lot slower, and that progress will be slow enough that we adjust in time.
But besides AI we now also get robots. The impact for a lot of people will be very real.
zvitiate
No, sooner lol. We'll have aging cures and brain uploading by late 2028. Dyson Swarms will be "emerging tech".
turnsout
IMO they haven't even predicted mid-2025.
> Coding AIs increasingly look like autonomous agents rather than mere assistants: taking instructions via Slack or Teams and making substantial code changes on their own, sometimes saving hours or even days.
Yeah, we are so not there yet.
Tossrock
That is literally the pitch line for Devin. I recently spoke to the CTO of a small healthtech startup and he was very pro-Devin for small fixes and PRs, and thought he was getting his money's worth. Claude Code is a little clunkier but gives better results, and it wouldn't take much effort to hook it up to a Slack interface.
turnsout
Yeah, I get that there are startups trying to do it. But I work with Cursor quite a bit… there is no way I would trust an LLM code agent to take high-level direction and issue a PR on anything but the most trivial bug fix.
Jun8
ACT post where Scott Alexander provides some additional info: https://www.astralcodexten.com/p/introducing-ai-2027
Manifold currently predicts 30%: https://manifold.markets/IsaacKing/ai-2027-reports-predictio...
Aurornis
> ACT post where Scott Alexander provides some additional info: https://www.astralcodexten.com/p/introducing-ai-2027
The pattern where Scott Alexander puts forth a huge claim and then immediately hedges it backward is becoming a tiresome theme. The linguistic equivalent of putting claims into a superposition where the author is both owning it and distancing themselves from it at the same time, leaving the writing just ambiguous enough that anyone reading it 5 years from now couldn't pin down any claim as false because it was hedged in both directions. Schrödinger's prediction.
> Do we really think things will move this fast? Sort of no
> So maybe think of this as a vision of what an 80th percentile fast scenario looks like - not our precise median, but also not something we feel safe ruling out.
The talk of "not our precise median" and "Not something we feel safe ruling out" is an elaborate way of hedging that this isn't their actual prediction but, hey, anything can happen so here's a wild story! When the claims don't come true they can just point back to those hedges and say that it wasn't really their median prediction (which is conveniently not noted).
My prediction: The vague claims about AI becoming more powerful and useful will come true because, well, they're vague. Technology isn't about to reverse course and get worse.
The actual bold claims like humanity colonizing space in the late 2020s with the help of AI are where you start to realize how fanciful their actual predictions are. It's like they put a couple points of recent AI progress on a curve, assumed an exponential trajectory would continue forever, and extrapolated from that regression until AI was helping us colonize space in less than 5 years.
> Manifold currently predicts 30%:
Read the fine print. It only requires 30% of judges to vote YES for it to resolve to YES.
This is one of those bets where it's more about gaming the market than being right.
leonidasv
> Do we really think things will move this fast? Sort of no - between the beginning of the project last summer and the present, Daniel’s median for the intelligence explosion shifted from 2027 to 2028. We keep the scenario centered around 2027 because it’s still his modal prediction (and because it would be annoying to change). Other members of the team (including me) have medians later in the 2020s or early 2030s, and also think automation will progress more slowly. So maybe think of this as a vision of what an 80th percentile fast scenario looks like - not our precise median, but also not something we feel safe ruling out.
Important disclaimer that's lacking in OP's link.
crazystar
47% now, so a coin toss.
elicksaur
Note the market resolves by:
> Resolution will be via a poll of Manifold moderators. If they're split on the issue, with anywhere from 30% to 70% YES votes, it'll resolve to the proportion of YES votes.
So you should really read it as “Will >30% of Manifold moderators in 2027 think the ‘predictions seem to have been roughly correct up until that point’?”
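Spelled out as code, the quoted rule looks something like this (the function name and framing are mine; the thresholds are the market's):

```python
def resolve(yes_fraction: float) -> float:
    """Market resolution (0.0-1.0) given the fraction of moderators voting YES."""
    if yes_fraction < 0.30:
        return 0.0           # clear majority NO: resolves NO
    if yes_fraction > 0.70:
        return 1.0           # clear majority YES: resolves YES
    return yes_fraction      # 30-70% split: resolves to the YES proportion

# A market price of 30% therefore isn't simply "30% chance the predictions
# hold": it can also be pricing in a partial payout from a split poll.
print(resolve(0.25), resolve(0.40), resolve(0.75))  # 0.0 0.4 1.0
```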
layer8
32% again now.
superconduct123
Why are the biggest AI predictions always made by people who aren't deep in the tech side of it? Or actually trying to use the models day-to-day...
AlphaAndOmega0
Daniel Kokotajlo released the (excellent) 2021 forecast. He was then hired by OpenAI, and not at liberty to speak freely, until he quit in 2024. He's part of the team making this forecast.
The others include:
Eli Lifland, a superforecaster who is ranked first on RAND’s Forecasting initiative. You can read more about him and his forecasting team here. He cofounded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models.
Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: they made an early stage investment in Anthropic, now worth $60 billion.
Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle.
Romeo Dean, a leader of Harvard’s AI Safety Student Team and budding expert in AI hardware.
And finally, Scott Alexander himself.
kridsdale3
TBH, this kind of reads like the pedigrees of the former members of the OpenAI board. When the thing blew up, and people started to apply real scrutiny, it turned out that about half of them had no real experience in pretty much anything at all, except founding Foundations and instituting Institutes.
A lot of people (like the Effective Altruism cult) seem to have made a career out of selling their Sci-Fi content as policy advice.
MrScruff
I kind of agree - since the Bostrom book there is a cottage industry of people with non-technical backgrounds writing papers about singularity thought experiments, and it does seem to be on a spectrum with hard sci-fi writing. A lot of these people are clearly intelligent, and it's not even that I think everything they say is wrong (I made similar assumptions long ago before I'd even heard of Ray Kurzweil and the Singularity, although at the time I would have guessed 2050). It's just that they seem to believe their thought process and Bayesian logic is more rigorous than it actually is.
flappyeagle
c'mon man, you don't believe that, let's have a little less disingenuousness on the internet
nice_byte
this sounds like a bunch of people who make a living _talking_ about the technology, which lends them close to 0 credibility.
mickelsen
[dead]
superconduct123
I mean either researchers creating new models or people building products using the current models
Not all these soft roles
torginus
Because these people understand human psychology and how to play on fears (of doom, or missing out) and insecurities of people, and write compelling narratives while sounding smart.
They are great at selling stories - they sold the story of the crypto utopia, now switching their focus to AI.
This seems to be another appeal to enforce AI regulation in the name of 'AI safetyiism', which was made 2 years ago but the threats in it haven't really panned out.
For example, an oft-repeated argument is AI's dangerous ability to design chemical and biological weapons. I wish some expert could weigh in on this, but I believe the ability to theorycraft pathogens that are effective in the real world is absolutely marginal - you need actual lab work and lots of physical experiments to confirm your theories.
Likewise, the danger of AI systems exfiltrating themselves to the multi-million dollar AI datacenter GPU clusters everyone supposedly just has lying around is ... not super realistic.
The ability of AIs to hack computer systems is much less theoretical - however, as AIs get better at black-hat hacking, they'll get better at white-hat hacking as well, as there's literally no difference between the two other than intent.
And herein lies a crucial limitation of alignment and safetyism - sometimes there's no way to tell harmful and harmless actions apart, other than whether the person undertaking them means well.
ZeroTalent
People who are skilled fiction writers might lack technical expertise. In my opinion, this is simply an interesting piece of science fiction.
rglover
Aside from the other points about understanding human psychology here, there's also a deep well they're trying to fill inside themselves: that of being someone who can't create things without shepherding others, and who sees AI as the "great equalizer" that will finally let them taste the positive emotions associated with creation.
The funny part, to me, is that it won't. They'll continue to toil and move on to the next huck just as fast as they jumped on this one.
And I say this from observation. Nearly all of the people I've seen pushing AI hyper-sentience are smug about it and, coincidentally, have never built anything on their own (besides a company or organization of others).
Every single one of the rational "we're on the right path but not quite there" takes has come from seasoned engineers who at least have some hands-on experience with the underlying tech.
FeepingCreature
I use the models daily and agree with Scott.
Tenoke
..The first person listed is ex-OpenAI.
bpodgursky
Because you can't be a full time blogger and also a full time engineer. Both take all your time, even ignoring time taken to build talent. There is simply a tradeoff of what you do with your life.
There are engineers with AI predictions, but you aren't reading them, because building an audience like Scott Alexander takes decades.
ohgr
On the path to self-worth, people prove their value by what they say, not by what they know. And if what they say is horse dung, that's irrelevant to their ego, as long as someone dumber than they are is listening.
This bullshit article is written for that audience.
Say bullshit enough times and people will invest.
porphyra
Seems very sinophobic. DeepSeek and Manus have shown that China is legitimately an innovation powerhouse in AI, but this article makes it sound like they will just keep falling behind without stealing.
aoanevdus
Don’t assume that because the article depicts this competition between the US and China, that the authors actually want China to fail. Consider the authors and the audience.
The work is written by western AI safety proponents, who often need to argue with important people who say we need to accelerate AI to “win against China” and don’t want us to be slowed down by worrying about safety.
From that perspective, there is value in exploring the scenario: ok, if we accept that we need to compete with China, what would that look like? Is accelerating always the right move? The article, by telling a narrative where slowing down to be careful with alignment helps the US win, tries to convince that crowd to care about alignment.
Perhaps, people in China can make the same case about how alignment will help China win against US.
MugaSofer
That whole section seems to be pretty directly based on DeepSeek's "very impressive work" with R1 being simultaneously very impressive, and several months behind OpenAI. (They more or less say as much in footnote 36.) They blame this on US chip controls just barely holding China back from the cutting edge by a few months. I wouldn't call that a knock on Chinese innovation.
princealiiiii
Stealing model weights isn't even particularly useful long-term, it's the training + data generation recipes that have value.
hexator
Yes, it's extremely sinophobic and entirely too dismissive of China. It's pretty clear what the author's political leanings are, by what they mention and by what they do not.
ugh123
Don't confuse innovation with optimisation.
pixl97
Don't confuse designing the product with winning the market.
usef-
In both endings it's saying that because compute becomes the bottleneck, and US has far more chips. Isn't it?
a3w
How so? Spoiler: US dooms mankind, China is the saviour in the two endings.
sivaragavan
Thanks to the authors for doing this wonderful piece of work and sharing it with credibility. I wish people would see the possibilities here. But we are, after all, humans. It is hard to imagine our own downfall.
Based on each individual's vantage point, these events might look closer or farther than described here. But I have to agree nothing is off the table at this point.
The current coding capabilities of AI Agents are hard to downplay. I can only imagine the chain reaction of this creation ability to accelerate every other function.
I have to say one thing though: the scenario on this site downplays the amount of resistance people will put up - not because they are worried about alignment, but because they are politically motivated by parties driven by their own personal agendas.
It’s good science fiction, I’ll give it that. I think getting lost in the weeds over technicalities ignores the crux of the narrative: even if this doesn’t lead to AGI, at the very least it’s likely the final “warning shot” we’ll get before it’s suddenly and irreversibly here.
The problems it raises - alignment, geopolitics, lack of societal safeguards - are all real, and happening now (just replace “AGI” with “corporations”, and voila, you have a story about the climate crisis and regulatory capture). We should be solving these problems before AGI or job-replacing AI becomes commonplace, lest we run the very real risk of societal collapse or species extinction.
The point of these stories is to incite alarm, because they’re trying to provoke proactive responses while time is on our side, instead of trusting self-interested individuals in times of great crisis.