US and UK refuse to sign AI safety declaration at summit
843 comments
· February 12, 2025 · doright
gretch
> Like yes, we are able to think of thousands of hypothetical ways technology (even those inferior to full AGI) could go off the rails in a catastrophic way and post and discuss these scenarios endlessly... and yet it doesn't result in a slowing or stopping of the progress leading there.
The problem is sifting through all of the doomsayer false positives to get to any amount of cogent advice.
At the invention of the printing press, there were people with this same energy. Obviously those people were wrong. And if we had taken their "lesson", then human society would be in a much worse place.
Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.
TeMPOraL
> At the invention of the printing press, there were people with this same energy. Obviously those people were wrong.
Were they?
The first thing the printing press did was to break Christianity. It's what made attempts at reforming the Catholic Church finally stick, enabling what we now call Reformation to happen. Reformation forever broke Christianity into pieces, and in the process it started a bunch of religious wars in Europe, as well as tons of neighborly carnage.
> And if we had taken their "lesson", then human society would be in a much worse place.
Was the invention of the printing press a net good for humanity? Most certainly so, looking back from today. Did people living back then know what they were getting into? Not really. And since their share of the fruits of that invention was mostly bloodshed, job loss, and shattering of the world order they knew, I wouldn't blame them for being pissed off about getting the short end of the stick, and perhaps looking for ways to undo it.
I'm starting to think that talking about inventions as good or bad (or the cop-out, "dual use") is bad framing. Rather, it seems to me that every major invention will eventually turn out beneficial[0], but introducing an invention always first extracts a cost in blood. Be it fire or printing press or atomic bomb, a lot of people end up suffering and dying before societies eventually figure out how to handle the new thing and do some good with it.
I'm very much in favor of progress, but I understand the fear. No matter the ultimate benefits, we are the generation that coughs up blood as payment for AI/AGI, and it ain't gonna be pleasant.
--
[0] - Assuming they don't kill us first - see AGI.
soulofmischief
It's not the fault of the printing press that the Church built its empire upon the restriction of information and was willing to commit bloodshed to hold onto its power.
All you've done is explain why the printing press was so important and necessary in order to break down previous unwarranted power structures. I have a similar hope for AGI. The alternative is that the incumbent power structure instead benefits from AGI and uses it for oppression, which would mean it's not comparable to the printing press as such.
robwwilliams
Lots of good content here, but the main group that “suffered” from the invention and spread of the printing press was the aristocracy, so I am not shedding tears.
As for “breaking” Christianity: Christianity has been one schism after another for 2000 years: a schism from a schism from a schism. Power plays all the way down to Magog.
Socrates complained about how writing and the big boom in using the new Greek alphabet were ruining civilization and true learning.
And on and on it goes.
RandomLensman
I think that is overstating the relevance of the printing press vs. existing power struggles, rivalries, discontent, etc. - the Reformation didn't happen in some sort of vacuum, for example.
Religious schisms happened before the printing press, too. There was the Great Schism in 1054 in Christianity, for example.
throwawayqqq11
I don't understand why any highly sophisticated AI would invest that many resources in killing us instead of investing them in relocating and protecting itself.
Yes, ants could technically conspire to sneak up to you while you sleep and bite you all at once to kill you, so do you go out to eradicate all ants?
pipes
I'm very glad that it broke the power of the Catholic Church (and I was raised in a Catholic family). It allowed the Enlightenment to happen and freedom from dogma. I don't think it broke Christianity at all. It brought actual Christianity to the masses because the Bible was printed in their own languages rather than Latin. The Catholic Church burnt people at the stake for creating non-Latin Bibles (William Tyndale, for example).
shmeeed
That's a very thought-provoking insight regarding the often-repeated "printing press doomsayer" talking point. Thank you!
kiratp
> And since their share of the fruits of that invention was mostly bloodshed, job loss, and shattering of the world order they knew, I wouldn't blame them from being pissed off about getting the short end of the stick, and perhaps looking for ways to undo it.
“A society grows great when old men plant trees in whose shade they know they shall never sit”
relistan
So much this
beezlebroxxxxxx
> Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.
All of the focus on AGI is a distraction. I think it's important for a state to declare its intent with a technology. The alternative is to argue that technology advances autonomously, independent of human interactions, values, or ideas, which is, in my opinion, an incredibly naïve notion. I would rather have a state say "we won't use this technology for evil" than a state that says nothing at all and simply allows businesses to develop it in any direction their greed leads them.
It's entirely valid to critique the uses of a technology, because "AI" (the goalpost shifting for marketing purposes to make that name apply to chatbots is a stretch honestly) is a technology like any other, like a landmine, like a synthetic virus, etc. In the same way, it's valid to criticize an actor for purposely hiding their intentions with a technology.
roenxi
But if the state approaches a technology with intent it is usually for the purposes of a military offence. I don't think that is a good idea in the context of AI! Although I also don't think there is any stopping it. The US has things like DARPA for example and a lot of Chinese investment seems to be done with the intent of providing capabilities to their army.
The list of things states have attempted to deploy offensively is nearly endless. Modern operations research arguably came out of the British Empire attempting (and succeeding) to weaponise mathematics. If you give a state fertiliser it makes bombs, if you give it nuclear power it makes bombs, if you give it drones it makes bombs, if you give it advanced science or engineering of any form it makes bombs. States are the most ingenious system for turning things into bombs that we've ever invented; in the grand old days of siege warfare they even managed to weaponise corpses, refuse, and junk, because it turned out lobbing that stuff at the enemy was effective. The entire spectrum of technology from nothing to nanotech, hurled at enemies to kill them.
We'd all love it if states committed to not doing evil, but the state is the entity most active at figuring out how to use new tech X for evil.
jstanley
> I think it's important for a state to declare its intent with a technology. The alternative is to argue that technology advances autonomously, independent of human interactions, values, or ideas
The sleight of hand here is the implication that human interactions, values, and ideas are only expressed through the state.
circuit10
The idea is that, by its very nature as an agent that attempts to take the best action to achieve a goal, assuming it gets good enough, the best action will be to improve itself so it can better achieve its goal. In fact, we humans are doing the same thing: we can't really improve our intelligence directly, but we are trying to create AI to achieve our goals. There's no reason the AI itself wouldn't do likewise, assuming it's capable and we don't attempt to stop it, and currently we don't really know how to reliably control it.
We have absolutely no idea how to specify human values in a robust way, which is what we would need to figure out to build this safely.
bbor
I usually don't engage on A[GS]I on here, but I feel like this is a decent time for an exception -- you're certainly well spoken and clear, which helps! Three things:
(I) All of the focus on AGI is a distraction.
I strongly disagree on that, at least if you're implying some intentionality. I think it's just provably true that many experts are honestly worried, even if you don't include the people who have dedicated a good portion of their lives to the cause. For example: OpenAI has certainly been corrupted through the loss of its nonprofit board, but I think their founding charter[1] was pretty clearly earnest -- and dire.
(II) "AI" (the goalpost shifting for marketing purposes to make that name apply to chatbots is a stretch honestly)
To be fair, this uncertainty in the term has been there since the dawn of the field, a fact made clear by perennial rephrasings of the sentiment "AI is whatever hasn't been done yet" (~Larry Tesler 1979, see [2]). I'd love to get into the weeds on the different kinds of intelligence and why being too absolutist about the term can get real Faustian real quick, but these quotes bring up a more convincing, fundamental point: these chatbots are damn impressive. They do something -- intuitive inference + fluent language use -- that was impossible yesterday, and many experts would've guessed was decades away at least, if not centuries. Truly intelligent or not on their own, that's a more important development than you imply here.
Finally, that brings me to the crux:
(III) AI... is a technology like any other
There's a famous Sundar Pichai (Google CEO) quote that he's been paraphrasing since 2018 -- soon after ChatGPT broke, he phrased it as: "I've always thought of A.I. as the most profound technology humanity is working on -- more profound than fire or electricity or anything that we've done in the past. It gets to the essence of what intelligence is, what humanity is. We are developing technology which, for sure, one day will be far more capable than anything we've ever seen before." [3]
When skeptics hear this, they understandably tend to write it off as capitalist bias from someone trying to pump Google's stock. However, I'd retort:
1) this kind of talk is so grandiose that it seems like a questionable move if that's the goal,
2) it's a sentiment echoed by many scientists (as I mentioned at the start of this rant) and
3) the unprecedented investments made across the world into the DL boom speak for themselves, sincerity-wise.
Yes, this is because AI will create uber-efficient factories, upset labor relations, produce terrifying autonomous weapons, and all that stuff we're used to hearing about from the likes of Bostrom[4], Yudkowsky[5], and my personal fave, Huw Price[6]. But Pichai's raising something even more fundamental: the prospect of artificial people. Even if we ignore the I, Robot-style concerns about their potential moral standing, that is just a fundamentally spooky prospect, bringing very fundamental questions of A) individual worth and B) the nature of human cognition to the fore. And, to circle back: distinct from anything we've seen before.
To close this long anxiety-driven manuscript, I'll end with a quote from an underappreciated philosopher of technology named Lewis Mumford on what he called "neotechnics":
The scientific method, whose chief advances had been in mathematics and the physical sciences, took possession of other domains of experience: the living organism and human society also became the objects of systematic investigation... instead of mechanism forming a pattern for life, living organisms began to form a pattern for mechanism.
In short, the concepts of science--hitherto associated largely with the cosmic, the inorganic, the "mechanical"--were now applied to every phase of human experience and every manifestation of life... men sought for an underlying order and logic of events which would embrace more complex manifestations.[7]
TL;DR: IMHO, the US & UK refusing to cooperate at this critical moment is the most important event of your lifetime so far.
[1] OpenAI's Charter https://web.archive.org/web/20230714043611/https://openai.co...
[2] Investigation of a famous AI quote https://quoteinvestigator.com/2024/06/20/not-ai/
[3] Pichai, 2023: "AI is more profound than fire or electricity" https://fortune.com/2023/04/17/sundar-pichai-a-i-more-profou...
[4] Bostrom, 2014: Superintelligence https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...
[5] Yudkowsky, 2013: Intelligence Explosion Microeconomics https://intelligence.org/files/IEM.pdf
[6] Huw Price's bio @ The Center for Existential Risk https://www.cser.ac.uk/team/huw-price/
[7] Mumford, 1934: Technics and Civilization https://archive.org/details/in.ernet.dli.2015.49974
RajT88
A useful counterexample is all the people who predicted doomsday scenarios with the advent of nuclear weapons.
Just because it has not come to pass yet does not mean they were wrong. We have come close to nuclear annihilation several times. We may yet, with or without AI.
idontwantthis
And imagine if private companies had had the resources to develop nuclear weapons and the US government had decided it didn’t need to even regulate them.
achierius
If it weren't for one guy -- literally one person, one vote -- out of three who were on a submarine, the Cuban Missile Crisis would have escalated to a nuclear strike on the US Navy. Whether we would have followed with nuclear strikes on Russia, who knows. But pretending that we didn't come incredibly close to disaster is just totally unfounded in history.
Especially when you consider -- we came that close despite incredible international efforts at constraining nuclear escalation. What you are arguing for now is like arguing to go back and stop all of that because it clearly wasn't necessary.
chasd00
I see your point, but the analogy doesn't get very far. For example, nuclear weapons were never mass-marketed to the public. Nor is it possible for a private business, university, R&D lab, group of friends, etc. to push the bounds of nuclear weapon yield.
lxnn
Note that we only got to observe outcomes in which we didn't die from nuclear annihilation. https://en.wikipedia.org/wiki/Anthropic_principle
gretch
>Just because it has not come to pass yet does not mean they were wrong.
This assertion is meaningless because it can be applied to anything.
"I think vaccines cause autism and will cause human annihilation" - just because it has not yet come to pass does not mean it is wrong.
harrall
But we already know.
I think people arguing about AI being good versus bad are wasting their breath. Both sides are equally right.
History tells us the industrial revolution revolutionized humanity's relative quality of life while also ruining a lot of people's livelihoods in one fell swoop. We also know there was nothing we could do to stop it.
What advice can we take from it? I don't know. Life both rocks and sucks at the same time. You kind of just take things day by day and do your best to adapt, for both yourself and everyone around you.
radley
> What advice can we take from it?
That we often won't have control over big changes affecting our lives, so be prepared. If possible, get out in front and ride the wave. If not, duck under and don't let it churn you up too much.
gibspaulding
> At the invention of the printing press, there were people with this same energy. Obviously those people were wrong. And if we had taken their "lesson", then human society would be in a much worse place.
In the long run the invention of the printing press was undoubtedly a good thing, but it is worth noting that in the century following its spread basically every country in Europe had some sort of revolution. It seems likely that “Interesting Times” may lie ahead.
llm_trw
They had some sort of revolution the previous few centuries too.
Pretending that Europe wasn't in a perpetual bloodbath from the end of the Pax Romana until 1815 shows a gross ignorance of basic facts.
The printing press was a net positive in every time scale.
daedrdev
Given that countries at the time were all monarchies with limited rights, I'm not sure it's too comparable.
BurningFrog
The printing press meant regular people could read the bible, which led to protestantism and a century of very bloody wars across Europe.
Since the victors write history, we now think the end result was great. But for a lot of people, the world they loved was torn to bloody pieces.
Something similar can happen with AI. In the end, whoever wins the wars will declare that the new world is awesome. But it might not be what you or I (may we rest in peace) would agree with.
ls612
>At the invention of the printing press, there were people with this same energy. Obviously those people were wrong. And if we had taken their "lesson", then human society would be in a much worse place.
One could argue that the printing press did radically upset the existing geopolitical order of the late 15th century and led to early modern Europe suffering the worst spate of warfare and devastation it would see until the 20th century. The doomsayers back then predicting centuries of death and war and turmoil were right, yet from our position 550 years later we obviously think the printing press is a good thing.
I wonder what people in 2300 will say about networked computers...
dartos
> At the invention of the printing press, there were people with this same energy. Obviously those people were wrong.
What energy? What were they wrong about?
The Luddite-type groups have historically been correct in their fears. It just didn't matter in the face of industrialization.
alfalfasprout
The harsh reality is that a culture of selfishness has become too widespread. Too many people (especially in tech) don't really care what happens to others as long as they get rich off it. They'll happily throw others under the bus and refuse to share wellbeing even in their own communities.
It's the inevitable result of low-trust societies infiltrating high-trust ones. And it means that as technologies with dangerous implications for society become more available, there are enough people willing to prostitute themselves out to work on society's downfall that there's no realistic hope of the train stopping.
torginus
I think the fundamental false promise of capitalism and industrial society is that it claims to be able to manufacture happiness and life satisfaction.
Even in the material realm this is untrue: beyond meeting people's basic needs at the current technological level, the majority of desirable things - such as nice places to live - have a fixed supply.
This necessitates that the price of things like real estate increase in proportion to the money supply. With increasing inequality, one must fight tooth and nail to get the standard of life our parents considered easily available. Not being greedy is not a valid life strategy to pursue, as that means relinquishing an ever greater proportion of wealth to people who are, and becoming poorer in the process.
mionhe
I don't disagree that money (and therefore capitalism or frankly any financial system) is unable to create happiness.
I disagree with your example, however, as the most basic tenet of capitalism is that when there is a demand, someone will come along to fill it.
Addressing your example specifically, there's a fixed supply of housing in capitalist countries not because people don't want to build houses, but because government or bureaucracy artificially limits the supply or creates other disincentives that amount to the same thing.
vixen99
Let's not ascribe the possession of higher level concepts like a 'promise' to abstract entities. Reserve that for individuals. As with some economic theories, you appear to have a zero sum game outlook which is, I submit, readily demolished.
There are some thoughts on this here: https://www.playforthoughts.com/blog/concepts-from-game-theo...
Aurornis
> The harsh reality is that a culture of selfishness has become too widespread. Too many people (especially in tech) don't really care what happens to others as long as they get rich off it. They'll happily throw others under the bus and refuse to share wellbeing even in their own communities.
This is definitely not a new phenomenon.
In my experience, tech has been one of the more considerate areas of societal impact. Spend some time in other industries and it's eye-opening to see the wanton disregard for consumers and the environment.
There's a lot of pearl-clutching about social media, algorithms, and "data", but you'll find far more people in tech (including FAANG) who are actively working on privacy technology, sustainable development, and so on than you will find people caring about the environment by going into oil & gas, for example.
hnbad
> There's a lot of pearl-clutching about social media, algorithms, and "data", but you'll find far more people in tech (including FAANG) who are actively working on privacy technology, sustainable development, and so on than you will find people caring about the environment by going into oil & gas, for example.
Sure, we don't need to talk about how certain Big Oil companies knew about the climate catastrophe before any scientists publicly talked about it, or how tobacco companies knew their product was an addictive drug while blatantly lying about it even in public hearings.
But it's ironic to mention FAANG given what the F is for if you recall that when the algorithmic timeline was first introduced by Facebook, the response from Facebook to criticism was literally that satisfaction went down but engagement went up. People directly felt that the algorithm made them more unhappy, more isolated and overall less satisfied but because it was more addictive, because it created more "engagement", Facebook doubled down on it.
Also "sustainable" stopped being a talking point when the tech industry became obsessed with LLMs. Microsoft made a big show of wanting to become "carbon neutral" (of course mostly using bogus carbon offset programs that don't actually do anything and carbon capture technologies that are net emission positive and will be for decades if not forever but still, at least they pretended) and then silently threw all of that away when it became more strategically important to pursue AI at any cost. Companies that previously desperately tried to sell messages of green washing and carbon neutrality now talk about building their own non-renewable power plants because of all the computational power they need to run their LLMs (not to mention how much more hardware needs to be produced and replaced for this - the same way the crypto bubble ate through graphics cards).
I think the pearl-clutching is justified considering that ethics and climate protection have now been folded into "woke" and there's a tidal wave in Western politics to dismantle civil rights and capture democratic systems for corporate interests that is using the "anti-woke" culture war to further its goals - the Trump government being the most obvious example. It's no longer in FAANG's financial interests to appear "green" or "privacy conscious", it's now in their interest to be "anti-woke" and that now means no longer having to care about these things and having freedom to crack down on any dissident voices within without fearing public backlash or "cancel culture".
timacles
> reality is that a culture of selfishness has become too widespread.
Tale as old as time. We’re yet another society blinded by our own hubris. Tell me what is happening now is not exactly how Greece and Rome fell.
The scary part is that we as a species are becoming more and more capable of large scale destruction. Seems like we are doomed to end civilization this way someday
hnbad
> Tell me what is happening now is not exactly how Greece and Rome fell.
I'm not sure what you mean by that. Ancient Greece was a loose coalition of city states, not an empire. You could say they were short-sighted by being more concerned about their rivalry than external threats but the closest they came to being united was under Alexander the Great, whose death left a power vacuum.
There was no direct cause of "the fall" of Ancient Greece. The city states were suffering greatly from social inequality, which created tensions and instability. They were militarily weakened from the war with the Persians. Alexander's death left them without a unifying force. Then the Roman Empire knocked on its door and that was the end of it.
Rome likewise didn't fall in one single way. "Rome" isn't even what people think it is. Roman history spans several different entities, and even if you talk about the "empire in decline" that's covering literally hundreds of years, ending with the Holy Roman Empire, which has been retroactively reimagined as a kind of proto-Germany. But even then that's only the Western Roman Empire - the Eastern Roman Empire continued to exist as the Byzantine Empire until the Ottoman Empire conquered Constantinople. And this distinction between the two empires is likewise retroactive and did not exist in the minds of Romans at the time (although they were de facto independent of each other).
If you only focus on the century or so that is generally considered to represent the fall of Western Rome, the ultimate root cause actually seems to be natural climate change. The Huns fled climate change, chasing away other groups that then fled into the Empire. Late Western Rome also again suffered from massive wealth inequality, which the ruling class attempted to maintain with increasingly cruel punishments.
So, if you want to look for a common thread, it seems to be the hubris of the financial elite, not "society" as a whole.
NL807
>The harsh reality is that a culture of selfishness has become too widespread.
I'm not even sure this is a culture-specific issue. More likely selfishness is a survival mechanism hardwired into humans and other animals. One could argue that cooperation is also a good survival mechanism, but that's only true so long as environmental factors put pressure on people to cooperate. When that pressure is absent, accumulating resources at the expense of others gives an individual a huge advantage, and they would do it, given the chance.
AlienRobot
When tech does it, it's on record because the Internet never forgets. And it's a very, very long record, and it saddens me a lot.
hnbad
I'd argue you've got things mixed up, actually.
Humans are social animals. We are individually physically weak and defenseless. Unlike other animals, we are born into this world immobile, naked, starving and helpless. It takes us literally years to mature to the point where we wouldn't simply die outright if we were abandoned by others. Newborns can literally die from touch deprivation. We develop huge brains not only to allow us to come up with clever tools but also to help us build and navigate complex social relationships. We're evolved to live in tribes, yes, but we're also evolved to interact with other tribes - we created diplomacy and trading and even currency to interact with those other tribes without having to resort to violence or avoidance.
In crises, this is the behavior we fall back to. Yes, some will self-isolate and use violence to keep others away until they feel safe again. But overwhelmingly what we see after natural disasters and spaces where the formal order of civilisation and state is disrupted and leaves a vacuum is cooperation, mutual aid and people taking risks to help others - because we intrinsically know that being alone means death and being in a group means surviving. Of course the absence of state control also often enables other existing groups to assert their power, i.e. organized crime. But it shouldn't be surprising that the fledgling and atrophied ability to self-organize might not be strong enough to withstand a fast moving power grab by an existing group - what might be more surprising is that this is rarely the case and often news stories about "looting" after a natural disaster turn out to be uncharitable descriptions of self-organized rescues and searches.
I think a better analogy for human selfishness would be the mirage of "alpha wolves". As seems to be common knowledge at this point, there is no such thing as an "alpha wolf" hierarchy in groups of wolves living in nature and the phenomenon the author who coined the term (and has since regretted doing so) was mistakenly extrapolating from observations he made of wolves in captivity. But the behavior does seem to exist in captivity. Not because it's "inherent" or their natural behavior "under pressure" but because it's a maladaptation that arises from the unnatural circumstances of captivity (e.g. different wolves with no prior bonds being forced into a confined space, naturally trying to form a group but being unable to rely on natural bonds and shared trust).
Humans do not naturally form strict social hierarchies. For the longest time, Europeans would have laughed at you if you claimed the feudal system was not in the human nature - it would have literally been heresy to challenge it. Nowadays in the West most people will say capitalism or markets are human nature. Outside the West, people will still likely at least tell you that authoritarianism is human nature - whether it's the boot of a dictatorship, the boots of oligarchs or "the people's boot" that's pushing down on the unruly (yourself included).
What we do know about more egalitarian tribal societies is that they often use delegation, especially in times of war. When quick decisions need to be made, you don't have the time for lengthy discussions and consensus seeking and it can be an advantage to have one person giving orders and coordinating an attack or defense. But these systems can still be consent-based: if the war chief is reckless or seeks to take advantage of the group for his own gain, he is easily demoted and replaced. Likewise in times of unsolvable problems like droughts, spiritual leaders might be given more power by the group. Now shift from more mobile, nomadic groups to more static, agrarian groups (though it's worth pointing out the distinction here is not agriculture but more likely granaries, crop rotation and irrigation, as some nomadic tribes still engaged in forms of agriculture) and suddenly it becomes easier for that basis of consent to be forgotten and the chosen leaders to maintain that initial state of desperation and to begin justifying their status with the divine mandate. Oops, you got a monarchy going.
Capitalism freed us from the monarchy but it did not meaningfully upset the hierarchy. Aristocrats became capitalists, the absence of birthright class assignment created some social mobility but the proportions generally remained the same. You can't have a leader without followers, you can't have a ruling class without a class of those they can rule over, you can't have an owning class without a class to rent that owned property out to and to work for that owned capital to be realized into profits.
But just like a monarch despite their divine authority was still beholden to the support of the aristocracy to exert power over others and to the laborers to till the fields, build the castle and fight off foreign claims to power, the owning class too exists in a state of perpetual desperation and distrust. The absence of divine right means a billionaire must maintain their wealth and the capitalist mantra of infinite growth means anything other than growing that wealth is insufficient to maintain it. All the while they have to compete with the other billionaires above them as well as maintain control over those beneath them and especially the workers and renters whose wealth and labor they must extract from in order to grow theirs. The perverse reality of hierarchies is that even those at the top of it are crushed underneath its weight. Nobody is allowed to be happy and at peace.
khazhoux
> Too many people (especially in tech) don't really care what happens to others as long as they get rich off
This is a problem especially everywhere.
greenimpala
Profit over ethics, self-interest over communal well-being, and competition over cooperation. You're describing capitalism.
tmnvix
I don't necessarily disagree with you, but I think the issue is a little more nuanced.
Capitalism obviously has advantages and disadvantages. Regulation can address many disadvantages if we are willing. Unfortunately, I think a particular (mostly western) fetish for privileging individuals over communities has been wrongly extended to capital itself (e.g. corporations recognised as entities with rights similar to - and sometimes over-and-above - those of a person). We have literally created monsters. There is no reason we had to go this far. Capitalism doesn't have to mean the preeminence of capital above all else. It needs to be put back in its place and not necessarily discarded. I am certain there are better ways to practice capitalism. They probably involve balancing it out with some other 'isms.
raincole
The harsh truth is people stop pretending the world is rule based.
If they signed the agreement... so what? Do people forget that the US has withdrawn from the Paris Agreement and is withdrawing from the WHO? Did people forget that Israel and North Korea got nukes even when we supposedly had a global nonproliferation treaty?
If AGI is as powerful and dangerous as doomsayers believe, the chance the US (or China, or any country with enough talented computer scientists) would respect whatever treaty they have about AGI is exactly zero.
chasd00
How do you prevent advancements in software? The barrier to entry is so low: you just need a cheap laptop and an internet connection, and then on day 1 you're right on the cutting edge driving innovation. Current AI requires a lot of hardware for training, but anyone with a laptop and inet connection can still do cutting edge research and innovate with architectures and algorithms.
If a law is passed saying "AI advancement is illegal" how can it ever be enforced?
palmotea
> How do you prevent advancements in software? The barrier to entry is so low: you just need a cheap laptop and an internet connection, and then on day 1 you're right on the cutting edge driving innovation. Current AI requires a lot of hardware for training, but anyone with a laptop and inet connection can still do cutting edge research and innovate with architectures and algorithms.
> If a law is passed saying "AI advancement is illegal" how can it ever be enforced?
Like any other real-life law? Software engineers (a class which I'm a recovering member of) seem to have a pretty common misunderstanding about the law: that it needs to be airtight like secure software, otherwise it's pointless. That's just not true.
So the way you "prevent advancements in [AI] software" is you 1) punish them severely when detected and 2) restrict access to information and specialized hardware to create a barrier (see: nuclear weapons proliferation, "born secret" facts, CSAM).
#1 is sufficient to control all the important legitimate actors in society (e.g. corporations, university researchers), and #2 creates a big barrier to everyone else who may be tempted to not play by the rules.
It won't be perfect (see: the drug war), but it's not like cartel chemists are top-notch, so it doesn't have to be. I don't think the software engineering equivalent of a cartel chemist will be able to "do cutting edge research and innovate with architectures and algorithms" with only a "laptop and inet connection."
Would the technology disappear? No. Will it be pushed to the margins? Yes. Is that enough? Also yes.
DoctorOetker
What makes you think government sponsored entities would actually stop work on machine learning?
Even if governments overtly agree to stop or pause or otherwise limit machine learning, how credible would such a "gentlemans agreement" be?
Consider the basic operations during training and inference, like matrix multiplication, derivatives, gradient descent. Which of these would be banned? All of them? None of them? Some of them? The combination of them?
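For a sense of scale, here is a hypothetical sketch (plain NumPy; all the names in it are illustrative) of a complete train-and-infer loop built from exactly those primitives:

    # Toy gradient descent: fit y = w*x + b using only multiplies,
    # hand-computed derivatives, and parameter updates.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=100)
    y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=100)  # noisy line

    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(200):
        y_hat = w * x + b                      # "inference": a multiply and an add
        grad_w = 2 * np.mean((y_hat - y) * x)  # derivative of mean squared error
        grad_b = 2 * np.mean(y_hat - y)
        w -= lr * grad_w                       # the gradient descent step
        b -= lr * grad_b

    print(w, b)  # converges near 3.0 and 1.0

Every line is ordinary arithmetic; there is no distinct "AI operation" for a statute to name.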
How would you inspect compliance in the context of privacy?
The analogy with drugs is rather poor: people don't have general-purpose laboratories in their houses, but they do have general-purpose computational platforms in their homes. Another difference is that nations do not prohibit each other from producing drugs; they even permit each other to research and investigate pathogens and chemical weapons in laboratories deemed sufficiently safe.
It's not even clear what you mean by "AI": does it mean all machine learning? Or LLMs? Where do you draw the boundary?
What remains is the threat of punishment in your proposal, but how credible is it? Wouldn't a small collective of programmers conspiring to work on machine learning predict getting paperclipped in case of arrest?
AnimalMuppet
Punish them severely when detected? Nice plan. What if they aren't in your jurisdiction? Are you going to punish them severely when they're in China? North Korea? Somalia? Good luck with that.
The problem is that the information can go anywhere that has an internet connection, and the enforcement can't.
thingsilearned
Regulating is very hard at the software level but not at the hardware level. The US and allies control all major chip manufacturing. OpenAI and others have done work showing that regulating compute should be significantly easier than other regulation we've attempted, such as nuclear: https://www.cser.ac.uk/media/uploads/files/Computing-Power-a...
torginus
This paper should be viewed in retrospect with the present-day knowledge that DeepSeek exists - regulating compute is not as easy or effective as previously thought.
As for the Chinese chip industry, I don't claim to be an expert on it, but it seems the Chinese are quickly coming up with increasingly less inferior alternatives to Western tech.
pj_mukh
"We are able to think of thousands of hypothetical ways technology could go off the rails in a catastrophic way"
Am I the only one here saying that this is no reason to preemptively pass legislation? That just seems crazy to me. Imagined horrors aren't real horrors?
I disagree with this administration's approach. I think we should be vigilant, and keeping people who stand to gain so much from the tech in the room doesn't seem like a good idea. But other than that, I haven't seen any real reason to do more than wait and be vigilant.
achierius
Just because we haven't seen anyone die from nuclear terrorism doesn't mean we shouldn't legislate against it. And we do: significant investments have been made into things like roadside nuclear detectors, and during large events we even go so far as to do city-wide nuclear scans from the air to look for emission sources.
That's an "imagined" horror too. Are you suggesting that what we should do instead is just wait for someone to kill N million people and then legislate? Why do you value the incremental economic benefit of this technology over the lives of people we can predictably protect?
pj_mukh
“Just because we haven't seen anyone die from nuclear terrorism”
I mean, we have…for debatable definitions of “terrorism”.
saulpw
Predicted horrors aren't real horrors either. But maybe we don't have to wait until the horrors are realized and embedded into the fabric of society before we apply the brakes a bit. How else could we possibly be vigilant? Reading news articles and wringing our hands?
XorNot
There's a difference between the trolley speeding towards someone tied to the tracks, someone tied to the tracks with the trolley stationary, and someone standing at the station looking at the bare ground and saying "if we built some tracks and put a trolley on it, and then tied someone to the tracks, the trolley would kill them! We need to regulate against this dangerous trolley technology before it's too late". Then instead someone builds a freeway, because it turns out the area wasn't well suited to a rail trolley.
talldrinkofwhat
I think it's worth noting that we can't even combat the real horrors. The fox is already in the henhouse. The quote that sticks with me is:
"We've already lost our first encounter with AI" - I think Yuval Hurari.
Algorithms heavily thumbed the scales on our social contracts. Where did all of the division come from? Why is extremism blossoming everywhere? Because it gets clicks. Maybe we're just better at observing what's been going on under the hood all along, but it seems like there are about 350 million little cans of gasoline dousing American eyeballs.
Make Algorithms Govern All indeed.
zoogeny
I think the alternative is just as chilling in some sense. You don't want to be stuck in a country that outlaws AI (especially from other countries) if that means you will be uncompetitive in the new emerging world.
The future is going to be hard, why would we choose to tie one hand behind our back? There is a difference between being careful and being fearful.
TFYS
It's because of competition that we are in this situation. When the economic system and relationships between countries are based on competition, it's nearly impossible to avoid these races to the bottom. We need more systems based on cooperation instead of competition.
int_19h
International systems are more organic than designed, but the problem with cooperation is that it's not a particularly stable arrangement without enforcement - sure, everybody is better off when everybody cooperates, but you can be even better off when you don't cooperate but everybody else does.
JumpCrisscross
> We need more systems based on cooperation instead of competition.
That requires dissolving the anarchy of the international system. Which requires an enforcer.
zoogeny
I'm not certain of the balance myself. As a counterpoint, I was thinking of the band The Beatles, where the two songwriters (McCartney and Lennon) are seen as being in competition. There is a balance there between their competitiveness as songwriters and their cooperation in the band.
I think it is one-sided to see any situation where we want to retain balance as being significantly affected by one of the sides exclusively. If one believes that there is a balance to be maintained between cooperation and competition, I don't immediately default to believing that any perceived imbalance is due to one and not the other.
pb7
Competition is as old as time. There are single celled organisms on your skin right now competing for resources to live. There is nothing more innate to life than this.
tmnvix
> You don't want to be stuck in a country that outlaws AI
Just as you don't want to be stuck in the only town that outlaws murder...
I am not a religious person, but I can see the value in promoting shared taboos. The question is, how do we do this in the modern world? We had some success with nuclear weapons. I don't think it's any coincidence that contemporary leaders (and possibly populations) seem to have forgotten how bloody dangerous they are and how utterly stupid it is to engage in brinkmanship with so much on the line.
zoogeny
This is a good point, and it is the reason why communists argued that the only way communism could work is if it happened globally simultaneously. You don't want to be the only non-capitalist country in a world of capitalists. Of course, when the world-wide revolution didn't happen they were forced to change their tune and adjust.
As for nuclear weapons, I mean it does kind of suck in today's age to be a country without nuclear weapons, right? Like, certain well known countries would really like to have them so they wouldn't feel bullied by the ones that have them. So, I actually think that example works against you. And we very well may end up in a similar circumstance where a few countries get super powerful AGIs and then use their advantage to prevent any other country from getting it as well. Therefore my point stands: I don't want to be in one of the countries that doesn't get to be in that exclusive club.
latexr
> if that means you will be uncompetitive in the new emerging world. (…) There is a difference between being careful and being fearful.
I’m so sick of that word. “You need to be competitive”, “you need to innovate”. Bullshit. You want to talk about fear? “Competitiveness” and “innovation” are the words the unscrupulous people at the top use to instil fear in everyone else and run rampant. They’re not being competitive or innovative, they’re sucking you dry of as much value as they can. We all need to take a breath. Stop and think for a moment. You can literally eat food which grows from the ground and make a shelter with a handful of planks and nails. Humanity survived and thrived before all this unfettered consumption, we don’t need to kill ourselves for more.
zoogeny
I live in a ruralish area. There is a lot of forest and, due to economic depression, a lot of people living in the woods. Most live in tents, but some actually cut down the trees and turn them into makeshift shacks, using planks and nails like you suggest. They often drag propane burners into the woods, which leads to fires. Perhaps this is what you mean?
In reality, most people will continue to live the modern life where there are doctors, accountants, veterinarians, mechanics. We'll continue to enjoy food distribution and grocery stores. We'll all hope that North America gets its act together and build high speed rail so we can travel comfortably for long distances.
There was a time Canada was a big exporter of engineering technology. From mining to agriculture, satellites, and nuclear technology. I want Canada to be competitive in these ways, not making makeshift shacks out of planks and nails for junkies that have given up on life and live in the woods.
JumpCrisscross
> “Competitiveness” and “innovation” are the words the unscrupulous people at the top use to instil fear in everyone else and run rampant
If a society is okay accepting a lower standard of living and sovereign subservience, then sure, competition doesn't matter. But if America and China have AI and nukes and Europe doesn't, one side gets to call the shots and the other has to listen.
Henchman21
This may resonate with you:
https://crimethinc.com/2018/09/03/the-mythology-of-work-eigh...
yibg
When does that competitiveness and innovation stop, though? If they had stopped 100 years ago, where would we be today as a species, and would that be better or worse than today? How about 1000 years ago?
We face issues (like we always have), but I'd argue quite strongly that the competitiveness in our history and drive to invent and innovate has led to where we are today and it's a good thing.
achierius
"one hand behind our back"? We're talking about who's going to be the first to build the thing that might kill all of humanity. Or, even in many of the happier scenarios, the thing which will impoverish and immiserate the vast majority of the population, rendering them permanently subject to the whims of the capital-owning few.
Why is it "our" back? The people who will own these machines do not consider you one of them. The people leading the countries that will use these machines to kill each other's civilians do not consider you one of them. You have far more in common with a Chinese worker than you do with Sam Altman or Jeff Bezos.
And frankly? I think choosing a (say, conservatively, just going off of the estimates Altman and Amodei have made in the past) 20% chance of killing everyone as our first resort is just morally unacceptable. If the US made an effort to halt research and China still kept at it, sure, I won't complain I suppose, but we haven't, and pretending that China is the problem when it's our labs pushing the edge on capabilities -- it's just comedic.
yibg
This is true for all new technology of significant potential impact right? Similar discussions were had about nuclear technology I'm sure.
The reality is, with increased access to information and an accelerated pace of discovery in various fields, we'll come across things that have the potential for great harm. Be it AI, some genetic engineering causing a plague, nuclear fallout, etc. We don't necessarily know what the harms / benefits are all going to be ahead of time, so we only really have 2 choices:
1. try to stop / slow down such advances. Not sure this is even possible in the long run
2. try to get a good grasp of potential dangers and figure out ways to mitigate / control them
orangebread
I think the core of what people are scared of is fear itself. Or put more eloquently by some dead guy "There is nothing to fear, but fear itself".
If we don't want to live in a world where these incredibly powerful technologies are leveraged for nefarious purposes, there needs to be emotional maturity and growth amongst humanity. Those who are able to achieve that growth need to hold the irresponsible ones accountable (with empathy).
The promise of AI is that these incredibly powerful technologies will be disseminated to the masses. OpenAI knows this is the next step, and it's why they're trying to keep a grip on their market share. With the advent of Nvidia's Project Digits and powerful open-source models like DeepSeek, it's very clear how this trajectory will go.
Just wanted to add some of this to the convo. Cheers.
throwaway9980
Everything you are describing sounds like the phenomenon of government in the United States. If we replace a human powered bureaucracy with a technofeudalist dystopia it will feel the same, only faster.
We are upgrading the gears that turn the grist mill. Stupid, incoherent, faster.
snickerbockers
AI isn't like nuclear fission. You can't remotely detect that somebody is training an AI. It's far too late to sequester all the information related to AI like what was done with uranium enrichment. The equipment needed to train AI is cheap and ubiquitous.
These "safety declarations" are toothless and impossible to enforce. You can't stop AI, you need to adapt. Video and pictures will soon have no evidentiary value. Real life relationships must be valued over online relationships because you know the other person is real. It's unfortunate, but nothing AI is "disrupting" existed 200 years ago and people will learn to adapt like they always have.
To quote the fictional comic book villain Toyo Harada, "none of you can stop me. Not any one of you individually nor the whole of you collectively."
pjc50
> Video and pictures will soon have no evidentiary value.
I think we may eventually get camera authentication as a result of this, probably legally enforced in the same way and for similar reasons as Japan enforced that digital camera shutters have to make a noise.
> but nothing AI is "disrupting" existed 200 years ago
200 years ago there were about 1 billion people on earth; now there are about 8 billion. Anarchoprimitivists and degrowth people make a similar handwave about the advances of the last 200 years, but they're important to holding up the systems which keep a lot of people alive.
snickerbockers
> I think we may eventually get camera authentication as a result of this, probably legally enforced in the same way and for similar reasons as Japan enforced that digital camera shutters have to make a noise.
Maybe, but I'm not bullish on cryptology having a solution to this problem. Every consumer device that's interesting enough to be worth hacking gets hacked within a few years. Even if nobody ever steals the key there will inevitably be side-channel attacks to feed external pictures into the camera that it thinks are coming from its own sensors.
And then there's the problem of the US government, which is known to strongarm CAs into signing fraudulent certificates.
> 200 years ago there were about 1 billion people on earth; now there are about 8 billion. Anarchoprimitivists and degrowth people make a similar handwave about the advances of the last 200 years, but they're important to holding up the systems which keep a lot of people alive.
I think that's a good argument against the Kaczynski-ites, but I was primarily speaking to concerns such as 'misinformation' and machines pushing humans out of jobs. We're still going to have food, medicine, and shelter. AI can't take that away; the only concern is adapting our society so that we can either feed significant populations of unproductive people, or move those people into whatever jobs machines can't do yet.
We might be teetering on the edge of a dystopian techno-feudalism where a significant portion of the population languishes in slums because industry has no use for them, but that's why I said we need to adapt. There has always been something that has the potential to destroy civilization in the near future, but if you're reading this post then your ancestors weren't the ones that failed to adapt.
ben_w
> Maybe, but I'm not bullish on cryptology having a solution to this problem. Every consumer device that's interesting enough to be worth hacking gets hacked within a few years. Even if nobody ever steals the key there will inevitably be side-channel attacks to feed external pictures into the camera that it thinks are coming from its own sensors.
Or the front-door analog route, point a real camera at a screen showing fake images.
That said, lots of people are incompetent at forging - at knowing what "tells" each process of fakery has and how to overcome them - so I think this will still broadly work.
> We might be teetering on the edge of a dystopian techno-feudalism where a significant portion of the population languishes in slums because industry has no use for them, but that's why I said we need to adapt.
That's underestimating the impact this can have. An AI which reaches human performance and speed on 250 watt hardware, at current global average electricity prices, costs about the same to run as a human costs just to feed.
By coincidence, the global electricity supply is currently about 250 watts/capita.
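To make that concrete, a back-of-the-envelope sketch (the electricity price and the food budget are assumed round figures, not from the thread):

    # Rough daily cost of a hypothetical human-equivalent AI running on 250 W.
    watts = 250
    kwh_per_day = watts * 24 / 1000      # 6 kWh per day of continuous draw
    price_per_kwh = 0.15                 # assumed global-average price, $/kWh
    ai_cost_per_day = kwh_per_day * price_per_kwh
    print(f"${ai_cost_per_day:.2f}/day") # ~$0.90/day: the same order of magnitude
                                         # as a subsistence food budget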
mywittyname
Encryption doesn't need to last forever, just long enough to be scrutinized. Once a trusted individual is convinced that a certain camera took this picture at this time and location, then that authentication is forever. Maybe that trust only includes devices built in the past 5 years, as hacks and bugs are fixed. Or corroborating evidence can be gathered; say several older, "potentially untrustworthy" devices take very similar video of an event.
As with most things, the primary issue is not really a technical one. People will believe fake photos and not believe real ones based on their own biases. So even if we had the Perfect Technology, it wouldn't necessarily matter.
And this is the reason we have fallen into a dystopian feudalistic society (we aren't teetering). The weak link is our incompetent collective human brains. And a handful of people built the tools necessary to exploit that incompetence; we aren't going back.
inetknght
> I think we may eventually get camera authentication as a result of this, probably legally enforced in the same way and for similar reasons as Japan enforced that digital camera shutters have to make a noise.
When you outlaw [silent cameras] the only outlaws will have [silent cameras].
Where a camera might "authenticate" a photograph, an AI could "authenticate" a camera.
rocqua
You handle the authentication by signatures with private keys embedded in hardware modules. An AI isn't going to be able to fake that signature. Instead, the system will fail because the keys will be extracted from the hardware modules.
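A minimal sketch of that scheme, assuming the Python `cryptography` package (in a real camera the private key would live in a tamper-resistant secure element and the manufacturer would publish the public key):

    # Toy version of in-camera image signing with a device-held key.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    device_key = Ed25519PrivateKey.generate()  # in reality: burned into a secure element
    public_key = device_key.public_key()       # in reality: published by the manufacturer

    image_bytes = b"...raw sensor data..."     # placeholder for a captured frame
    signature = device_key.sign(image_bytes)   # camera signs at capture time

    try:
        public_key.verify(signature, image_bytes)  # anyone can verify later
        print("image matches the device signature")
    except InvalidSignature:
        print("image was altered after capture")

Which is why the failure mode above is key extraction rather than forgery: the signature math is sound, so an attacker goes after the key, and a key pulled out of the module will sign fakes just as happily.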
Dalewyn
You might be interested to know that the Managing Director of the Fukuoka Stock Exchange was arrested yesterday[1][2] on allegations that he took upskirt shots of schoolgirls. He was caught because his tablet's camera emitted the mandatory shutter sound.
Laws like this serve primarily to deter casual criminals and catch patently stupid criminals, which are the vast majority of cases. In this case it took a presumed sexual predator off the streets, which is a great application of the law.
[1]: https://www3.nhk.or.jp/news/html/20250212/k10014719841000.ht...
[2]: https://www3-nhk-or-jp.translate.goog/news/html/20250212/k10...
null0pointer
Camera authentication will never work because you can always just take an authenticated photo of your AI image.
IshKebab
I think you could make it difficult for the average user, e.g. if cameras included stereo depth estimation.
Still, I can't really see it happening.
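A hedged sketch of what that check could look like, using OpenCV's block-matching stereo; the file names and the flatness threshold are placeholder assumptions:

    # A re-photographed screen is (nearly) planar, so stereo disparity
    # across the frame should be almost constant, unlike a real 3D scene.
    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # placeholder file

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0

    valid = disparity[disparity > 0]  # ignore unmatched pixels
    if np.std(valid) < 1.0:           # threshold chosen arbitrarily for illustration
        print("Suspiciously flat scene: possibly a photo of a screen")
    else:
        print("Scene has real depth variation")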
root_axis
> I think we may eventually get camera authentication as a result of this
How would this work? Not sure if something like this is possible.
abdullahkhalids
You can't really tell if someone is developing chemical weapons. You can tell when such weapons are used. This is very similar to AI.
Yet, the international agreements on non-use of chemical weapons have held up remarkably well.
czhu12
I actually agree with you, but just wanted to bring up this interesting article challenging that: https://acoup.blog/2020/03/20/collections-why-dont-we-use-ch...
Basically, it claims that chemical weapons have been phased out because they aren't effective, not because we've become more moral or because international standards have been set.
"During WWII, everyone seems to have expected the use of chemical weapons, but never actually found a situation where doing so was advantageous... I struggle to imagine that, with the Nazis at the very gates of Moscow, Stalin was moved either by escalation concerns or the moral compass he so clearly lacked at every other moment of his life."
Der_Einzige
Really? What happened to Bashar al-Assad after he gassed his own people? Oh yeah, nothing.
frank_nitti
I’m not certain that premise is valid: https://thegrayzone.com/2021/04/18/at-un-aaron-mate-debunks-...
JumpCrisscross
> Video and pictures will soon have no evidentiary value
We still accept eyewitness testimony in courts. Video and pictures will be fine, their context is what will matter. Where we'll have a generation of chaos is in the public sphere, as everyone born before somewhere between 1975 and now fails to think critically when presented with an image they'd like to believe is true.
wand3r
I think we'll have a decade of chaos, but not because of this. A lot of stories during the election cycle in news media and on the internet were simply Democratic or Republican "fan fiction". I don't want to make this political; I only use this example to say that I was burned by believing some of these things, and you develop the muscle pretty quickly. Tweets, anecdotes, images and even stories reported by "reputable" media companies already require a degree of critical thinking.
I haven't really believed in aliens existing on earth for most of my adult life. However, I have sort of come around to at least entertaining the idea in recent years, but would need solid photographic or video evidence. I am now convinced that aliens could basically land in broad daylight in 3 years, while being heavily photographed, and it could easily be explained away as AI. Especially if governments want to do propaganda or counter-propaganda.
Sophira
What happens when you eventually get read/write brain interfaces? Because I'm pretty sure that's going to happen at some point.
It sounds like complete science fiction, but so did where we are with generative AI only a few decades ago.
hollerith
>You can't remotely detect that somebody is training an AI.
There are training runs in progress that will use billions of dollars of electricity and GPUs. Quite detectable -- and stoppable by any government that wants to stop such things from happening on territory it controls.
And we can certainly reduce the economic incentive for investing in such a run by banning AI-based services like ChatGPT.
jandrewrogers
> use billions of dollars of electricity and GPUs
For now. Qualitative improvements in efficiency are likely to change what is required.
milesrout
And none of them want to do that. Why would they? AI is perfectly safe. The idea it will take over the world is ludicrous, and all "AI safety" in practice seems to mean is censoring it so it won't make jokes about women or ethnic minorities.
hollerith
Yes, as applied to the current generation of AIs, "safety" and "alignment" refer to things like preventing the product from making jokes about women or ethnic minorities, but that is because the current generation is not powerful enough to threaten human safety and human survival. The OP in contrast is about what will happen if the labs succeed in their stated goal of creating AIs that are much more powerful.
concordDance
Current AIs are safe. Will the ones in 5 years be? In 20?
parliament32
>Video and pictures will soon have no evidentiary value.
This is one bit that has a technological solution. Canon's had some version of this since the early 2000s: https://www.bhphotovideo.com/c/product/319787-REG/Canon_9314...
A more recent initiative: https://c2pa.org/
mzajc
This is purely security by obscurity. I don't see why someone with motivation and capability to forge evidence wouldn't be able to forge these signatures, considering the private keys presumably come with the camera you buy.
rocqua
If you make it expensive enough to extract, and tie the private key to a real identity, then you can make it hard to abuse on scale.
Here I mean that at the point of sale you register yourself as the owner of the camera. And you make extracting a key cost about a million. Then bulk forgeries won't happen.
parliament32
Shipping secure secrets is also a somewhat solved problem: TPMs ship with EKs that, AFAIK, nobody has managed to extract (yet?): https://docs.trustauthority.intel.com/main/articles/tpm-ak-p...
manquer
> You can't remotely detect that somebody is training an AI.
Energy use is energy use: training is still incredibly energy intensive, and GPU heat signatures are different from non-GPU ones, so it's fairly trivial to detect large-scale GPU usage.
Enforcement is a different problem, and is not specific to AI: if you cannot enforce an agreement, it doesn't matter whether it's AI or nuclear or sarin gas.
whiplash451
It is a lot easier to distinguish civil/military usage of uranium than it is to distinguish "good" vs "bad" usage of a model being trained.
manquer
Not if you are making a dirty bomb. Any radioactive material, even at the levels found in power reactors, can be dangerous.
The point is not the usage is harmful or not, almost any tech can be used for bad purposes if you wish to do so.
The point is that you can put controls in place. Controls here could be agent daemons monitoring the GPUs and tallying usage against heat signatures, or firmware, etc. The controls on what is being trained would sit at a higher level than just agent processes on a GPU.
hollerith
That's why we should ban all large training runs without trying to distinguish good ones from bad ones. People will have to be satisfied with the foundation models we have now.
tonymet
The US worked with all printer manufacturers to add watermarking. In theory they could work with fabs or service providers to embed instruction detection, similar to how hosting providers do mining-instruction detection.
r00fus
All this "AI safety" is purely moat-building for the likes of OpenAI et. al. to prevent upstarts like DeepSeek.
LLMs will not get us to AGI. Not even close. Altman talking about this danger is like Musk talking about driverless taxis.
ryanackley
Half moat-building, half marketing. The need for "safety" implies some awesome power.
Don't get me wrong, they are impressive. I can see LLMs eventually enabling people to be 10x more productive in jobs that interact with a computer all day.
bombcar
> The need for "safety" implies some awesome power.
This is a big part of it, and you can get others to do it for you.
It's like the drain cleaner sold in an extra bag. Obviously it must be the best, it's so scary they have to put it in a bag!
r00fus
So it's a tool like the internal combustion engine, or movable type. Game-changing technology that may alter society, but not dangerous like nukes.
timewizard
> eventually enabling people to be 10x more productive in jobs that interact with a computer all day.
I doubt this. Productivity is gained through experience and expertise. If you don't know what you don't know, then the LLM is perfectly useless to you.
ksynwa
Are you not alarmed by the startling discoveries made by the hard-at-work researchers where LLMs lie (when explicitly told to) and copy their source files (when explicitly told to)?
z7
Waymo's driverless taxis are currently operating in San Francisco, Los Angeles and Phoenix.
raincole
I am willing to bet that even when driverless taxis are operating in at least 50% of big cities around the world, you will still see comments like "auto driving is a pipe dream like NFT" on HN every other day.
Spivak
This kind of hypocrisy only exists in a made-up person. Anyone saying that autonomous vehicles are still a ways away is not talking about the very impressive but very much semi-autonomous vehicles deployed today, but instead about vehicles that have no need for a human operator ever. The kind you could buy off the shelf, switch into taxi mode, and let do their thing.
Semi-autonomous vehicles are impressive for the fact that one driver can now scale well beyond a single vehicle. Fully-autonomous vehicles are impressive because they can scale limitlessly. The former is evolutionary, the latter is revolutionary.
ceejayoz
Notably, not Musk's, and very different promised functionality.
hector126
What did Musk's promised driverless taxis provide that existing driverless taxis don't? The tech has arrived; it's a car that drives itself while the passenger sits in the back. Is the "gotcha" that the car isn't a Tesla?
Seems like we're splitting hairs a bit here.
edanm
> All this "AI safety" is purely moat-building for the likes of OpenAI et. al. to prevent upstarts like DeepSeek.
Modern AI safety originated with people like Eliezer Yudkowsky, Nick Bostrom, the LessWrong/rationality movement etc.
They very much were not just talking about it only to build moats for OpenAI. For one thing, OpenAI didn't exist at the time, AI was not anywhere close to where it is today, and almost everyone thought their arguments were ridiculous.
You might not agree with them, but you can't simply dismiss their arguments as only being there to prop up the existing AI players, that's wrong and disingenuous.
z3n0n
entertaining podcast about all this "rationalist" weirdness: https://www.youtube.com/watch?v=e57oo7AgrJY
yodsanklai
I'd say AGI is like Musk talking about interstellar travel.
anon291
> LLMs will not get us to AGI. Not even close. Altman talking about this danger is like Musk talking about driverless taxis.
AGI is a meaningless term. The LLM architecture has shown promise in every single domain where perceptron neural networks were once used. By all accounts, on those things that fit their "senses", LLMs are significantly smarter than the average human being.
wand3r
Driverless taxis already exist?
elric
The way I see "safety" isn't really about what AI "can do", but about how we allow it to be used. E.g. an AI that's used to assess an insurance claim should be fully explainable so we know it isn't using racist biases to deny claims based on skin colour. If the AI can't give that guarantee, it isn't fit for purpose and its use shouldn't be allowed.
Same with killer robots (or whatever it is people are afraid of when they talk about "AI safety"). As long as we can control who they kill, when, and why, there's no real difference from any other weapon system. If that control is even slightly in doubt, it's not fit for service.
Does this mean that bullshit-generating LLMs aren't fit for service in many areas? It probably does. But maybe steps can be taken to mitigate risks.
I'm sure this will involve some bureaucratic overhead. But it seems worth the hassle to me.
Being against AI safety is a stupid hill to die on. Being against some concrete declaration, or a part thereof, sure, that might make sense. But this smells a lot like the tobacco industry being against warnings/filters/low-tar, or the car industry being anti-seatbelt.
jstummbillig
How does your theory account for the Eliezer Yudkowsky type person, who clearly shows no love for any of the labs or the current progress, and yet is very much pro-"AI safety"?
jcarrano
When you are the dominant world power, you just don't let others determine your strategy, as simple as that.
Attempts at curbing AI will come from those who are losing the race. There's this interview where Edward Teller recalls how the USSR used a moratorium on nuclear testing to catch up with the US on the hydrogen bomb, and how he was the one telling the idealist scientists that that was going to happen.
briankelly
I read in Supermen (book on Cray) that the test moratorium was a strategic advantage for the US since labs here could simulate nuclear weapons using HPC systems.
jcarrano
I was referring to the 1958 moratorium. I'd be surprised if they could simulate weapons with the computers of the time. Here [1] is the clip from Teller's interview.
In another clip he says that he believes it was inevitable that the Soviets would come up with an H-bomb on their own.
[1] https://www.youtube.com/watch?v=zx1JTLrhbnI&list=PLVV0r6CmEs...
briankelly
Ok yeah this was a treaty that came much later - didn’t realize there were multiple agreements made over the years.
aucisson_masque
But how long is the US even going to be dominant?
It's well known that China has long caught up with the US in almost every way, and is on the verge of surpassing it in the others. Just look at DeepSeek: as efficient as OpenAI for a fraction of the cost. Baidu, Alibaba AI, and so on.
China's economy remains on track to surpass that of the United States within just five years, and yet they did sign that agreement.
In fact most countries did. India too.
It's not a case of the losers making new rules; it's the big boys discussing how they are going to handle the situation, and the foolish ones thinking they are too good for that.
hector126
> China's economy remains on track to surpass that of the United States within just five years, and yet they did sign that agreement.
I'd be very happy to take a high-stakes, long-term bet with you if that's your earnest position.
Axsuul
> China's economy remains on track to surpass that of the United States within just five years, and yet they did sign that agreement.
Are you actually saying this in the year 2025?
Aurornis
> China's economy remains on track to surpass that of the United States within just five years, and yet they did sign that agreement.
China has signed on to many international agreements that it has absolutely no interest in following or enforcing.
Intellectual property is the most well known. They’re party to various international patent agreements but good luck trying to get them to enforce anything for you as a foreigner.
cutemonster
> and yet they did sign that agreement.
Of course they aren't going to follow it, just sign it. They're bright people
cakealert
This is the correct answer.
However, the attempts are token and they know it too. Just an attempt to appear to be doing something for the naive information consumers, aka useful idiots.
oceanplexian
I missed the boat on the 80s but as a “hacker” who made it through the 90s and 00s there’s something deeply sad and disturbing about how the conversation around AI is trending.
Imagine telling hackers from the past that people on a website called "hacker news" would be arguing about how important it is that the government criminalize running code on your own computer. It's so astoundingly, ethically, philosophically opposed to everything that inspired me to get into computers in the first place. I can only wonder whether people really believe this, or whether it's a sophisticated narrative that's convenient to certain corporations and politicians.
buildfocus
> Imagine telling hackers from the past that people on a website called "hacker news" would be arguing about how important it is that the government criminalize running code on your own computer.
My understanding is that approximately zero government-level safety discussion is about restricting just building and running AI yourself. There are no limits on AI hacking even in the EU AI regulation or in the discussions I've seen.
Regulation is around business & government applications and practical use cases: no unaccountable AI making final employment decisions, no widespread facial recognition in public spaces, transparency requirements for AI usage in high-risk areas (health, education, justice), no AIs with guns, etc.
lolinder
Who is saying this? Do you have specific comments in mind that you're referring to? I can't find anything anywhere near the top that says anything like this.
bmc7505
Called it three years ago: https://news.ycombinator.com/item?id=30142353
entropi
I would say the debate currently going on is less about "running code on your own machine" and more about "making sure the thing you are replacing at least a portion of your labor force with is at least somewhat dependable, and that those who benefit from the replacement are still responsible".
TomK32
I think management is putting too much hope into this; any negative outcome from replacing a human with AI might result in liabilities surpassing the savings. The Air Canada chatbot case was decided just a year ago, and I'm sure the hallucinating AI chatbot, from development to legal fees, cost the airline more money than it saved in their call center.
TomK32
Is there any source for your claim that any (democratic) government wants to criminalize running code on your own computer? I didn't see it in this declaration from the AI Action summit, where the USA and UK are missing from the signatories: https://www.politico.eu/wp-content/uploads/2025/02/11/02-11-...
As you mention ethics: what ethics do we apply to AI? None? Some? The same as to a human? As AI is replacing humans in decision-making, it needs to be held responsible just as a human.
charles_f
AI development is not led by hackers in their garages, but by multi-billion corporations with no incentives other than profit and no care other than their shareholders. The only way to control their negative outcomes in this system is regulation.
If you explained to that hacker that govs and corps would leverage that same technology to spy on everyone and control their lives because line must go up, they might understand better than anyone else why this needs to be sabotaged early in the process.
jart
Get involved in the local AI community. You're more likely to find people with whom you share affinity in places like r/LocalLLaMA. There's also the e/acc movement on Twitter, which espouses the same Gen X-style rebellious libertarian ideals that once dominated the Internet. Stay away from discussions that attract policy larping.
knodi
I think you're missing the point. People are saying that government should make sure AI is not weaponized against the people of the world. But let's face it: the US and UK governments will likely be the first to weaponize it against the people.
As DeepSeek has shown us, progress is hard to hinder unless you go to war and kill the people...
ExoticPearTree
Most likely, the countries that have unconstrained AGIs will advance technologically by leaps and bounds. And those that constrain it will remain in the "stone age" when it comes to it.
_Algernon_
Assuming AGI doesn't lead to an instant apocalyptic scenario, it is more likely to lead to a form of resource curse[1] than to anything that benefits the majority. In general, countries where the elite depends on the labor of the people for its income have better outcomes for the majority than countries where it doesn't (see, for example, developing countries with rich oil reserves).
What would AGI lead to? Most knowledge work would be replaced in the same way as manufacturing work has been, with AGI in the control of the existing elite. It would be used to suppress any revolt for eternity, because surveillance could be perfectly automated and omnipresent.
Really not something to aspire to.
emsign
That's a valid concern. The theory that the population only gets education, health care, human rights and so on if these people are actually needed for the rulers to stay in power is valid. The whole idea of AGIs replacing bureaucrats, which is what DOGE, for example, is betting on, is already axing people's livelihoods and purpose in life. Why train government workers, why spend money on education, training, health care plans, if you have an old nuclear plant powering your silicon farms?
If the rich need fewer and fewer educated, healthy and well-fed workers, then more and more people will get treated like shit. We are currently heading in that direction at full speed. The rich aren't even bothering to hide this from the public anymore, because they think they have won the game and can't be overruled anymore. Let's hope there will still be elections in four years and MAGA doesn't rig them like Fidesz did in Hungary and so many other countries that have fallen into the hands of the internationalist oligarchy.
alexashka
> If the rich need less and less educated, healthy and well fed workers, then more and more people will get treated like shit
Maybe. I think it's a matter of culture.
Very few people mistreat their dogs and cats in wealthy countries. Why shouldn't people in power treat regular people at least as well as regular folks treat their pets?
I'm no history buff, but my hunch is that mistreatment of people largely came from a fear that if I don't engage in cruelty to maximize power, my opponents will; and given that they're cruel, they'll be cruel to me when they come to take over.
So we end up with this zero sum game of squeezing people, animals, resources and the planet in an arms race because everyone's afraid to lose.
In the past - you couldn't be sure if someone else was building up an army, so you had to build up an army. But now that we have satellites and we can largely track everything - we can actually agree to not engage in this zero sum dynamic.
There will be a shift from treating people as means to an end of power accumulation and containment, to treating people as something you just inherently like and would like to see prosper.
It'll be a shift away from this deeply corrosive idea of never ending competition and growth. When people's basic needs are met and no one is grouping up to take other people's goodies - why should regular people compete with one another?
They shouldn't and they won't. People who want to do good work will do so and improving the lives of people worldwide will be its own reward. Private islands, bunkers and yachts will become incomprehensible because there'll be no serf class to service any of it. We'll go back to if you want to be well liked and respected - you have to be a good person. I look forward to it :)
ExoticPearTree
Come on, let’s be real: all governments are bloated with bureaucrats and what DOGE is doing, albeit in Musk style, is to trim the fat a little bit.
You can't seriously claim they are upending people's jobs when those jobs were BS in the first place.
ExoticPearTree
I see it as everyone having access to an AI so they can iterate very fast through ideas. Or do research at a level not possible now in terms of speed.
Or, my favorite outcome, the AI to iterate over itself and develop its own hardware and so on.
randerson
Hackers would be our only hope of a revolution.
crvdgc
Andrew Yang: Data is the new oil.
Sociology: The Oil Curse
daedrdev
I mean, that itself is a hotly debated idea. From your own link: "As of at least 2024, there is no academic consensus on the effect of resource abundance on economic development."
For example, the US is probably the most resource-rich country in the world, but people don't consider it for the resource curse because the rest of its economy is so huge.
Night_Thastus
I don't see any point in speculating about a technology that doesn't exist and that LLMs will never become.
Could it exist some day? Certainly. But current "AI" will never become AGI; there's no path forward.
stackedinserter
Probably it doesn't have to be an AGI that does tricks like passing a Turing test v2. It can be an LLM with a context window of 30GB that can outsmart your rival in geopolitics, economics and policies.
wordpad25
With LLMs able to generate infinite synthetic data to train on, it seems like AGI is just around the corner.
contagiousflow
Whoever told you this is a path forward lied to you
eikenberry
IMO we should focus on the AI systems we have today and not worry about the possibility of AGI coming anytime soon. All indicators are that it is not.
hackinthebochs
>All indicators are that it is not.
What indicators are these?
mitthrowaway2
Focusing on your own feet proved to be near-sighted to a fault in 2022; how sure are you that it is adequately future-proofed in 2025?
eikenberry
Focusing on the clouds is no better.
thrance
Perhaps, but meanwhile, making it legal to have racial-profiling AI tech in the hands of government and corporations does a great disservice to your freedom and privacy. Do not buy the narrative: EU regulations are not about forbidding AGI, they're about ensuring a minimum of decency in how the tech is allowed to exist. Something Americans seem deathly allergic to.
ijidak
It's so interesting that much of this is playing out like that movie The Creator, where New Asia embraces AI and robotics and the western world doesn't.
Here we are, a couple of years later, truly musing about sectors of the world embracing AI and others not.
That sort of piecemeal adoption was predictable, but not that we'd be here having this debate this soon!
emsign
Or maybe those countries' economies will collapse once they let AGIs control institutions instead of human bureaucrats, because the AGIs are doing their own thing and tricking the government by alignment faking and in-context scheming.
CamperBob2
Eh, I'm not impressed with the humans who are running things lately. I say we give HAL a shot.
timewizard
Or it will be viewed like nuclear weapons and those who have it will be bombed by those who don't.
These are all Silicon Valley "neck thoughts". They're entirely uninformed by the current state of the world and any travels through it: fantasies brought about by people with purely monetary desires.
It'd be funny if there weren't billions of dollars being burnt to market this crap.
sschueller
Those countries with unrestricted AGI will be the ones letting AI decide whether you live or die, depending on cost savings for shareholders...
ExoticPearTree
Not if Skynet emerges first and we all die :))
Every technological advancement can be used for good or bad. I believe it is going to be good to have a true AI available at our fingertips.
mdhb
Ok, but what led you to that particular belief in the first place?
Because I can think of a large number of historical scenarios where malicious people get access to certain capabilities and it absolutely does not go well, and you do have to somehow account for the fact that this is a real thing that is going to happen.
ta1243
Those are "Death Panels", and only exist in places like the US where commercial needs run your health care
snickerbockers
Canada had a case a couple of years ago where a disabled person wanted Canadian medicare to pay for a wheelchair ramp in her house, and they instead referred her to their assisted-suicide program.
Imnimo
Am I right in understanding that this "declaration" is not a commitment to do anything specific? I don't really understand why it matters who does or does not sign it.
flanked-evergl
The children running the European countries (into the ground) like this kind of theatre because they can pretend to be doing something productive without having to think.
karaterobot
Yep, it's got all the force of a New Year's resolution. It does not appear to be much more specific than one, either. It's about a page and a half long, the list of countries is as long as the declaration itself, and it basically says "we talked about how we won't do anything bad".
amai
If it's without commitment, then why not sign it?
sva_
Diplomatic theater, justification to get/keep more bureaucrats on the payroll
layer8
It’s an indication of the values shared, or in this case, not shared.
puff_pastry
They're right: the declaration is useless, and it's just an exercise in futility.
seydor
Europe just loves signing declarations and concerned letters. It would make no difference if they signed it.
swyx
Leading in AI safety theater is actually worse than leading in AI, because leadership in AI safety really just comes from leading in AI, period.
llm_trw
Leading in AI safety is leading in AI lobotomization.
flanked-evergl
European leaders are overgrown children. It's kind of pathetic.
lupusreal
These kinds of "don't be evil" declarations are typically meaningless gestures by which non-players, who weren't going to be participating anyway, can posture as morally superior while having no meaningful impact on the course of things. See also the Ottawa Treaty; non-signatories include the US, China, Russia, Pakistan and India, Egypt, Israel, Iran, Cuba, North and South Korea... in other words, all the countries from which landmine use is expected in the first place. And when push comes to shove, signatories like Ukraine will use landmines anyway, because national defense is worth more than feeling morally superior for adhering to a piece of paper.
junto
> “ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all.”
That was never going to fly with the current U.S. administration. Not only is the word inclusive in there but ethical and trustworthy as well.
Joking aside, I genuinely don’t understand the “job creation” claims of JD Vance in his dinner speech in Paris.
Long-term I just can’t imagine what a United States will look like when 75% of the population are both superfluous and a burden to society.
If this happens fast, society will crumble. Sheep are best kept busy grazing.
Spooky23
The audience is JD’s masters and whomever we are menacing today.
The voters are locked-in idiots and don't have agency at the moment. The bet from Musk, Thiel, etc. is that AI is as powerful and strategic as nuclear weapons were in 1947; that's what the Musk administration's diplomacy seems to be like.
ToucanLoucan
I mean, it feels like a joke, but their "policy ideas" really do boil down to targeting anything with the wrong words in it. I read somewhere they're creating havoc right now because a critical intelligence function called "privilege escalation", related to raising the security clearance of personnel, has been mired in stupid culture-war controversy simply because it has the word "privilege" in it.
sagarpatil
The fundamental issue with these AI safety declarations is that they completely ignore game theory. The technology has already proliferated (see: DeepSeek, Qwen) and trying to control it through international agreements is like trying to control cryptography in the 90s.
I've spent enough time building with these models to see their transformative potential. The productivity gains aren't marginal - they're exponential. And this is just with current-gen models.
China's approach is particularly telling. While they lack the massive compute infrastructure of US tech giants, their research output is impressive. Their models may be smaller, but they're remarkably efficient. Look at DeepSeek's performance-to-parameter ratio.
The upside potential is simply too large to ignore. We're seeing breakthroughs in protein folding that would take traditional methods decades. Education is being personalized at scale. The translation capabilities alone are revolutionary.
The reality is that AI development will continue regardless of these declarations. The optimal strategy isn't to slow down - it's to maintain the lead while developing safety measures in parallel. Everything else is just security theater.
(And yes, I've read the usual arguments about x-risk. The bottleneck isn't safety frameworks - it's compute and data quality.)
Something tells me aspects of living in the next few decades driven by technology acceleration will feel like being lobotomized while conscious and watching oneself the whole time. Like yes, we are able to think of thousands of hypothetical ways technology (even those inferior to full AGI) could go off the rails in a catastrophic way and post and discuss these scenarios endlessly... and yet it doesn't result in a slowing or stopping of the progress leading there. All it takes is a single group with enough collective intelligence and breakthroughs, and the next AI will be delivered to our doorstep whether or not we asked for it.
It reminds me of the time I read books in my youth, and only 20 years later realized the authors of some of those books were trying to deliver important life messages to a teenager undergoing crucial changes, all of which would be painfully relevant to the current adult me... and yet the whole time they fell on deaf ears. Like the message was right there, but I did not have the emotional/perceptive intelligence to pick up on it and internalize it for too long.