Stargate Project: SoftBank, OpenAI, Oracle, MGX to build data centers
January 21, 2025
serjester
You have to keep in mind Microsoft is planning on spending almost 100B in datacenter capex this year and they're not alone. This is basically OpenAI matching the major cloud provider's spending.
This could also be (at least partly) a reaction to Microsoft threatening to pull OpenAI's cloud credits last year. OpenAI wants to maintain independence, and with compute accounting for 25–50% of their expenses (currently) [1], this strategy may actually be prudent.
[1] https://www.cnbc.com/2025/01/03/microsoft-expects-to-spend-8...
throitallaway
Microsoft has lots of revenue streams tied to that capex outlay. Does OpenAI have similar revenue numbers to Microsoft?
tuvang
OpenAI has a very healthy revenue stream in the form of other companies throwing money at them.
But to answer your question, no they aren’t even profitable by themselves.
manquer
> they aren’t even profitable
Depends on your definition of profitability. They are not recovering R&D and training costs, but they (and MS) are recouping inference costs from user subscriptions and API revenue with a healthy operating margin.
Today they will not survive if they stop investing in R&D, but they will have to slow down at some point. It looks like they and other big players are betting on a moat they hope to build with the $100B DCs and ASICs, one that open-weight models or other entrants cannot compete with.
This will be either because training will be too expensive (few entities have the budget for $10B+ of training with no need to monetize it), or because even those models, where available, may be impossible to run inference on with off-the-shelf GPUs, i.e. they can only run on ASICs, which only large players will have access to [1].
In this scenario corporations will have to pay them for the best models. When that happens, OpenAI can slow down R&D and become profitable even with capex considered.
[1] This is a natural progression in a compute-bottlenecked sector; we saw a similar evolution from CPU to GPU to ASICs in crypto a few years ago. The comparison is slightly distorted by the switch from PoW to PoS and by some coins being intentionally designed for GPUs, but even then you needed DC-scale operations in a cheap-power location to be profitable.
MR4D
Given the release of the new DeepSeek R1 model [0], OpenAI’s future revenue stream is probably more at risk than it was a week ago.
[0] - https://arstechnica.com/ai/2025/01/china-is-catching-up-with...
tantalor
That's like saying I have a healthy revenue stream from my credit card.
Cthulhu_
They do get a lot of customers buying their stuff, but on top of that, a company with unique IP and mindshare can get investors to open their wallets easily enough; I keep thinking of AMD, which was barely (or not at all) profitable for something like 15 years in a row.
jiggawatts
Meanwhile, Azure has failed to keep up with the last 2-3 generations of both Intel and AMD server processors. They’re available only in “preview” or in a very limited number of regions.
I wonder if this is a sign of the global economic downturn pausing cloud migrations or AI sucking the oxygen out of the room.
whimsicalism
global economic downturn? what?
it’s absolutely the second one. this is a commonality across many orgs i’ve talked to that cannot get their CPU requests met because of GPU spend
SecretDreams
Serious question - why Texas???
tempusalaria
Texas is a world leader in renewable energy. Easy permitting, lots of space, lots of existing grid infrastructure from the oil & gas industry.
doctorpangloss
Why do you think datacenters have actually been built in Oregon?
jnurmine
What about hurricanes? Extreme weather events which spew a lot of water, wind and debris around might potentially do a lot of damage to a data center.
SecretDreams
Any downsides?
LarsDu88
My kneejerk response was to point to the incoming administration, but the fact Stargate has been in the works for more than a year now says to me it's because of tax credits.
b3ing
Lots of back-door deals. Just expect more government projects put in TX, just like that place the Army built in Austin, when we have plenty of dead bases that could be reused.
chickenbig
Natural gas to power the turbines while the nuclear plants are built, I guess. Also, is Texas more open to large-scale development than elsewhere?
SecretDreams
Any downsides?
wilson090
It's where the energy is for this project.
This is unfortunately paywalled but a good writeup on how the datacenter came to be: https://www.theinformation.com/articles/why-openai-and-oracl...
vancroft
I'm not a subscriber so I can't read it, which startup are they referring to in the headline?
hrfister
Probably for the same reason that Silicon Valley has been moving there slowly and quietly for a while now.
SecretDreams
Because rich people inevitably don't like taxes? And maybe forest fires?
throwaway48476
How is compute only 50% of their expenses?
Jarwain
I'd guess salaries and such for all the devs and researchers make up a significant portion of the other half
throwaway48476
That seems almost impossible.
bboygravity
Isn't it more likely a reaction to xAI now having the most training compute?
oldpersonintx
Who is "we"?
This isn't your money
kdmtctl
It is not. But this kind of money does have an impact on society in any field. So this is a proper concern.
deknos
This is so much money, with which we could actually solve problems in the world. Maybe even stop wars that break out because of scarcity issues.
Maybe I am getting too old or too friendly to humans, but it's staggering to me what the priorities are for such things.
CSSer
For less than this same price tag, we could’ve eliminated student loan debt for ~20 million Americans. It would in turn open a myriad of opportunities, like owning a home and/or feeling more comfortable starting a family. It would stimulate the economy in predictable ways.
Instead we gave a small number of people all of this money for a moonshot in a state where they squabble over who’s allowed to use which bathroom and if I need an abortion I might die.
JumpCrisscross
> we could’ve eliminated student loan debt for ~20 million Americans. It would in turn open a myriad number of opportunities, like owning a home
I'd give the money to folks starting in the trades before bailing out the college-educated class.
Also, wiping out numbers on a spreadsheet doesn't erect new homes. If we wiped out student debt, the portion of that value that went into new homeownership would principally flow to existing homeowners.
Finally, you're comparing a government hand-out to private investment into a capital asset. That's like comparing eating out at a nice restaurant to buying a bond. Different prerogatives.
no_wizard
A couple of things that aren’t accounted for:
A) This is a pledge by companies that may or may not even have the cash required to back it up. They certainly aren’t spending it all at once, but to be completely honest it’s nothing more than a PR stunt right now, one that seems to be an exercise in courting favor.
B) That so-called private capital is going to get incentives from subsidies, like tax breaks, grants, etc. It’s inevitable if this proceeds to an actual investment stage. What’s that about it being pure private capital again?
C) Due to the circumstances in A, whatever government support systems are stood up for this (and if this isn’t ending in hot air, there will be some) mean it’s not pure private capital. Worse yet, they’ll likely end up bilking taxpayers, and the initiative falls apart with companies spending far less than the pledge but keeping all the upside.
I’ll bet a years salary it plays out like this.
If this ends up being 100% private capital with no government subsidies of any kind, I’ll be shocked and elated. Look at anything like this in the last 40 years and you’ll find scant few examples that hold up under scrutiny, i.e. that didn’t play out this way.
Which brings me to my second part. So we are going to, in some form, end up handing out subsidies to these companies, at the local, state, or federal level. But by the logic of not paying off student debt, why are we going to do this? It’s only propping up an unhealthy economic policy, no?
Why is it so bad for us to cancel student debt, but fine to hand out the same cost equivalent as subsidies for businesses? Is it under the “creates jobs” smoke screen? Despite the fact that the overwhelming majority of the money made will not go to the workers but back to the wealthy and ultra wealthy.
There's no sense of equity here. If the government is truly, unequivocally hands off (no subsidies, no incentives, etc.) then fine, the profits go where they go, and that's the end of it.
However, it won't be, and that opens up a perfectly legitimate question about how this money is going to get used and who it benefits.
tomlockwood
I think the idea of a "college-educated class" speaks to another fundamental problem with the American project - that a college education is now seen as some upper-class bauble. It is only seen as such a luxury because it is such a slog and expense. Y'all should fix that problem too!
lumost
Student loans are the only loan type you cannot discharge in bankruptcy. I'm sure that many students would accept bankruptcy rather than bailouts, if they found that preferable. It doesn't make sense to saddle 20-year-olds with insurmountable debt.
tsunamifury
This sort of folksy take always ignores that the same issue happens with the trades as does with the professional classes. No one class is immune to a crash due to unnatural promotion. You've let your moral view overcome that reality
talldayo
It's a fair comparison. Stargate is fundamentally about two things - America's industry needs a cash injection, and we're choosing a completely hype-dominated vein to push the needle into.
Problem is, the parent comment is right. Even if you think student loan mitigation has washy economics behind it, the outcome is predictable and even desirable if you're playing the long-game in politics. If not that, spend $500,000,000,000 towards onshoring Apple and Microsoft's manufacturing jobs. Spend it re-invigorating America's mothballed motor industry and let Elon spec it out like a kid in a candy shop. Relative to AI, even student loan forgiveness and dumping money into trades looks attractive (not that Trump would consider either).
Nobody on HN should be confused by this. We know Sam Altman is a scammer (Worldcoin, anyone?) and we know OpenAI is a terrible business. This money is being deliberately wasted to keep OpenAI's lights on and preserve the Weekend At Bernie's-esque corpse that is America's "lead" in software technology. That's it. It's blatantly simple.
visarga
The problem with allowing student debt to rack up to these levels and then cancelling it is that it would embolden universities to ask even higher tuition. A second problem is that not all students get the benefit, some already paid off their debts or a large part of it. It would be unfair to them.
bun_at_work
> not all students get the benefit, some already paid off their debts or a large part of it.
I'm one of the people who paid off a large portion of debt and probably don't need this assistance. However, this argument is so offensive. People were encouraged to take out debt for a number of reasons, and by a number of institutions, without first being educated about the implications of that. This argument states that we shouldn't help people because other people didn't have help. Following this logic, we shouldn't seek to help anyone ever, unless everyone else has also received the exact same help.
- Slaves shouldn't be freed, because other slaves weren't freed.
- We shouldn't give food to the starving, because those not starving aren't getting free food.
- We shouldn't care about others, because they don't care about me.
These arguments are all the greedy option in game theory, and all contribute to the worst outcomes across the board, except for those who can scam others in this system.
The right way to think about programs that help others is to consider cooperating - some people don't get the maximum possible, but they do get some! And when the game is played over and over, all parties get the maximum benefit possible.
In the case of student debt, paying it off and fixing the broken system, by allowing bankruptcy or some other fix, would benefit far more people than it would hurt; it would also benefit some people who paid their loans off completely: parents of children who can't pay off their loans now.
In the end the argument that some already paid off their debts is inherently a selfish argument in the style of "I don't want them to get help because I didn't get help." Society would be better if we didn't think in such greedy terms.
All that said - there are real concerns about debt repayment. The point about emboldening universities to ask for higher tuition highlights the underlying issue with the student loan system. Why bring up the most selfish possible argument when there are valid, useful arguments for your position?
jimkleiber
Yes but every policy is unfair. It literally is choosing where to give a limited resource, it can never be fully fair.
And there could be a change in the law that allows people to discharge student debt in personal bankruptcy, and that could make sure higher tuition doesn't happen.
aylmao
With half a trillion dollars you can also open a lot of universities. Increased supply would lower prices for everyone. One could even open public universities and offer education at very reduced or no tuition.
Twirrim
If we block on the basis that previous people didn't have something and that it would be unfair to them we would literally never make any progress in this world.
Instead of starting a new better world, we'll just stick with the old one that sucks because we don't want to be unfair. What an awful, awful way to look at the world.
thfuran
Only the first of those is a real problem, but it really is a problem.
datavirtue
Simple: ten years of tax deductions for paid student loans. Fixed it.
_aavaa_
When the bailout is for business the money always comes, but suggest even a fraction of that amount of money go towards regular people and all of a sudden there's hand wringing and talk of moral hazards.
> it would embolden universities to ask even higher tuition.
Then cap the amount of loans you give out. Many of them are backed by one level of the government or another.
> A second problem is that not all students get the benefit, some already paid off their debts or a large part of it. It would be unfair to them.
This is a very flimsy argument. Shall we get rid of the polio vaccine since it's unfair to those who already contracted it that our efforts with the vaccine don't benefit them?
Octoth0rpe
> Instead we gave a small number of people all of this money for a moonshot in a state where they squabble over who’s allowed to use which bathroom and if I need an abortion I might die.
AFAICT from this article and others on the same subject, the 500 billion number does not appear to be public money. It sounds like it's 100 billion of private investment (probably mostly from Son), and FTA,
> could reach five times that sum
(5 × 100 billion = 500 billion, the number everyone seems to be quoting)
nejsjsjsbsb
Eliminating some student debt is a fish. Free university is the fishing rod. Do that instead.
whimsicalism
we are vastly overspending and will either need to monetize the debt (disastrous) or massively cut spending and raise taxes in the future. already now, we need to massively raise taxes on the wealthy but even that will be insufficient with our current spend.
free college is just a giveaway to the wealthier third of our society and irresponsible with our current fiscal situation.
_heimdall
Free to the student sounds nice, but who pays for it in the end? And does an education lose a bit of its value when anyone can get it for free?
rqtwteye
"we could’ve eliminated student loan debt for ~20 million Americans. "
Don't throw more money at schools. They will happily take the money and jack up tuition even more. There is no reason why tuition is going up at the pace it does.
aimanbenbaha
> There is no reason why tuition is going up at the pace it does.
There is and it's explained by Baumol's cost disease. Basically you can't sustain paying professors the same wage while productivity increases in other parts of the economy. Even if the actual labor of "professing" hasn't gotten more productive. You have to retain them by keeping up with the broader wage increases. And that cost increase gets passed down to students.
cudgy
Allowing student debts to be included in bankruptcy takes care of most of the issue. Those who were unable to find decent-paying jobs will have a path to relieve the stress of high student loan payments, while those who found high-paying jobs will continue paying on the loans from which they received a benefit.
bitlax
Let the schools pay back the people they scammed.
ajmurmann
Or, prices of houses would go up even more because we still aren't allowing supply to increase and people having more money doesn't change that.
pizzathyme
I am surprised at the negativity from HN. Their clear goal is to build superintelligence. Listen to any of the interviews with Altman, Demis Hassabis, or Dario Amodei (Anthropic) on the purpose of this. They discuss the roadmaps to unlimited energy, curing disease, farming innovations to feed billions, permanent solutions to climate change, and more.
Does no one on HN believe in this anymore? Isn't this tech startup community meant to be the tip of the spear? We'll find out by 2030 either way.
scottLobster
All of those things would put them out of business if realized and are just a PR smokescreen.
Have we not seen enough of these people to know their character? They're predators who, by all accounts, sacrifice every potentially meaningful personal relationship for money, long after they have more than most people could ever dream of. If we legalized gladiatorial blood sport and it became a billion-dollar business, they'd be doing that. If monkey torture porn were a billion-dollar business, they'd be doing that.
Whatever the promise of actual AI (and not just performative LLM garbage), if created they will lock the IP down so hard that most of the population will not be able to afford it. Rich people get Ozempic, poor people get body positivity.
dinkumthinkum
The crazy thing about this "super AI" business is that at some point no one could buy it, because no one could afford it, because no one would have a job (spare me the UBI magic-money fantasy). I love the body positivity line. But if such a thing came to pass, I think something rather different would probably happen to the rich.
jstummbillig
I continue to be amazed at how motivated some of us are to make such cruel, far-reaching and empty claims with regards to people of some popularity/notoriety.
rachofsunshine
Do you want a superintelligence ruling over all humanity until the stars burn out controlled by these people?
The lesson of everything that has happened in tech over the past 20 years is that what tech can do and what tech will do are miles apart. Yes, AGI could give everyone a free therapist to maximize their human well-being and guide us to the stars. Just like social media could have brought humanity closer together and been an unprecedented tool for communication, understanding, and democracy. How'd that work out?
At some point, optimism becomes willfully blinding yourself to the terrible danger humanity is in right now. Of course founders paint the rosy version of their product's future. That's how PR works. They're lying - maybe to themselves, and definitely to you.
akra
Just my opinion/observation really, but I believe it's because people are implicitly entertaining the possibility that it is no longer about software. This announcement implicitly states that the main long-term advantage isn't talent but hardware, compute, and most importantly the wealth and connections to access large sums of capital. AI will give capital and the wealthy elite more of an advantage over human intelligence and ingenuity, which is not what most hacker/tech forums are typically about.
For example, it isn't about what you can do tinkering in your home or garage anymore, or what algorithm you can crack through your own ingenuity to create more use cases and possibilities; it's about capital, relationships, hardware, and politics. A recent article that went around, and many others, argue that capital and wealth will matter more and make "talent" obsolete in the world of AI; the large figure in this announcement just adds weight to that hypothesis.
All this means the big get bigger. It isn't about startups, grinding, working hard, or being smarter, which means it isn't really meritocratic. This creates an uneven playing field, quite different from previous software technology phases, where the gains (and access to them) were more distributed and democratized, and mostly accessible to the talented and hard-working (e.g. the risk-taking startup entrepreneur with coding skills and a love of tech).
In some ways it is the opposite of the indie hacker stereotype, who ironically is probably one of the biggest losers in the new AI world. In the new world what matters is wealth, ownership of capital, relationships, politics, land, resources, and other physical and social assets. In the new AI world, scammers, PR people, salespeople, politicians, and the ultra wealthy with power thrive, and nepotism and connections are the main advantage. You don't just see this in AI, btw (e.g. meme coins recently seen as a better path to wealth than working, thanks to a loose link to a powerful figure), but AI, like any tech, amplifies the capability of people with power, especially if, by definition, the powerful no longer need to be smart or need other smart people to wield it, unlike tech in the past.
They needed smart people in the past; we may be approaching a world where the smart people make themselves as a whole redundant. I can understand why a place like this doesn't want that to succeed, even if the world's resources are being channeled to that end. Time will tell.
gmd63
Exactly as you say. AI is imagined as the wealthy nepotist's escape pod from an equal playing field and democratized access to information. Win-at-all-costs, soulless predators who find infinite sacrifice somehow righteous love games like the ones that macro-scale AI creates.
The average person's utility from AI is marginal. But to a psychopath like Elon Musk, who is interested in deceiving the internet about Twitter engagement or juicing his crypto scam, it's a necessary tool for creating seas of fake personas.
enraged_camel
>> Does no one on HN believe in this anymore? Isn't this tech startup community meant to be the tip of the spear? We'll find out by 2030 either way.
I joined in 2012, and have been reading since 2010 or so. The community has definitely changed since then, but the way I look at it is that it actually became more reasoned as the wide-eyed and naive teenagers/twenty-somethings of that era gained experience in life and work, learned how the world actually works, and perhaps even got burned a few times. As a result, today they approach this type of news with far more skepticism than their younger selves would have. You might argue that the pendulum has swung too far toward the cynical end of the spectrum, but I think that's subjective.
holoduke
I think (big assumption) most here are from that same period. Most are in their late 30s or 40s. Kids, busy lives, etc. Not the young hacker mindset, but the responsible, maybe a bit stressed, person.
timewizard
> Their clear goal is to build superintelligence
One time I bought a can of what I clearly thought was human food. Turns out it was just well dressed cat food.
> to unlimited energy, curing disease, farming innovations to feed billions,
Aw they missed their favorite hobby horse. "The children." Then again you might have to ask why even bother educating children if there is going to be "superintelligent" computers.
Anyway... all this stuff will then be free, right? Or is someone going to "own" the superintelligent computer? That's an interesting question that gets entirely left out of our futurism fantasy.
stephen_g
I'm sure some do, but understand that what they're basically saying is "we will build an AI God, and it will save us from all our problems."
At that point, it's not technology; it's religion (or even bordering on cult-like thinking).
dyauspitr
I’m willing to believe. It’s probably the closest we’ve come to actually having a real-life god. I’m going to get pushback on this, but I’ve used o1 and it’s pretty mind-blowing to me. I would say something 10x as intelligent, with sensors to perceive the world and some sort of continuously running self-optimization algorithm, would essentially be a viable artificial intelligence.
semi-extrinsic
> They discuss the roadmaps to unlimited energy, curing disease, farming innovations to feed billions, permanent solutions to climate change, and more.
Look at who is president, or who is in charge of the biggest companies today. It is extremely clear that intelligence is not a part of the reason why they are there. And with all their power and money, these people have essentially zero concern for any of the topics you listed.
There is absolutely no reason to believe that if artificial superintelligence is ever created, all of a sudden the capitalist structure of society will get thrown away. The AIs will be put to work enriching the megalomaniacs, just like many of the most intelligent humans are.
dinkumthinkum
Unlimited energy? No, I don't believe in this. I thought people on HN generally accepted science and not nonsense. And a "superintelligence" that would... what? Destroy the middle class, destroy the economy, cause riots and civil wars? If it's even possible. Sounds great.
tim333
>wars which break out because of scarcity issues
That doesn't seem to be much of a thing these days. If you look at Russia/Ukraine or China/Taiwan there's not much scarcity. It's more bullying dictator wants to control the neighbours issues.
Cthulhu_
It will be, or, it's slowly happening already. Climate change is triggering water and food shortages, both abroad and on your doorstep (California wildfires), which in turn trigger mass migrations. If a richer and/or more militarily equipped country decides they want another country's resources to survive, we'll see wars erupt everywhere.
Then again, it's more of a logistics challenge, and if e.g. California were to invade Canada for its water supply, how are they going to get it all the way down there?
I can see it happening in Africa though; a long string of countries rely on the Nile, but large hydropower dams built in Sudan and Ethiopia are reducing the water flow, which Egypt is really not happy about, as it's costing them water supply and irrigated land. I wouldn't be surprised if Egypt and its allies declared war on those countries, aiming to have the dams broken. Then again, that's been going on for some years now and nothing has happened yet as far as I'm aware.
(the above is armchair theorycrafting from thousands of miles away based on superficial information and a lively imagination at best)
tim333
I was in Egypt a while and there's no talk of them invading Sudan or Ethiopia. A lot of Egypt's economy is overseas aid from the US and similar.
The main military thing going on there (I was in Dahab, where there are endless military checkpoints) is Hamas-like guys trying to come over and overthrow the fairly moderate Egyptian government and replace it with a hardline, Hamas-type Islamic dictatorship for the glorification of Allah, etc. Again, it's not about reducing scarcity; it's more about increasing scarcity in return for political control. Dahab and Cairo are both a few hours' drive from Gaza.
bagels
California moves water the long way with aqueducts, pipes and pumps. It's an understood problem, but expensive.
qrsjutsu
> it's more of a logistics challenge
and a bureaucratic one as well. in Germany, they want to trim bureaucratic necessities while (not) expecting multiple millions of climate refugees.
lot's of undocumented STUFF (undocumented have nowhere to go so they don't get vaccines, proper help when sick, injured, mentally unstable, threatened, abused) incoming which means more disease, crime, theft, money for security firms and insurance companies, which means more smuggle, more fear-mongering via media, more polarization, more hard-coding of subservience into the young, more financial fascism overall, less art, zero authenticity, and a spawn of VR worlds where the old rules apply forever.
plus more STDs and micro-pandemics due to viral mutations because people will be even more careless when partying under second-semester light-shows in metropolitan city clubs and festivals and when selling out for an "adventurous" quick potent buck and bug, which of course means more money pouring into pharma who won't be able to test their drugs thoroughly (and won't have to, not requiring platforms to fact check will transfer somewhat into the pharma industry) because the population will be more diverse in terms of their bio-chemical reactions towards ingredients in context of their "fluid" habitats chemical and psycho-social make-ups.
but it's cool, let's not solve the biggest problems before pseudo-transcending into the AGI era. will make for a really great impression, especially those who had the means, brains, skills, (past) careers, opportunity and peace of mind.
dbspin
There's a terrifying amount of food insecurity and poverty in Russia - https://www.globalhungerindex.org/russia.html - https://databankfiles.worldbank.org/public/ddpext_download/p...
tim333
Your first link says "With a score under 5, Russian Federation has a level of hunger that is low."
The current situation with Russia and China seems caused by their becoming prosperous. In the 1960s China, and in the 1990s Russia, were broke. Now that they have money, they can afford to put it into their militaries and try to attack their neighbours.
I'm reminded of the KAL cartoon on Russia https://www.economist.com/cdn-cgi/image/width=1424,quality=8... That was from 2014. Russia is already heading to the next panel in the cycle.
cpursley
Russia is a massive grain producer and exporter. One of their biggest health issues right now is obesity (from those cheap grains) with 60% of the adult population overweight, and growing. Furthermore, obesity has actually been an issue for their recruiting effort (there's a lot of running in war).
palmfacehn
I would wager that states such as Russia misallocate resources, which in turn reduces productivity. Worse yet, some of the policy prescriptions stated above would further misallocate scarce resources and reduce productivity. Scarcity doom becomes a self-fulfilling prophecy. This outcome is then used to rationalize further economic intervention, and the cycle compounds upon itself.
To be explicitly clear, the US granting largess to tech companies for datacenters also counts as a misallocation in my view.
akho
Have you tried opening the links? They show Russia at developed country level in terms of food insecurity (score <5, they don't differentiate at those levels; this is a level mostly shown for EU countries); and a percentage of population below the international poverty line of 0.0% (vs, as an example, 1.8 % in Romania). This isn't great — being in the poverty briefs at all is not indicative of prosperity — but your terrification should probably come from elsewhere.
infecto
Russia is run by the mob. The country has no real dominant industry beyond its natural resources. Are they really a good example?
HeatrayEnjoyer
At any given time approximately 1 in 10 humans are facing starvation or severe food insecurity.
Octoth0rpe
I don't doubt that, but it's harder to connect that fact to a specific international conflict.
rainingmonkey
"Global warming may not have caused the Arab Spring, but it may have made it come earlier... In 2010, droughts in Russia, Ukraine, China and Argentina and torrential storms in Canada, Australia and Brazil considerably diminished global crops, driving commodity prices up. The region was already dealing with internal sociopolitical, economic and climatic tensions, and the 2010 global food crisis helped drive it over the edge."
https://www.scientificamerican.com/article/climate-change-an...
boxed
Or religious fanatics want to murder other religious groups.
_Tev
> That doesn't seem to be much of a thing these days.
If you ignore Gaza and whole of Africa, maybe.
tim333
Gaza seems mostly to be about who controls Israel/Palestine politically. Gaza was reasonably ok for food and housing and is now predictably trashed as a result of Hamas wanting to control Palestine from the river to the sea as they say.
South Sudan is some ridiculous thing where two rival generals are fighting for control. Are there any wars which are mostly about scarcity at the moment?
energy123
Very zero-sum outlook on things which is factually untrue much of the time. When you invest money in something productive that value doesn't get automatically destroyed. The size of the pie isn't fixed.
Nullabillity
> something productive
So... not this.
jstummbillig
It's an indirect attempt at tackling any first-order problem. So is all software engineering.
ozim
Money doesn't fix stuff. You need good will people and good will people don't need that much money.
ajmurmann
More importantly, money, at global scale, doesn't solve scarcity issues. If there are 100 apples and 120 people, making sure everyone has a lot of money doesn't magically create 20 more apples. It just raises the price of apples. Building an apple orchard creates apples. Stargate is people betting that they are building a phenomenal apple orchard. I'm not sure they will, and I'm worried the apple orchard will poison us all, but unlike me these people are putting their money where their mouth is and thus have a larger incentive to figure out what they are doing.
ozim
On global scale if you have 100 people and 150 apples but apples are on the opposite side of the globe it is not like you can sustainably get those apples delivered all the time.
Getting 150 apples once is better than nothing but still doesn’t fix the problem.
mft_
Money alone might not fix stuff... but an absence of money can prevent stuff being fixed.
b3lvedere
Such mega investments are usually not for the sake of humankind. They are usually for the sake of a very select group of humans.
thelastgallon
We could do 20 Manhattan projects with it[1].
1) Build fully autonomous cars so there are zero deaths from car accidents. This is ~45K deaths/year (just US!) and millions of injuries. Annual economic cost of crashes is $340 billion. Worldwide the toll is 10 - 100x?
2) Put solar on top of all highways.
3) Give money to all farmers to put solar.
4) Build transmission.
And many more ...
The Manhattan Project employed nearly 130,000 people at its peak and cost nearly US$2 billion (equivalent to about $27 billion in 2023): https://en.wikipedia.org/wiki/Manhattan_Project
XorNot
You can't just compare things to the Manhattan project. The Manhattan project was large for its time, but the thing they were doing was, ultimately, simple. You can build a nuclear bomb with nothing more than a large enough sphere of enriched U-235 and it'll explode. Which is what the Hiroshima bomb was: a gun-type assembly. This is not a complex device.
The relative complexity of projects only ever increases, because if they were simpler we would already have done them. The modern LHC is far more complicated than the Manhattan project. So is ITER. Hell, the US military's logistics chain is more complicated than the Manhattan project.
The fundamental attribution error here is going "look the power to destroy a city was so much cheaper!"
ahmeneeroe-v2
(contingent on the money actually being spent, which....) This is basically an AI-manhattan project. It would employ vast numbers of construction, tradesmen, manufacturing, etc.
philomath_mn
This isn't a handout, it is an investment with an expected return. Which is good, because it is less likely to be applied to bad ideas like forcing solar roadways and solar farmers.
LMYahooTFY
> 3) Give money to all farmers to put solar.
...on their roofs? Over all their crops? What's the play here?
TheAceOfHearts
I'm confused and a bit disturbed; honestly having a very difficult time internalizing and processing this information. This announcement is making me wonder if I'm poorly calibrated on the current progress of AI development and the potential path forward. Is the key idea here that current AI development has figured out enough to brute force a path towards AGI? Or I guess the alternative is that they expect to figure it out in the next 4 years...
I don't know how to make sense of this level of investment. I feel that I lack the proper conceptual framework to make sense of the purchasing power of half a trillion USD in this context.
og_kalu
"There are maybe a few hundred people in the world who viscerally understand what's coming. Most are at DeepMind / OpenAI / Anthropic / X but some are on the outside. You have to be able to forecast the aggregate effect of rapid algorithmic improvement, aggressive investment in building RL environments for iterative self-improvement, and many tens of billions already committed to building data centers. Either we're all wrong, or everything is about to change." - Vedant Misra, Deepmind Researcher.
Maybe your calibration isn't poor. Maybe they really are all wrong. But there's a tendency here to think these people behind the scenes are all charlatans, fueling hype without equal substance, hoping to make a quick buck before it all comes crashing down, and I don't think that's true at all. I think these people really genuinely believe they're going to get there. And if you genuinely believe that, then this kind of investment isn't so crazy.
rhubarbtree
The problem is, they are hugely incentivised to hype to raise funding. It’s not whether they are “wrong”, it’s whether they are being realistic.
The argument presented in the quote there is: “everyone in AI foundation companies are putting money into AI, therefore we must be near AGI.”
The best evaluation of progress is to use the tools we have. It doesn’t look like we are close to AGI. It looks like amazing NLP with an enormous amount of human labelling.
LeftHandPath
Absolutely. Look at how Sam Altman speaks.
If you've taken a couple of lectures about AI, you've probably been taught not to anthropomorphize your own algorithms, especially given how the masses think of AI (in terms of Skynet, Cortana, "Her", Ex Machina, etc). It encourages people to mistake the capabilities of the models and ascribe to them all of the traits of AI they've seen in TV and movies.
Sam has ignored that advice, and exploited the hype that can be generated by doing so. He even tried to mimic the product in "Her", down to the voice [0]. The old board said his "outright lying" made it impossible to trust him [1]. That behavior raises eyebrows, even if he's got a legitimate product.
[0]: https://www.wired.com/story/openai-gpt-4o-chatgpt-artificial...
[1]: https://www.theverge.com/2024/5/28/24166713/openai-helen-ton...
og_kalu
>The problem is, they are hugely incentivised to hype to raise funding.
Hype is extremely normal. Everyone with a business gets the chance to hype for the purpose of funding. That alone isn't going to get several of the biggest tech giants in the world to pour billions.
Satya just said, "he has his 80 billion ready". Is Microsoft an "AI foundation company" ? Is Google ? Is Meta ?
The point is the old saying - "Put your money where your mouth is". People can say all sorts of things but what they choose to spend their money on says a whole lot.
And I'm not saying this means the investment is guaranteed to be worth it.
sandspar
The newest US president announced this within 48 hours of assuming office. Hype alone couldn't set such a big wheel in motion.
skrebbel
> there's a tendency here to think these people behind the scenes are all charlatans, fueling hype without equal substance hoping to make a quick buck before it all comes crashing down, but i don't think that's true at all. I think these people really genuinely believe they're going to get there.
I don't immediately disagree with you but you just accidentally also described all crypto/NFT enthusiasts of a few years ago.
HeatrayEnjoyer
NFTs couldn't pass the Turing test, something I didn't expect to witness in my lifetime.
The two are qualitatively different.
rglover
It's identical energy. A significant number of people are attaching their hopes and dreams to a piece of technology while deluding themselves about the technical limitations of that technology. It's all rooted in greed. Relatively few are in it to push humanity forward, most are just trying to "get theirs."
og_kalu
Well Crypto had nowhere near the uptake [0] and investment (even leaving this announcement aside, several of the biggest tech giants are pouring billions into this).
At any rate, I'm not saying this means that all this investment is guaranteed to pay off.
[0] With 300 million weekly active users/1 billion messages per day and #8 in visits worldwide the last few months just 2 years after release, ChatGPT is the software product with the fastest adoption ever.
root_axis
Motivated reasoning sings nicely to the tune of billions of dollars. None of these folks will ever say, "don't waste money on this dead end". However, it's clear that there is still a lot of productive value to extract from transformers and certainly there will be other useful things that appear along the way. It's not the worst investment I can imagine, even if it never leads to "AGI"
og_kalu
Yeah people don't rush to say "don't waste money on this dead end" but think about it for a moment.
A $500B investment doesn't just fall into one's lap. It's not your run-of-the-mill funding round. No, this is something you very actively work towards, and something your funders must be really damn convinced is worth the gamble. No one sane is going to look at what they genuinely believe to be a dead end and try to drum up Manhattan Project scales of investment. Careers have been nuked for far less.
ca_tech
I am not qualified to make any assumptions but I do wonder if a massive investment into computing infrastructure serves national security purposes beyond AI. Like building subway stations that also happen to serve as bomb shelters.
Are there computing and cryptography problems that the infrastructure could be (publicly or quietly) reallocated to address if the United States found itself in a conflict? Any cryptographers here have a thought on whether hundreds of thousands of GPUs turned on a single cryptographic key would yield any value?
misswaterfairy
I'm not a cryptographer, nor am I good with math (actually I suck badly; consider yourself warned...), but I am curious about how threatened password hashes should feel if the 'AI juggernauts' suddenly fancy themselves playing on the red team, so I quickly did some (likely poor) back-of-the-napkin calculations.
'Well known' password notwithstanding, let's use the following as a password:
correct-horse-battery-staple
This password is 28 characters long, and whilst it could be stronger with uppercase letters, numbers, and special characters, it still shirtfronts a respectable ~1,397,958,111 decillion (1.39 × 10^42) combinations for an unsuspecting AI-turned-hashcat cluster to crack. Let's say this password was protected by SHA2-256 (assuming no cryptographic weaknesses exist; I haven't checked, this is purely academic), and that at least 50% of hashes would need to be tested before 'success' flourishes (let's try to make things a bit exciting...).
I looked up a random benchmark for hashcat, and found an average of 20 gigahashes/second (GH/s) for a single RTX 4090.
If we throw 100 RTX 4090s at this hashed password, assuming a uniform 20 GH/s (combined firepower of 2,000 GH/s, i.e. 2 x 10^12 hashes per second) and absolutely perfect running conditions, it would take at least eleven sextillion (1.1 x 10^22) years to crack. Earth will be long gone by the time that rolls around.
Turning up the heat (perhaps literally) by throwing 1,000,000 RTX 4090s at this hashed password, assuming the same conditions, doesn't help much (in terms of Earth's lifespan): around two quintillion (2.21 x 10^18) years.
Using some recommended password specifications from NIST - 15 characters comprised of upper and lower-case letters, numbers, and special characters - let's try:
dXIl5p*Vn6Gt#BH
Despite the higher complexity, this password only just ekes out a paltry ~41 sextillion (4.11 × 10^22) possible combinations. Throwing 100 RTX 4090s at this password would, rather worryingly, only take around three hundred and twenty-six (326) years to reach a 50% chance of success. My calculator didn't even turn my answer into a scientific number!
More alarming still is when 1,000,000 RTX 4090s get sicced on the shorter hashed password: only around twelve days (0.033 years) to reach even odds of cracking it.
I read a report that suggested Microsoft aimed to have 1.8 million GPUs by the end of 2024. We'll probably be safe for at least the next six months or so. All bets are off after that.
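For anyone who wants to redo the napkin math, here's a tiny sketch. The 20 GH/s per RTX 4090 and the keyspace figures are the rough assumptions quoted above, not measured benchmarks:

```python
# Rough hash-cracking time estimator (all figures are napkin-level assumptions).
GH = 10**9                     # hashes per gigahash
SECONDS_PER_YEAR = 31_557_600  # Julian year

def years_to_half_crack(keyspace: float, gpus: int, ghps_per_gpu: float = 20.0) -> float:
    """Years until half the keyspace has been searched (~50% chance of success)."""
    hashes_per_second = gpus * ghps_per_gpu * GH
    return (keyspace / 2) / hashes_per_second / SECONDS_PER_YEAR

# 28-char passphrase (~1.39e42 combinations) vs 15-char complex password (~4.11e22)
for keyspace in (1.39e42, 4.11e22):
    for gpus in (100, 1_000_000):
        print(f"keyspace {keyspace:.3g}, {gpus:>9} GPUs: "
              f"{years_to_half_crack(keyspace, gpus):.3g} years")
```

Note that multiplying the GPU count by 10^4 only divides the answer by 10^4, which is why even a million cards barely dent the 28-character passphrase while the shorter keyspace collapses.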
All I dream about is the tidal wave of cheap high-performance GPUs flooding the market when the AI bubble bursts, so I can finally run Far Cry at 25 frames per second for less than a grand.
DebtDeflation
>Maybe they really are all wrong
All? Quite a few of the best minds in the field, like Yann LeCun for example, have been adamant that 1) autoregressive LLMs are NOT the path to AGI and 2) that AGI is very likely NOT just a couple of years away.
skepticATX
You have hit on something that really bothers me about recent AGI discourse. It’s common to claim that “all” researchers agree that AGI is imminent, and yet when you dive into these claims “all” is a subset of researchers that excludes everyone in academia, people like Yann, and others.
So the statement becomes tautological “all researchers who believe that AGI is imminent believe that AGI is imminent”.
And of course, OpenAI and the other labs don’t perform actual science any longer (if science requires some sort of public sharing of information), so they win every disagreement by claiming that if you could only see what they have behind closed doors, you’d become a true believer.
anthonypasq
I'm inclined to agree with Yann about true AGI, but he works at Meta, and they seem to think current LLMs are sufficiently useful to be dumping preposterous amounts of money into them as well.
It may be a distinction that's not worth making if the current approach is good enough to completely transform society and make infinite money.
og_kalu
It's obviously not taken to mean literally everybody.
Whatever LeCun says (and even he has said "AGI is possible in 5 to 10 years" as recently as two months ago, so if that's the 'skeptic' opinion, you can only imagine what a lot of people are thinking), Meta has been and is pouring a whole lot of money into LLM development. "Put your money where your mouth is", as they say. People can say all sorts of things, but what they choose to focus their money on tells you a whole lot.
whiplash451
Who says they will stick to autoregressive LLMs?
sanderjd
I think it will be in between, like most things end up being. I don't think they are charlatans at all, but I think they're probably a bit high on their own supply. I think it's true that "everything is about to change", but I think that change will look more like the status quo than the current hype cycle suggests. There are a lot of periods in history when "everything changed", and I believe we're already a number of years into one of those periods now, but in all those cases, despite "everything" changing, a perhaps surprising number of things remained the same. I think this will be no different than that. But it's hard, impossible really, to accurately predict where the chips will land.
paul7986
My prediction is Apple loses to OpenAI, who release an H.E.R.-like phone (as in the movie). She is seen on your lock screen a la a FaceTime call UI/UX, and she can be skinned to look like whoever, i.e. a deceased loved one.
She interfaces with the AI agents of companies, organizations, friends, family, etc. to get things done for you (or to learn from: what's my friend's bday? his agent tells yours) automagically, and she is like a friend. Always there for you at your beck and call, like in the movie H.E.R.
Zuckerberg's glasses that cannot take selfies will only be complementary to our AI phones.
That's just my guess and desire as a fervent GPT user, as well as a Meta Ray-Ban wearer (can't take selfies with glasses).
liamwire
My take on this is that, despite an ever-increasingly connected world, you still need an assistant like this to remain available at all times your device is. If I can’t rely on it when my signal is weak, or the network/service is down/saturated, its way of working itself into people’s core routines is minimal. So either the model runs locally, in which case I’d argue OpenAI have no moat, or they uncover some secret sauce they’re able to keep contained to their research labs and data centres that’s simply that much better than the rest, in perpetuity, and is so good people are willing to undergo the massive switching costs and tolerate the situations in which the service they’ve come to be so dependent on isn’t available to them. Let’s also not discount the fact that Apple are one of the largest manufacturers globally of smartphones, and that getting up to speed in the myriad industries required to compete with them, even when contracting out much of that work, is hard.
lm28469
I still fail to see who desires that, how it benefits humanity, or why we need to invest $500B to get to this.
nhinck3
Sorry, you live in a different world, google glasses were aggressively lame, the ray bans only slightly less so.
But pulling out your phone to talk to it like a friend...
varsketiz
Very insightful take on agents interacting with agents, thanks for sharing.
Re H.E.R phone - I see people already trying to build this type of product, one example: https://www.aphoneafriend.com
nejsjsjsbsb
I am hoping it is just the usual ponzi thing.
ajmurmann
How would this be a Ponzi scheme? Who are the leaf nodes ending up holding the bag?
Davidzheng
Let me avoid the use of the word AGI here because the term is a little too loaded for me these days.
1) reasoning capabilities in latest models are rapidly approaching superhuman levels and continue to scale with compute.
2) intelligence at a certain level is easier to achieve algorithmically when the hardware improves. There are also more paths to intelligence, and often simpler mechanisms suffice.
3) most current generation reasoning AI models leverage test-time compute and RL in training, both of which can readily make use of more compute. For example, RL on coding against compilers, or on proofs against verifiers.
All of this points to compute now being basically the only bottleneck to massively superhuman AIs in domains like math and coding--rest no comment (idk what superhuman is in a domain with no objective evals)
philipwhiuk
You can't block AGI on a whim and then deploy 'superhuman' without justification.
A calculator is superhuman if you're prepared to put up with its foibles.
Davidzheng
It is superhuman in a very specific domain. I didn't use AGI because its definitions are one of two flavors.
One, capable of replacing some large proportion of global gdp (this definition has a lot of obstructions: organizational, bureaucratic, robotic)...
two, difficult to find problems in which average human can solve but model cannot. The problem with this definition is that the distinct nature of intelligence of AI and the broadness of tasks is such that this metric is probably only achievable after AI is already in reality massively superhuman intelligence in aggregate. Compare this with Go AIs which were massively superhuman and often still failing to count ladders correctly--which was also fixed by more scaling.
All in all I avoid the term AGI because for me AGI means comparing average intelligence on broad tasks relative to humans, and I'm already not sure whether that's achieved by current models, whereas superhuman research math is clearly not achieved, because humans are still making all of the progress on new results.
lossolo
> All of this points to compute now being basically the only bottleneck to massively superhuman AIs
This is true for brute force algorithms as well and has been known for decades. With infinite compute, you can achieve wonders. But the problem lies in diminishing returns[1][2], and it seems things do not scale linearly, at least for transformers.
1. https://www.bloomberg.com/news/articles/2024-12-19/anthropic...
2. https://www.bloomberg.com/news/articles/2024-11-13/openai-go...
rhubarbtree
> 1) reasoning capabilities in latest models are rapidly approaching superhuman levels and continue to scale with compute.
What would you say is the strongest evidence for this statement?
__loam
Well the contrived benchmarks the industry selling the models made up seem to be improving.
viccis
>reasoning capabilities in latest models are rapidly approaching superhuman levels and continue to scale with compute
I still have a pretty hard time getting it to tell me how many sisters Alice has. I think this might be a bit optimistic.
SketchySeaBeast
They plugged the hole for "how many 'r''s in 'strawberry'", but I just asked it how many "l"s in "lemolade" (spelling intentional) and it told me 1. If you make it close to, but not exactly a word it would be expecting it falls over.
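The ground truth is a one-liner, which is what makes the stumble notable; the models see tokens rather than individual characters:

```python
# Character counting is trivial in code; LLMs operate on tokens, hence the errors.
word = "lemolade"  # deliberate misspelling of "lemonade"
print(word.count("l"))          # -> 2, not the 1 the model claimed
print("strawberry".count("r"))  # -> 3
```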
smartmic
I see it somewhat differently. It is not that technology has reached a level where we are close to AGI, we just need to throw in a few more coins to close the final gap. It is probably the other way around. We can see and feel that human intelligence is being eroded by the widespread use of LLMs for tasks that used to be solved by brain work. Thus, General Human Intelligence is declining and is approaching the level of current Artificial Intelligence. If this process can be accelerated by a bit of funding, the point where Big Tech can overtake public opinion making will be reached earlier, which in turn will make many companies and individuals richer faster, also the return on investment will be closer.
dauhak
> Is the key idea here that current AI development has figured out enough to brute force a path towards AGI?
My sense anecdotally from within the space is yes people are feeling like we most likely have a "straight shot" to AGI now. Progress has been insane over the last few years but there's been this lurking worry around signs that the pre-training scaling paradigm has diminishing returns.
What recent outputs like o1, o3, DeepSeek-R1 are showing is that that's fine, we now have a new paradigm around test-time compute. For various reasons people think this is going to be more scalable and not run into the kind of data issues you'd get with a pre-training paradigm.
You can definitely debate on whether that's true or not but this is the first time I've been really seeing people think we've cracked "it", and the rest is scaling, better training etc.
rhubarbtree
> My sense anecdotally from within the space is yes people are feeling like we most likely have a "straight shot" to AGI now
My problem with this is that people making this statement are unlikely to be objective. Major players are in fundraising mode, and safety folks are also incentivised to be subjective in their evaluation.
Yesterday I repeatedly used OpenAI’s API to summarise a document. The first result looked impressive. However, comparing repeated results revealed that it was missing major points each time, in a way a human certainly would not. On the surface each summary looked good, but careful evaluation indicated a lack of understanding or reasoning.
Don’t get me wrong, I think AI is already transformative, but I am not sure we are close to AGI. I hear a lot about it, but it doesn’t reflect my experience in a company using and building AI.
dauhak
Yeah obviously motivations are murky and all over the place, no one's free of bias. I'm not taking a strong stance on whether they're right or not or how much of it is motivated reasoning, I just think at least quite a bit is genuine (I'm mainly basing this off researchers I know who have a track record of being very sober and "boring" rather than the flashy Altman types)
To your point, yeah the models still suck in some surprising ways, but again it's that thing of they're the worst they're ever going to be, and I think in particular on the reasoning issue a lot of people are quite excited that RL over CoT is looking really really promising for this.
I agree with your broader point though that I'm not sure how close we are and there's an awful lot of noise right now
sroussey
Summarizing is quite difficult. You need to keep the salient points and facts.
If anyone has experience on getting this right, I would like to know how you do it.
NitpickLawyer
I agree with your take, and actually go a bit further. I think the idea of "diminishing returns" is a bit of a red herring, and it's instead a combination of saturated benchmarks (and testing in general) and expectations of "one llm to rule them all". This might not be the case.
We've seen with oAI and Anthropic, and rumoured with Google, that holding your "best" model and using it to generate datasets for smaller but almost as capable models is one way to go forward. I would say that this shows the "big models" are more capable than it would seem and that they also open up new avenues.
We know that Meta used L2 to filter and improve its training sets for L3. We are also seeing how "long form" content + filtering + RL leads to amazing things (what people call "reasoning" models). Semantics might be a bit ambitious, but this really opens up the path towards -> documentation + virtual environments + many rollouts + filtering by SotA models => new dataset for next gen models.
That, plus optimisations (early exit from meta, titans from google, distillation from everyone, etc) really makes me question the "we've hit a wall" rhetoric. I think there are enough tools on the table today to either jump the wall, or move around it.
lm28469
Yeah that's called wishful thinking when it's not straight up pipe dreams. All these people have horses in the race
HarHarVeryFunny
The largest GPU cluster at the moment is X.ai's 100K H100s, which is ~$2.5B worth of GPUs. So something 10x bigger (1M GPUs) is $25B; add $10B for a 1GW nuclear reactor.
This sort of $100-500B budget doesn't sound like training cluster money, more like anticipating massive industry uptake and multiple datacenters running inference (with all of corporate America's data sitting in the cloud).
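As a sanity check on those figures (the ~$25K per H100 is a ballpark assumption on my part, not a quoted price):

```python
# Napkin math for cluster capex, assuming roughly $25K per H100 (assumed price).
H100_COST_USD = 25_000

def cluster_cost_billions(gpu_count: int) -> float:
    """GPU capex in billions of USD."""
    return gpu_count * H100_COST_USD / 1e9

print(cluster_cost_billions(100_000))         # -> 2.5  ($B, a 100K-GPU cluster)
print(cluster_cost_billions(1_000_000))       # -> 25.0 ($B, 10x bigger)
print(cluster_cost_billions(1_000_000) + 10)  # -> 35.0 ($B, plus ~$10B for 1GW of generation)
```

Even ~$35B for a 1M-GPU site with its own power leaves the $100-500B range pointing at many sites, which fits the inference-fleet reading rather than one training cluster.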
internetter
Shouldn't there be a fear of obsolescence?
HarHarVeryFunny
It seems you'd need to figure periodic updates into the operating cost of a large cluster, as well as replacing failed GPUs - they only last a few years if run continuously.
I've read that some datacenters run mixed generation GPUs - just updating some at a time, but not sure if they all do that.
It'd be interesting to read something about how updates are typically managed/scheduled.
anonzzzies
Don't they say in the article that it is also for scaling up power and datacenters? That's the big cost here.
HarHarVeryFunny
There's the servers and data center infrastructure (cooling, electricity) as well as the GPUs of course, but if we're talking $10B+ of GPUs in a single datacenter, it seems that would dominate. Electricity generation is also a big expense, and it seems nuclear is the most viable option although multi-GW solar plants are possible too in some locations. The 1GW ~ $10B number I suggested is in the right ballpark.
tim333
>AI development has figured out enough to brute force a path towards AGI?
I think what's been going on is that compute/$ has been rising exponentially for decades in a steady way, and has recently passed the point where you can get human-brain-level compute for modest money. The tendency has been that once the compute is there, lots of bright PhDs get hired to figure out algorithms to use it, so that bit gets sorted in a few years (as written about by Kurzweil, Wait But Why and similar).
So it's not so much brute forcing AGI so much that exponential growth makes it inevitable at some point and that point is probably quite soon. At least that seems to be what they are betting.
The annual global spend on human labour is ~$100tn so if you either replace that with AGI or just add $100tn AGI and double GDP output, it's quite a lot of money.
catmanjan
This has nothing to do with technology it is a purely financial and political exercise...
philomath_mn
But why drop $500B (or even $100B short term) if there is not something there? The numbers are too big
camel_Snake
This is an announcement, not a cut check. Who knows how much they'll actually spend; plenty of projects never get started, let alone massive inter-company endeavors.
rf15
because you put your own people on the receiving end too AND invite others to join your spending spree.
MetaWhirledPeas
> I don't know how to make sense of this level of investment.
The thing about investments, specifically in the world of tech startups and VC money, is that speculation is not something you merely capitalize on as an investor, it's also something you capitalize on as a business. Investors desperately want to speculate (gamble) on AI to scratch that itch, to the tune of $500 billion, apparently.
So this says less about, 'Are we close to AGI?' or, 'Is it worth it?' and more about, 'Are people really willing to gamble this much?'. Collectively, yes, they are.
mistrial9
note that the $500 Billion number is bravado / aspirational. MSFT Nadella said he has $80B to contribute? It is a live auction horse race in some ways, it seems.
heydenberk
~$125B per year would be 2-3% of all domestic investment. It's similar in scale to the GDP of a small middle income country.
If the electric grid — particularly the interconnection queue — is already the bottleneck to data center deployment, is something on this scale even close to possible? If it's a rationalized policy framework (big if!), I would guess there's some major permitting reform announcement coming soon.
constantcrying
They say this will include hundreds of thousands of jobs. I have little doubt that dedicated power generation and storage is included in their plans.
Also I have no doubt that the timing is deliberate and that this is not happening without government endorsement. If I had to guess the US military also is involved in this and sees this initiative as important for national security.
cmdli
Is there really any government involvement here? I only see Softbank, Oracle, and OpenAI pledging to invest $500B (over some timescale), but no real support on the government end outside of moral support. This isn't some infrastructure investment package like the IRA, it's just a unilateral promise by a few companies to invest in data centers (which I'm sure they are doing anyway).
diggan
> but no real support on the government end outside of moral support
The venture was announced at the White House, by the President, who has committed to help it by using executive orders to speed things up.
It might not have been voted on by Congress or whatever, but just those things make it pretty clear the government provides more than just "moral support".
seanmcdirmid
I thought all the big corps had projects for the military already, if not DARPA directly, which is the org responsible for lots of university research (the counterpart to the NSF, which is the nice one that isn't funded by the military)?
tsujamin
It’s light on details, but from The Guardian’s reporting:
> The president indicated he would use emergency declarations to expedite the project’s development, particularly regarding energy infrastructure.
> “We have to get this stuff built,” Trump said. “They have to produce a lot of electricity and we’ll make it possible for them to get that production done very easily at their own plants.
https://www.theguardian.com/us-news/2025/jan/21/trump-ai-joi...
beezle
hundreds of thousands of jobs? I'll wait for the postmortem on that prediction. Sounds a lot like Foxconn in Wisconsin but with more players.
bruce511
On the one hand the number is a political thumb-suck which sounds good. It's not based in any kind of actual reality.
Yes, the data center itself will create some permanent jobs (I have no real feel for this, but guessing less than 1000).
There'll be some work for construction folk of course. But again seems like a small number.
I presume though they're counting jobs related to the existence of a data center. As in, if I make use of it do I count that as a "job"?
What if we create a new post to leverage AI generally? Kinda like the way we have a marketing post, and a chunk of the daily work there is Adwords.
Once we start guesstimating the jobs created by the existence of an AI data center, we're in full speculation mode. Any number really can be justified.
Of course ultimately the number is meaningless. It won't create that many "local jobs" - indeed most of those jobs, to the degree they exist at all, will likely be outside the US.
So you don't need to wait for a post-mortem. The number is sucked out of thin air with no basis in reality for the point of making a good political sound bite.
seanmcdirmid
> hundreds of thousands of jobs?
I'm sure this will easily be true if you count AI as entities capable of doing jobs. Actually, they don't really touch that (if AI develops too quickly, there will be a lot of unemployment to contend with!) but I get the national security aspect (China is full speed ahead on AI, and by some measurements, they are winning ATM).
visarga
only $5M/job
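The quip follows from dividing the headline pledge by the low end of "hundreds of thousands of jobs":

```python
# $500B spread over "hundreds of thousands of jobs".
investment = 500e9
jobs = 100_000                     # lower bound of "hundreds of thousands"
cost_per_job = investment / jobs
print(f"${cost_per_job/1e6:.0f}M per job")   # → $5M per job
```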
SoftTalker
They plan to have 100,000s of people employed to run on treadmills to generate the power.
HPMOR
Well I currently pay to do this work for free. More than happy to __get__ paid doing it.
Edit: Hey we can solve the obesity crisis AND preserve jobs during the singularity!! Win win!
bsnsxd
Damn, 6 hours too slow to make this comment
nejsjsjsbsb
A hamster wheel would work better?
n2d4
Yes, Trump announced this as a massive foreign investment coming into the US: https://x.com/WatcherGuru/status/1881832899852542082
null
shrubble
Just as there is an AWS for the public, with something similar but only for Federal use, so it could be possible that there are AI cloud services available to the public and then a separate cloud service for Federal use. I am sure that military and intelligence agencies, etc. would like to buy such a service.
szvsw
AWS GovCloud already exists FYI (as you hinted) and it is absolutely used by the DoD extensively already.
cavisne
Gas turbines can be spun up really quickly, through either portable systems (like xAI did for their cluster) [1] or actual builds [2] in an emergency. The biggest limitation is permits.
With a state like Texas and a Federal Government that's on board, these permits would be a much smaller issue. The press conference makes this seem more like "drill baby drill" (drilling natural gas), with direct talk of them spinning up their own power plants.
[1] https://www.kunr.org/npr-news/2024-09-11/how-memphis-became-...
[2] https://www.gevernova.com/gas-power/resources/case-studies/t...
JumpCrisscross
> It's similar in scale to the GDP of a small middle income country
I’ve been advocating for a data centre analogue to the Heavy Press Programme for some years [1].
This isn’t quite it. But when I mapped out costs, $1tn over 10 years was very doable. (A lot of it would go to power generation and data transmission infrastructure.)
ethbr1
One-time capital costs that unlock a range of possibilities also tend to be good bets.
The Flood Control Act [0], TVA, Heavy Press, etc.
They all created generally useful infrastructure, that would be used for a variety of purposes over the subsequent decades.
The federal government creating data center capacity, at scale, with electrical, water, and network hookups, feels very similar. Or semiconductor manufacture. Or recapitalizing US shipyards.
It might be AI today, something else tomorrow. But there will always be a something else.
Honestly, the biggest missed opportunity was supporting the Blount Island nuclear reactor mass production facility [1]. That was a perfect opportunity for government investment to smooth out market demand spikes. Mass deployed US nuclear in 1980 would have been a game changer.
[0] https://en.m.wikipedia.org/wiki/Flood_Control_Act_of_1928
[1] https://en.m.wikipedia.org/wiki/Offshore_Power_Systems#Const...
chickenbig
> Honestly, the biggest missed opportunity was supporting the Blount Island nuclear reactor mass production facility
Yes, a very interesting project; similar power output to an AP1000. Would have really changed the energy landscape to have such a deployable power station. https://econtent.unm.edu/digital/collection/nuceng/id/98/rec...
thepace
It is not just the queue that is the bottleneck. If the new power plants designed specifically to power these AI data centers are connected to the existing electric grid, energy prices for regular customers will also be affected, most likely upward. That means the cost of the transmission upgrades required by these new datacenters will be socialized, which is a big problem. There does not seem to be a solution in sight for this challenge.
markus_zhang
Maybe they will invest in nuclear reactors.
Data center, AI and nuclear power stations. Three advanced technologies, that's pretty good.
UltraSane
They are trying. Microsoft wants to restart the Three Mile Island reactor, and other companies have been signing contracts for small modular reactors. SMRs are a perfect fit for modern data centers IF they can be made cheaply enough.
bakuninsbart
Wind, solar, and gas are all significantly cheaper in Texas, and can be brought online much quicker. Of course it wouldn't hurt to also build in some redundancy with nuclear, but I'll believe it when I see it; so far there's been lots of talk and little success with new reactors outside of China.
jonisgold
I think this is right- data centers powered by fission reactors. Something like Oklo (https://oklo.com) makes sense.
null
jiggawatts
Notably, it is significantly more than the revenue of either AWS or Azure. It is very comparable to the sum of both, but consolidated into the continental US instead of distributed globally.
ericcumbee
Watching the press conference, on-site power production was mentioned. I assume this means SMRs and solar.
jazzyjackson
just as likely to be natural gas or a combination of gas and solar. I don't know what supply chain looks like for solar panels, but I know gas can be done quickly [1], which is how this money has to be spent if they want to reach their target of 125 billion a year.
The companies said they will develop land controlled by Wise Asset to provide on-site natural gas power plant solutions that can be quickly deployed to meet demand in the ERCOT.
The two firms are currently working to develop more than 3,000 acres in the Dallas-Fort Worth region of Texas, with availability as soon as 2027
[0] https://www.datacenterdynamics.com/en/news/rpower-and-wise-a...
[1.a] https://enchantedrock.com/data-centers/
[1.b] https://www.powermag.com/vistra-in-talks-to-expand-power-for...
toomuchtodo
US domestic PV module manufacturing capacity is ~40GW/year.
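A rough sketch of what that ~40GW/year figure implies for a renewables-only build-out. The data center load and the ~25% solar capacity factor below are my own illustrative assumptions, not figures from the thread, and storage losses are ignored:

```python
# Rough feasibility sketch: how much US-made solar would a multi-GW
# AI campus need? Assumptions (mine, not from the thread):
#   - the data center draws constant round-the-clock load
#   - solar capacity factor ~25%, storage losses ignored
pv_capacity_per_year = 40   # GW of modules/year, US domestic manufacturing
dc_load = 5                 # GW of continuous data center demand
capacity_factor = 0.25

panels_needed = dc_load / capacity_factor        # GW of nameplate PV
years_of_output = panels_needed / pv_capacity_per_year
print(f"{panels_needed:.0f} GW of panels, ~{years_of_output:.2f} yr of US module output")
```

Under those assumptions a single 5GW campus would absorb about half a year of current US module production before counting any storage, which frames the question below.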
gunian
could something of this magnitude be powered by renewables only?
apsec112
I don't think any assembly line exists that can manufacture and deploy SMRs en masse on that kind of timeframe, even with a cooperative NRC
mikeyouse
There have been literally 0 production SMR deployments to date so there’s no possibility they’re basing any of their plans on the availability of them.
dhx
Hasn't the US decided to prefer nuclear and fossil fuels (most expensive generation methods) over renewables (least expensive generation methods)?[1][2]
I doubt the US choice of energy generation is ideological so much as practical. China absolutely dominates renewables, with 80% of solar PV modules manufactured in China and 95% of wafers manufactured in China.[3] China installed a world-record 277GW of new solar PV generation in 2024, a 45% year-on-year increase.[4] By contrast, the US installed only ~1/10th this capacity in 2024, with only 14GW of solar PV generation installed in the first half of 2024.[5]
[1] https://en.wikipedia.org/wiki/Cost_of_electricity_by_source
[2] https://www.iea.org/data-and-statistics/charts/lcoe-and-valu...
[3] https://www.iea.org/reports/advancing-clean-technology-manuf...
[4] https://www.pv-magazine.com/2025/01/21/china-hits-277-17-gw-...
[5] https://www.energy.gov/eere/solar/quarterly-solar-industry-u...
margorczynski
> Hasn't the US decided to prefer nuclear and fossil fuels (most expensive generation methods) over renewables (least expensive generation methods)?[1][2]
This completely ignores storage and the ability to control the output depending on needs. Instead of LCOE the LFSCOE number makes much more sense in practical terms.
cavisne
Much more likely is what xAI did, portable gas turbines until the grid catches up.
wujerry2000
For fun, I calculated how this stacks up against other humanity-scale mega projects.
Mega Project Rankings (USD Inflation Adjusted)
The New Deal: $1T,
Interstate Highway System: $618B,
OpenAI Stargate: $500B,
The Apollo Project: $278B,
International Space Station: $180B,
South-North Water Transfer: $106B,
The Channel Tunnel: $31B,
Manhattan Project: $30B
Insane Stuff.
krick
It's unfair, because we are talking in hindsight about everything but Project Stargate, and it's also just your list (and I don't know what others could add to it), but it got me thinking. The Manhattan Project's goal was to make a powerful bomb. Apollo's was to get to the Moon before the Soviets did (so, hubris, but still a concrete goal). The South-North Water Transfer is pretty much terraforming, and the others are mostly roads. I mean, it's all kinda understandable.
And Stargate Project is... what exactly? What is the goal? To make Altman richer, or is there any more or less concrete goal to achieve?
Also, few items for comparison, that I googled while thinking about it:
- Yucca Mountain Nuclear Waste Repository: $96B
- ITER: $65B
- Hubble Space Telescope: $16B
- JWST: $11B
- LHC: $10B
Sources:
https://jameswebbtracker.com/jwst/budget
spacephysics
The AI race is arguably just as important as, and maybe even more important than, the space race.
From a national security PoV, surpassing other countries’ work in the field is paramount to maintaining US hegemony.
We know China performs a ton of corporate espionage, and likely research in this field is being copied, then extended, in other parts of the world. China has been more intentional in putting money towards AI over the last 4 years.
We had the CHIPS Act, which is tangentially related, but nothing as complete as this. For a couple of years, I think, the climate impact of data centers caused active political slowdown under the previous administration.
Part of this is selling the project politically, so my belief is much of the talk of AGI and super intelligence is more marketing speak aimed at a general audience vs a niche tech community.
I’d be willing to predict that we’ll get some ancillary benefits to this level of investment. Maybe more efficient power generation? Cheaper electricity via more investment in nuclear power? Just spitballing, but this is an incredible amount of money, with $100 billion “instantly” deployed.
philipwhiuk
AI is important but are LLMs even the right answer?
We're not spending money on AI as a field, we're spending a lot of money on one, quite possibly doomed, approach.
Dalewyn
>What is the goal?
Be the definitive first past the post in the budding "AI" industry.
Why? He who wins first writes the rules.
For an obvious example: the aviation industry uses feet and knots instead of metres because the US invented and commercialized aviation.
Another obvious example: Computers all speak ASCII (read: English) and even Unicode is based on ASCII because the US and UK commercialized computers.
If you want to write the rules you must win first, it is an absolute requirement. Runner-ups and below only get to obey the rules.
trillic
The aviation and maritime industries use knots because the nautical mile is closely tied to longitude/latitude.
A vessel traveling at 1 knot along a meridian travels one minute of geographic latitude per hour.
frontalier
okay, but what advantages do these rules bring to the winner? what would these look like in this context?
i guess what i'm asking is: what was the practical advantage of ascii or feet and knots that made them so important?
nopinsight
The goal is Artificial Superintelligence (ASI), based on short clips of the press conference.
It has been quite clear for a while we'll shoot past human-level intelligence since we learned how to do test-time compute effectively with RL on LMMs (Large Multimodal Models).
krick
Here we go again... Ok, I'll bite. One last time.
Look, making up a three-letter acronym doesn't make whatever it stands for a real thing. Not even real in a sense "it exists", but real in a sense "it is meaningful". And assigning that acronym to a project doesn't make up a goal.
I'm not claiming that AGI, ASI, AXY or whatever is "impossible" or something. I claim that no one who uses these words has any fucking clue what they mean. A "bomb" is some stuff that explodes. A "road" is some flat enough surface to drive on. But "superintelligence"? There's no good enough definition of "intelligence", let alone "artificial superintelligence". I unironically always thought a calculator is intelligent in a sense, and if it is, then it's also unironically superintelligent, because I cannot multiply 20-digit numbers in my mind. Well, it wasn't exactly "general", but neither are humans, and it's an outdated acronym anyway.
So it's fun and all when people are "just talking", because making up bullshit is a natural human activity and somebody's profession. But when we are talking about the goal of a project, it implies something specific, measurable… you know, that SMART acronym (since everybody loves acronyms so much).
pinot
Those are all public projects except for one..
null
alpb
Yeah, I'm not sure why we're pretending this will benefit the public. The only benefit is that it will create employment, and datacenter jobs are among the lowest paid tech workers in the industry.
maxglute
Also note that compute depreciates much faster than multi-decade infra projects, with a chance of obsolescence. If DeepSeek keeps pace releasing near-SOTA models, those compute centres are going to have a hard time recouping value / return on capital.
gizmondo
Building a lot of compute will likely end up more useful than Apollo & ISS, which were vanity projects.
fastball
Neom: $1.5T
null
383toast
Where are they getting the $500B? Softbank's market cap is $84B and their entire Vision Fund is only $100B, Oracle only has $11B cash on hand, and OpenAI has only raised $17B total...
philipwhiuk
MGX has at least $100bn: https://www.theinformation.com/articles/a-100-billion-middle...
This is Abu Dhabi money.
csomar
That's their total fund and I doubt they are going all in with it in the US. Still, to reach $500bn, they need $125bn every single year. I think they just put down the numbers they want to "see" invested and now they'll be looking for backers. I don't think this is going anywhere really.
petesergeant
This would be a large outlay even for the UAE, which would be giving it to a direct competitor in the space: the UAE is one of the few countries outside the US that is in any way serious about AI.
themagician
Softbank is being granted a block of TRUMP MEMES, the price of which will skyrocket when they are included in the bucket of crypto assets purchased as part of the crypto reserve.
1oooqooq
how I wish that was a joke...
griomnib
Altman is pivoting from WorldCoin to TrumpCoin - your retina will shortly be wired into the fascist meme-o-verse.
themagician
It's actually wireless, via 5G as part of the AI designed MRNA vaccine.
notatoad
there doesn't appear to be any timeline announced here. the article says the "initial investment" is expected to be $100bn, but even that doesn't mean $100bn this year.
if this is part of softbank's existing plan to invest $100bn in ai over the next four years, then all that's being announced here is that Sama and Larry Ellison wanted to stand on a stage beside trump and remind people about it.
HotHotLava
The literal first sentence of the announcement is:
> The Stargate Project is a new company which intends to invest $500 billion over the next four years
ericjmorey
The project was announced a year ago so "new"
tmvphil
Not the first sentence of the new AP link target, which is much more vague. Kind of annoying for HN to swap it out like this.
ericjmorey
Seems like you nailed it.
TuringNYC
>> Where are they getting the $500B? Softbank's market cap is 84b and their entire vision fund is only $100b, Oracle only has $11b cash on hand, OpenAI's only raised $17b total...
1. The outlays can be over many years.
2. They can raise debt. People will happily invest at modest yields.
3. They can raise an equity fund.
jameshart
Soooo this isn’t so much ‘announcing an investment’ as ‘announcing an investment opportunity’?
Why not continue:
4. They can start a kickstarter or go fund me
5. They can go on Dragons’ Den
…
TuringNYC
>> 4. They can start a kickstarter or go fund me
Debt/Equity Fundraising is basically a kickstarter! Remarkably similar.
griomnib
6. ??? 7. Profit.
b3lvedere
Maybe it's in Bison Dollars?
sangnoir
4. The US government can chip in via grants, tax breaks or contracts.
It's all very Dr. Strangelove. "Mr. President, we must not allow an AI gap! Now give us billions"
selimthegrim
Is Elon putting on some black leather?
LarsDu88
Quite possibly pulled out of their asses...
If Son can actually build a 500B Vision Fund it can only come from one of two places...
somehow the dollar depreciates radically OR Saudis
Vision Fund was heavily invested in by the Saudis so...
jhallenworld
Oracle's cash on hand is presumably irrelevant; I think they are on the receiving end of the money, in return for servers. No wonder Larry Ellison was so fawning.
Is this a good investment by Softbank? Who knows... they did invest in Uber, but also have many bad investments.
ansible
> ... they did invest in Uber, but also have many bad investments.
The one I really don't get is that they funded Adam Neumann's new company after the collapse of WeWork. How stupid do you have to be to give that guy any more money?
handfuloflight
Sleight of hand with the phrasing "up to" $500B.
dkrich
Psst: it’s probably going to end up being a fraction of that but doesn’t make for as good a headline
mppm
Apart from my general queasiness about the whole AGI scaling business and the power concentration that comes with it, these are the exact four people/entities that I would not want to be at the tip of said power concentration.
mattlutze
Ellison should be nowhere near this:
https://arstechnica.com/information-technology/2024/09/omnip...
The man has the moral system of a private prison and the money to build one.
thelastgallon
> "Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on," Ellison said, describing what he sees as the benefits from automated oversight from AI and automated alerts for when crime takes place. "We're going to have supervision," he continued. "Every police officer is going to be supervised at all times, and if there's a problem, AI will report the problem and report it to the appropriate person."
What is far more important to understand is to ignore all that nonsense and focus on who makes money? It will be Ellison and his buddies making tens of billions of dollars/year selling 'solutions' to local governments, all paid by your property taxes. This also enables an ecosystem of theft, where others benefit a lot more. With the nexus of Private Prisons, kids for cash judges (or judges investing in stock of prisons), DEA/police unions, DEA unions, small rural towns increasing prison population (because they get added to the total pop, and get funds allocated).
More importantly this is extremely attractive to police who can steal billions every day from civil forfeiture, they have access to anyone who makes a bank withdrawal or transacts in cash, all displayed in real time feeds, ready for grabbing!
null
spacechild1
> "Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on," Ellison said, describing what he sees as the benefits from automated oversight from AI and automated alerts for when crime takes place.
Wow! It is genuinely frightening that these people should be in control of our future!
idiotsecant
Literal 'new world order' stuff here. Alex Jones and crew got so excited that their guy was in the driver's seat that they didn't notice the actual illuminati lizard people space lasers being deployed.
null
pj_mukh
I don't think we'll ever have a zero-crime society, neither should we aim to be one. But being left to the vagaries of police (and union) politics, culture and the complications of city budgets is clearly broken.
Example: Cities are being presented a false choice between accepting deadly high speed chases vs zero criminal accountability [1], which in the world of drones seems silly [2]
I don't want the police to have unfettered access to surveil any and all citizens but putting camera access behind a court warrant issued by a civilian elected judge doesn't feel that dystopian to me.
Is that what Ellison was alluding to? I have no idea, but we are no longer in a world where we should disregard this prima facie.
[1]: https://www.ktvu.com/news/controversial-oakland-police-pursu...
[2]: https://www.cbsnews.com/sanfrancisco/news/san-francisco-poli...
TomatoCo
> I don't think we'll ever have a zero-crime society, neither should we aim to be one.
This reminded me of https://www.bitsaboutmoney.com/archive/optimal-amount-of-fra...
mike_hearn
That's a pretty deceptive and ragebaity article.
If you look at the original video [1], starting at 1:09:00, he's talking specifically about police body/dashcams recording interactions with citizens during callouts and stops, not everyone all the time as that article strongly implies. The USA already decided to record what police see all the time during these events, so there's no new privacy issue posed by anything he's suggesting. The question is only how those videos are used. In particular, he points out that police are allowed to turn off bodycams for privacy reasons (e.g. bathroom breaks), which is a legitimate need but it can also be abused, and AI can fix this loophole.
In the same segment he also proposes using AI to watch CCTV at schools in real time to trigger instant response if someone pulls out a gun, and using AI to spot wildfires using drones. For some reason the media didn't condemn those ideas, just the part about supervising cop stops. How curious.
[1] https://www.oracle.com/events/financial-analyst-meeting-2024...
lenerdenator
We keep saying people like him shouldn't be involved in certain ventures, and yet, they still are. More than ever, actually.
oldpersonintx
[dead]
throw-the-towel
Do not fall into the trap of anthropomorphizing Larry Ellison.
aswanson
2025 is shaping up to be When the Villains Win year.
null
A4ET8a8uTh0_v2
Just Ellison alone brings unwelcome feeling of having Oracle craziness forced down our collective throats, but I share your concern about the unholy alliance generated in front of us.
DebtDeflation
My immediate reaction to the announcement was one of these is not like the others. OpenAI, a couple of big investment funds, Microsoft, Nvidia, and...............Oracle?
breadwinner
Oracle provides two things: A datacenter for Nvidia chips, and health data. Oracle Cerner had a 21.7% market share for inpatient hospital Electronic Health Records (EHR). Larry Ellison specifically mentioned healthcare when announcing it in the Whitehouse.
The announcement was funny because they weren't quite sure what they are going to do in the health space. Sam Altman was asked, and he immediately deferred to Ellison and Masayoshi. Ellison was vague... it seems they know they want to do something with Ellison's massive stash of health data... but they don't quite know what they are building yet.
Octoth0rpe
Oracle makes perfect sense in that they are 1) a massive datacenter company, and 2) sell a variety of saas products to enterprises, which is a major target market for AI.
A4ET8a8uTh0_v2
Sadly, it is not that unexpected given some of his recent interviews[1]. Any other day, I would agree it is a surprise.
[1] https://arstechnica.com/information-technology/2024/09/omnip...
rTX5CMRXIfFG
Oracle has a lot of valuable classified information about the state and its enemies due to its business.
freehorse
There is a certain reason that everybody and their grandma has been simping for Trump these last weeks. Nobody wants to be on his bad side right now. Moreover, we hear here and there that Trump "keeps his promises". A lot of the promises we do not know about, and we may never know. These people did not spend money supporting his campaign for nothing. In other places and eras this would have been called corruption; now it is called "keeping his promises".
fsndz
What do you prefer ? Letting DeepSeek and China lead the AI war ? DeepSeek R1 is a big wake up call https://open.substack.com/pub/transitions/p/deepseek-is-comi...
bayindirh
Us vs. Them. My favorite perspective [0].
Regarding to your question, yes. I'd prefer a healthy counterbalance to what we have currently. Ideally, I'd prefer cooperation. A worldwide cooperation.
rpastuszak
Treating the world as a bunch of football teams is a great distraction though.
andy_ppp
Arguably the cooperation between the US and China has led to the most economic growth and prosperity in human history; it's a shame the US and China are returning to a former time.
mppm
From what I've read about DeepSeek and its founder, I would very much prefer them, even with China factored in. At least if these particular Four Horsemen are the only alternative.
On a tangential note, those who wish to frame this as the start of the great AI war with China (in which they regrettably may be right), should seriously consider the possibility of coming out on the losing end. China has tremendous industrial momentum, and is not nearly as incapable of leading-edge innovation as some Americans seem to think.
corimaith
>China has tremendous industrial momentum, and is not nearly as incapable of leading-edge innovation as some Americans seem to think.
So those who are framing this are correct, and we should be matching their momentum here ASAP?
vbezhenar
China is much more peaceful nation compared to US. So, yes, I'd prefer China leading AI research any day. They are interested in mutual trade and prosperity, they respect local laws and culture, all unlike US.
jbaiter
"They respect local laws and culture" - I think people from Xinjiang probably have a very different perspective on that...
infecto
Holy smokes. Do folks like you actually believe this? China has its own style of colonialism (whatever you want to call it) but it certainly exists as strong as the US flavor.
otabdeveloper4
> What do you prefer ? Letting DeepSeek and China lead the AI war ?
Me personally? Yes.
smeeger
the outcome would be exactly the same. AGI leads the human race off a cliff, not in the direction of one human interest group vs another. the only difference would be that it was china that was responsible for the extinction of the human race rather than another country. i would prefer to die with dignity… the outcome we should all be advocating for is a global halt of AI research — not because it would be easy but because there is no other option.
whimsicalism
we need to cooperate and put aside our petty politicking right now. the potential downsides of ‘racing’ without building a safety scaffold are catastrophic.
amelius
I would love for Oracle to use AI to put their entire legal department out of work, though.
andy_ppp
So you want them to be infinitely more litigious?
A serious question though, what does happen when AIs are filing lawsuits autonomously on behalf of the powerful, the courts clearly won't be able to cope unless you have AI powered courts too? None of how these monumental changes will work has been thought through at all, let's hope AI is smart enough to tell us what to do...
miki123211
> A serious question though, what does happen when AIs are filing lawsuits autonomously on behalf of the powerful
It won't just be at the behalf of the powerful.
If lawyers are able to file 10x as many lawsuits per hour, the cost of filing a lawsuit is going to go down dramatically, and that's assuming a maximally-unfriendly regulatory environment where you still officially need a human lawyer in the loop.
This will enable people to e.g. use letters signed by an attorney at law, or even small claims court, as their customer support hotline, because that actually produces results today.
Nobody is prepared for that. Not the companies, not the powerful, not the courts, nobody.
roenxi
Oracle could reasonably be hit with some sort of stick every time they filed a frivolous lawsuit until the AI got tuned appropriately. Then it'd be a situation where Oracle were continuously suing people who don't follow the law, following a reasonably neutral and well calibrated standard that is probably going to end up as similar to an intelligent and well practised barrister. That would be acceptable. If people aren't meant to be following the law that is a problem for the legislators.
SketchySeaBeast
I'm envisioning a future where there's a centralized "legal exchange", much like the NYSE, where high-speed machines file micro-litigation billions of times faster than any human can, which is decided equally quickly, an unrelenting back-and-forth buzz of lawsuits and payouts as every corporation wages constant automated legal battle. Small businesses are consumed in seconds, destroyed by the filing of a million computerized grievances, while the major players end up in a sort of zero-sum stalemate, where money is constantly moving but it never shifts the balance of power.
... has anyone ever written a book about this? If not, I think I'm gonna call dibs.
ReptileMan
>A serious question though, what does happen when AIs are filing lawsuits autonomously on behalf of the powerful,
AI-controlled cheap Chinese drones will start flying into their residences carrying some trivial-to-make high explosives. With the class wars getting hotter in the next few years, we may be saying that Luigi Mangione had the right ideas towards the PMC, but he was an underachiever.
maxlin
I don't like how OpenAI turned majorly from what it was founded upon and their bias training ... but when considering the actual opponent here is China, it's not the worst.
I think OpenAI was originally founded against that kind of force. Autocratic governments becoming masters of AI.
vineyardmike
I’m an American who was definitely raised in a “China Bad” world.
The last few months, between the TikTok ban, RedNote, elections, the United Healthcare CEO, etc., I've seen so many people compare the US to China, and favor China. Which is of course crazy, because China has things like forced labor and concentration camps for religious minorities, and far worse oppression than the US. But many people just view everything coming out of the US Gov's mouth as bad.
Is the Chinese government worse than the US government? Probably. Do people universally think that still? Not really. The US Gov will have to contend with the reality that people -even citizens- are starting to view them and not their “enemy” as the “Bad Guys”.
portaouflop
I don’t get the good guys / bad guys mindset, tbh. Sure, the Chinese government is pretty bad and the US is by many metrics better - but why center your whole worldview around things that probably don’t affect your daily life that much?
The US also has forced labour, a huge prison population, has bombed civilians and journalists to oblivion, has literally nuked other countries, and has its religious fanatics — so do I still think China would be less pleasant as our new overlord? Yes. Do I think the world is better off with US-American hegemony? I’m not so sure.
Maybe it’s a net good for the world if no one power is dominating — or maybe it’s the start of a hellish WW3. I choose to believe the former.
edit: typos
ActionHank
By the time this project is done it will have been dead for 2 years.
Too many greedy mouths. Too many corporations. Too little oversight. Too broad an objective. Technology is moving too quickly for them to even guess at what to aim for.
nejsjsjsbsb
Need a bit of Zuck too
blantonl
Yeah, really the only thing missing from this initiative was the personal information of the vast majority of the United States population handed over on a silver platter.
roenxi
That sentiment calls for reflection - whoever ends up on top of the heap after the AI craze settles down is going to be someone that everyone objects to. Elon Musk was himself an internet darling up until he became wealthy and entrenched.
That said, this does look like dreadful policy from the first headline. There is a lot of money going into AI; adding more from the US taxpayer is gratuitous. Although in the spirit of mixing praise and condemnation, if this is the worst policy out of Trump Admin II then it'll be the best US administration seen in my lifetime. Generally the low points are much lower.
whimsicalism
Nietzsche wrote about these phenomena a long time ago in his Genealogy of Morality. There will never be someone who reaches the top who doesn’t become an object of ire in modern Western culture.
mppm
> That sentiment calls for reflection - whoever ends up on top of the heap after the AI craze settles down is going to be someone that everyone objects to.
I agree in principle. And realistically, there is no way Altman would not be part of this consortium, much as I dislike it. But rounding out the team with Ellison, Son and Abu Dhabi oil money in particular -- that makes for a profound statement, IMHO.
infecto
> That sentiment calls for reflection - whoever ends up on top of the heap after the AI craze settles down is going to be someone that everyone objects to. Elon Musk was himself an internet darling up until he became wealthy and entrenched.
Trying to process this, but doesn’t his fall from grace have more to do with him revealing his real personality to the world? Sometime around calling that guy a pedo. Not much bothers me, but at the very least his apparent lack of judgment calls many things into question.
anon84873628
Of all the sentiments that call for reflection, the parent's belief about why people don't like Elon is the one that needs it the most.
JKCalhoun
> That sentiment calls for reflection - whoever ends up on top of the heap after the AI craze settles down is going to be someone that everyone objects to.
Did we see the same fallout from the space-race from a couple generations ago?
I don't think so — certainly not in the way you're framing it. So I guess I don't accept your proposition as a guarantee of what will happen.
roenxi
A couple of generations ago we didn't have the internet and the only things people heard about were being managed. The big question was whether the media editors wanted to build someone up or tear them down.
The spoils of the space race would have gone to someone a lot like Musk. Or Ellison. Or Masayoshi Son. Or Sam Altman. Or the much worse old-moneyed types. The US space program was, famously, literally employing ex-Nazis. I doubt the beneficiaries of the money had particularly clean hands either.
unethical_ban
Elon Musk was an internet darling when his top character trait was "space! EVs!". Then he went Kanye/alt-right and weaponized twitter. It didn't have to do with the fact he has a lot of money.
Many people dislike all billionaires, but some have escaped criticism more than others by successfully appearing to have some humanity left in them, like Gates and Cuban.
ghostzilla
This seems more like a move designed to frighten China -- or force them to spend money making LLMs -- than an actual threat. The clues are that Trump ceremonially blessed the deal but did not promise money (SoftBank et al will, supposedly), and then Musk said that's all fake because SoftBank doesn't have the money, and Altman countered that Musk should not be butthurt and should put America first. Who does that? I'm thinking, no one who has something real on his hands.
MichaelMoser123
The moon program cost $318 billion in 2023 dollars; this one is $500 billion. So that's why the tech barons present at the inauguration were high as a kite yesterday: they just got the financing for a real moon shot!
aurareturn
To be fair, it’s not easy to monetize the moon program into profitability. This has a far better shot of sustaining profitability.
lvl155
It appears this basically locks out Google, Amazon and Meta. Why are we declaring OpenAI as the winner? This is like declaring Netscape the winner before the dust settled. Having the govt involved in this manner can’t be a good thing.
VectorLock
Since the CEOs of Google, Amazon and Meta were seated in the front row of the inauguration, IN FRONT OF the incoming cabinet, I'm pretty confident their techno-power-barrel will come via other channels.
jvm___
Broligarchs
skepticATX
Interestingly, there seems to be no actual government involvement aside from the announcement taking place at the White House. It all seems to be private money.
trhway
Government enforcing, relaxing, or fast-tracking regulations and permits can kill or propel even a $100B project, and thus can be thought of as having its own value on the scale of the given project’s monetary investment, especially in the case of a will/favor/whim-based government instead of a hard rules-based deep state one.
cmdli
Isn't that a state and local-level thing, though? I can't imagine that there is much federal permitting in building a data center, unless it is powered by a nuclear reactor.
rcpt
Yeah but the linked article makes it seem like the current, one-day-old, administration is responsible for the whole thing.
janalsncm
The article also mentions that this all started last year.
HarHarVeryFunny
Trump just tore up Biden's AI safety bill, so this is OpenAI's thank-you - let Trump take some credit
modeless
I generally agree that government sponsorship of this could be bad for competition. But Google in particular doesn't necessarily need outside investment to compete with this. They're vertically integrated in AI datacenters and they don't have to pay Nvidia.
shuckles
Google definitely needs outside investment to spend $500b on capex.
modeless
They don't have to spend $500B to compete. Their costs should be much lower.
That said, I don't think they have the courage to invest even the lower amount that it would take to compete with this. But it's not clear if it's truly necessary either, as DeepSeek is proving that you don't need a billion to get to the frontier. For all we know we might all be running AGI locally on our gaming PCs in a few years' time. I'm glad I'm not the one writing the checks here.
jonas21
Over what time frame? They could easily spend that much over the next 5 to 10 years without outside investment (and they probably will).
chairmansteve
TFA says $100 billion. The $500 billion is maybe, eventually.
misiti3780
Probably not a popular opinion, but I actually think Google is winning this now. Deep Research is the most useful AI product I have used (Claude is significantly more useful than OpenAI).
OutOfHere
I am not sure OpenAI will be the winner despite this investment. Currently, I see the various DeepSeek AI models as offering much more bang for the buck on small tasks, but not yet on large-context tasks.
impulser_
Because this is Oracle's and OpenAI's project with SoftBank and MGX as investors.
jazzyjackson
It's who you know. Sam is buddies with Masa, simple as.
qgin
How involved is the government at all? I’m still having a hard time seeing how Trump or anyone in the government is involved except to do the announcement. These are private companies coming together to do a deal.
jnsaff2
I miss n gate so much. I asked AI to generate one for this thread.
"In a stunning display of fiscal restraint, Sam Altman only asks for $500 billion instead of his previous $7 trillion moonshot. Hackernews rejoices that the money will be spent in Texas, where the power grid is as stable as a cryptocurrency exchange. Oracle's involvement prompts lengthy discussions about whether Larry Ellison's surveillance dystopia will run on Java or if they'll need to purchase an enterprise license for consciousness itself. Meanwhile, SoftBank's Masayoshi Son continues his streak of funding increasingly expensive ways to turn electricity into promises, this time with added patriotism. The comments section devolves into a heated debate about whether this is technically fascism or just regular old corporatocracy, with several users helpfully pointing out that actually, the real problem is systemd."
cruffle_duffle
> The comments section devolves into a heated debate about whether this is technically fascism or just regular old corporatocracy, with several users helpfully pointing out that actually, the real problem is systemd.
I use arch, btw.
causal
Okay that's hilarious.
jparishy
I hear this joked about sometimes or used as a metaphor, but in the literal sense of the phrase, are we in a cold war right now? These types of dollars feel "defense-y", if that makes sense. Especially with the big focus on energy, whatever that ends up meaning. Defense as a motivation can get a lot done very fast so it will be interesting to watch, though it raises the hair on my arms
kube-system
Absolutely
for instance: https://en.wikipedia.org/wiki/2024_United_States_telecommuni...
jparishy
Right, but they've been doing that for a while, to everyone. The US is much quieter about it, right? You can see how the government would not want to display that level of investment within itself, as it could be interpreted as a sign of aggression. But it makes sense to me that they'd have no issue working through corporations to achieve the same ends, while being able to deny direct involvement.
kube-system
I don't think this administration is worried too much about showing aggression. If anything they are embracing it. Today was the first full day, and they have already threatened the sovereignty of at least four nations.
fooblaster
It's called a bubble. The level of spending now defines how fucked we are in 2-3 years.
toomuchtodo
You know those booths at events where money is blown around and the person inside needs to grab as much as they can before the timer runs out? This is that machine for technologists until the bubble ends. The fallout in 2-3 years is the problem of whoever invested or is holding bags when (if?) the bubble pops.
Make hay while the sun shines.
fooblaster
yeah. If the numbers are real, this might be the end of SoftBank.
distortionfield
We certainly are, if you ask me. Especially when you realize that we haven’t had official comms with Russia since the war in Ukraine broke out.
etblg
The US government and its media partners sure seem to think so.
We changed the URL from https://openai.com/index/announcing-the-stargate-project/ to a third-party report. Readers may want to read both. If there's a better URL, we can change it again.