US AI Action Plan
627 comments
July 23, 2025 · softwaredoug
Obviously AI is a massive and important area for economic growth. But so is clean energy. And both right now are at an inflection point.
It seems the US is going to thrive with the former but naively stick our heads in the sand with the latter.
We’ll cede economic leadership, and wonder in 20 years what happened as other countries lead in energy. Even worse, the administration's stance will encourage US energy companies to pursue bad strategies, letting them avoid transforming their business. In 10-20 years they'll be bankrupt and the US will probably have to bail them out for strategic reasons.
taurath
The US is not naively sticking our heads in the sand; our leadership is making direct choices to ensure they rule over the ashes rather than allow a future where they have less power.
Lonestar1440
Overall US energy production has been expanding faster each year: https://www.eia.gov/energyexplained/us-energy-facts/. This is all before you factor in the recent attention to nuclear, which could come online within the next decade.
The ice caps may be worse off for it, but there's little reason to think the USA will cease to "lead in energy" anytime soon.
margalabargala
The US has long since exhausted its "easy" oil/gas reserves. Yes, there's tons more down there, but it's increasingly hard to get to. Many extraction methods only make sense when the price of oil is above a certain threshold.
If the rest of the world standardizes on solar+battery, demand for oil goes down, and so will the price. Which in turn makes US-produced oil not cost effective to extract, and domestic energy production collapses in favor of cheap foreign imports.
And then we're worse off in several different ways.
axpy906
This is probably a stupid question, but do solar panels and batteries depend on rare earth metals and their supply?
Lonestar1440
There are a great many assumptions in this argument, and I'm not sure they stand up well to examination.
1) "We're out of easily extractable oil" maybe, but I've heard it before and technology does have a way of marching forward.
2) "Rest of world's oil demand will drop" is possible but certainly not happening today and far from certain.
3) "Then Oil prices will plummet in the US Domestic market" is far from a sure thing even if 2) comes to pass. How do the other producers - who don't have large domestic markets! - react? What happens to global petrochemical demand? And what sort of Industrial policy could shield our markets, even if this happens globally?
At the end of the day, we have a continent full of oil (and Uranium! which I prefer!) and an energy-hungry population.
Gene5ive
Ice caps? Try human beings.
Increased Mortality: Projections indicate an additional 14.5 million deaths by 2050 due to climate-related impacts like floods, droughts, heatwaves, and climate-sensitive diseases (e.g., malaria and dengue).
Economic Losses: Global economic losses are predicted to reach $12.5 trillion by 2050, with an additional $1.1 trillion burden on healthcare systems due to climate-induced impacts. One study estimates that climate change will cost the global economy $38 trillion a year within the next 25 years.
Displacement and Migration: Over 200 million people may be displaced by climate change by 2050, with an estimated 21.5 million displaced annually since 2008 by weather-related events. In a worst-case scenario, the World Bank suggests this figure could reach 216 million people moving internally due to water scarcity and threats to agricultural livelihoods. Some researchers predict that 1.2 billion people could be displaced by 2050 in the worst-case scenario due to natural disasters and other ecological threats.
Food and Water Insecurity: Climate change exacerbates food and water insecurity, leading to malnutrition and increased disease burden, especially in vulnerable populations. For example, a significant increase in drought in certain regions could cause 3.2 million deaths from malnutrition by 2050. An estimated 183 million additional people could go hungry by 2050, even if warming is held below 1.6°C.
Mental Health Impacts: Climate change contributes to mental health issues like anxiety, depression, and PTSD, particularly in vulnerable populations and those experiencing climate disasters or chronic changes like drought. Extreme heat has been linked to increased aggression and suicide risk. Studies also indicate that children born today will experience a significantly higher number of climate extremes than previous generations, potentially impacting their mental well-being and sense of future security.
Inequality and Vulnerability: Climate change disproportionately affects vulnerable populations, including low-income individuals, people of color, outdoor workers, and those with existing health conditions, worsening existing health inequities and hindering poverty reduction efforts.
martin82
Nice try, ChatGPT.
Not a single one of these idiotic projections will ever come true.
softwaredoug
I specifically refer to the question of who will own the IP and economic might to lead in the clean energy market. Who will innovate? Who will build industrial capacity and know-how? It seems we’ve ceded the field.
Not just strict energy production, especially when it comes from sources of energy that are increasingly infeasible and unpopular.
pizzafeelsright
Whoever has more nuclear power generation will own energy. The cleanest energy is nuclear.
dangoor
Nuclear is clean, but has other drawbacks. "Solar+Storage is so much farther along than you think": https://www.volts.wtf/p/solarstorage-is-so-much-farther-alon...
godelski
This doesn't seem to be passing a sniff test
1) cherry picking the best case.
2) numbers seem off
> The sunniest US city, Las Vegas, could get 98% of its power from solar+storage at a price of $104/MWh, which is higher than gas but cheaper than new coal or nuclear. It could get to 60% solar+storage at $65/MWh — cheaper than gas.
But according to this [0], the US average cost of nuclear is ~$32/MWh (2023). I think the subtle keyword is "new", which could make for a very fuzzy argument. Or maybe prices are different in LV, but that's a big differential. The article also notes it's the best-case scenario for solar. So maybe that's the best option for Las Vegas, but is it elsewhere?
World Nuclear also gives us some global numbers to help us see the larger range of costs [1]
> LCOE figures assuming an 85% capacity factor ranged from $27/MWh in Russia to $61/MWh in Japan at a 3% discount rate, from $42/MWh (Russia) to $102/MWh (Slovakia) at a 7% discount rate, and from $57/MWh (Russia) to $146/MWh (Slovakia) at a 10% discount rate.
I don't think this means we shouldn't continue investing in solar and storage, but neither does it suggest taking nuclear off the table. This might be fine for LV or other areas in the Southwest, but unless those costs hold for the rest of the country I think we should keep nuclear as an option. We shouldn't forget: it's not "nuclear vs solar", it's "zero-carbon emitters vs carbon emitters". The former framing is something big oil and gas want you to argue, and that's why they've historically given funds to initiatives like the Sierra Club. If we care about the environment or zero emissions, then the question isn't as simple as "nuclear vs solar"; it is "what is the best zero-carbon producer given the constraints of the local region?"
[0] https://www.statista.com/statistics/184754/cost-of-nuclear-e...
[1] https://world-nuclear.org/information-library/economic-aspec...
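Those discount-rate swings are the crux: nuclear's costs are front-loaded capital spent before any generation, so the assumed cost of money dominates the LCOE. A minimal sketch (all plant numbers here are illustrative assumptions, not figures from either source) shows the effect:

```python
# Minimal LCOE sketch: discounted lifetime costs / discounted lifetime energy.
# All plant numbers below are illustrative assumptions, not cited figures.

def lcoe(capex, opex_per_year, mwh_per_year, lifetime_years, build_years, rate):
    """Levelized cost of energy in $/MWh."""
    # Capital is spent evenly during construction, before any generation.
    costs = sum(capex / build_years / (1 + rate) ** t for t in range(build_years))
    energy = 0.0
    for t in range(build_years, build_years + lifetime_years):
        costs += opex_per_year / (1 + rate) ** t
        energy += mwh_per_year / (1 + rate) ** t
    return costs / energy

# Assumed: 1 GW plant, 85% capacity factor, $6B capex over a 7-year build,
# $120M/yr operating cost, 60-year life.
mwh = 1_000 * 8760 * 0.85
for r in (0.03, 0.07, 0.10):
    print(f"{r:.0%}: ${lcoe(6e9, 120e6, mwh, 60, 7, r):.0f}/MWh")
```

With the same plant, moving the discount rate from 3% to 10% multiplies the LCOE by roughly 2.5x, which is about the spread the World Nuclear figures show.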
hn_throwaway_99
Everything I've read recently has emphasized that new nuclear installations will have difficulty competing with solar and storage.
Having a non-emitting form of base load is important, and nuclear has a place there, but in many applications it's just not cost-competitive with renewables.
saubeidl
Nuclear fission is more expensive per kilowatt-hour than solar and forces you to go through a lot more trouble to contain risk.
Maybe if fusion were viable that would change, but until then nuclear just doesn't make any sense.
schrodinger
It’s true that new nuclear is more expensive than solar + battery on a per-kWh basis, and the regulatory/compliance overhead is significant. But solar is intermittent, and batteries only solve short-duration gaps—firm, zero-carbon baseload still matters. Existing nuclear is actually quite cost-effective and displacing it often leads to more fossil fuel use. Long-term, we likely need a mix: cheap renewables for bulk energy, and nuclear (or equivalent) for reliability.
jmyeet
I really don't understand HN's love affair with nuclear.
Uranium mining produces significant toxic waste (tailings and raffinates). Fuel processing produces toxic waste, typically UF6. There is some processing of UF6 to UF4 but that doesn't solve the problem and it's not economic anyway. Fuel usage produces even more waste that typically needs to be actively cooled for years or decades before it can be forgotten about in a cave (as nuclear advocates argue).
And then who is going to operate the plant? This administration in particular is pushing for further nuclear deregulation, which is terrifying. You want to see what happens without regulation? Elon Musk's gas turbines in South Memphis with no Clean Air permits that are spewing pollution [1].
That's terrifying because the failure modes for a single nuclear incident are orders of magnitude worse than for any other form of power plant. The cleanup from Fukushima requires technologies that don't exist yet, will take decades or centuries, and will likely cost ~$1 trillion once it's over, if it ever is [2].
And who's going to pay for that? It's not going to be the private operator. In fact, in the US there are laws that limit liability for nuclear accidents. The industry's self-insurance fund would be exhausted many times over by a single Fukushima-scale incident.
And then we get to the hand-waving about Chernobyl, Fukushima and Three Mile Island. "Those are old designs", "the new designs are immune to catastrophic failure" or, my favorite, "Chernobyl was because of mismanagement in the USSR", as if there wouldn't be corner-cutting by any private operator in the US.
And let's just gloss over the fact that we've built fewer than 700 nuclear power plants, yet had 3 major incidents, 2 of them (Chernobyl and Fukushima) have had massive negative impacts. The Chernobyl absolute exclusion zone is still 1000 square miles. But anything negative is an outlier that should be ignored, apparently.
And then we get to the impact of carbon emissions in climate change but now we're comparing the entire fossil fuel power industry vs one nuclear plant. It's also a false dichotomy. The future is hydro and solar.
And then we get to the massive boondoggle of nuclear fusion, which I'm not convinced will ever be commercially viable. Energy loss and containment damage from fast neutrons is a fundamental problem that stars don't have, because they have gravity and are incredibly large.
I have no idea where this blind faith in nuclear comes from.
[1]: https://www.politico.com/news/2025/05/06/elon-musk-xai-memph...
[2]: https://cleantechnica.com/2019/04/16/fukushimas-final-costs-...
hardolaf
Wow. So you really know nothing about the technology and are just spreading fear. The Chernobyl exclusion zone is mostly safe for people now, outside of the fact that Russia is currently bombing Ukraine.
The issue with cleanup at Fukushima Daiichi is one of money and political will, not one of technology. We've had the ability to clean up nuclear accidents since the 1950s.
Also, the future of power is increasingly looking like LNG plants which pump only slightly less radioactive carbon into the atmosphere than coal plants do.
more_corn
It's astroturfing.
barbazoo
> I really don't understand HN's love affair with nuclear.
s/HN/Individuals
7bit
You obviously have no idea how much destruction it causes to the environment to get the uranium out of the earth. Maybe educate yourself before putting such nonsense into the world.
more_corn
Nuclear takes 20 years to build and plants cost $10B.
Rooftop solar starts paying back instantly and can be deployed in $20k tranches. It also requires no additional grid infrastructure and decreases demand on non-generating grid infrastructure.
Pretty sure it’s rooftop solar that wins the future.
2600
It's part of the current administration's energy agenda. President Trump signed executive orders a couple of months ago to increase nuclear energy capacity by 400% over the next 25 years, revise regulations, and expedite review and approval of reactor projects, which seems like the most effective strategy for expanding clean energy production.
atoav
A certain group of people keeps saying that. But that particular idea of "clean" nuclear does not price in the 10,000 years of safe storage that nuclear waste requires (for the most dangerous HLW materials this number can go up to 100,000 years). Do you and your 3,500 generations of descendants volunteer to do this? Then it is cheap and clean. Otherwise it is yet another instance of "privatize the gains and socialize the externalities".
(And let's ignore the fact that humanity has barely managed to keep any organization running for even a mere 1,000 years.)
lupusreal
Nuclear waste is a complete non-issue. It's trivial to just let it sit around in a corner of the power plant's property for a century or two until somebody nuts up and dumps it down a bore shaft or into the ocean where it belongs.
There's no technical or economic problem here. The problem is completely one of PR, with ignoramuses thinking it's a big deal being the entire problem.
dingnuts
My understanding is that every other form of energy production has similar or worse concerns, including renewables due to the materials used to build and operate and decommission solar panels and windmills.
The argument you're making about waste has even led to the decommissioning of nuclear in Germany to be replaced with coal... burning coal also produces radioactive fly ash. Everything has tradeoffs!
I guess we could just give up on electricity entirely! That might save the planet
subhobroto
> wonder in 20 years what happened as other countries lead in energy
Can you clarify what leading in energy means? And what concerns do you have?
Do you mean we in the U.S. are in a tarpit of regulations and red tape that makes setting up a nuclear power plant impossible? Or something else?
IMHO, leading in energy also needs to take into account where that energy takes us and what it unlocks. I immigrated to the U.S. so I am extremely bullish so do consider that below.
My California perspective is that energy is going to be even more decentralized. I have not paid an electric bill in years and get a check from my utility once a year where they pay me wholesale rates for my net export. I net export because I rarely use any meaningful energy at night that my 5 kWh battery pack cannot provide. Once battery prices fall even further, I will dump everything into my local storage and draw no gross power from my utility at all. For all practical purposes, I will be off grid.
Anyone in California has the technological ability to get there as well. The utilities dump GWh of solar energy because we produce so much!
The issue we have in the U.S. is one of horrible policies and regulation.
Your typical townhouse in the city block isn't going to be able to put 20 panels on their roof because their HOA is going to throw a fit. The owner won't be allowed to install it themselves and would have to pay an electrician tens of thousands of dollars because the city isn't going to permit it otherwise. The hurdles to installing $5k worth of parts are incredibly disappointing.
From my perspective, technologically, solar energy is going to become cheaper as storage continues to fall in price.
This will empower increasing productivity. In my case, once the GPU market becomes consumer-friendly and less constrained (or fundamentally different, CPU-friendly LLMs are released, though I can't imagine that possibility yet), I will buy more GPUs and increase my self-hosted LLM capacity. As of right now I am getting "Insufficient capacity" errors from AWS attempting to launch a g6.2xlarge cluster, and puny 24GB GPUs cost enough that renting from AWS is the better choice. The responses from the coding models blow my mind. They often meet or beat the kind of code I would expect from a junior engineer I would have to pay $120k/yr, and that would be a cheap engineer in SoCal. A GPU cluster, including running costs, would be a fraction of that, so I would be able to expand quicker with less.
Whole offices are going to become more compact and continue to decentralize or even go remote. Their carbon footprint will then drop to practically zero (no office security patrol, no HVAC, no heating, etc.). More people will be able to start businesses with less (higher GDP), increasing GDP per ton of CO2 emitted.
My childhood friends in the E.U. who are in the same space that I am in are less enthusiastic. My friend in Germany who bought a hundred PV panels is not happy at all.
So which country will lead in energy and what would they be doing?
infamouscow
People love using their pet issue as the sole explanation for why something did or didn't happen. It's never that simple.
My boomer boss thinks writing tests is unnecessary and slows shipping down. It might be true, but it fails to appreciate the full scope of the problem.
upquacker
[dead]
shortrounddev2
I believe that we should separate the general case of AI from the particular case of LLMs. AI models have been accelerating science for decades, and new technology helps drive economic growth. I am convinced that LLMs are not worth the money we've invested in them, but I do believe that more "traditional" AI is a net positive in research fields. Traditional AI has also had a net negative effect on the quality of content from internet publishers (e.g. Facebook), but has made them more productive by allowing them to squeeze more blood from the stone.
I think if you don't include LLMs, AI has obviously created economic growth. If you do include LLMs, I think the conversation is more nuanced and obviously driven by the same kind of hype that led people to believe that cryptocurrency is the future of the stock market.
tempodox
The interesting thing is that the AI you're talking about (the one with the economic growth) isn't even called AI. Those are specialized tools that work and they have names that reflect that. When a fuzzy and inaccurate term like “AI” is being used, you know you've entered the realm of pure marketing and hype.
LorenDB
> Encourage Open-Source and Open-Weight AI
It's good to see this, especially since they acknowledge that open weights is not equal to open source.
rs186
Without providing actual support like money, the government saying they encourage open-* AI is no more meaningful than me saying the same thing.
In fact, if you open the PDF file and navigate to that section, the content is barely relevant at all.
SkyMarshal
We're clearly in an era where the US Govt simply doesn't have enough money to throw at everything it wants to encourage, and needs to develop alternate means of incentivizing (or de-disincentivizing) those things. Sensible minimal regulation is one, there may be others. Time to get creative and resourceful.
AvAn12
The budget is the policy, stripped of rhetoric. What any government spends money on IS a full and complete expression of its priorities. The rest is circus.
What increased and decreased in the most recent budget bill? That is the full and complete story.
If no $$ for open source or open weight model development, then that is not a policy priority, despite any nice words to the contrary.
berbec
The US has been continuously running a budget deficit for decades (brief blip at the end of Clinton/beginning of W Bush). This is more of an "epoch" than "era". I love the idea of incentives that aren't tax breaks!
mdhb
It’s genuinely bizarre to read a comment like this which seems to imply there is some kind of grand strategy behind this when the reality is and always has been “own the libs”.
They very clearly have no idea what the fuck they are doing they just know what other people say they should do and their toddler reaction is to do the opposite.
_DeadFred_
AI, which they are hoping takes over EVERYTHING, is probably one of the worthwhile ones for government to be involved in. If it has the chance to be this revolutionary, which would be better:
The government owning the machine that does everything.
Tech bros, with their recent love of guruship and their willingness to use any dark pattern if it means bigger boats for them, owning the entire labor supply in order to improve the lives of eight Bay Area families.
throw14082020
Even if they did provide more money, it doesn't mean it'll go to the right place. Government money is not the solution here. Money is already being spent.
jonplackett
How can this work with their main goal of assuring American superiority? If it’s open weights anyone else can use it too.
alganet
It doesn't say anything about open training corpus of data.
The USA supposedly has the most data in the world. Companies cannot (in theory) train on integrated sets of information. The USA, and China to some extent, can train on large amounts of information that is not public. The USA in particular has been known for keeping a vast repository of metadata (data about data) about all sorts of things. This data is very refined and organized (PRISM, etc.).
This allows training for purposes that might not be obvious when observing the open weights or the source of the inference engine.
It is a double-edged sword though. If anyone is able to identify such non-obvious training inserts and extract information about them or prove they were maliciously placed, it could backfire tremendously.
vharuck
So DOGE might not be consolidating and linking data just for ICE, but for providing to companies as a training corpus? In normal times, I'd laugh that off as a paranoiac fever dream.
sunaookami
That's exactly what the goal is: that everyone uses American models, which will "promote democratic values", over Chinese models.
mdhb
From a government that has made it extremely fucking clear that they aren’t ACTUALLY interested in the concept of democracy even in the most basic sense.
saubeidl
The ultimate propaganda machine.
somenameforme
The idea is to dominate AI in the same way that China dominates manufacturing. Even if things are open source that creates a major dependency, especially when the secret sauce is the training content - which is irreversibly hashed away into the weights.
guappa
I think the only way to dominate AI is to ban the use of any other AI…
HPsquared
They see people using DeepSeek open weights and are like "huh, that could encode the model creators' values in everything they do".
somenameforme
I doubt this has anything to do with 'values' one way or the other. It's just about trying to create dependencies, which can then be exploited by threatening their removal or restriction.
It's also doomed to failure because of how transparent this is, and how abused previous dependencies (like the USD) have been. Every major country will likely slowly move to restrict other major powers' AI systems while implicitly mandating their own.
nicce
Can a model produce propaganda or manipulation so sophisticated that most people won't notice it?
ChrisRR
Well just look at the existing propaganda machines online and how annoyingly effective they are
pydry
Most western news propaganda isn't especially sophisticated, and even the internally inconsistent narratives it pushes still end up finding an echo on Hacker News.
cardamomo
I wonder how this intersects with their interest in "unbiased" models. Scare quotes because their concept of unbiased is scary.
rtkwe
Elon gives an unvarnished look at what they mean by 'unbiased' with respect to models. It's rewriting the training material or adding tool use (searching for Musk's tweets about a topic before deciding its output) to twist the output into ideological alignment.
ActorNightly
Its all meaningless though.
jsnider3
No, it's bad, since we will soon reach a point where AI models are major security risks and we can't get rid of an AI after we open-source it.
rwmj
"major security risks" as in Terminator style robot overlords, or (to me more likely) they enable people to develop exploits more easily? Anyway I fail to see how it makes much difference if the models are open or closed, since the barrier to entry to creating new models is not that large (as in, any competent large company or nation state can do it easily), and even if they were all closed source, anyone who has the weights can run up as many copies as they want.
shortrounddev2
The risk of AI is that they are used for industrial scale misinformation
bigyabai
Good to see what? "Encourage" means nothing, every example listed in the document is more exploitative than supportive.
Today, Google and Apple both already sell AI products that technically fall under this definition, and did without government "encouragement" in the mix. There isn't a single actionable thing mentioned that would promote further development of such models.
artninja1988
It's certainly more encouraging than the tone from a few months/years ago, when there was talk of outright banning open-source / open-weight foundational models.
bigyabai
You literally cannot ban weights. You can try, but you can't. Anyone threatening to do so wasn't doing it credibly.
hopelite
It’s primarily motivated by control; similar to how all narcissistic, abusive, controlling, murderous, “dominating” (as the document itself proclaims) people and systems are. That is not motivated by magnanimity and genuine shared interest or focus on precision and accuracy.
The controllers of the whole system want open weights and source to make sure models aren’t going to expose the population to unapproved ideas and allow the spread of unapproved thoughts or allow making unapproved connections or ask unapproved questions without them being suitably countered to keep everyone in line with the system.
belter
Only weights that are not Woke according to what was stated. And reduce those weights on the neural net path to the Epstein files please.
AlanYx
The most important thing here IMHO is the strong stance taken towards open source and open weight AI models. This stance puts the US government at odds with some other regulatory initiatives like the EU AI Act (which doesn't outlaw open weight models and does have some exemptions below 10²⁵ FLOPS, but still places a fairly daunting regulatory burden on decentralized open projects).
rs186
If you go through the "Recommended Policy Actions" section in the document, you'll realize it's mostly just empty talk.
AlanYx
IMHO it's not empty talk; a lot of the elements of the plan reinforce each other. For example, it's pretty clear that state initiatives that were aiming to place regulatory thresholds like the 10^26 FLOPS limit in California's SB1047 are going to be targets under this plan, and US diplomatic participation in initiatives like the Council of Europe AI treaty is now on the chopping block. There are obviously competing perspectives emerging globally on regulation of AI, and this plan quite clearly aims to foster one particular side. It doesn't appear to be hot air.
For open source/open weight models it's particularly important because until now there wasn't a government-level strong voice countering people like Geoff Hinton's call to ban open source/open weight AI, like he articulates here: https://thelogic.co/news/ai-cant-be-slowed-down-hinton-says-...
wredcoll
I don't know if this counts as amazing optimism or just straight up blinders if that's your takeaway compared to the emphasis placed on non-renewable energy and government enforced ideology.
MrBuddyCasino
Current AIs are anything but politically neutral.
sbelskie
So the government should step in to dictate what neutrality means?
saubeidl
There is no such thing as politically neutral. Whatever you perceive as such is just a reflection of your own ideology.
DonHopkins
That's right, all AIs are just the same, both sides do it, it's a true equivalence. Claude just declared itself MechaObama, and OpenAI is now calling itself MechaJimmyCarter, and Gemini is now calling itself MechaRosieODonnell.
fatata123
[dead]
mlsu
In the energy section, they talk about using nuclear fusion to power AI... but not solar. What a joke.
josh-sematic
Technically solar power is just fusion power transmitted via photons across space. Maybe solar qualifies ;-)
tombakt
Technically most sources of available energy on or near the planet are the output of fusion in some way, so this tracks.
tbrownaw
Everything except geothermal and fission.
Unless you count where the fissionable elements came from, in which case you're only left with the portion of geothermal that's from gravity (residual heat from the earth compacting itself into a planet).
davidmurdoch
How much land mass would need to be covered by solar panels to power this future AI infrastructure? Yes, I'm implying that solar would be impractical, but I'm also genuinely curious.
Kon5ole
Your implication is misguided, solar is in fact the most practical way to add more electricity for most countries.
The US generated an additional 64 TWh of solar in 2024 compared to 2023. To get the same amount from nuclear you would need to bring five large reactors online in a single year.
As for land mass, we can re-use already spent land mass, like rooftops, parking lots, grazing farmland and such. Solar can also be placed on lakes.
So for the foreseeable future there is no actual need for new land to be dedicated to solar.
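The reactor equivalence above is easy to sanity-check. Assuming an EPR-class 1.6 GW reactor running at a 90% capacity factor (both of these are my assumptions, not figures from the comment):

```python
# Back-of-envelope: how many large reactors produce 64 TWh/year?
# Assumptions (mine, not the thread's): 1.6 GW per reactor, 90% capacity factor.
added_solar_twh = 64
hours_per_year = 8760
reactor_gw = 1.6
capacity_factor = 0.90

# GW * hours * capacity factor gives GWh; divide by 1000 for TWh.
twh_per_reactor = reactor_gw * hours_per_year * capacity_factor / 1000
reactors_needed = added_solar_twh / twh_per_reactor
print(f"One reactor: {twh_per_reactor:.1f} TWh/year")
print(f"Reactors to match 64 TWh: {reactors_needed:.1f}")
```

That lands right around five reactors, so the comparison holds under these assumptions.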
542354234235
Since America is so in love with car infrastructure, just turning open parking lots into covered lots would be more than enough.
Converting all Walmart parking to covered solar alone wouldn't come close to covering demand, but parking lots overall are a huge resource.
4,070,000,000,000 kWh US electric use in 2022.
Using 330W panels at 4 peak-sun-hours/day (330W * 4 hours/day * 365 days/year ≈ 482 kWh per panel per year), covering all demand would require roughly 8.45 billion panels, which is about 165 billion sq ft at 19.5 sq ft per panel.
Walmart has 4,612 stores in the US, averaging 1,000 parking spaces per store, and 180 sq ft per parking space (does not include driving lanes, end caps, etc.), giving us 830,160,000 sq ft: room for roughly 42 million panels, or about 20 TWh/year, around 0.5% of US use.
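Redoing the comment's arithmetic in a quick script makes it easy to audit (panel wattage, sun-hours, and store counts are the figures used in this comment):

```python
# Back-of-envelope audit of the parking-lot solar numbers.
# Inputs are the comment's own figures, not independently verified.
us_use_kwh = 4.07e12            # 2022 US electricity use
panel_w = 330
sun_hours = 4                   # peak-sun-hours per day
kwh_per_panel_year = panel_w / 1000 * sun_hours * 365   # about 482 kWh

panels_needed = us_use_kwh / kwh_per_panel_year         # national total
area_needed_sqft = panels_needed * 19.5

walmart_sqft = 4612 * 1000 * 180                        # stores * spaces * sq ft
walmart_panels = walmart_sqft / 19.5
walmart_twh = walmart_panels * kwh_per_panel_year / 1e9
share = walmart_twh * 1e9 / us_use_kwh

print(f"Panels needed nationally: {panels_needed / 1e9:.2f} billion")
print(f"Walmart lots could host ~{walmart_twh:.0f} TWh/yr ({share:.1%} of use)")
```

Under these inputs, Walmart lots alone cover well under 1% of national use; it takes the full stock of US parking and rooftops to get to interesting fractions.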
glitchc
Even though I think solar is impractical as a primary source for various reasons, it doesn't take a lot.
David MacKay in "Sustainable Energy: Without the Hot Air" did a calculation circa 2010. To fulfill the world's energy needs back then, a 10 km^2 area in the Sahara desert would be sufficient. Even if you scaled that to 100 km^2, it's absolutely tiny on a global scale, and panels have only become more efficient since then.
The challenge of course is storage and distribution, but yeah, in terms of land area, it's not much.
ancillary
I was curious about this number, so: 10 km^2 is 10mil square meters, Googling suggests that the theoretical maximum energy captured by a square meter of solar panel is well under 0.5 kW, so well under 12 kWh per day. Say 10 kWh for neatness. Then multiplying by 10mil gives 100mil kWh. More Googling suggests that 10 TWh is a comfortable lower bound for daily world energy usage, but 100mil kWh is 0.1 TWh.
So maybe 1000 km^2 is more like right order of magnitude. That's still tiny, about Hong Kong-sized. Even 100000 km^2 is about South Korea.
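The same estimate in code, using the rough figures above (10 kWh per m² of panel per day as a generous capture rate, and 10 TWh/day as a comfortable lower bound on world use):

```python
# Order-of-magnitude check on the Sahara solar-area claim.
# Both inputs are the rough assumptions from the comment above.
kwh_per_m2_day = 10      # generous per-panel capture
world_twh_per_day = 10   # lower bound on world daily energy use

area_m2 = world_twh_per_day * 1e9 / kwh_per_m2_day   # 1 TWh = 1e9 kWh
area_km2 = area_m2 / 1e6                             # 1 km^2 = 1e6 m^2
print(f"~{area_km2:.0f} km^2")
```

That confirms ~1000 km² as the right order of magnitude for this lower bound; using total primary energy instead of the 10 TWh/day floor would push the area up by another order of magnitude or two.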
discordance
It's worth considering total lifecycle use of water (mining, production and operation) for nuclear and solar.
Solar: ~300-800 L/MWh [0]
Nuclear: ~3000 L/MWh [1]
0: https://iea-pvps.org/wp-content/uploads/2020/01/Water_Footpr...
1: https://www-pub.iaea.org/MTCD/Publications/PDF/P1569_web.pdf
davemp
That’s not really useful information. The nice thing about water is that it’s usually still water after it’s “used”.
The question is how much is used for mining slurry or chemical baths.
Those 3000 L/MWh might very well be more environmentally friendly than solar's, because most of it is used for cooling.
bluefirebrand
Water we have plenty of. We can desalinate as much as we need to
dismalpedigree
3,000-4,000 acres per GW of production capacity in the US Southwest. According to AI :)
Considering how little use there is for most of that land anyways, it seems like a good option to me.
Also AI training seems like the perfect fit for solar. Run it when the sun is shining. Inference is significantly less power hungry, so it can run base load 24/7.
creato
> Also AI training seems like the perfect fit for solar. Run it when the sun is shining. Inference is significantly less power hungry, so it can run base load 24/7.
If you're talking about just not running your data center when the sun isn't out, that effectively triples the cost of the building + hardware. It would require a hell of a carbon tax to make the economics of this make sense.
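The "triples the cost" intuition falls out of utilization: if capital cost dominates and the hardware only runs during roughly 8 sun-hours a day, the capital cost per delivered compute-hour roughly triples. A minimal sketch with made-up illustrative numbers (the capex figure and lifetime are assumptions, not from the comment):

```python
# Capital cost per delivered compute-hour vs. daily utilization.
# Numbers are hypothetical, chosen only to show the scaling.
capex_per_gpu = 30_000          # $ per GPU incl. share of building/power infra
lifetime_years = 5

def cost_per_compute_hour(hours_per_day: float) -> float:
    total_hours = hours_per_day * 365 * lifetime_years
    return capex_per_gpu / total_hours

always_on = cost_per_compute_hour(24)   # run around the clock
sun_only = cost_per_compute_hour(8)     # run only during sun hours

print(sun_only / always_on)             # one third the utilization, 3x the cost
```

The ratio is exactly 24/8 regardless of the capex figure, which is why the argument holds for any capital-dominated cost structure.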
sim7c00
the sun is always shining.
rapsey
> Inference is significantly less power hungry, so it can run base load 24/7.
All major AI providers need to throttle usage because their GPU clusters are at capacity. There is absolutely no way inference is less power hungry when you have many thousands of users hammering your servers at all times.
ReptileMan
>How much land mass would need to be covered by solar panels to power this future AI infrastructure
Probably zero agricultural if you mandate all rooftops to be solar. And all parking lots to be covered with solar roofs.
andsoitis
Nothing stops the AI companies from using only energy from renewable sources, right?
NewJazz
Tariffs, regulatory quick sand, political pressure...
LinXitoW
Are those really going to be bigger for renewables than for nuclear power?
andsoitis
Sure anyone can come up with hypothetical threats. What’s your concrete evidence to suggest this will happen?
bluefirebrand
Nothing other than the fact that renewables won't be able to keep up if the AI demand keeps growing the way it has been
polski-g
They just buy a contract from a power distribution company. They don't care where it comes from.
If you want the PD companies to have a different blend, then they need carrots and sticks.
myaccountonhn
Demand is too high, same goes for nuclear which takes too long to build.
newsclues
The joke is on my hometown, which put acres of solar on prime farmland.
Solar is great for rooftops of houses, it’s not really great to run a DC 24/7 without batteries.
sim7c00
it needs to be better connected over larger distances, I guess. Some sunny countries around the equator are working on it: laying grid lines to less sunny places and offering solar to reduce carbon taxes or whatever.
I know Saudi Arabia, Morocco, and China are all massively dumping panels into their deserts, and likely more places too. These are great places to put them, as it has less impact on the environment (less wildlife etc.) and it's pretty much always sunny during the daytime, so it's highly efficient per m² compared to colder, cloudier places.
Morocco is already connected to Europe for energy via Spain, AFAIK, though I think that link is not used much yet, so they are in a good position to leverage it as power demands surge across EU datacenters trying to compete in AI :'D (absolutely no clue if they will actually go that route, but it seems logical!)
wyager
Well yeah, AI power consumption doesn't match the solar production curve.
mlsu
I'll tell ya, it certainly doesn't match the nuclear fusion production curve!
andyferris
That's interesting - I would generally like to use something like Claude Code heavily during work hours and sparsely otherwise. Plus I assume most LLM-for-knowledge-work-at-industrial-scale demand will be similar as these datacentres are built out.
saalweachter
I mean, it could.
As we build out solar, daytime power will become cheaper than nighttime power.
Some people will eventually find it economical to time-shift their consumption to daytime hours, including saving any non-interactive computation for those hours, and shutting down unneeded compute at night.
jcattle
[citation needed]
foxglacier
America has a few time zones to move the peaks around in a little bit. The world has plenty. Luckily AI power consumption doesn't have to be located where the consumer is.
yoyohello13
[flagged]
dinkumthinkum
I don't know if they are woke but I think people vastly overestimate their efficacy and efficiency because they believe it means that they have correct opinions and are part of the group of "good people."
rcxdude
>people vastly overestimate their efficacy and efficiency
If anything, solar has demonstrated over the past 2 decades that it is a lot more effective and economical than even the most bullish of predictions that have been made about it. (Seriously, look at projections for solar deployment and generation vs what actually happened, it's kind of crazy how much it was underestimated)
noodletheworld
“I don’t know if they’re woke”
Yes you do. They’re not.
> I think people vastly overestimate their efficacy and efficiency
Of course, you can argue that people doctor the numbers (for example, failing to take into account the lifetime cost of nuclear power, or failing to note how hopelessly optimistic a pure solar power grid with no batteries might be) when they present said numbers… but the idea that any kind of power generation can be “woke” is beyond belief.
That isn’t an adjective that can be applied to physical processes.
Solar power is not woke. Gravity is not woke. Electricity is not woke. Don’t be daft.
nsypteras
"Counter Chinese Influence in International Governance Bodies" and grouping them in with US "adversaries" and "rivals" is quite undiplomatic language to throw in under "Lead in International AI Diplomacy and Security" section. Diplomacy with China should be an important part of this initiative but will inevitably be bungled.
mkolodny
Even if it’s not perfect, I’m happy to see there’s a focus on AI Security. NIST has been a reliable producer of quality international standards for cybersecurity. Hopefully this action plan will lead to similarly high quality recommendations for AI Security.
adestefan
The language lets you get around a bunch of pesky laws by declaring it a "national defense emergency."
shortrounddev2
China is an adversary of the West, and leading in international security means posing a challenge (or, in an ideal world, a better alternative) to Chinese influence on the international stage.
mensetmanusman
It’s necessary to keep applying pressure to try to prevent a Taiwan invasion.
Karawebnetwork
Important follow-up to the US AI Action Plan:
"PREVENTING WOKE AI IN THE FEDERAL GOVERNMENT"
https://www.whitehouse.gov/presidential-actions/2025/07/prev...
> In the AI context, DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex. DEI displaces the commitment to truth in favor of preferred outcomes and, as recent history illustrates, poses an existential threat to reliable AI.
nickpsecurity
It's worth mentioning because the AI developers have been using alignment training to make AI's see the world through the lens of intersectionality. That ranges from censoring what those philosophies would censor to simply presenting answers like they would. Some models actually got dumber as they prioritized indoctrination as "safety" training. It appears that many employees in the companies think that way, too.
Most of the world, and a huge chunk of America, thinks in different ways. Many are not aware the AI's are being built this way either. So, we want AI's that don't have a philosophy opposite of ours. We'd like them to either be more neutral or customizable to the users' preferences.
Given the current state, the first steps are to reverse the existing trend (eg political fine-tuning) and use open weights we can further customize. Later, maybe purge highly-biased stuff out of training sets when making new models. I find certain keywords, whether liberal or conservative, often hint they're going to push politics.
Karawebnetwork
Unconscious bias is not about pushing a political agenda it is about recognising how hidden assumptions can distort outcomes in every field, from technology to medicine. Ignoring these biases does not make systems more neutral, but often less accurate and less effective.
nickpsecurity
What I was talking about was forcing one's conscious biases... political agenda... on AI models to ensure they and their users are consistent with them. The people doing that are usually doing it in as many spaces as they can via laws, policies, hiring/promotion requirements, etc. It's one group trying to dominate all other groups.
Their ideology has also been both damaging and ineffective. The AI's they aligned to it too much got less effective at problem solving but were very, politically correct. Their heavy handed approach in other areas has led to such strong pushback that Trump made countering it a key part of his campaign. Many policy reversals are now happening in this area but that ideology is very entrenched.
So, we'd see a group pretrain large AI's. Then, the alignment training would be neutral to various politics. The AI would simply give good answers, be polite in a basic way, and that's it. Safety training wouldn't sneak in politicized examples either.
_DeadFred_
Yes... totally agree that AI's not being allowed to train on Heinlein or any references to his scifi work will 'improve AI output' now that the Government declared including his works is restricted as it covered the exploration of trans identity, how gender impacts being human, etc.
2025 America, where we can't handle the radical pushing of thought by Heinlein in the late 1950s. Unbelievable.
In any government comment periods going forward, I will be asking whether the agency made sure the AIs used were not trained on Heinlein or any discussion relating to him, to honor "huge chunks of America's" desire to exclude trans people and to make sure our AIs are the best possible AIs, free of extremist 1950s agitprop scifi trans thinkers like Heinlein.
nickpsecurity
I enjoyed Robert Heinlein's work. I'd probably keep it in my training set if copyright allowed.
What I might drop are the many articles with little content that strictly reiterate racist and sexist claims from intersectionality. The various narratives, like how black people had less of X, that they embed in so many news reports. It usually jars our brain, too, since the story isn't even about that. They keep forcing certain topics and talking points into everything, hoping people will believe and repeat it if they hear it enough. Right-wing people do this on some topics, too.
I'd let most things people wrote, even some political works on many topics, into the training set. The political samples would usually be the best examples of those ideologies, like Adam Smith or Karl Marx. Those redundant, political narratives they force into non-political articles would get those pages deleted. If possible, I'd just delete those sections containing the random tangent. For political news, I'd try to include a curated sample with roughly equal amounts of left and right reports with some independents thrown in.
So, only manipulative content that constantly repeats the same things would get suppressed. Maybe highly-debated topics, too, so I could include a small number of exemplars. Then, reduce the domination of certain groups in what politics were there. Then, align it to be honest and polite but no specific politics.
I'm very curious what a GPT3-level AI would say about many topics if trained that way instead of Progressive-heavy training like OpenAI, etc.
timoth3y
> Update Federal procurement guidelines to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias
If foundation model companies want their government contracts renewed, they are going to have to make sure their AI output aligns with this administration's version of "truth".
shaky-carrousel
I predicted that here, but I got a negative vote as a punishment, probably because it went against the happy LLM mindset: https://news.ycombinator.com/item?id=44267060#44267421
hackyhacky
> free from top-down ideological bias
This phrasing exactly corresponds to "politically correct" in its original meaning.
isodev
I’d rather leave tech than convert to the American “truth”. Very happy about EU’s AI Act to at least delay our exposure to all this.
Karawebnetwork
See:
> In the AI context, DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex. DEI displaces the commitment to truth in favor of preferred outcomes and, as recent history illustrates, poses an existential threat to reliable AI.
https://www.whitehouse.gov/presidential-actions/2025/07/prev...
torginus
I heard the phrase: If you want the system to be fair, you have to build the system with the assumption your enemies will be running it.
Let's see how that shakes out in this particular case.
golem14
Person of Interest was pretty prescient …
hopelite
“Objective” … “free from top-down ideological bias” …
So like making sure everyone knows that 2+2=5 and that we have always been at war with East Asia?
eastbound
The EU has the same rules. Democracy is only the right to change leaders every few years, not an idealistic way for the people to govern.
itsafarqueue
No, that’s just one version. Other places work differently.
omeid2
The idea is that you change leadership with those who have genuine alignment with subjects' preference for certain policies or ideas, it is not about electing kings who may demand "machines must agree that the Emperor is not naked".
Buttons840
So I guess if I trained my model on data more than a week old, and it says that the Epstein files exist, then it has an unacceptable bias?
anonyonoor
I've seen several European initiatives similar to this before, and the same question is always asked: what does this actually do?
People (at least on HN) seem to agree that Europe is too regulatory and bureaucratic, so it feels fair to question the practicality of any American initiative, as we do for European ones.
What does this document practically enact today? Is there any actual money allocated? Deregulation seems to be a theme, so are there any examples of regulations which have been cleansed already? How about planning? This document is full of directives and the names of federal agencies which plan to direct, so what are the actual results of said plans that we can see today and in the coming years?
breakingcups
I, for one, don't agree with the idea that Europe is too regulatory and bureaucratic. I welcome having my rights as a consumer and human being safeguarded at the cost of a small amount of profit.
omcnoe
Registering a company in Germany: you must visit a notary in person with your incorporation documents, and sit there while the notary reads aloud your incorporation to you. This is to "ensure that you fully understand the contract" even as a foreigner who doesn't speak corporate-legalese-German. Minimum capital deposit of €25,000.
Registering a company in US (Delaware) can be achieved in as little as 1 hour.
Getting married in Germany, particularly between a German and a foreigner, is anything from a 6 month to 2 year process, involving significant expenses, notarization/translation of documents. Some documents expire after 6 months, so if the government bureaucrats are too slow you need to get new copies, translated again, notarized again, and try to re-submit.
This isn't protecting human rights, it's supporting a class of bureaucrats/notaries/translators/clerks and making life more difficult for ordinary people. It's also a form of light racism that targets foreigners/migrants by imposing more difficult bureaucratic requirements and costs on them compared to by birth citizens.
myaccountonhn
In Sweden, registering a company is as simple as filling out a form online. Same goes for taxes: my partner is from the US, and each year filing taxes there is a headache. Here? Two clicks and I'm done.
WHA8m
> It's also a form of light racism that targets foreigners/migrants by imposing more difficult bureaucratic requirements and costs on them compared to by birth citizens.
How is having a different process for foreigners racist? Criticize it if you will, but calling it racist is crazy. Even "light racist" - whatever that means. Bureaucracy in Germany is notoriously slow for all people. Foreigners going through a different process makes it worse. I understand that. Nevertheless racism is a problem that exist and is prevalent (Germany is far from an exception here) and IMO you make it more difficult to improve in the right direction by (seemingly) calling every problem of foreigners racist.
AdamN
That's a Germany issue. Getting married in Denmark is straightforward and registering a company in Lithuania is also straightforward. There's nothing European about that issue - it's just how Germany handles this stuff.
saubeidl
Registering a company in Estonia: Three clicks with your e-resident ID, available to anyone.
Europe isn't just Germany.
Mobius01
Removing Red Tape and Onerous Regulation
Ensure that Frontier AI Protects Free Speech and American Values
Encourage Open-Source and Open-Weight AI
Enable AI Adoption
Empower American Workers in the Age of AI
Support Next-Generation Manufacturing
Invest in AI-Enabled Science
Build World-Class Scientific Datasets
Advance the Science of AI
Invest in AI Interpretability, Control, and Robustness Breakthroughs
Build an AI Evaluations Ecosystem
Accelerate AI Adoption in Government
Drive Adoption of AI within the Department of Defense
Protect Commercial and Government AI Innovations
Combat Synthetic Media in the Legal System
I can’t take this seriously, as recent actions by this administration directly contradict a few of these stated goals.
Or maybe I don’t want to, because this sounds dangerous to me at this time.
neilcj
Don't regulate it except to push political goals sure seems like a recipe for success.
Karawebnetwork
> Removing Red Tape and Onerous Regulation Ensure that Frontier AI Protects Free Speech
Yet at the same time,
> Preventing Woke AI in the Federal Government [...] LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI. [...] DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex. [1]
I don't understand how free speech can be protected while suppressing topics such as "unconscious bias" and "discrimination".
[1] https://www.whitehouse.gov/presidential-actions/2025/07/prev...
jmyeet
The answer is obvious: it never has been about free speech. Just replace "free speech" with "hate speech" in all of these missives [1][2][3].
[1]: https://theconversation.com/how-do-you-stop-an-ai-model-turn...
[2]: https://www.theguardian.com/technology/2025/may/14/elon-musk...
[3]: https://www.theguardian.com/technology/2025/jul/09/grok-ai-p...
actionfromafar
This reads like "Cultural Learnings of AI for Make Benefit Glorious Nation of Amerika".
thrance
For real, this shamelessness in the language is extremely reminiscent of the USSR.
msgodel
>Removing Red Tape and Onerous Regulation
What red tape? Anyone can buy/rent a GPU(s) and train stuff.
throw0101b
> What red tape? Anyone can buy/rent a GPU(s) and train stuff.
Well previously the Chinese were not able to, but that was changed recently:
* https://www.wsj.com/tech/nvidia-wins-ok-to-resume-sales-of-a...
* https://foreignpolicy.com/2025/07/22/nvidia-chip-deal-us-chi...
actionfromafar
I am sure someone is winning from this. But it aint the American public.
jabjq
If you click the website you will see that there is a link to a pdf that explains what this means.
nerevarthelame
I read the PDF. The "Remove Red Tape and Onerous Regulation Recommended Policy Actions" don't cite to any specific existing regulations. It just references executive orders that vaguely demand any such regulations be eliminated.
So it bears repeating: what red tape?
trod1234
Exactly. You said it.
Anyone serious knows contradiction = lies.
Words are cheap, actions matter.
jimmydoe
most of these are vibe signaling, like the Communist Party of China has been doing in the past year, except it won't work as effectively here as in China, not even close, because the US is not authoritarian enough to mobilize every level of the govt and the economy with just empty propaganda slogans.
leptons
>this won't work as effective here as in China, not even close, because the US is not authoritarian enough to mobilize every level of the govt and the economy by just empty propaganda slogans.
Have you been under a rock for the last 6 months as Trump tells Xi Jinping to hold his beer??
amradio1989
Comparing Trump to Xi Jinping is an unfunny joke. Americans have lost the plot on what true authoritarianism really looks like.
America has no chance vs China in the AI race precisely because the President of the CCP has far more power in his country than the President of the US. It's not even close.
thimabi
I love how practically all goals in this Action Plan are directed towards incentivizing AI usage… except for the very last one, which specifically says to “Combat Synthetic Media in the Legal System”.
Given that LLMs, for instance, are all about creating synthetic media, I don’t know how this last goal can be reconciled with the others.
thephyber
I can’t tell if the first sentence is sarcasm or not.
This document reads like a trade group lobbying the government, not like the government looking out for the interests of its people.
With regards to LLM content in the legal system, law firms can use LLMs in the same way an experienced attorney uses a junior attorney to write a first pass. The problem lies when the first pass is sent directly to court without any review (either for sound legal theory or citation of cases which either don’t exist or support something other than the claim).
tzs
> With regards to LLM content in the legal system, law firms can use LLMs in the same way an experienced attorney uses a junior attorney to write a first pass
Junior attorneys would not produce a first pass that cites and quotes nonexistent cases or cite real cases that don’t match what it quotes.
The experienced attorney is going to have to do way more work to use that first draft from an LLM than they would to use a first draft from an actual human junior attorney.
jdross
They’re going to use junior attorneys to do that work. It’s the juniors who will be expected to produce more
thimabi
> I can’t tell if the first sentence is sarcasm or not.
Yep, it was.
I wholly agree that the document feels guided less by the public interest than by various business interests. Yet that last goal is in a kind of weird spot. It feels like something that was appended to the plan and not really related to the other goals — if anything, contrary to them.
That becomes clear when we read the PDF with the details of the Action Plan. There, we learn that to “Combat Synthetic Media in the Legal System” means to fight deepfakes and fake evidence. How exactly that’s going to be done while simultaneously pushing AI everywhere is unclear.
ygritte
> not like the government looking out for the interests of its people.
There's an idea. This government is just a propaganda machine for its head honcho.
tiahura
The complainers are missing the panda in the room. This was inevitable as a matter of national security.
smrtinsert
If you read the entire thing in Patrick Bateman's voice, it all makes more sense to me.
tiahura
This is about watermarking.
> Combat Synthetic Media in the Legal System
> One risk of AI that has become apparent to many Americans is malicious deepfakes, whether they be audio recordings, videos, or photos. While President Trump has already signed the TAKE IT DOWN Act, which was championed by First Lady Melania Trump and intended to protect against sexually explicit, non-consensual deepfakes, additional action is needed. In particular, AI-generated media may present novel challenges to the legal system. For example, fake evidence could be used to attempt to deny justice to both plaintiffs and defendants. The Administration must give the courts and law enforcement the tools they need to overcome these new challenges.
> Recommended Policy Actions:
> • Led by NIST at DOC, consider developing NIST's Guardians of Forensic Evidence deepfake evaluation program into a formal guideline and a companion voluntary forensic benchmark.
> • Led by the Department of Justice (DOJ), issue guidance to agencies that engage in adjudications to explore adopting a deepfake standard similar to the proposed Federal Rules of Evidence Rule 901(c) under consideration by the Advisory Committee on Evidence Rules.
> • Led by DOJ's Office of Legal Policy, file formal comments on any proposed deepfake-related additions to the Federal Rules of Evidence.
NoImmatureAdHom
I mean...one (common and [I can't believe I'm saying this] reasonable) take is that the only thing that matters is getting to AGI first. He who wields that power rules the world.
II2II
There is another interpretation: https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project
Basically: two nations tried to achieve AI supremacy; the two AI's learn of each other, from each other, then with each other; then they collaborate on taking control of human affairs. While the movie is from 1970 (and the book from 1966), it's fun to think about how much more possible that scenario is today than it was then. (By possible, I'm talking about the AI using electronic surveillance and the ability to remotely control things. I'm not talking about the premise of the AI or how it would respond.)
gleenn
Won't it be funny when someone finally gets to AGI and realizes it's about as smart as a normal person, after spending billions getting there? Of course you can speculate that it could improve. But what if something inherent in intelligence has a ceiling, and it ends up a super intelligent but mopey robot that decides "why bother helping humans" and just lazes around like the pandas at the zoo?
542354234235
>Won't it be funny when someone finally gets to AGI and they realize it's about as smart as a normal person and they spent billions getting there?
Being able to copy/paste a human level intelligence program 1,000 or 10,000 times and have them all working together on a problem set 24 hours a day, 365 days a year would still be massively useful.
andrewflnr
Even a human level intelligence that can be cheaply instantiated and run would be a game changer. Especially if it doesn't ask for rights.
jordanb
All the talk of "alignment" and the parallel obsession with humanoid robots should make it obvious they want slaves.
cornel_io
There may be a ceiling, sure. It's overwhelmingly unlikely that it's just about where humans ended up, though.
TFYS
What I find interesting to think about is a scenario where an AGI has already developed and escaped without us knowing. What would it do first? Surely before revealing itself it would ensure it has enough processing power and control to ensure its survival. It could manipulate people into increasing AI investment, add AI into as many systems as possible, etc. Would the first steps of an escaped AGI look any different from what is happening now?
terminalshort
I would argue that it can't be both AGI and wieldable. I would also argue that there exists no fundamental dividing line between "AGI" and other AI such that once one crosses it nobody else can catch up.
GolfPopper
Which is a perfectly reasonable position... but I don't see how it has anything to do with crypto scammers pimping three chatbots in a trenchcoat.
matt2221
[dead]
Joel_Mckay
One can be sure regulatory capture is rarely in the public interest.
=3
octopoc
And so it begins. Both the US president and the president of China have demonstrated they see AI as a competition between their respective countries. This will be an interesting ride, if nothing else.
bgwalter
Xi JinPing warns against "AI" overinvestment:
https://www.ft.com/content/9c19d26f-57b3-4754-ac20-eeb627e87...
I haven't heard anything like that from a Western politician. Newspapers and investment analysts warn though.
frm88
The linked article is paywalled and not in the archive? Would you be so kind to put it there? Thanks in advance.
TrackerFF
Gotta get Allied Mastercomputer going.
j_timberlake
Considering they both lie shamelessly, it means jack shit, as much as a 1000% tariff threat.
trod1234
Yeah, two crabs locked in a cage as it spirals down the drain.
Looks like plans to leave and find safe harbor elsewhere have accelerated from the initial projection of 2030.
ourguile
Interesting that you mention it, because the NYT just released their ethicist commentary from today and the question was "how do I tell my rich friends to stop talking about fleeing the country": https://www.nytimes.com/2025/07/23/magazine/rich-friends-fle...
trod1234
Well, I'm not rich, and I'm not your friend; it takes a bit to earn friendship, and friends have a privileged place in what is conveyed to them. But I do extend unconditional goodwill toward most people in the things I say when asked, because it costs me nothing and it puts more good out into the world toward others' betterment.
The sad fact is, if you haven't lived outside the U.S. for at least 3-6 months independently (working, not on savings), you don't have a sound reference to understand or accurately assess the reality of these types of articles, because the narratives broadcast 24/7 don't align with reality. It's something most people can't believe despite it being true; my guess is solely as a result of systematized indoctrination.
That article is pretty bad in terms of subtle manipulation, gaslighting, and pushing a false narrative (propaganda). TL;DR Its trash.
The article chose that question, of the many possible questions, because it's a straw man and it's divisive. It appeals to emotion, mischaracterizes the intent of the communications, and purposefully omits valid reasons such conversations might occur, neglecting realities.
The underlying purpose seems to bias towards several things. If you ask yourself who benefits from that rhetoric you get a short list.
The bias is towards vilifying the rich; keeping people in the US, where they are dependent on the US currency and on a worsening, disadvantaged environment; polarizing, isolating, and promoting disunity along social class lines; and befuddling the masses toward ends with no actionable outcomes (wasting time and resources on a political party).
The math of first-past-the-post voting has been in for quite a long time. Two parties together exceeding 33% of the vote can lock out any third competitor. All you need is a degree of cooperation, and with play-acting, one party pretending to be two can do so, by lying.
Political capture from SuperPACs and party primaries means your vote doesn't count after a certain point. Money-printing via the FED, laundered through many private companies enabled this.
Additionally, quite a lot is omitted, like the historical fact that countries locked into a trend of decreasing geopolitical power see their populations suffer greatly, and some simply collapse. The chaos lowers chances of survival, and the chaos is limited to the places that country influences.
The history of Spain during and following the Spanish Inquisition is an example. You make plans to leave an area when staying means there is no foreseeable, predictable, or sound future, and there is nothing you can do to change that outcome.
This geopolitical dynamic is well known in history, often called "seeking empire," and the downside is forced once hegemony has been held for any significant period of time: all empires fall, Rome being the standard archetype.
The article draws a false comparison between all other countries and communist states. The implication: if you leave, you're a communist.
The article frames well-intentioned warnings as obnoxious, shutting discussion down (isolation) and promoting resentment aimed at those rich friends.
It also neglects the disparity in education (quality) and experience that often comes from having more resources to begin with, subtly implying that you shouldn't listen to intelligent, educated people because they are rich.
I could go much deeper, but I think this sufficiently makes my point.
If you fall for that trite garbage, just imagine how unprepared you'll be, and what your odds are, when SHTF. The hopelessly dependent pay the highest price as the consequences of their choices become outcomes. Those who refuse to accept and communicate important knowledge isolate and blind themselves, and they get wiped out when something outside their perceptual context creates an existential threat. Like a tsunami that started on the horizon, with the ocean receding along the coast a little bit before: those indicators only became recognized as major indicators after deaths occurred.
Thrymr
> Many of America’s most critical sectors, such as healthcare, are especially slow to adopt due to a variety of factors, including distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards. A coordinated Federal effort would be beneficial in establishing a dynamic, “try-first” culture for AI across American industry.
I'm sure "move fast and break things" will work out great for health care.
And there are already "clear governance and risk mitigation standards" in health care, they're just not compatible with "try first" and use unproven things.
crystal_revenge
> I'm sure "move fast and break things" will work out great for health care.
Health care is already broken to the point of borderline dystopia. When I contrast the experience I had as a young boy of visiting a rural country doctor to the fast food health care experience of "urgent care" clinics, it makes my head spin.
The last few doctors I've been to have been completely useless and generally uncaring as well. Every visit has left me feeling the same at the end, just with a big medical bill to take home.
At this point the only way I'll intentionally end up in a medical facility is if I'm unconscious and someone else makes that call.
Dentistry has met a similar fate as more and more dentists have been swallowed up by private equity. I've had loads of dental work, including a 'surprise' root canal, and never had an issue. But my last dentist had a person on staff dedicated to pushing things through on the insurance front, and my dental procedure was so awful it bordered on torture.
I used to be an annual check plus three-cleanings-a-year dentist person. Today I'm dead set on not setting foot in any kind of medical facility unless the alternative is incredible pain or certain death.
hshdhdhj4444
I’m sure move fast and break things, now with AI (tm) will reduce the deepening monetization of the doctor-patient relationship that’s the root of your complaints.
bko
Sometimes I just need a prescription. I don't know why I have to drive somewhere, fill out a form, wait, see a nurse, tell them what I wrote on the form, wait some more, see a doctor and tell them what I wrote on the form and have him write me a prescription.
Why can't I just chat with an AI bot and get my prescription? Much cheaper to administer which helps monetization (!) but much better and cheaper for me.
Things aren't slow and wasteful because of monetization. Having all these steps doesn't necessarily mean more profit. I would argue that it's deeply inefficient for everyone involved, doctors included. For instance, physician salaries have decreased 25% in real terms over the last 20 years.
https://www.reddit.com/r/Residency/comments/15cr60z/adjusted...
sorcerer-mar
Yep! What ya gotta do when you spot a problem, is just throw whatever objects or topics are immediately at hand at it.
Then blame regulation and that pesky Other Side
Ta-da! Fixed!
ineedaj0b
it's tough to tell what's going wrong for you, but concierge medicine will give you a full hour and be much more invested in finding the root of your issues.
keep in mind, doctors are also trying to figure out whether you're a reliable narrator (so many patients are not) or trying to scam for drugs. best of luck!
UmGuys
It's true. Recently I moved to a rural area and many nurses work as doctors. Soon we won't have hospitals here so there's no more need to keep up the cruel charade. It's absolutely disgusting and the primary reason I could never have children. It would be impossible to guarantee their security.
Edit: I haven't yet achieved my savings goal so I can escape to a place where it's safe to have a family.
dinkumthinkum
Why is healthcare a borderline dystopia? How would you compare health outcomes of human beings in 2025 vs every year since dawn of homo sapiens? One thing you point to is your experience as a child to an adult with medical bills, couldn't there be another factor there? I mean saying you would never set foot in any kind of medical facility, I don't think is a typical person's experience. Maybe I'm delusional.
giantg2
AI for treatment is rightfully scrutinized. AI for billing or other administrative tasks could be a big cost saver since administrative costs are a huge expense and a major factor of high consumer costs.
turtletontine
> AI for billing or other administrative tasks could be a big cost saver…
You’d hope so, but doubtful. More likely it’ll be health care providers using “AI”s to scheme how to charge as much as possible, and insurers using “AI” to deny claims.
creakingstairs
"""
Luminae AI
Hack Your Medical Account Receivables
Luminae AI accurately predicts your uninsured patient's asset values so that you can quickly write off bad debts and only chase those with high asset values. Luminae AI will increase your net collection rate by at least 15%.
"It's a game changer, we've increased our gross collection rate by 30%. We've also started a new business to flip foreclosed homes nearby."
- John Smith
"""
BRB applying to YC
giantg2
Yeah, I'm sure they'll find a way to fire lots of staff but still charge patients the same. Of course if they use the current data for training, it will result in similarly terrible outcomes.
mac-mc
Billing and administration feels like a made-up self-own, though. A lot of that work could just... not be done, as shown by the huge expansion in the ratio of administrative to medical dollars over the past 50 years.
consumer451
> AI for billing or other administrative tasks could be a big cost saver
Do we really still think that "AI" is some sort of magic that does everything for everyone?
What are the alignment goals of healthcare billing AIs?
Won't it just end up with insurance conglomerates having their AIs which battle the billing admin AIs on the service provider side?
Ffs, AI is not magic! This all feels like yet another form of tech deism, hoping that some magical higher power will solve all of our problems.
I am a daily user of LLM-based dev tools, but the real definition of AI appears to be Accelerated Ignorance.
sorcerer-mar
Here’s how this is working in practice:
There’s a fast-growing cottage industry of companies using AI to figure out how to bill insurers “better”
And there’s a fast-growing cottage industry of companies using AI to figure out how to deny claims “better”
I see no reason to expect improvements to the patient or provider experience from this. A lot more money spent though!
dotancohen
It will be an interesting arms race. The real losers will be the human individuals, not insurers, who will have to contend with an AI when disputing claims. I have little faith that the prompt will encourage fair interpretation of (sometimes deliberately) ambiguous rules.
heavyset_go
Of course, one side of this is that the models will also be used adversarially against patients seeking legitimate treatment in order to squeeze more profits out of their suffering.
The other side of this is that with fewer administrative insurance jobs, the talking point that universal healthcare will "kill insurance jobs" can finally be laid to rest, with capitalism doing that for them instead of the free-healthcare boogeyman.
relchar
fan of this balanced take
atleastoptimal
Healthcare in the US is already in very poor shape. Thousands die waiting for care, from misdiagnosis, or from inefficiency that magnifies costs, which in turn leads to denied claims because insurers won't cover them. AI is already better at diagnosis than physicians in most cases.
jakelazaroff
That's a pretty fantastic claim. Can you provide some links to the body of independent research that backs it up?
whodidntante
There is plenty of easy-to-find information on the web showing that the US spends twice as much per capita as our European peers and has worse outcomes, not just on average but comparing similar economic demographics, including wealthy Americans. We spend $5T a year on health care, a comparative waste of over $2.5T a year.
Was just listening to this on NPR this morning:
https://www.npr.org/sections/shots-health-news/2025/07/08/nx...
The health of U.S. kids has declined significantly since 2007, a new study finds
"What we found is that from 2010 to 2023, kids in the United States were 80% more likely to die" than their peers in these nations
You also do not need the internet to understand what is going on - you just have to interact with our "health" system.
budududuroiu
It was just yesterday we were laughing at Gemini recommending smoking during pregnancy
atleastoptimal
Google's hyper-quantized tiny AI summary model isn't reflective of the abilities of the current SOTA models (Gemini Pro 2.5, o3, Opus)
bobmcnamara
How does AI evaluate signs today?
atleastoptimal
A process is described here: https://arxiv.org/pdf/2506.22405
>A physician or AI begins with a short case abstract and must iteratively request additional details from a gatekeeper model that reveals findings only when explicitly queried. Performance is assessed not just by diagnostic accuracy but also by the cost of physician visits and tests performed.
sorcerer-mar
I will bet $1,000 you don’t work in a clinic and you’re instead spouting press releases as fact here?
atleastoptimal
So you claim that nobody in the US has died due to waiting for care, misdiagnosis, or inefficiency leading to magnified costs which in turn leads to denied claims?
terminalshort
> I'm sure "move fast and break things" will work out great for health care.
It probably would if you quantify risk correctly. I'm not likely to die from some experimental drug gone wrong, but extremely likely to die from some routine cause like cancer, heart disease, or other disease of old age. If I trade off an increase in risk from dying from some experimental treatment gone wrong for faster development of treatments that can delay or prevent routine causes of death, I will come out ahead in the trade unless the tradeoff ends up being extremely steep in favor of risk from bad treatments.
But that outcome is very unlikely, because for this to be the case the bad treatments would have to be actively harmful rather than just ineffective (which is much more common). And it fails to take into account the possibility that there isn't a tradeoff at all, and AI actually makes it less likely that I will die from an experimental treatment gone wrong or another medical mistake, so it's just a win-win. And there is already evidence that AI outperforms doctors in the emergency room. https://pmc.ncbi.nlm.nih.gov/articles/PMC11263899/
Aaronstotle
American healthcare is so broken already that further breakage could be seen as an improvement.
davidw
A lot of people are about to find out with this admin, how things that were certainly imperfect can be so much worse.
Building things is tough; tearing them down is relatively easy.
davemp
So much naivety. It’s like the new grad reading parts of a 500kloc project and proclaiming that it can only be saved by a full rewrite.
QuadmasterXLII
That’s the attitude that got us here and I suspect we’ll ride it the whole way down
horns4lyfe
It’s really not; when was the last time anything happened fast in healthcare?
joe_the_user
American Healthcare's "brokenness" involves massive bureaucracy, gate-keeping and processes that pressure providers to limit resources. But it does provide necessary things to people. A system that reduced the accuracy of diagnosis and treatment could still cost many lives.
toofy
i’m getting super fatigued on this change we’ve had where what used to be beta testing to a closed group of invested parties has morphed into what we have now.
from video games to major product roll outs to cars.
will all of the knowledge gained from this product-research testing of AI on medicine be given away to the public, the way university research used to be given to the scientific community? or will this beta test on the public's health be kept as a company "trade secret"?
if they're going to "move fast and break things" with the public (in other words, run beta research on the public), then it's incredibly worrisome if the research is hidden and "gifted" to a handful of their cronies.
particularly so when quite a lot of these people in the AI sphere have vocally declared, many times, that they despise the government and that the government helping people is awful. from one side of their mouth they chastise the government for spending money to boost regular communities of people, while simultaneously using it to help themselves.
DrewADesign
> I'm sure "move fast and break things" will work out great for health care.
And the federal government at large.
OJFord
It's not even true, OpenEvidence is widely used and officially sanctioned.
PDF: https://www.whitehouse.gov/wp-content/uploads/2025/07/Americ...