A conversation about AI for science with Jason Pruet

hbartab

> We certainly need to partner with industry. Because they are so far ahead and are making such giant investments, that is the only possible path.

And therein lies the risk: research labs may become wholly dependent on companies whose agendas are fundamentally commercial. In exchange for access to compute and frontier models, labs may cede control over data, methods, and IP—letting private firms quietly extract value from publicly funded research. What begins as partnership can end in capture.

monkeyelite

> research labs may become wholly dependent on companies

They already are. Who provides their computers and operating systems? Who provides their HR software? Who provides their expensive lab equipment?

Companies are not in some separate realm. They are how our society produces goods and services, including the most essential ones.

hdivider

I fail to understand the sentiment here.

This is the intention of tech transfer. To have private-sector entities commercialize the R&D.

What is the alternative? National labs and universities can't commercialize in the same way, including due to legal restrictions at the state and sometimes federal level.

As long as the process and tech transfer agreements are fair and transparent -- and not concentrated in, say, OpenAI, or won with underhanded kickbacks to government -- commercialization will benefit productive applications of AI. All the software we're using right now to communicate sits on top of previous, successful, federally funded tech transfer efforts which were then commercialized. This is how the system works, and how we got to this level.

dekhn

What do you mean universities can't commercialize in the same way (I may have misunderstood what you meant)? Due to Bayh-Dole, universities can patent and license the tech they develop under contract for the government, often helping professors start companies with funding while simultaneously charging those companies to license the tech. This is also true for national labs run by universities (Berkeley and a few others); the other labs are run under contract by external for-profit companies.

worldsayshi

> What is the alternative?

Reasonably, there should be a two-way exchange? It might be okay for companies to piggyback on research funds if that also means more research insight enters public knowledge.

rapind

I’d be happy if they just paid their fair share of tax and stopped acting like they were self-made when they really just piggybacked on public funds and research.

There’s zero acknowledgment or appreciation of public infra and research.

delusional

> As long as the process and tech transfer agreements are fair and transparent

I think that's the crux of the point made by the commenter you're responding to. He does not believe it will be done fairly and transparently, because these AI corporations will have broad control over the technology.

hdivider

If so, yes indeed, fair point by him/her. It's up to ordinary folks like us to push against unfair tech transfer because yes, federal labs and research institutions would otherwise provide the incumbents an extreme advantage.

Having been in this world though, I didn't see a reluctance in federal labs to work with capable entrepreneurs with companies at any level of scale. From startup to OpenAI to defense primes, they're open to all. So part of the challenge here is simply engaging capable entrepreneurs to go license tech from federal labs, and go create competitors for the greedy VC-funded or defense prime incumbents.

BurningFrog

R&D results should be buried under a crystal obelisk at the bottom of the ocean, as a warning to future generations.

hahajk

In the case of huge frontier LLMs, the public labs will likely never be able to compete. In my experience, govt orgs are ardent rule-followers and wouldn't be as willing to violate copyright.

godelski

There's a risk, but there's also great reward if it is done properly. The only way to maximize the utility of any individual player is to play cooperatively[0]. A single actor might gain a momentary advantage by defecting from cooperation, but doing so decreases their total eventual rewards, and in many cases it quickly becomes a net negative.

That said, I'm not very confident such a situation would happen in reality. I'm not confident current industry leaders can see past a quarter, and I'm nearly certain they can't see past four. Current behavior already indicates that they are unwilling to maximize their own profits. A rising tide lifts all ships, but rather than use that lift to set out for new and greater riches, many can only envy the other ships rising alongside them. It is easy to lose sight of what you have if you are too busy looking at others.

[0] Simplified example illustrated by Iterative Prisoner's Dilemma: https://www.youtube.com/watch?v=Ur3Vf_ibHD0

[0.1] Can explain more if needed but I don't think this is hard to understand.
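The dynamic in [0] is easy to simulate. A minimal sketch using the standard textbook payoff matrix (T=5, R=3, P=1, S=0; the numbers are the usual defaults, not anything from the linked video):

```python
# Toy iterated prisoner's dilemma: defection wins a single round,
# but loses the long game against a retaliating partner.

PAYOFF = {  # (my move, their move) -> my score; 'C' cooperate, 'D' defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return history[-1] if history else 'C'

def always_defect(history):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []   # each side's record of the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300)
print(play(always_defect, tit_for_tat))  # (104, 99)
```

Over 100 rounds, two cooperators earn 300 each; the pure defector squeezes out 104 against tit-for-tat but drags the pair's combined take from 600 down to 203, which is the "net negative" above.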

catigula

What's the "reward"?

I want to interrogate AI optimist type people because even if AI is completely safe and harmless I literally see only downsides.

Is your perception that living in theorized extreme comfort correlates to "reward"?

christophilus

You really see only downsides? I’m no AI optimist, but it is a useful tool, and it’s here to stay.

quantified

WILL end in capture. Profit demands it.

hbartab

Indeed.

shagie

The point of https://www.nrel.gov/index is to research how to do renewable energy. Likewise, the research done by https://www.nrel.gov/hpc/about-hpc and its data center https://www.nrel.gov/computational-science/hpc-data-center is to pioneer ways to reuse its waste heat (and better cool existing data centers).

I'm kind of disappointed that their dashboard has been moved or offline or something for the past few years. https://b2510207.smushcdn.com/2510207/wp-content/uploads/202... is what it used to look like.

mdhb

This is literally THE scam Elon, Thiel, Sacks and others are running as they gut the government.

Sell assets like government real estate to themselves at super cheap rates and then set up as many dependencies as they can where the government has to buy services from them because they have nowhere else to turn.

To give an example: this missile dome bullshit they are talking about building is a terrible idea for a bunch of reasons, but there are talks at the moment of having it run by a private company that would sell it as a subscription service. So in this scenario the US military can't actually fire the missiles without the explicit permission of a private company.

This AI thing is the same scam.

FilosofumRex

Right on target: publicly funded research always ends up in the hands of private profiteers via private university labs.

If LLM/AI is critical to national security, then it should be funded solely via the Department of Defense budget, with no IP or copyright derivatives allowed.

tantalor

I was a bit puzzled what "1663" is. Here's what I found:

> The Lab's science and technology digital magazine presents the most significant research initiatives and accomplishments from national-security-related programs as well as projects that advance the frontiers of basic science. Our name is an homage to the Lab's historic role in the nation's service: During World War II, all that the outside world knew of the top-secret laboratory was the mailing address - P.O. Box 1663, Santa Fe, New Mexico.

https://researchlibrary.lanl.gov/about-the-library/publicati...

senderista

Clearly AI is worthy of public investment, but given the capture of this administration by tech interests, how can we be sure that public AI funding isn't just handouts to the president's cronies?

candiddevmike

How about we fix global warming and switch 100% to clean energy, and then invest in AI?

ben_w

To the extent that further improvements to AI remain economically useful, "let's do these other things first" means your economy trails behind those of whoever did work on the AI.

To the extent that further improvements to AI are either snake oil or just hard to monopolise on, doing everything else first is of course the best idea.

Even though I'm more on the side of finding these things impressive, it's not at all clear to me that the people funding their development will be able to monopolise the return on the investment - https://en.wikipedia.org/wiki/Egg_of_Columbus etc.

Also: the way the biggest enthusiasts are talking about the sectoral growth and corresponding electrical power requirements… well, I agree with the maths for the power if I assume the growth, but they're economically unrealistic on the timescales they talk about, and that's despite renewables being the fastest-growing power sector in percent-per-year terms, plausibly able to double global electrical production by the early 2030s.

haswell

> To the extent that further improvements to AI remain economically useful, "let's do these other things first" means your economy trails behind those of whoever did work on the AI.

The major question is: at what point will unaddressed climate change nullify these economic gains and make anyone who worried about them feel silly in retrospect?

Or put another way, will we even have the chance to collectively enjoy the benefits of that work?

grey-area

Is generative AI economically useful? More economically useful than switching to renewable energy?

ngangaga

Well yes, nationalism will be the dagger in the heart of humanity. But AI won't do anything to address this; in fact, leaning into the concept of competing rather than cooperating economies will accelerate pushing the dagger in.

CooCooCaCha

That’s why I wonder if a planetary government is inevitable sometime in the future. We can’t address species-wide issues if we’re constantly worried about competition, and if market forces aren’t going to work then the only other solution I can think of is a bigger, more powerful entity laying down the law.

dale_glass

Who "we"?

The people qualified to fix global warming aren't the same people qualified to work on ML.

threeseed

Yes they are.

I've worked with hundreds of Data Scientists and every one had the ability to work on different problem areas. And so they were working on models to optimise truck rollouts or when to increase compute for complex jobs.

If as a society we placed an increased emphasis on efficiency and power consumption we would see a lot more models being used in those areas.

XorNot

Don't you know? Humanity can only solve one problem at a time in order of importance.

And its corollary: something being in the news or social media means everyone else has stopped working on other problems and is now working solely on whatever that headline's words say.

85392_school

You'd probably meet the talking point that if we don't accelerate AI development China will win.

whatever1

This is the plan. Build all the clean infrastructure with the fake promise of AI and once the bubble bursts, boom. We have spare clean capacity for everyone.

bcoates

1. Build atomic power plants sufficient to supply electricity needs for projected future AI megaprojects

2. Inevitable AI winter

3. Keep running the plants, clean energy achieved, stop burning coal, global warming solved

_heimdall

We can't just switch to clean energy, we would need to drastically reduce our energy use per capita.

dlivingston

Absolutely not. We would be moving backwards as a society. Increased energy usage is a bellwether of societal advancement. See the Kardashev scale and Dyson sphere for example.

[0]: https://en.wikipedia.org/wiki/Kardashev_scale

[1]: https://en.wikipedia.org/wiki/Dyson_sphere

threeseed

Which is actually a problem AI is perfect for.

engineer_22

Let's also cure cancer and stop all wars while we're at it.

threeseed

There is no one cancer but we are working to cure as many variations as we can.

madaxe_again

Don’t forget world hunger.

I don’t understand this line of reasoning - it’s like saying “you’re not allowed steam engines until you drain all of your mines”. It’s moralistic, rather than pragmatic.

conradev

The DOE has been building supercomputers for a while now: https://en.m.wikipedia.org/wiki/Oak_Ridge_Leadership_Computi...

godelski

Even more importantly, they are GPU-based. The US has 3 exascale computers (all 3 that exist in the world). I should stress that these measurements are based on LINPACK and are at fp64 precision. This is quite a different measurement from what others might have in mind from recent AI announcements (which quote fp8).

https://www.top500.org/lists/top500/2024/11/
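The size of that precision gap is easy to quantify: relative rounding error scales as 2^-(mantissa bits), so a plain-Python sketch (format parameters per IEEE 754, plus the common E4M3 fp8 layout) shows how far apart fp64 and fp8 flops really are:

```python
# Machine epsilon (spacing between 1.0 and the next representable float)
# for the precisions in question. Mantissa widths: IEEE 754 binary64/32/16
# and the E4M3 8-bit format widely used in AI accelerators.
FORMATS = {
    "fp64": 52,         # LINPACK / Top500 measurement precision
    "fp32": 23,
    "fp16": 10,
    "fp8 (E4M3)": 3,    # typical AI training/inference precision
}

for name, bits in FORMATS.items():
    eps = 2.0 ** -bits
    print(f"{name:>11}: relative precision ~{eps:.1e}")
```

fp64 carries roughly 16 decimal digits of precision versus about 1 for fp8, which is why an "exaflop" in one unit says little about the other.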

giardini

LLMs seem to be plateauing. I'd rather let the markets chase AI.

swalsh

How do you make that assessment? I'll admit the knowledge base isn't 10x-ing every few months anymore, but the agent capabilities keep getting better. The newer models can do a lot of useful work accurately for a while. That wasn't true several months ago.

overgard

Wake me up when they solve hallucination.

voidspark

"LLM" is not mentioned anywhere in the article.

nyarlathotep_

There's a serious issue around naming here, I'll agree.

I assume "AI" in contemporary articles, especially as it pertains to investments, means "Generative AI, especially or exclusively LLMs."

therealpygon

LLMs, maybe. AI? Hardly.

apwell23

>AI? Hardly.

what are some examples of 'hardly' ?

voidspark

The article explains that the lab would support universities by providing infrastructure.

woah

HN commenters in 1960:

> Clearly computer networking is worthy of public investment, but given the capture of this administration by military industrial interests, how can we be sure that public networking funding isn't just handouts to the president's cronies?

myhf

There was literally a vaporware "AI" hype cycle in 1960. Propositional logic programming was poison to investors for 50 years because of that one, just like LLMs will be poison to investors for 50 years because of this one.

dekhn

Check out the history of BBN, which was deeply involved in the creation of the modern internet. There was an open revolving door between BBN employees and granting agencies, and BBN was even charged with contract fraud by the government. It's owned by Raytheon, a classic defense company.

Our country's tight relationship between government, military, academia, and industry has paid off repeatedly, even if it involves some graft.

paradox460

FWIW, LANL saw some of its heaviest layoffs this year, even heavier than those under Nanos during the post-Cerro Grande investigation. From what I gather, the feeling up on the hilltop is one of anxiety.

newfocogi

Another recent AI article out of LANL: https://www.lanl.gov/media/publications/1663/1269-earl-lawre...

And discussed on HN: https://news.ycombinator.com/item?id=43765207

This does feel like a step change in the rate at which modern AI technologies and programs are being pushed out in their PR.

zkmon

I like how he says that AI is a general-purpose technology like electricity.

quakeguy

They should invest in natural intelligence first.

LAsteNERD

PR in here for sure, but some smart context on the scientific and national security potential the DOE and National Labs see in AI.

andy99

The real title is "Q&A with Jason Pruet"

lp251

wonder if they still train all of their models using Mathematica because it was impossible to get pytorch on the classified systems

pphysch

AFAIK that was mostly due to a silly detail about MD5 hashing being restricted on FIPS-compliant systems? Or something like that. I'm pretty sure there are easy workarounds.
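On the Python side, one known escape hatch (since Python 3.9) is hashlib's `usedforsecurity` flag, which tells FIPS-mode OpenSSL the digest is non-cryptographic; whether all of PyTorch's internal MD5 uses can be routed through it is an assumption here, not something from the thread.

```python
# Hypothetical illustration: computing an MD5 digest for a
# non-security purpose (e.g. a cache key) on a FIPS-enabled host.
# usedforsecurity=False (Python 3.9+) asks OpenSSL to permit the
# otherwise-restricted algorithm for non-cryptographic use.
import hashlib

digest = hashlib.md5(b"model-checkpoint", usedforsecurity=False).hexdigest()
print(digest)  # 32 hex characters
```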

lp251

There were a bunch of reasons. You couldn't bring compiled binaries onto the red network, so you had to bring the source plus all deps onto a machine with no external internet access.

it was unpleasant.

candiddevmike

Just have Hegseth run PyTorch for them

levocardia

>pip install *

stonogo

The actual reason is "because they're being told to." Before that, there was a massive public-cloud push DOE-wide. Nobody outside of ASCR is interested in computing, and there's a lot of money to be made if you can snag an eternal rent check for hosting federal infrastructure.