
US Export Control Framework for Artificial Intelligence Diffusion

chriskanan

I have no idea if comments actually have any impact, but here is the comment I left on the document:

I am Christopher Kanan, a professor and AI researcher at the University of Rochester with over 20 years of experience in artificial intelligence and deep learning. Previously, I led AI research and development at Paige, a medical AI company, where I worked on FDA-regulated AI systems for medical imaging. Based on this experience, I would like to provide feedback on the proposed export control regulations regarding compute thresholds for AI training, particularly models requiring 10^26 computational operations.

The current regulation seems misguided for several reasons. First, it assumes that scaling models automatically leads to something dangerous. This is a flawed assumption, as simply increasing model size and compute does not necessarily result in harmful capabilities. Second, the 10^26 operations threshold appears to be based on what may be required to train future large language models using today’s methods. However, future advances in algorithms and architectures could significantly reduce the computational demands for training such models. It is unlikely that AI progress will remain tied to inefficient transformer-based models trained on massive datasets. Lastly, many companies trying to scale large language models beyond systems like GPT-4 have hit diminishing returns, shifting their focus to test-time compute. This involves using more compute to "think" about responses during inference rather than in model training, and the regulation does not address this trend at all.

Even if future amendments try to address test-time compute, the proposed regulation seems premature. There are too many unknowns in future AI development to justify using a fixed compute-based threshold as a reliable indicator of potential risk. Instead of focusing on compute thresholds or model sizes, policymakers should focus on regulating specific high-risk AI applications, similar to how the FDA regulates AI software as a medical device. This approach targets the actual use of AI systems rather than their development, which is more aligned with addressing real-world risks.

Without careful refinement, these rules risk stifling innovation, especially for small companies and academic researchers, while leaving important developments unregulated. I urge policymakers to engage with industry and academic experts to refocus regulations on specific applications rather than broadly targeting compute usage. AI regulation must evolve with the field to remain effective and balanced.

---

Of course, I have no skin in the game since I barely have any compute available to me as an academic, but the proposed rules on compute just don't make any sense to me.

energy123

  "First, it assumes that scaling models automatically leads to something dangerous"
The regulation doesn't exactly make this assumption. Not only are large models stifled; the ability to serve models via API to many users and the ability to have many researchers working in parallel on upgrading the model are also stifled. It wholesale stifles AI progress for the targeted nations.

This is an appropriate restriction on what will likely be a core part of military technology in the coming decade (e.g. drone piloting).

Look, if Russia didn't invade Ukraine and China didn't keep saying they wanted to invade Taiwan, I wouldn't have any issues with sending them millions of Blackwell chips. But that's not the world we live in. Unfortunately, this is the foreign policy reality that exists outside of the tech bubble we live in. If China ever wants to drop their ambitions over Taiwan then the export restrictions should be dropped, but not a moment sooner.

babkayaga

right. China. but Switzerland? Israel? what is going on here?

jagrsw

It could be related to the 14 Eyes with modifications (Finland and Ireland, plus close Asian allies).

https://res.cloudinary.com/dbulfrlrz/images/w_1024,h_661,c_s... (from https://protonvpn.com/blog/5-eyes-global-surveillance).

Israel, Poland, Portugal and Switzerland are also missing from it

energy123

> Switzerland? Israel?

I hope someone with a better understanding of the details can jump in, but they are both Tier 2 (not Tier 3) restricted, so maybe there are some available loopholes or Presidential override authority or something. Also I believe they can still access uncapped compute if they go via data centers built in the US.

tivert

> Even if future amendments try to address test-time compute, the proposed regulation seems premature. There are too many unknowns in future AI development to justify using a fixed compute-based threshold as a reliable indicator of potential risk.

I'm disinclined to let that be a barrier to regulation, especially of the export-control variety. It seems like letting the perfect be the enemy of the good: refusing to close the barn door you have, because you think you might have a better barn door in the future.

> Instead of focusing on compute thresholds or model sizes, policymakers should focus on regulating specific high-risk AI applications, similar to how the FDA regulates AI software as a medical device. This approach targets the actual use of AI systems rather than their development, which is more aligned with addressing real-world risks.

How do you envision that working, specifically? Especially when a lot of models are pretty general and not very application-specific?

iugtmkbdfil834

<< It seems like letting the perfect be the enemy of the good: refusing to close the barn door you have, because you think you might have a better barn door in the future.

Am I missing something? I am not an expert in the field, but from where I sit, there is literally no barn door left to close at this point, even belatedly...

ben_w

> First, it assumes that scaling models automatically leads to something dangerous.

The impression I had is with reversed causation: that it can't be all that dangerous if it's smaller than this.

Assuming this alternative interpretation is correct, the idea may still be flawed, for the same reasons you say.

SubiculumCode

I also suspect that the only real leverage the U.S. has is on big compute (i.e. requires the best chips), and less capable chips are not as controllable.

SubiculumCode

While your critiques most likely have some validity (and I am not positioned to judge it), you failed to offer a concrete policy alternative. The rules were undoubtedly made with substantial engagement from industry and academic researchers, as there is too much at stake for them not to engage, and vigorously. Likely there were no perfect policy solutions, but the drafters decided not to let the perfect stop the good enough, since timeliness matters as much as or more than the policy specifics.

behnamoh

You wrote three separate comments in this thread. Can you just combine them?

p1necone

The three comments are communicating three separate things, I think it's clearer that way.

chriskanan

The most salient thing in the document is that it put export controls on releasing the weights of models trained with 10^26 operations. While there may be some errors in my math, I think that corresponds to training a model with over 70,000 H100s for a month.
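That figure can be sanity-checked with quick back-of-the-envelope arithmetic. The per-GPU throughput below is an assumption (roughly peak dense BF16 for an H100; real training utilization is lower), not a number from the document:

```python
# Rough check: does 70,000 H100s for a month reach 10^26 operations?
H100_FLOPS = 1e15              # assumed sustained FLOP/s per GPU (optimistic)
GPUS = 70_000
SECONDS_PER_MONTH = 30 * 24 * 3600

total_ops = H100_FLOPS * GPUS * SECONDS_PER_MONTH
print(f"{total_ops:.2e}")      # 1.81e+26 -- just above the 1e26 threshold
```

At realistic utilization (40-50% of peak), the same cluster would take roughly twice as long, so "70,000 H100s for a month" is the right order of magnitude.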

I personally think the regulation is misguided, as it assumes we won't identify better algorithms/architectures. There is no reason to assume that this level of compute is what leads to dangerous capabilities.

Moreover, given the emphasis on test-time compute nowadays, and that a lot of companies seem to have hit a wall trying to scale LLMs at train time, I don't think this regulation is especially meaningful.

parsimo2010

Traditional export control is applied to advanced hardware because the US doesn't want its adversaries to have access to things that erode the US military advantage. But most hardware is only controlled at the high end of the market. Once a technology is commoditized, the low-end stuff is usually widely proliferated. Night vision goggles are an example: only the latest-generation technology is controlled, and low-end stuff can be bought online and shipped worldwide.

Applying this to your thoughts about AI: as the efficiency of training gets better, the ability to train models becomes commoditized, and those models would no longer be considered advantageous and would not need to be controlled. So maybe setting the export control based on the number of operations is a good idea: it naturally allows efficiently trained models to be exported, since they wouldn't be hard to train in other countries anyway.

As computing power scales, maybe the 10^26 limit will need to be revised, but setting the limit based on the scale of the training is a good idea since it is actually measurable. You couldn't realistically set the limit based on the capability of the model, since benchmarks seem to become irrelevant every few months due to contamination.

ein0p

I wonder what makes people believe that the US currently enjoys any kind of meaningful "military advantage" over e.g. China, especially after failing to defeat the Taliban and running from the Houthis. This seems like a very dangerous belief to have. China has 4x the population and outproduces us 10:1 in widgets (2:1 in dollars). Considering just e.g. steel, China produces about 1 billion metric tons of it per year. We produce 80 million tons. Concrete? 2.4B tons vs 96M tons. 70+% of the world's electronics. Their shipbuilding industry is 230x more productive (not a typo). Etc, etc.

The short-term profits US businesses have been enjoying over the past 25 years came at a staggering long-term cost. The sanctions won't even slow down the Chinese MIC, and in the long run they will cause them to develop their own high-end silicon sector (obviating worldwide demand for our own). They're already at 7nm, at a low yield. That is more than sufficient for their MIC, including the AI chips used there, currently and in the foreseeable future.

thorum

The practical problem I see is that unless US AI labs have perfect security (against both cyber attacks and physical espionage), which they don’t, there is no way to prevent foreign intelligence agencies from just stealing the weights whenever they want.

kube-system

Of course. They're mitigations, not preventions. Few defenses are truly preventative. The point is to make it difficult. They know bad actors will try to circumvent it.

This isn't lost on the authors. It is explicitly recognized in the document:

> The risk is even greater with AI model weights, which, once exfiltrated by malicious actors, can be copied and sent anywhere in the world instantaneously.

thorum

> The point is to make it difficult.

Does it, though?

iugtmkbdfil834

This. We put toasters on the internet and are no longer surprised when services we use send us breach notices at regular intervals. The only thing this regulation would do, as written, is add an interesting choke point for compliance regulators to obsess over.

etiam

It could be nice to have some artificial pressure to use more efficient algorithms, though. The current game of just throwing in more data centers and power plants may be convenient for those who can afford it, but it's also intellectually embarrassing.


permo-w

this is like saying that regulating automatic weapons is misguided because someone might invent a gun that is equally dangerous without being automatic

iugtmkbdfil834

This appears to be a very shallow take and a lazy argument that does not capture even the basic nuance of the issue at hand. For the sake of expanding it a little, and hopefully moving it in the right direction, I will point out that the BIS framework discusses the use of advanced models as dual-use goods (i.e., not automatically automatic weapons).

edit (removed exasperated sigh; it did not add anything)

HeatrayEnjoyer

We can't let perfect be the enemy of good, regulations can be updated. Capping FLOPs is a decent starter reg.

reaperman

Counterpoint would be the $7.25 minimum wage. It can be updated, but politicians aren't good at doing that. In both cases (FLOPS and minimum wage), at least a lower bound for inflation should be included:

Something like: 10^26 FLOPS * 1.5^n where n is the number of years since the regulation was published.
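As a quick sketch of that escalator, where the 1.5x annual growth factor is reaperman's illustrative number rather than anything in the rule:

```python
# Illustrative auto-escalating compute cap: 10^26 FLOPs at publication,
# growing 50% per year so the threshold tracks hardware progress.
BASE_CAP = 1e26   # FLOPs, from the proposed rule
GROWTH = 1.5      # assumed annual growth factor

def cap_after(years: int) -> float:
    """Export-control threshold n years after publication."""
    return BASE_CAP * GROWTH ** years

for n in range(4):
    print(n, f"{cap_after(n):.2e}")
```

After three years the cap would sit at about 3.4 * 10^26 FLOPs, so a model legal to export then would have been controlled at publication time.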

Cyph0n

I don’t see an issue here, because our legislators probably care more about FLOPS than humans.

tivert

> Something like: 10^26 FLOPS * 1.5^n where n is the number of years since the regulation was published.

Why would you want to automatically increase the cap algorithmically like that?

The purpose of a regulation like this is totally different than the minimum wage. If the point is to keep an adversary behind, you want them to stay as far behind as you can manage for as long as possible.

So if you increase the cap, you only want to increase it when it won't help the adversary (because they have alternatives, for instance).

geuis

This smells a lot like the misguided crypto export laws in the 90s that hampered browser security for years.

philjohn

And don't forget the amazing workaround Zimmerman of PGP fame came up with - the source code in printed form was protected 1A speech, so it was published, distributed, and then scanned and OCR'd outside the US - https://en.wikipedia.org/wiki/Pretty_Good_Privacy#Criminal_i...

mmaunder

And don’t forget Thawte which ended up selling strong SSL outside the US, cornering the international market thanks to US restrictions, and getting bought by Verisign for $600M.

rangestransform

I hope this time we finally get a Supreme Court ruling that export controls on code are unconstitutional, instead of the feds chickening out like last time

cheald

It might be somewhat prohibitive to print the model weights for any sufficiently large model, though.

galangalalgol

Using 2d barcodes you can fit ~20MB per page. Front and back you could probably fit a model that violated the rule on less than a thousand pages.

Edit: maybe 10k pages
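A rough page count, assuming a ~100B-parameter model stored at 2 bytes per parameter and taking the ~20 MB-per-side barcode figure at face value (both numbers are guesses for illustration):

```python
# Estimate how many printed pages of 2D barcodes a large model would need.
PARAMS = 100e9          # assumed parameter count for a threshold-scale model
BYTES_PER_PARAM = 2     # 16-bit weights
MB_PER_SIDE = 20        # claimed 2D-barcode capacity per printed side
SIDES_PER_PAGE = 2      # front and back

total_mb = PARAMS * BYTES_PER_PARAM / 1e6
pages = total_mb / (MB_PER_SIDE * SIDES_PER_PAGE)
print(f"{pages:,.0f} pages")   # 5,000 pages -- closer to the 10k estimate
```

So the "maybe 10k pages" correction looks right in order of magnitude, and "less than a thousand pages" only works for much smaller models.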

slt2021

how about printing URL to download the weights file?

z2

I for one would love to see model weights published in hardcover book form.

dkga

What a throwback to the time when some edgy folks would share printed codes in Pascal… I even remember seeing a hard copy of a binary in hex which was best not to execute.

kube-system

It hampered the security of a lot of things. That wasn't misguided -- that was the point.

China, Russia, and Iran used Internet Explorer too.

chriskanan

I'm not sure why the link no longer works, but this one works. The link should be updated to this one: https://www.federalregister.gov/documents/2025/01/15/2025-00...

cshimmin

@dang

layer8

“@dang” doesn’t do anything. You need to email hn@ycombinator.com.

cube2222

It’s worth noting that this splits countries into three levels - first without restrictions, second with medium restrictions, third with harsh restrictions.

And the second level, for some reason, includes (among others) a bunch of countries that would normally be seen as close US allies - e.g. some NATO countries (most of Central/Eastern Europe).

pjmlp

I see the return of cold war computing model, where many countries had their own computer platforms and programming languages.

Which apparently might be a good outcome to FOSS operating systems, with national distributions like Kylin.

As European I vote for SuSE.

consumer451

This smells about as well informed as the genius move that forced Firefly Aerospace's Ukrainian owner, Max Polyakov, to divest, a US government position that was widely derided by space industry watchers and has now been reversed.

This might be a product of the USA being a gerontocracy.

surfingdino

Someone still sees Eastern Europe as a provider of cheap brainpower. This is insulting.

Chance-Device

But is it untrue?

inglor_cz

> includes (among others) a bunch of countries that would normally be seen as close US allies - e.g. some NATO countries (most of Central/Eastern Europe).

Yeah, this is really a bit insulting.

tivert

>> And the second level, for some reason, includes (among others) a bunch of countries that would normally be seen as close US allies - e.g. some NATO countries (most of Central/Eastern Europe).

> Yeah, this is really a bit insulting.

So you're insulted some country or other wasn't included in:

> First, this rule creates an exception in new § 740.27 for all transactions involving certain types of end users in certain low-risk destinations. Specifically, these are destinations in which: (1) the government has implemented measures to prevent diversion of advanced technologies, and (2) there is an ecosystem that will enable and encourage firms to use advanced AI models to advance the common national security and foreign policy interests of the United States and its allies and partners.

?

IMHO, it's silly to get insulted over something like that. Your feelings are not a priority for an export control law.

Taiwan, even though it's a US ally, is only allowed limited access to certain sensitive US technology it deploys (IIRC, something about Patriot Missile seeker heads, for instance), because their military is full of PRC spies (e.g. https://www.smh.com.au/technology/chinese-spies-target-taiwa...), so if they had access the system would likely be compromised. It's as simple as that.

intunderflow

We're sorry, an error has occurred A general error occurred while processing your request.

mlfreeman

What do the regulators writing this intend for this to slow down/stop?

I can't seem to find any information about that anywhere.

HeatrayEnjoyer

Obviously to prevent proliferation of dual-use technologies to potentially adversarial actors. The same intent behind restricting high-fidelity infrared camera and phased radar equipment.

slt2021

China is leading the AI race with their open-source DeepSeek-V3. It is laughable to think that this regulation will stop them. The USA should actually collaborate, not isolate.

Chinese engineers have the capability to get around these silly sanctions, for example by renting cloud GPUs from US companies to get access to as much compute as they want, or by using consumer-grade compute or their homegrown Chinese CPUs/GPUs.

USA should actually embrace open source and collaborate together, as we are still in the very beginning of AI revolution

kube-system

The entire point is to not collaborate, because this tech is being used for military purposes. The US wants to throw up roadblocks to make it more difficult. Obviously, against a foreign military, anything is a mitigation and not a prevention.

> China engineers have capability to get around these silly sanctions, by renting cloud GPUs from USA companies for example

That's why they're also moving towards KYC for cloud providers.

https://www.federalregister.gov/documents/2024/01/29/2024-01...

chatmasta

Maybe they intend for it to speed up/start implementation of federal agencies and regulations. The intent is to exert control over an emerging market while it’s still comprised of cooperative participants. Regulators want to define the regulatory frameworks rather than relying on self-policing before it’s “too late.”

Let’s see if this survives the next administration. Normally I’d be skeptical, but Musk has openly warned about the “dangers” of AI and will likely embrace attempts to regulate it, especially since he’s in a position to benefit from the regulatory capture. In fact he’s doubly well placed to take advantage of it. Regardless of his politics, xAI is a market leader and would already be naturally predisposed to participate in regulatory capture. But now he also enjoys unprecedented influence over policymaking (Mar-a-Lago) and regulatory reform (DOGE). It’s hard to see how he wouldn’t capitalize on that position.

chronic4930018

> Regardless of his politics, xAI is a market leader

Lol what?

The only people who think this are Elon fanboys.

I guess you think Tesla is the self-driving market leader, too. Okay.

chatmasta

I don’t even use it. But in terms of funding, it’s in the top 5, according to Crunchbase data [0].

[0] https://news.crunchbase.com/ai/startup-billion-dollar-fundra...

DSingularity

Hard to take comments like this seriously when you can’t even be bothered to be associated with it from your primary account.

resters

Strong opposition to this regulation seems to be one of the main things that led a16z, Oracle, etc. to go all in for Donald Trump. It's interesting that Meta too fought the regulation by its unprecedented open sourcing of model weights.

Regardless of who is currently in the lead, China has its own GPUs and a lot of very smart people figuring out algorithmic and model design optimizations, so China will likely be in the lead more obviously within 1-2 years, both in hardware and model design.

This law is likely not going to be effective in its intended purpose, and it will prevent peaceful collaboration between US and Chinese firms, the kind that helps prevent war.

The US is moving toward a system where government controls and throttles technology and picks winners. We should all fight to stop this.

tokioyoyo

> The US is moving toward a system where government controls and throttles technology and picks winners

What else can it do? They don’t want to lose their lead, and whatever restrictions they’ve been putting on China et al. have led to the exact desired outcomes so far. The idea is to slow down a beast that has very set goals (e.g. to become a high-tech manufacturing and innovation center) while playing catch-up (like on-shoring some manufacturing).

Personally, I’m skeptical that it will work, because by raw number of hands on deck, they have the advantage. And it’s fairly hard when your institutional knowledge of doing big things is a bit outdated. I would argue a good bet in North America would be finding a financially engineered solution to get Asian companies to bring their workers and knowledge here and ramp us up. Kinda like the TSMC factory. Basically the same thing China did in the 2000s with western companies.

kube-system

> The US is moving toward a system where government controls and throttles technology and picks winners.

Moving towards? The US has a pretty solid history of doing a great deal of this (and more) in the 20th century. But so did all of the world's powers... as they all continue to do today. It seems to be an inherent part of being a world power.

blackeyeblitzar

I agree this law won’t be effective in its intended purpose, and that China will develop models of their own that are sufficiently competitive (as we’ve already seen). However, I think seeking “peaceful collaboration” between the US (or Europe or many others) and China - either between governments or private firms - is a naive strategy that will simply lead to the US being replaced by a more dangerous superpower that does not respect the values of free and democratic societies.

I also think that to a great extent, we’re already at war. China has not respected intellectual property rights, conducted espionage against both companies and government agencies, repeatedly performed successful cyberattacks, helped Russia in the Ukraine conflict, severed telecommunications cables, and more. They’ve also built up the world’s largest navy, expanded their nuclear arsenal, and are working on projects to undermine the status of the US Dollar. All of this should have been met with a much stronger and forceful reaction, since clearly it does not fit into the notion of “peaceful collaboration”.

China’s unpeaceful actions aren’t limited to the West. China annexed much of its current territory illegally and through force (see Xinjiang and Tibet). When Hong Kong was handed back, it was under a treaty that China now says is not valid. China has been trying to steal territory from neighboring countries repeatedly, for example with Bhutan or India. They’ve also threatened to take over Taiwan many times now, and may do so soon. They’re about to build a dam that will prevent water from reaching Bangladesh and force them to become subjugated. The only peaceful and just outcome is for those territories to be freed from the control of China - which will require help from the West (sanctions, tariffs, blockades, and maybe even direct intervention).

Even within China, the CCP rules with an iron fist and violates virtually all principles of free societies and classically liberal values that we value in the West. I don’t see that changing. And if it doesn’t, how can they be trusted with more economic and military power? That’s why I don’t think we should seek peaceful collaboration with China. We just need smarter strategies than this hasty AI declaration.

9283409232

I don't think you're wrong but Big Tech is bending the knee to Trump because he will be picking the winners.

clhodapp

One interesting geopolitical fact about this document that's not being discussed much is the way it includes Taiwan in lists of "countries".

Usually, the US government tries not to do that.

veggieroll

The compute limit is dead on arrival, because models are becoming more capable with less training anyways. (See DeepSeek, Phi-4)

pjmlp

It is going to be like it was in the 1990s with PGP and such all over again.