
Framework for Artificial Intelligence Diffusion

chriskanan

I have no idea if comments actually have any impact, but here is the comment I left on the document:

I am Christopher Kanan, a professor and AI researcher at the University of Rochester with over 20 years of experience in artificial intelligence and deep learning. Previously, I led AI research and development at Paige, a medical AI company, where I worked on FDA-regulated AI systems for medical imaging. Based on this experience, I would like to provide feedback on the proposed export control regulations regarding compute thresholds for AI training, particularly models requiring 10^26 computational operations.

The current regulation seems misguided for several reasons. First, it assumes that scaling models automatically leads to something dangerous. This is a flawed assumption, as simply increasing model size and compute does not necessarily result in harmful capabilities. Second, the 10^26 operations threshold appears to be based on what may be required to train future large language models using today’s methods. However, future advances in algorithms and architectures could significantly reduce the computational demands for training such models. It is unlikely that AI progress will remain tied to inefficient transformer-based models trained on massive datasets. Lastly, many companies trying to scale large language models beyond systems like GPT-4 have hit diminishing returns, shifting their focus to test-time compute. This involves using more compute to "think" about responses during inference rather than in model training, and the regulation does not address this trend at all.

Even if future amendments try to address test-time compute, the proposed regulation seems premature. There are too many unknowns in future AI development to justify using a fixed compute-based threshold as a reliable indicator of potential risk. Instead of focusing on compute thresholds or model sizes, policymakers should focus on regulating specific high-risk AI applications, similar to how the FDA regulates AI software as a medical device. This approach targets the actual use of AI systems rather than their development, which is more aligned with addressing real-world risks.

Without careful refinement, these rules risk stifling innovation, especially for small companies and academic researchers, while leaving important developments unregulated. I urge policymakers to engage with industry and academic experts to refocus regulations on specific applications rather than broadly targeting compute usage. AI regulation must evolve with the field to remain effective and balanced.

---

Of course, I have no skin in the game since I barely have any compute available to me as an academic, but the proposed rules on compute just don't make any sense to me.

energy123

  "First, it assumes that scaling models automatically leads to something dangerous"
The regulation doesn't exactly make this assumption. It doesn't just stifle large models: it also stifles the ability to serve models via API to many users, and the ability to have many researchers working in parallel on upgrading a model. It wholesale stifles AI progress for the targeted nations.

This is an appropriate restriction on what will likely be a core part of military technology in the coming decade (e.g., drone piloting).

Look, if Russia hadn't invaded Ukraine and China didn't keep saying it wants to invade Taiwan, I wouldn't have any issue with sending them millions of Blackwell chips. But that's not the world we live in. Unfortunately, this is the foreign policy reality that exists outside of the tech bubble we live in. If China ever drops its ambitions over Taiwan, then the export restrictions should be dropped, but not a moment sooner.

logicchains

Limiting US GPU exports to unaligned countries is completely counterproductive as it creates a market in those countries for Chinese GPUs, accelerating their development even faster. Because a mediocre Huawei GPU is better than no GPU. And it harms the revenue of US-aligned GPU companies, slowing their development.

dr_dshiv

Interesting theory. Any evidence that this is how the world really works? (And, is there a catchy name for the phenomenon?)

babkayaga

Right. China, sure. But Switzerland? Israel? What is going on here?

thatcat

Israel is a known industrial espionage threat to the US; how do you think they got nuclear weapons? Some analysts say they're the largest threat after China. Not to mention they're currently using AI in targeting systems while under investigation for war crimes.

jagrsw

It could be related to the 14 Eyes with modifications (Finland and Ireland, plus close Asian allies).

https://res.cloudinary.com/dbulfrlrz/images/w_1024,h_661,c_s... (from https://protonvpn.com/blog/5-eyes-global-surveillance).

Israel, Poland, Portugal, and Switzerland are also missing from it.

energy123

> Switzerland? Israel?

I hope someone with a better understanding of the details can jump in, but they are both Tier 2 (not Tier 3) restricted, so maybe there are some available loopholes or Presidential override authority or something. Also I believe they can still access uncapped compute if they go via data centers built in the US.

tivert

> Even if future amendments try to address test-time compute, the proposed regulation seems premature. There are too many unknowns in future AI development to justify using a fixed compute-based threshold as a reliable indicator of potential risk.

I'm disinclined to let that be a barrier to regulation, especially of the export-control variety. It seems like letting the perfect be the enemy of the good: refusing to close the barn door you have, because you think you might have a better barn door in the future.

> Instead of focusing on compute thresholds or model sizes, policymakers should focus on regulating specific high-risk AI applications, similar to how the FDA regulates AI software as a medical device. This approach targets the actual use of AI systems rather than their development, which is more aligned with addressing real-world risks.

How do you envision that working, specifically? Especially when a lot of models are pretty general and not very application-specific?

iugtmkbdfil834

<< It seems like letting the perfect be the enemy of the good: refusing to close the barn door you have, because you think you might have a better barn door in the future.

Am I missing something? I am not an expert in the field, but from where I sit, there literally is no barn door to close at this point; it's already too late.

ben_w

> First, it assumes that scaling models automatically leads to something dangerous.

The impression I had is that the causation is reversed: the assumption is that a model can't be all that dangerous if it's smaller than this.

Assuming this alternative interpretation is correct, the idea may still be flawed, for the same reasons you say.

SubiculumCode

I also suspect that the only real leverage the U.S. has is on big compute (i.e. requires the best chips), and less capable chips are not as controllable.

SubiculumCode

While your critiques most likely have some validity (and I am not positioned to judge their validity), you failed to offer a concrete policy alternative. The rules were undoubtedly made with substantial engagement from industry and academic researchers, as there is too much at stake for them not to engage, and vigorously. Likely there were no perfect policy solutions, but the drafters decided not to let the perfect stop the good enough, since timeliness matters as much as or more than the policy specifics.

logicchains

Doing nothing is a better alternative, because these restrictions will just encourage neutral countries to purchase Chinese GPUs, because their access to US GPUs is limited by these regulations. This will accelerate the growth of Chinese GPU companies and slow the growth of US-aligned ones; it's basically equivalent to the majority of nations in the world placing sanctions on NVidia.

15155

> these rules risk stifling innovation

These rules intentionally "stifle innovation" for foreigners - this is a feature, not a bug.

behnamoh

You wrote three separate comments in this thread. Can you just combine them?

p1necone

The three comments are communicating three separate things, I think it's clearer that way.


chriskanan

The most salient thing in the document is that it puts export controls on releasing the weights of models trained with 10^26 operations. While there may be some errors in my math, I think that corresponds to training a model with over 70,000 H100s for a month.
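
For what it's worth, here's a rough way to sanity-check that figure (the per-GPU throughput and utilization below are my own ballpark assumptions, not anything from the document):

    # Rough check of "70,000 H100s for a month" against the 10^26 threshold.
    PEAK_FLOPS = 1e15          # assumed dense BF16 throughput per H100, FLOP/s
    UTILIZATION = 0.4          # assumed model-FLOPs utilization of a real run
    GPUS = 70_000
    SECONDS = 30 * 24 * 3600   # one month

    total_ops = GPUS * SECONDS * PEAK_FLOPS * UTILIZATION
    print(f"{total_ops:.1e}")  # ~7.3e25, i.e. the same order as 10^26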

I personally think the regulation is misguided, as it assumes we won't identify better algorithms and architectures. There is no reason to assume that this level of compute is what leads to dangerous capabilities.

Moreover, given the emphasis on test-time compute nowadays, and that a lot of companies seem to have hit a wall trying to scale LLMs at train time, I don't think this regulation is especially meaningful.

parsimo2010

Traditional export control is applied to advanced hardware because the US doesn't want its adversaries to have access to things that erode the US military advantage. But most hardware is only controlled at the high end of the market. Once a technology is commoditized, the low-end stuff is usually widely proliferated. Night-vision goggles are an example: only the latest-generation technology is controlled, and low-end gear can be bought online and shipped worldwide.

Applying this to your thoughts about AI: as the efficiency of training improves, the ability to train models becomes commoditized, and those models would no longer be considered advantageous and would not need to be controlled. So maybe setting the export control based on the number of operations is a good idea; it naturally allows efficiently trained models to be exported, since they wouldn't be hard to train in other countries anyway.

As computing power scales, maybe the 10^26 limit will need to be revised, but setting the limit based on the scale of the training is a good idea, since it is actually measurable. You couldn't realistically set the limit based on the capability of the model, since benchmarks seem to become irrelevant every few months due to contamination.

ein0p

I wonder what makes people believe that the US currently enjoys any kind of meaningful "military advantage" over, e.g., China, especially after failing to defeat the Taliban and running from the Houthis. This seems like a very dangerous belief to hold. China has 4x the population and outproduces us 10:1 in widgets (2:1 in dollars). Consider just steel: China produces about 1 billion metric tons per year; we produce 80 million tons. Concrete? 2.4B tons vs. 96M tons. 70+% of the world's electronics. Their shipbuilding industry is 230x more productive (not a typo). Etc., etc.

The short-term profits US businesses have enjoyed over the past 25 years came at a staggering long-term cost. The sanctions won't even slow down the Chinese MIC, and in the long run they will cause China to develop its own high-end silicon sector (obviating the need for ours worldwide). They're already at 7nm, albeit at low yield. That is more than sufficient for their MIC, including the AI chips used there, currently and for the foreseeable future.

parsimo2010

a) Just because the government has policies doesn't mean they are 100% effective.

b) Export controls aren't expected to completely prevent a country from gaining access to a technology, just to make it take longer and require more resources.

You may also be misunderstanding how much money China will spend to develop its semiconductor industry. Sure, they will eventually catch up to the West, but the money they spend along the way won't be spent on fighter jets, missiles, and ships. It's still preferable (from the US perspective) to having no export controls and China being able to import semiconductor designs, manufacturing hardware, and AI models trained using US resources. At least this way China is a few months behind and will have to spend a few billion yuan to get there.

airstrike

Everyone also thought Russia had a strong military, yet look how that worked out.

thorum

The practical problem I see is that unless US AI labs have perfect security (against both cyber attacks and physical espionage), which they don’t, there is no way to prevent foreign intelligence agencies from just stealing the weights whenever they want.

kube-system

Of course. They're mitigations, not preventions. Few defenses are truly preventative. The point is to make it difficult. They know bad actors will try to circumvent it.

This isn't lost on the authors. It is explicitly recognized in the document:

> The risk is even greater with AI model weights, which, once exfiltrated by malicious actors, can be copied and sent anywhere in the world instantaneously.

thorum

> The point is to make it difficult.

Does it, though?

iugtmkbdfil834

This. We put toasters on the internet and are no longer surprised when services we use send us breach notices at regular intervals. The only thing this regulation would do, as written, is add an interesting choke point for compliance regulators to obsess over.

logicchains

> The most salient thing in the document is that it puts export controls on releasing the weights of models trained with 10^26 operations.

Does this affect open source? If so, it'll be absolutely disastrous for the US in the longer term, as eventually China will be able to train open weights models with more than that many operations, and everyone using open weights models will switch to Chinese models because they're not artificially gimped like the US-aligned ones. China already has the best open weights models currently available, and regulation like this will just further their advantage.

gyre

"consistent with its general practice, BIS will not require a license for the export of the model weights of open-weight models"

etiam

It could be nice to have some artificial pressure toward more efficient algorithms, though. The current game of just throwing in more data centers and power plants may be convenient for those who can afford it, but it's also intellectually embarrassing.


permo-w

this is like saying that regulating automatic weapons is misguided because someone might invent a gun that is equally dangerous without being automatic

iugtmkbdfil834

This appears to be a very shallow take and a lazy argument that does not capture even the basic nuance of the issue at hand. For the sake of expanding it a little and hopefully moving it in the right direction, I will point out that the BIS framework discusses advanced models as dual-use goods (i.e., not automatically automatic weapons).

Edit: removed exasperated sigh; it does not add anything.

HeatrayEnjoyer

We can't let perfect be the enemy of good, regulations can be updated. Capping FLOPs is a decent starter reg.

reaperman

A counterpoint would be the $7.25 minimum wage. It can be updated, but politicians aren't good at doing that. In both cases (FLOPs and the minimum wage), at least a lower bound for inflation should be included:

Something like: 10^26 FLOPs * 1.5^n, where n is the number of years since the regulation was published.
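
A minimal sketch of how that escalating cap would behave, assuming it compounds once per whole year:

    # Sketch of the proposed auto-escalating cap: 10^26 FLOPs * 1.5^n,
    # where n is whole years since publication (annual compounding assumed).
    def threshold(n: int) -> float:
        return 1e26 * 1.5 ** n

    for n in range(6):
        print(n, f"{threshold(n):.2e}")
    # year 0: 1.00e+26 ... year 5: 7.59e+26, so the cap rises ~7.6x in 5 years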

tivert

> Something like: 10^26 FLOPS * 1.5^n where n is the number of years since the regulation was published.

Why would you want to automatically increase the cap algorithmically like that?

The purpose of a regulation like this is totally different from that of the minimum wage. If the point is to keep an adversary behind, you want them to stay as far behind as you can manage for as long as possible.

So if you increase the cap, you only want to increase it when doing so won't help the adversary (because they have alternatives, for instance).

Cyph0n

I don’t see an issue here, because our legislators probably care more about FLOPS than humans.

geuis

This smells a lot like the misguided crypto export laws in the 90s that hampered browser security for years.

philjohn

And don't forget the amazing workaround Zimmerman of PGP fame came up with - the source code in printed form was protected 1A speech, so it was published, distributed, and then scanned and OCR'd outside the US - https://en.wikipedia.org/wiki/Pretty_Good_Privacy#Criminal_i...

mmaunder

And don’t forget Thawte which ended up selling strong SSL outside the US, cornering the international market thanks to US restrictions, and getting bought by Verisign for $600M.

rangestransform

I hope this time we finally get a Supreme Court ruling that export controls on code are unconstitutional, instead of the feds chickening out like last time

tzs

I doubt that would work for model weights because they are generated algorithmically rather than being written by humans, which probably means that they are not speech.

mbil

Not to mention how much paper it would take to print 500B weights

z2

I for one would love to see model weights published in hardcover book form.

dkga

What a throwback to the time when some edgy folks would share printed code in Pascal… I even remember seeing a hard copy of a binary in hex which was best not to execute.

cheald

It might be somewhat prohibitive to print the model weights for any sufficiently large model, though.

galangalalgol

Using 2D barcodes you can fit ~20 MB per page. Front and back, you could probably fit a model that violates the rule in less than a thousand pages.

Edit: maybe 10k pages
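
A quick check of the arithmetic, taking the ~20 MB/page figure above and the 500B-weight figure mentioned earlier in the thread (the bytes-per-weight precision is my assumption):

    # Page-count estimate for printing weights as 2D barcodes.
    params = 500e9                      # weight count from the thread
    bytes_total = params * 2            # assume 16-bit (2-byte) weights
    mb_per_sheet = 20 * 2               # ~20 MB per side, front and back
    sheets = bytes_total / (mb_per_sheet * 1e6)
    print(f"{sheets:,.0f} sheets")      # ~25,000 sheets at 16 bits
    # 4-bit quantization cuts that ~4x, to roughly 6,000 sheets, which is
    # closer to the "maybe 10k pages" edit than to "less than a thousand".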

slt2021

How about printing a URL to download the weights file?

kube-system

It hampered the security of a lot of things. That wasn't misguided -- that was the point.

China, Russia, and Iran used Internet Explorer too.

cube2222

It’s worth noting that this splits countries into three tiers: the first without restrictions, the second with medium restrictions, and the third with harsh restrictions.

And the second level, for some reason, includes (among others) a bunch of countries that would normally be seen as close US allies - e.g. some NATO countries (most of Central/Eastern Europe).

pjmlp

I see the return of the Cold War computing model, where many countries had their own computer platforms and programming languages.

That might actually be a good outcome for FOSS operating systems, with national distributions like Kylin.

As a European, I vote for SuSE.

surfingdino

Someone still sees Eastern Europe as a provider of cheap brainpower. This is insulting.

15155

The value of a service is whatever someone is willing to pay for it - and conversely, the price someone is willing to render the service for.

These folks aren't "forced" to provide "cheap brainpower:" they are offering services at their market rate.

Chance-Device

But is it untrue?

consumer451

This smells about as well informed as the genius move that forced Firefly Aerospace's Ukrainian owner, Max Polyakov, to divest, a US government position that was widely derided by space industry watchers and has now been reversed.

This might be a product of the USA being a gerontocracy.

inglor_cz

> includes (among others) a bunch of countries that would normally be seen as close US allies - e.g. some NATO countries (most of Central/Eastern Europe).

Yeah, this is really a bit insulting.

tivert

>> And the second level, for some reason, includes (among others) a bunch of countries that would normally be seen as close US allies - e.g. some NATO countries (most of Central/Eastern Europe).

> Yeah, this is really a bit insulting.

So you're insulted that some country or other wasn't included in:

> First, this rule creates an exception in new § 740.27 for all transactions involving certain types of end users in certain low-risk destinations. Specifically, these are destinations in which: (1) the government has implemented measures to prevent diversion of advanced technologies, and (2) there is an ecosystem that will enable and encourage firms to use advanced AI models to advance the common national security and foreign policy interests of the United States and its allies and partners.

?

IMHO, it's silly to get insulted over something like that. Your feelings are not a priority for an export control law.

Taiwan, even though it's a US ally, is only allowed limited access to certain sensitive US technology it deploys (IIRC, something about Patriot missile seeker heads, for instance), because its military is full of PRC spies (e.g. https://www.smh.com.au/technology/chinese-spies-target-taiwa...), so if they had access, the system would likely be compromised. It's as simple as that.

intunderflow

We're sorry, an error has occurred A general error occurred while processing your request.

mlfreeman

What do the regulators writing this intend for this to slow down/stop?

I can't seem to find any information about that anywhere.

HeatrayEnjoyer

Obviously, to prevent the proliferation of dual-use technologies to potentially adversarial actors. The same intent lies behind restricting high-fidelity infrared cameras and phased-array radar equipment.

slt2021

China is leading the AI race with its open-source DeepSeek-V3. It is laughable to think that this regulation will stop them. The USA should actually collaborate, not isolate.

Chinese engineers have the capability to get around these silly sanctions, for example by renting cloud GPUs from US companies to get access to as much compute as they want, or by using consumer-grade compute or their homegrown Chinese CPUs/GPUs.

The USA should embrace open source and collaborate, as we are still at the very beginning of the AI revolution.

kube-system

The entire point is to not collaborate, because this tech is being used for military purposes. The US wants to throw up roadblocks to make it more difficult. Obviously, against a foreign military, anything is a mitigation and not a prevention.

> China engineers have capability to get around these silly sanctions, by renting cloud GPUs from USA companies for example

That's why they're also moving towards KYC for cloud providers.

https://www.federalregister.gov/documents/2024/01/29/2024-01...

chatmasta

Maybe they intend for it to speed up/start implementation of federal agencies and regulations. The intent is to exert control over an emerging market while it’s still comprised of cooperative participants. Regulators want to define the regulatory frameworks rather than relying on self-policing before it’s “too late.”

Let’s see if this survives the next administration. Normally I’d be skeptical, but Musk has openly warned about the “dangers” of AI and will likely embrace attempts to regulate it, especially since he’s in a position to benefit from the regulatory capture. In fact, he’s doubly well placed to take advantage of it. Regardless of his politics, xAI is a market leader and would already be naturally predisposed to participate in regulatory capture. But now he also enjoys unprecedented influence over policymaking (Mar-a-Lago) and regulatory reform (DOGE). It’s hard to see how he wouldn’t capitalize on that position.

chronic4930018

> Regardless of his politics, xAI is a market leader

Lol what?

The only people who think this are Elon fanboys.

I guess you think Tesla is the self-driving market leader, too. Okay.

chatmasta

I don’t even use it. But in terms of funding, it’s in the top 5, according to Crunchbase data [0].

[0] https://news.crunchbase.com/ai/startup-billion-dollar-fundra...

DSingularity

Hard to take comments like this seriously when you can’t even be bothered to be associated with it from your primary account.

wslh

This feels like déjà vu from the crypto wars of the 1990s. If that experience is any guide, it is impossible to repress knowledge without violence, and trying only motivates more people to hack the system. Good times ahead: "PGP released its source code as a book to get around US export law" <https://news.ycombinator.com/item?id=7885238>

Hizonner

Not the same situation at all. PGP would run on any computer you happened to have around. The source code was small enough to fit in a book. The people who already had the code wanted to release it. Lots of people could have rewritten it relatively quickly.

The ML stuff they're worried about takes a giant data center to train and an unusually beefy computer even to run. The weights are enormous and the training data even more enormous. Most of the people who have the models, especially the leading ones, treat them as trade secrets and also try to claim copyright in them. You can only recreate them if you have millions to spend and the aforementioned big data center.

wslh

> The ML stuff they're worried about takes a giant data center to train and an unusually beefy computer even to run.

Now, consider this: the Palm [1] couldn’t even create an RSA [2] public/private key pair in “user time.” The pace of technological advancement is astonishing, and new techniques continually emerge to overcome current limitations. For example, in 1980, Intel was selling mathematical coprocessors [3] that were cutting-edge at the time but would be laughable today. It’s reasonable to expect that the field of machine learning will follow a similar trajectory, making what seems computationally impractical now far more accessible in the future.

[1] https://en.wikipedia.org/wiki/Palm_(PDA)

[2] https://en.wikipedia.org/wiki/RSA_(cryptosystem)

[3] https://es.wikipedia.org/wiki/Intel_8087

clhodapp

One interesting geopolitical fact about this document that's not being discussed much is the way it includes Taiwan in lists of "countries".

Usually, the US government tries not to do that.

casebash

Most of the comments here only make sense under a model where AI isn't going to become extremely powerful in the near term.

If you think upcoming models aren't going to be very powerful, then you'll probably endorse business-as-usual responses, such as rejecting any policy that isn't perfect or insisting on a high bar of evidence before regulating.

On the other hand, if your world model says AI is going to provide malicious actors with extremely powerful and dangerous technologies within the next few years, then instead of looking radical, proposals like this start to appear extremely timid.

veggieroll

The compute limit is dead on arrival, because models are becoming more capable with less training anyway (see DeepSeek and Phi-4).