Anthropic irks White House with limits on models’ use

impossiblefork

Very strange writing from semafor.com

>For instance, an agency could pay for a subscription or negotiate a pay-per-use contract with an AI provider, only to find out that it is prohibited from using the AI model in certain ways, limiting its value.

This is of course quite false. They know the restrictions when they sign the contract.

bri3d

This whole article is weird to me.

This reads to me like:

* Some employee somewhere wanted to click the shiny Claude button in the AWS FedRamp marketplace

* Whatever USG legal team was involved said "that domestic surveillance clause doesn't work for us" and tried to redline it.

* Anthropic rejected the redline.

* Someone got mad and went to Semafor.

It's unclear that this has even really escalated prior to the article, or that Anthropic are really "taking a stand" in a major way (after all, their model is already on the Fed marketplace) - it just reads like a typical fed contract negotiation with a squeaky wheel in it somewhere.

The article is also full of other weird nonsense like:

> Traditional software isn’t like that. Once a government agency has access to Microsoft Office, it doesn’t have to worry about whether it is using Excel to keep track of weapons or pencils.

While it might not be possible to enforce them as easily, many, many shrink-wrap EULAs restrict the way in which software can be used. Almost always there is an EULA carve-out with a different tier for lifesaving or safety uses (due to liability / compliance concerns) and for military uses (sometimes for ethics reasons but usually due to a desire to extract more money from those customers).

giancarlostoro

> due to a desire to extract more money from those customers

If it gives you high-priority support, I don't care; if it's the same tier of support, then that's just obnoxiously greedy.

matula

There are (or at least WERE) entire divisions dedicated to reading every letter of the contract and terms of service, and usually creating 20 page documents seeking clarification for a specific phrase. They absolutely know what they're getting into.

darknavi

I have a feeling that in today's administration, which largely "leads by tweet," many traditional "inefficient" steps have been removed from government processes, probably including software onboarding.

bt1a

Perhaps it's the finetune of Opus/Sonnet/whatever that is being served to the feds that is the source of the refusal :)

andsoitis

Don’t tech companies change ToS quite frequently, and sometimes in ways that are against the spirit of what the terms were when you started using it?

jdminhbg

Are you sure that every restriction that’s in the model is also spelled out in the contract? If they add new ones, do they update the contract?

mikeyouse

The contracts will usually say “You agree to the restrictions in our TOS” with a link to that page which allows for them to update the TOS without new signatures.

giancarlostoro

Usually, contracts will note that you will be notified of changes ahead of time, if it's a good-faith contract and company, that is.

owenthejumper

This feels like a hit piece by semafor. A lot of the information in there is purely false. For example, Microsoft's AI Agreement prohibits:

"...cannot use...For ongoing surveillance or real-time or near real-time identification or persistent tracking of the individual using any of their personal data, including biometric data, without the individual’s valid consent."

saulpw

Gosh, I guess the SaaS distribution model might give companies undesirable control over how their software can be used.

Viva local-first software!

nathan_compton

In general I applaud this attitude but I am glad they are saying no to doing surveillance.

saulpw

Me too, actually, but this is some "leopards ate their face" schadenfreude that I'm appreciating for the moment.

_pferreir_

EULAs can impose limitations on how you use on-premises software. Sure, you can ignore the EULA, but you can also do so on SaaS, to an extent.

ronsor

With SaaS, you can be monitored and banned at any moment. With EULAs, at worst you can be banned from updates, and in reality, you probably won't get caught at all.

MangoToupe

Are EULAs even enforceable? SaaS at least have the right to terminate service at will.

sfink

First, contracts often come with usage restrictions.

Second, this article is incredibly dismissive and whiny about anyone ever taking safety seriously, for pretty much any definition of "safety". I mean, it even points out that Anthropic has "the only top-tier models cleared for top secret security situations", which seems like a direct result of them actually giving a shit about safety in the first place.

And the whining about "the contract says we can't use it for surveillance, but we want to use it for good surveillance, so it doesn't count. Their definition of surveillance is politically motivated and bad"! It's just... wtf? Is it surveillance or not?

This isn't a partisan thing. It's barely a political thing. It's more like "But we want to put a Burger King logo on the syringe we use for lethal injections! Why are you upset? We're the state so it's totally legal to be killing people this way, so you have to let us use your stuff however we want."

LeoPanthera

One of the very few tech companies who have refused to bend the knee to the United States' current dictatorial government.

jschveibinz

This is a false statement and doesn't belong on this forum

tene80i

Which part?

jimbo808

It's startling how few are willing to. I'm rooting for them.

chrsw

Can we trust this though? “Cooperate with us and we’ll leak fake stories about how frustrated we are with you as cover”.

And I’m not singling out Anthropic. None of these companies or governments (i.e. people) can be trusted at face value.

chatmasta

Are government agencies sending prompts to model inference APIs on remote servers? Or are they running the models in their own environment?

It’s worrying to me that Anthropic, a foreign corporation (EDIT: they’re a US corp), would even have the visibility necessary to enforce usage restrictions on US government customers. Or are they baking the restrictions into the model weights?

bri3d

1) Anthropic are US based, maybe you're thinking of Mistral?

2) Are government agencies sending prompts to model inference APIs on remote servers?

Of course, look up FedRAMP. Depending on the assurance level necessary, cloud services run on either cloud carve-outs in US datacenters (with various "US Person Only" rules enforced to varying degrees) or for the highest levels, in specific assured environments (AWS Secret Region for example).

3) It’s worrying to me that Anthropic, a foreign corporation, would even have the visibility necessary to enforce usage restrictions on US government customers.

There's no evidence they do, it's just lawyers vs lawyers here as far as I can tell.

jjice

> It’s worrying to me that Anthropic, a foreign corporation, would even have the visibility necessary to enforce usage restrictions on US government customers.

"Foreign" to who? I interpretted your comment as foreign to the US government (please correct me if I'm wrong) and I was confused because Anthropic is a US company.

chatmasta

Ah my mistake. I thought they were French. I got them confused with Mistral.

The concern remains even if it’s a US corporation though (not government owned servers).

toxik

Anthropic is pretty clearly using the Häagen-Dazs approach here: call yourself Anthropic and your product Claude so you seem French. Why?

bt1a

Everyone spies and abuses individuals' privacy. What difference does it make? (Granted I would agree with you if Anthropic were indeed a foreign based entity, so am I contradicting myself wonderfully?)

jjice

Ah yes - Mistral is the largest of the non-US, non-Chinese AI companies that I'm aware of.

> The concern remains even if it’s a US corporation though (not government owned servers).

Very much so, I completely agree.

itsgrimetime

Anthropic is US-based - unless you meant something else by "foreign corporation"?

SilverbeardUnix

Honestly makes me think better of Anthropic. Let's see how long they stick to their guns. I believe they will fold sooner rather than later.

Filligree

[flagged]

sho_hn

Tough doing business in an authoritarian banana republic.

rsynnott

Especially one where, realistically, the banana in chief might be put back in his box (crate?) sometime next year. Like, his approval rating is now actually _lower_ than at the same time in his first term, and in his first term the midterms didn't exactly go great for him.

It's particularly awkward to be in the position of having to compliment the emperor on his new clothes when the emperor has a limited shelf life.

docdeek

What would be an example of such criminal charges being brought by this administration? Is there a case that stands out as clear retaliation?

Edited to add: I can think of the mortgage fraud cases being discussed/brought against some high-profile people, but can’t think of any corporate world leadership being charged.

Filligree

I wasn't thinking of this administration necessarily. There have been cases going back to, IIRC, the early 00s; Joseph Nacchio, for instance.

sandworm101

Saying no today also means you can say yes tomorrow. Then you are a hero, a dealmaker, as opposed to the "weak" who never put up a fight. This is schoolyard rules.

pineaux

Yes! Everybody goose-step in unison so as not to irk the administration? /s

People should behave more like the invertebrates we are and show some semblance of a spine. Now most have more semblance with snails and jellyfish. Yes they will survive but only because there are so many of them.

jjtheblunt

vertebrates