
Anthropic agrees to pay $1.5B to settle lawsuit with book authors

aeon_ai

To be very clear on this point - this is not related to model training.

It’s important in the fair use assessment to understand that the training itself is fair use; the pirating of the books is the issue at hand here, and is what Anthropic “whoopsied” into when acquiring the training data.

Buying used copies of books, scanning them, and training on it is fine.

Rainbows End was prescient in many ways.

florbnit

> Buying used copies of books, scanning them, and training on it is fine.

Buying used copies of books, scanning them, and printing them and selling them: not fair use

Buying used copies of books, scanning them, and making merchandise and selling it: not fair use

The idea that training models is fair use just because you bought the work is naive. Fair use is not a law that leaves any use open so long as it doesn’t fit a given description. It’s a law that specifically allows certain uses, like criticism, comment, news reporting, teaching, scholarship, or research. Training AI models for purposes other than purely academic ones fits into none of these.

varenc

> Rainbows End was prescient in many ways.

Agreed. Great book for those looking for a read: https://www.goodreads.com/book/show/102439.Rainbows_End

The author, Vernor Vinge, is also responsible for popularizing the term 'singularity'.

Taylor_OD

RIP to the legend. He has a lot of really fun ideas spread across his books.

mdp2021

> Buying used copies of books

It remains deranged.

Everyone has more than a right to have freely read everything that is stored in a library.

(Edit: in fact, I initially wrote 'is supposed to' in place of 'has more than a right to', meaning that "the knowledge is there, we made it available: you are supposed to access it, with the fullest encouragement".)

mvdtnz

Huh?

riquito

I think he's implying that because one can hypothetically borrow any book for free from a library, one could legally use them for training purposes, so the requirement of owning your own copy should be moot.

ants_everywhere

I wonder what Aaron Swartz would think if he lived to see the era of libgen.

klntsky

He died (2013) after libgen was created (2008).

ants_everywhere

I had no idea libgen was that old, thanks!

arcanemachiner

Yeah but did he die before anybody actually knew about it?

jimmydoe

Google scanned many books quite a while ago, probably way more than LibGen. Are they free to use them for training?

johanyc

If they legally purchased them, I don't see why not. IIRC they did borrow from libraries, so probably not every book in Google Books.

ortusdux

They litigated this a while ago and my understanding was that they were able to claim fair use, but I'm no expert.

What I'm wondering is whether they, or others, have trained models on pirated content that has flowed through their networks.

mips_avatar

I imagine the problem there is that they primarily scanned library books, so I doubt they have the same copyright protections here as if they had bought them.

xnx

All those books were loaned by a library or purchased.

shortformblog

Thanks for the reminder that what the Internet Archive did in its case would have been legal if it had been in service of an LLM.

kennywinker

LLMs are turning out to be a real get-out-of-legal-responsibilities card, hey?

therobots927

It is related to scalable model training, however. Chopping the spine off books and putting the pages in an automated scanner is not scalable. And don't forget the cost of 1) finding, 2) purchasing, 3) processing, and 4) recycling that volume of books.

debugnik

I guess companies will pay for the cheapest copies for liability purposes and then use the pirated dumps. Or just pretend that someone lent the books to them.

Onavo

> Chopping the spine off books and putting the pages in an automated scanner is not scalable.

That's how Google Books, the Internet Archive, and Amazon (their book preview feature) operated before ebooks were common. It's not scalable-in-a-garage but perfectly scalable for a commercial operation.

hamdingers

We hem and haw about metaphorical "book burning" so much we forget that books themselves are not actually precious.

The books that are destroyed in scanning are a small minority compared to the millions discarded by libraries every year for simply being too old or unpopular.

knome

I remember them also building 3D page-unwarping tech so they could photograph rare and antique books without hacking them apart.

therobots927

Oh I didn't know that. That's wild

zer00eyz

> It’s important in the fair use assessment to understand that the training itself is fair use,

I think that this is a distinction many people miss.

If you take all the works of Shakespeare and reduce them to tokens and vectors, is it Shakespeare, or is it factual information about Shakespeare? It is the latter, and as much as organizations like the MLB might want to be able to copyright a fact, you simply cannot do that.

Take this one step further. If you buy the work and vectorize it, that's fine. But if you feed in the vectors for Harry Potter so many times that the model can reproduce half of the book, it becomes a problem when it spits out that copy.
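To make "tokens and vectors" concrete, here is a toy sketch (the vocabulary and vector values are made up for illustration; real tokenizers such as BPE operate on subwords and the vectors are learned, but the principle is the same):

    # Toy illustration only: text -> integer token IDs -> small vectors.
    # Made-up vocabulary and embeddings; real LLMs learn these values.
    vocab = {"to": 0, "be": 1, "or": 2, "not": 3}
    embeddings = {0: [0.1, 0.3], 1: [0.7, 0.2], 2: [0.4, 0.9], 3: [0.5, 0.5]}

    def tokenize(text):
        """Map each word to its integer ID (toy whitespace tokenizer)."""
        return [vocab[w] for w in text.lower().split()]

    ids = tokenize("To be or not to be")
    print(ids)                           # [0, 1, 2, 3, 0, 1]
    print([embeddings[i] for i in ids])  # the "vectors" the model consumes

The legal question is whether those numbers are "Shakespeare" or merely statistics about Shakespeare.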

And what about all the other stuff that LLMs spit out? Who owns that? Well, at present, no one. If you train a monkey or an elephant to paint, you can't copyright that work because they aren't human, and neither is an LLM.

If you use an LLM to generate your code at work, can you leave with that code when you quit? Does GPLv3 or something like the Elasticsearch license even apply if there is no copyright?

I suspect we're going to be talking about court cases a lot for the next few years.

Imustaskforhelp

Yes. Someone on this post mentioned that Switzerland allows downloading copyrighted material but not distributing it.

So things get even darker, because what counts as distribution can have a really vague definition, and maybe the AI companies will follow the law only just barely, for the sake of not getting hit with a lawsuit like this again. But I wonder if all this case did was compensate the authors this one time. I doubt we will see a meaningful change in AI companies' attitudes towards fair use and essentially exploiting authors.

I feel like they will try to use as much legalspeak as possible to extract as much as they can from authors (legally) without compensating them, which I feel is unethical; but sadly the law doesn't run on ethics.

arcticfox

> And what about all the other stuff that LLM's spit out? Who owns that. Well at present, no one. If you train a monkey or an elephant to paint, you cant copyright that work because they aren't human, and neither is an LLM.

This seems too cute by half; courts are generally far more commonsensical than that in applying the law.

This is like saying that using `rails generate model Example` results in a bunch of code that isn't yours, because the tool generated it according to your specifications.

tomrod

I mean, sort of. The issue is that the compression is novel. So anything post-tokenization could arguably be considered value-add and not necessarily a derivative work.

GodelNumbering

Settlement Terms (from the case pdf)

1. A Settlement Fund of at least $1.5 Billion: Anthropic has agreed to pay a minimum of $1.5 billion into a non-reversionary fund for the class members. With an estimated 500,000 copyrighted works in the class, this works out to an approximate gross payment of $3,000 per work. If the final list of works exceeds 500,000, Anthropic will add $3,000 for each additional work (see the sketch after this list).

2. Destruction of Datasets: Anthropic has committed to destroying the datasets it acquired from LibGen and PiLiMi, subject to any legal preservation requirements.

3. Limited Release of Claims: The settlement releases Anthropic only from past claims of infringement related to the works on the official "Works List" up to August 25, 2025. It does not cover any potential future infringements or any claims, past or future, related to infringing outputs generated by Anthropic's AI models.
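To make the numbers in term 1 concrete, here is a back-of-envelope sketch of the payout formula described above (the $1.5B floor, the 500,000-work estimate, and the $3,000 increment come from the summary; the code itself is just illustrative):

    # Settlement fund size as a function of the final works count,
    # per the terms summarized above (illustrative only).
    BASE_FUND = 1_500_000_000  # minimum non-reversionary fund, USD
    BASE_WORKS = 500_000       # estimated works in the class
    PER_EXTRA_WORK = 3_000     # USD added per work beyond the estimate

    def fund_size(num_works: int) -> int:
        """Total fund in USD for a given final number of works."""
        extra = max(0, num_works - BASE_WORKS)
        return BASE_FUND + extra * PER_EXTRA_WORK

    print(fund_size(500_000) // 500_000)  # 3000 USD gross per work
    print(fund_size(600_000))             # 1800000000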

privatelypublic

Don't forget: NO LEGAL PRECEDENT! Which means anybody suing has to start all over. You only settle at this point if you think you'll lose.

Edit: I'll get ratio'd for this, but it's the exact same thing Google did in its lawsuit with Epic. They delayed while the public and the courts focused on Apple (oohh, EVIL Apple); Apple lost, and Google settled at a disadvantage before there was a legal judgment that couldn't be challenged later.

ignoramous

Or, if you think your competition, also caught up in the same quagmire, stands to lose more by battling for longer than you did?

privatelypublic

A valid touché! I still think Google went with delaying tactics as public and other pressures forced Apple's case forward at greater velocity. (Edit: implicit "and then caved when Apple lost"... because they're the same case.)

manbash

Thank you. I assumed it would be quicker to find the link to the case PDF here, but your summary is appreciated!

Indeed, it is not only the payout but also the destruction of the datasets. Although the article does quote:

> “Anthropic says it did not even use these pirated works,” he said. “If some other generative A.I. company took data from pirated source and used it to train on and commercialized it, the potential liability is enormous. It will shake the industry — no doubt in my mind.”

Even if true, I wonder how many cases we will see in the near future.

gooosle

So... it would be a lot cheaper to just buy all of the books?

gpm

Yes, much.

And they actually went and did that afterwards. They just pirated them first.

privatelypublic

Few. This settlement potentially weakens all challenges to the use of copyrighted works in training LLMs. I'd be shocked if behind closed doors there wasn't some give and take on the matter between executives/investors.

A settlement means the claimants no longer have a claim, which means if they're also part of, say, the New York Times-affiliated lawsuit, they have to withdraw. A neat way of kneecapping a nationwide decision that LLM training on copyrighted material is subject to punitive measures, don't you think?

freejazz

That's not even remotely true. Page 4 of the settlement describes released claims, which relate only to the pirating of books. Again, the amount of misinformation and misunderstanding I see in copyright-related threads here ASTOUNDS.

privatelypublic

The permissibility of buying and scanning them was already settled by Google Books in the '00s.

_alternator_

They did, but only after they pirated the books to begin with.

testing22321

I’m an author, can I get in on this?

A_D_E_P_T

I have the same question. I have reason to believe that they trained on one of my technical books.

mdp2021

(Sorry, meta question: how do we insert into submissions the "'Also' <link> <link>..." text that appears below the title and above the comment input? The text field on the "submit" page creates a user's post when the "url" field is also filled. I am missing something.)

arjunchint

Wait, so they raised all that money just to give it to publishers?

Can only imagine the pitch: yes, please give us billions of dollars. We are going to make a huge investment, like paying off our lawsuits.

Wowfunhappy

From the article:

> Although the payment is enormous, it is small compared with the amount of money that Anthropic has raised in recent years. This month, the start-up announced that it had agreed to a deal that brings an additional $13 billion into Anthropic’s coffers. The start-up has raised a total of more than $27 billion since its founding in 2021.

slg

Maybe small compared to the money raised, but it is in fact enormous compared to the money earned. Their revenue was under $1B last year, and they projected they would likely make $2B this year. This payout equals their average yearly revenue over the last two years.

masterjack

I thought they were projecting $10B, and a few months ago they said they had already grown from a $1B to a $4B run rate.

dkdcio

maybe I’m bad at math but paying >5% of your capital raised for a single fine doesn’t seem great from a business perspective

ryao

If it allowed them to move faster than their competition, I imagine management would consider it money well spent. They are expected to spend absurd amounts of money to get ahead. They were never expected to spend money efficiently if doing so meant taking additional months or years to get results.

siliconpotato

It's VC money, I don't think anyone believes it's real money

bongodongobob

Yeah, it does; the cost of materials would be way more than that if they were building something physical, like a new widget. Same idea: they paid for their raw materials.

xnx

The money they don't pay out in settlements goes to Nvidia.

non_aligned

You're joking, but that's actually a good pitch. There was a significant legal issue hanging over their heads, with some risk of a potentially business-ending judgment down the line. This makes it go away, which makes the company a safer, more valuable investment. Both in absolute terms and compared to peers who didn't settle.

freejazz

It just resolves their liability with regard to books they purported they did not even train the models on, which is all that was left in this case after summary judgment. Sure, the potential liability was company-ending, but it's all a stupid business decision when it is ultimately about books they did not even train on.

It basically does nothing for them besides that. Given the split decisions so far, I'm not sure what value the Alsup decision will bring to the industry moving forward, when it's in the context of books that Anthropic physically purchased. The other AI cases generally don't involve fact patterns where the LLM was trained on copyrighted materials that the AI company had legally purchased copies of.

freejazz

They wanted to move fast and break things. No one made them.

GMoromisato

If you are an author here are a couple of relevant links:

You can search LibGen by author to see if your work is included. I believe this would make you a member of the class: https://www.theatlantic.com/technology/archive/2025/03/searc...

If you are a member of the class (or think you are) you can submit your contact information to the plaintiff's attorneys here: https://www.anthropiccopyrightsettlement.com/

r_lee

One thing that comes to mind is...

Is there a way to license your content on the web such that it is free only for human consumption?

I.e., effectively making AI crawlers' use of it piracy, and thus subject to the same kind of penalties as here?

gpm

Yes to the first part. Put your site behind a login wall that requires users to sign a contract to that effect before serving them the content... get a lawyer to write that contract. Don't rely on copyright.

I'm not sure to what extent you can specify damages like these in a contract; ask the lawyer who is writing it.

7952

Maybe some kind of captcha-like system could be devised that would be considered a security measure under the DMCA and not allowed to be circumvented. Make the same content available for a licence fee through an API.

Wowfunhappy

I'd argue you don't actually want this! You're suggesting companies should be able to make web scraping illegal.

That curl script you use to automate some task could become infringing.

Cheer2171

No. It's neither legally nor technically possible.

shadowgovt

I'm sure one can try, but copyright has all kinds of oddities and carve-outs that make this complicated. IANAL, but I'm fairly certain that, for example, if you tried putting in your content license "Free for all uses public and private, except academia, screw that ivory tower..." that's a sentiment you can express, but universities are under no legal obligation to respect your wish not to have your work included in a course presentation on "wild things people put in licenses." Similarly, since the court has found that training an LLM on works is transformative, a license that says "You may use this for other things but not to train an LLM" couldn't be any more enforceable than a musician saying "You may listen to my work as a whole unit, but God help you if I find out you sampled it into any of that awful 'rap music' I keep hearing about..."

The purpose of copyright protection is to promote the progress of "science and useful arts," and the public utility of allowing academia to investigate all works(1) exceeds the benefit of letting authors declare their works unponderable to the academic community.

(1) And yet, textbooks are copyrighted and the copyright is honored; I'm not sure why the academic fair-use exception doesn't allow scholars to just copy around textbooks without paying their authors.

golly_ned

> "The technology at issue was among the most transformative many of us will see in our lifetimes"

A judge making a ruling based on his opinion of how transformative a technology will be doesn't inspire confidence. There's an equivocation on the word "transformative" here: not just transformative in the fair use sense, but transformative as in world-changing, impactful, revolutionary. The latter shouldn't matter in a case like this.

> Companies and individuals who willfully infringe on copyright can face significantly higher damages — up to $150,000 per work

Settling for 2% is a steal.

> “In June, the District Court issued a landmark ruling on A.I. development and copyright law, finding that Anthropic’s approach to training A.I. models constitutes fair use,” Aparna Sridhar, Anthropic’s deputy general counsel, said in a statement.

This is the highest-order bit, not the $1.5B in settlement. Anthropic's guilty of pirating.

Ekaros

The printing press, audio recording, movies, radio, and television were also transformative. They did not get rid of copyright; in fact, they brought it into being.

I feel it is insane that authors do not receive some sort of standard compensation for each training use. Say a few hundred to a few thousand dollars, depending on the complexity of their work.

verdverm

Why would they earn more from models reading their works than I would pay to read them?

petralithic

This is sad for open-source AI. Piracy for the purpose of model training should also be fair use, because otherwise only big companies like Anthropic, who can afford to pay off publishers, will be able to train. There is no way to buy billions of books just for model training; it simply can't happen.

bcrosby95

Fair use isn't about how you access the material; it's about what you can do with it after you legally access it. If you don't legally access it, the question of fair use is moot.

sefrost

I wonder how much it would cost to buy every book that you'd want to train a model on.

GMoromisato

500,000 x $20 = $10 million

Obviously there would be handling costs + scanning costs, so that’s the floor.

Maybe $20 million total? Plus, of course, the time it would take to execute.
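As a sketch of that back-of-envelope math (the $20 unit price is the estimate above; the per-book overhead is an assumption chosen to land near the $20 million guess):

    # Rough cost model for buying and scanning ~500,000 books.
    # Unit costs are guesses for illustration, not real quotes.
    works = 500_000
    price_per_book = 20     # USD, assumed used-copy price
    overhead_per_book = 20  # USD, assumed handling + scanning

    floor = works * price_per_book
    total = works * (price_per_book + overhead_per_book)
    print(f"floor: ${floor:,}")  # floor: $10,000,000
    print(f"total: ${total:,}")  # total: $20,000,000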

dbalatero

This implies training models is some sort of right.

542458

No, it implies that having the power to train AI models exclusively consolidated into a handful of extremely powerful companies is bad.

JoshTriplett

That's true. Those handful of companies shouldn't get to do it either.

johanyc

No. It means model training is transformative enough to be fair use. They should just be asked to pay the authors back plus a penalty, say 10x the price of the pirated books.

robterrell

As a published author who had works in the training data, can I take my settlement payout in the form of Claude Code API credits?

TBH I'm just going to plow all that money back into Anthropic... might as well cut out the middleman.

MaxikCZ

See kids? It's okay to steal if you steal more money than the fine costs.

ascorbic

They're paying $3000 per book. It would've been a lot cheaper to buy the books (which is what they actually did end up doing too).


ajross

That metaphor doesn't really work. It's a settlement, not a punishment, and this is payment, not a fine. Legally it's more like "The store wasn't open, so I took the items from the lot and paid them later".

It's not the way we expect people to do business under normal circumstances, but in new markets with new products? I guess I don't see much actually wrong with this. Authors still get paid a price they were willing to accept, and Anthropic didn't need to wait years to come to an agreement (again, publishers weren't actually selling what AI companies needed to buy!) before training their LLMs.

qqbooks

So if a startup wants to buy book PDFs legally to use for AI purposes, any suggestions on how to do that?