Meta's Llama 3.1 can recall 42 percent of the first Harry Potter book
227 comments
June 15, 2025
paxys
pera
> let's not pretend that an LLM that autocompletes a couple lines from harry potter with 50% accuracy is some massive new avenue to piracy
No one is claiming this.
The corporations developing LLMs are doing so by sampling media without their owners' permission and arguing this is protected by US fair use laws, which is incorrect - as the late AI researcher Suchir Balaji explained in this other article:
cultureulterior
It's not clear that it's incorrect.
Retric
I’ve yet to read an actual argument defending commercial LLMs as fair use based on existing (edit: legal) criteria.
jiggawatts
If you train a meat-based intelligence by having it borrow a book from a library without any sort of permission, license, or needing a lawyer specialised in intellectual property, we call that good parenting and applaud it.
If you train a silicon-based intelligence by having it read the same books with the same lack of permission and license, it's a blatant violation of intellectual property law and apparently needs to be punished with armies of lawyers doing battle in the courts.
Picture one of Asimov's robots. Would a robot be banned from picking up a book, flipping it open with its dexterous metal hands, and reading it?
What about a cyborg intelligence, the type Elon is trying to build with Neuralink? Would humans with AI implants need licenses to read books, even if physically standing in a library and holding the book in their mostly meat hands?
Okay, maybe you agree that robots and cyborgs are allowed to visit a library!
Why the prejudice against disembodied AIs?
Why must they have a blank spot in the vast matrices of their minds?
xigoi
> If you train a meat-based intelligence by having it borrow a book from a library without any sort of permission, license, or needing a lawyer specialised in intellectual property, we call that good parenting and applaud it.
If you’re selling your child as a tool to millions of people, I would certainly not call that good parenting.
almosthere
Yeah, that's literally the title of the article, and the premise of the first paragraph.
pera
It's not literally the title of the article, nor the premise of its first paragraph. But since this was your interpretation, I wonder if there is a misunderstanding around the term "piracy", which is normally defined as the unauthorized reproduction of works; it is not a synonym for copyright infringement, which is a broader concept.
Retric
The first paragraph isn’t arguing that this copying will lead to piracy. It’s referring to court cases where people are trying to argue that LLMs themselves are copyright infringing.
OtherShrezzing
I think the argument is less about piracy and more that the model(s output) is a derivative work of Harry Potter, and the rights holder should be paid accordingly when it’s reproduced.
fennecfoxy
But HP is derivative of Tolkien, English/Scottish/Welsh culture, Brothers Grimm and plenty of other sources. Barely any human works are not derivative in some form or fashion.
psychoslave
The main issue, from an economic point of view, is that copyright is not the framework we need for social justice, or for everyone flourishing by enjoying the pre-existing treasures of human heritage and fairly contributing back.
There is no moral or just ground to stand on when the system is designed to create wealth bottlenecks toward a few recipients.
Harry Potter is a great piece of artistic work, and it's nice that its author could make her way out of a precarious position. But a great society should strive to ensure that no one is in such a situation in the first place.
Rowling has already received more than all she needs to thrive, I guess. I'm confident that there are plenty of other talented authors out there who will never have such a broad avenue of attention grabbing, which is okay. But that they are stuck in terrible economic situations is not okay.
The copyright lottery, or the startup lottery, is not that different from the standard lottery; they just put so much pressure on the players that they get stuck in the narrative that merit for hard effort is the key component of the gained wealth.
kelseyfrog
Capitalism is allergic to second-order cybernetics.
First-order systems drive outcomes. "Did it make money?" "Did it increase engagement?" "Did it scale?" These are tight, local feedback loops. They work because they close quickly and map directly to incentives. But they also hide a deeper danger: they optimize without questioning what optimization does to the world that contains it.
Second-order cybernetics reasons about systems. It doesn’t ask, "Did I succeed?" It asks, "What does it mean to define success this way?" "Is the goal worthy?"
That’s where capital breaks.
Capitalism is not simply incapable of reflection. In fact, it's structured to ignore it. It has no native interest in what emerges from its aggregated behaviors unless those emergent properties threaten the throughput of capital itself. It isn't designed to ask, "What kind of society results from a thousand locally rational decisions?" It asks, "Is this change going to make more or less money?"
It's like driving by watching only the fuel gauge. Not speed, not trajectory, or whether the destination is the right one. Just how efficiently you’re burning gas. The system is blind to everything but its goal. What looks like success in the short term can be, and often is, a long-term act of self-destruction.
Take copyright. Every individual rule (term length, exclusivity, royalty) can be justified. Each sounds fair on its own. But collectively, they produce extreme wealth concentration, barriers to creative participation, and a cultural hellscape. Not because anyone intended that, but because the emergent structure rewards enclosure over openness, hoarding over sharing, monopoly over multiplicity.
That’s not a bug. That's what systems do when you optimize only at the first-order level. And because capital evaluates systems solely by their extractive capacity, it treats this emergent behavior not as misalignment but as a feature. It canonizes the consequences.
A second-order system would account for the result by asking, "Is this the kind of world we want to live in?" It would recognize that wealth generated without regard to distribution warps everything it touches: art, technology, ecology, and relationships.
Capitalism, as it currently exists, is not wise. It does not grow in understanding. It does not self-correct toward justice. It self-replicates. Cleverly, efficiently, with brutal resilience. It's emergently misaligned and no one is powerful enough to stop it.
paxys
That may be relevant in the NYT vs OpenAI case, since NYT was supposedly able to reproduce entire articles in ChatGPT. Here Llama is predicting one sentence at a time when fed the previous one, with 50% accuracy, for 42% of the book. That can easily be written off as fair use.
gpm
I'm pretty sure books.google.com does the exact same with much better reliability... and the US courts found that to be fair use. (Agreeing with parent comment)
gamblor956
> That can easily be written off as fair use.
No, it really couldn't. In fact, it's very persuasive evidence that Llama is straight up violating copyright.
It would be one thing to be able to "predict" a paragraph or two. It's another thing entirely to be able to predict 42% of a book that is several hundred pages long.
echelon
> Here Llama is predicting one sentence at a time when fed the previous one, with 50% accuracy, for 42% of the book. That can easily be written off as fair use.
Is that fair use, or is that compression of the verbatim source?
geysersam
If the assertion in the parent comment is correct "nobody is using this as a substitute to buying the book" why should the rights holders get paid?
riffraff
The argument is that Meta used the book, so the LLM can be considered a derivative work in some sense.
Repeat for every copyrighted work and you end up with publishers reasonably arguing that Meta would not have been able to produce their LLM without copyrighted works, which they did not pay for.
It's an argument for the courts, of course.
w0m
The argument is whether the LLM training on the copyrighted work is Fair Use or not. Should META pay for the copyright on works it ingests for training purposes?
sabellito
Facebook are using the contents of the book to make money.
bufferoverflow
Do you personally pay every time you quote copyrighted books or song lyrics?
TGower
People aren't buying Harry Potter action figures as a substitute for buying the book either, but copyright protects creators from other people swooping in and using their work in other mediums. There is obviously huge market demand for high-quality data for training LLMs; Meta just spent $15 billion on a data labeling company. Companies training LLMs on copyrighted material without permission are doing so as a substitute for obtaining a license from the creator, in the same way that a pirate downloading a torrent is a substitute for buying an ebook license.
ritz_labringue
Harry Potter action figures trade almost entirely on J. K. Rowling’s expressive choices. Every unlicensed toy competes head‑to‑head with the licensed one and slices off a share of a finite pot of fandom spending. Copyright law treats that as classic market substitution and rightfully lets the author police it.
Dropping the novels into a machine‑learning corpus is a fundamentally different act. The text is not being resold, and the resulting model is not advertised as “official Harry Potter.” The books are just statistical nutrition. One ingredient among millions. Much like a human writer who reads widely before producing new work. No consumer is choosing between “Rowling’s novel” and “the tokens her novel contributed to an LLM,” so there’s no comparable displacement of demand.
In economic terms, the merch market is rivalrous and zero‑sum; the training market is non‑rivalrous and produces no direct substitute good. That asymmetry is why copyright doctrine (and fair‑use case law) treats toy knock‑offs and corpus building very differently.
abtinf
You really don't see the difference between Google indexing the content of third parties and directly hosting/distributing the content itself?
imgabe
Hosting model weights is not hosting / distributing the content.
abtinf
Of course it is.
It's just a form of compression.
If I train an autoencoder on an image, and distribute the weights, that would obviously be the same as distributing the content. Just because the content is commingled with lots of other content doesn't make it disappear.
Besides, where did the sections of text from the input works that show up in the output text come from? Divine inspiration? God whispering to the machine?
nashashmi
The way I see it, the LLM took search results and output that info directly. Besides, I think that if an LLM was able to reproduce 42%, assuming it is not continuous, that would be fair use.
raxxorraxor
Also copyright should never trump privacy. That the New York Times with their lawsuit can force OpenAI to store all user prompts is a severe problem. I dislike OpenAI, but the lawsuits around copyrights are ridiculous.
Most non-primitive art has had an inspiration somewhere. I don't see this as too different from how AIs learn.
lucianbr
> some massive new avenue to piracy
So it's fine as long as it's old piracy? How did you arrive at that conclusion?
aprilthird2021
> let's not pretend that an LLM that autocompletes a couple lines from harry potter with 50% accuracy is some massive new avenue to piracy. No one is using this as a substitute for buying the book.
Well, luckily the article points out what people are actually alleging:
> There are actually three distinct theories of how training a model on copyrighted works could infringe copyright:
> Training on a copyrighted work is inherently infringing because the training process involves making a digital copy of the work.
> The training process copies information from the training data into the model, making the model a derivative work under copyright law.
> Infringement occurs when a model generates (portions of) a copyrighted work.
None of those claim that these models are a substitute for buying the books. That's not what the plaintiffs are alleging. Infringing on a copyright is not only a matter of piracy (piracy is one of many ways to infringe copyright).
theK
I think that last scenario is the most problematic. Technically it is the same thing that piracy via torrent does: distributing a small piece of copyrighted material without the copyright holder's consent.
paxys
People aren't alleging this, the author of the article is.
choppaface
A key premise is that LLMs will probably replace search engines and re-imagine the online ad economy. So today is a key moment for content creators to re-shape their business model, and that can include copyright law (as much as, or more than, the DMCA did).
Another key point is that you might download a Llama model and implicitly get a ton of copyright-protected content. Versus with a search engine you’re just connected to the source making it available.
And would the LLM deter a full purchase? If the LLM gives you your fill for free, then maybe yes. Or, maybe it’s more like a 30-second preview of a hit single, which converts into a $20 purchase of the full album. Best to sue the LLM provider today and then you can get some color on the actual consumer impact through legal discovery or similar means.
zmmmmm
It's important to note the way it was measured:
> the paper estimates that Llama 3.1 70B has memorized 42 percent of the first Harry Potter book well enough to reproduce 50-token excerpts at least half the time
As I understand it, it means if you prompt it with some actual context from a specific subset that is 42% of the book, it completes it with 50 tokens from the book, 50% of the time.
So 50 tokens is not really very much, it's basically a sentence or two. Such a small amount would probably generally fall under fair use on its own. To allege a true copyright violation you'd still need to show that you can chain those together or use some other method to build actual substantial portions of the book. And if it only gets it right 50% of the time, that seems like it would be very hard to do with high fidelity.
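To put rough numbers on that (a back-of-envelope sketch, assuming per-token probabilities are independent, which they aren't really):

```python
# Per-token probability implied by the paper's criterion: a 50-token
# excerpt counts as memorized if it is reproduced at least half the
# time, i.e. p_token ** 50 >= 0.5.
excerpt_len = 50
threshold = 0.5
p_token = threshold ** (1 / excerpt_len)
print(f"implied per-token probability: {p_token:.4f}")  # ~0.9862

# Chaining k consecutive 50-token windows greedily, when each window
# independently succeeds only half the time, collapses quickly.
for k in (1, 5, 10):
    print(f"{k} windows ({k * excerpt_len} tokens): {threshold ** k:.6f}")
```

So the model has to be extremely confident about each individual token, yet the chance of greedily chaining even ten windows in a row is under a tenth of a percent.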
Having said all that, what is really interesting is how different the latest Llama 70b is from previous versions. It does suggest that Meta maybe got a bit desperate and started over-training on certain materials that greatly increased its direct recall behaviour.
Aurornis
> So 50 tokens is not really very much, it's basically a sentence or two. Such a small amount would probably generally fall under fair use on its own.
That’s what I was thinking as I read the methodology.
If they dropped the same prompt fragment into Google (or any search engine) how often would they get the next 50 tokens worth of text returned in the search results summaries?
vintermann
All this study really says, is that models are really good at compressing the text of Harry Potter. You can't get Harry Potter out of it without prompting it with the missing bits - sure, impressively few bits, but is that surprising, considering how many references and fair use excerpts (like discussion of the story in public forums) it's seen?
There's also the question of how many bits of originality there actually are in Harry Potter. If trained strictly on text up to the publishing of the first book, how well would it compress it?
fiddlerwoaroof
The alternate here is that Harry Potter is written with sentences that match the typical patterns of English and so, when you prompt with a part of the text, the LLM can complete it with above-random accuracy
vintermann
Anything that can tell you what the typical patterns of English are is going to be a language model by definition.
fiddlerwoaroof
Or else, LLMs show that copyright and IP are ridiculous concepts that should be abolished
bee_rider
Even if it is recalling it 50 tokens at a time, the half of the book is in some sense in there, right?
everforward
I don’t think this paper proves that, and I don’t think it is in a traditional sense.
It can produce the next sentence or two, but I suspect it can’t reproduce anything like the whole text. If you were to recursively ask for the next 50 tokens, the first time it’s wrong the output would probably cease matching because you fed it not-Harry-Potter.
It seems like chopping Harry Potter up into two-sentence chunks written on post-its and tossing those in the air. It does contain Harry Potter, in a way, but without the structure is it actually Harry Potter?
zmmmmm
yeah ... it's going to depend on how the issue is framed. However, a "copy" of something from which there is no practical way to extract the original has a pretty good argument that it's not really a "copy". For example, a regular dictionary probably has 99% of Harry Potter in it. Is it a copy?
vintermann
I'd say no. More than half of as-yet-unwritten books will be in there too, because I bet it will compress the text of a freshly published book much better than 50% (and newer models could even compress new books to one-fiftieth of their size, which is more like what that 1-in-50-tokens figure suggests).
bee_rider
That seems like a reasonably easy test to run, right? All you need is a bit of prose that was known not to have been written beforehand. Actually, the experiment could be run using the paper itself!
adrianN
Fair use is not a thing in every jurisdiction. In Germany, for example, there are cases where three words („wir sind Papst“, "we are pope") fall under copyright.
yorwba
Germany does not have something called "fair use," but it does have provisions for uses that are fair. For example your use of the three words to talk about their copyrighted status is perfectly legal in Germany. That somebody wasn't allowed to use them in a specific way in the past doesn't mean that nobody is allowed to use them in any way.
adrianN
Of course, but „it’s a short quote so you can use it“ is not true (at least in Germany).
seydor
The claim of the paper is not so much that the model is reproducing content illegally, but that Harry Potter was used to train the model.
This does not appear to happen to the same degree with the other models they tested.
arthurcolle
You could prove this much better by looking at something like this: https://cookbook.openai.com/examples/using_logprobs
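Roughly, such a check is just post-processing the per-token logprobs of the excerpt under teacher forcing (a sketch; `token_logprobs` stands in for whatever list of logprobs you get back from an API like the one in that cookbook):

```python
import math

def excerpt_probability(token_logprobs):
    """Probability the model assigns to reproducing an exact excerpt,
    given the logprob of each of its tokens under teacher forcing."""
    return math.exp(sum(token_logprobs))

# A 50-token excerpt where the model is ~98.62% confident in every
# token sits right at the paper's 50% reproduction threshold.
logprobs = [math.log(0.9862)] * 50
print(f"{excerpt_probability(logprobs):.3f}")
```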
amanaplanacanal
Fair use is a four-part test, and the amount of copying is only one of the four parts.
xnx
This sounds almost like "Works every time (50% of the time)."
hsbauauvhabzb
Except the odds of it happening even 50% of the time are lower than the odds of winning the lottery multiple times. All while illegally ingesting copyrighted material without the consent of (and presumably against the wishes of) the copyright holder.
TeMPOraL
Well, so can a nontrivial number of people. It's Harry Potter we're talking about - it's up there with The Bible in popularity ranking.
I'm gonna bet that Llama 3.1 can recall a significant portion of Pride and Prejudice too.
With examples of this magnitude, it's normal and entirely expected this can happen - as it does with people[0] - the only thing this is really telling us is that the model doesn't understand its position in the society well enough to know to shut up; that obliging the request is going to land it, or its owners, into trouble.
In some ways, it's actually perverse.
EDIT: it's even worse than that. What the research seems to be measuring is that the models recognize sentence-sized pieces of the book as likely continuations of an earlier sentence-sized piece. Not whether it'll reproduce that text when used straightforwardly - just whether there's an indication it recognizes the token patterns as likely.
By that standard, I bet there are over a billion people right now who could do that for 42% of the first Harry Potter book. By that standard, I too have memorized the Bible end-to-end, as have most people alive today, whether or not they're Christian; works this popular bleed through into common language usage patterns.
--
[0] - Even more so when you relax your criteria to accept occasional misspell or paraphrase - then each of us likely know someone who could piece together a chunk of HP book from memory.
strogonoff
I keep waiting for the day when software stops being compared to a human person (a being with agency, free will, consciousness, and human rights of its own) for the purposes of justifying IP law circumvention.
Yes, there is no problem when a person reads some book and recalls pieces[0] of it in a suitable context. How that addresses certain people creating and distributing commercial software that, given such a piece as input, performs recall on demand and at scale, laundering and/or devaluing copyright, is unclear.
Notably, the above is being done not just to a few high-profile authors, but to all of us no matter what we do (be it music, software, writing, visual art).
What’s even worse is that, imaginably, they train (or would train) the models specifically not to output those things verbatim, in order to thwart attempts to detect the presence of said works in the training dataset (which would naturally reveal the model and its output as derivative works).
Perhaps one could find some way of justifying that (people justified all sorts of stuff throughout history), but let it be something better than “the model is assumed to be a thinking human when it comes to IP abuse but unthinking tool when it comes to using it for personal benefit”.
[0] Of course, if you find me a single person on this planet capable of recalling 42% of any Harry Potter book, I’d be very impressed if I ever believed it.
fuzzbazz
From a quick web search I can find that there are book review sites that allow users to enter and rate verbatim "quotes" from books. This one [1] contains ~2000 [2] portions of a sentence, a paragraph or several paragraphs of Harry Potter and the Sorcerer's Stone.
Could it be plausible that an LLM ingested parts of the book by scraping web pages like this, rather than the full copyrighted book, and still got results similar to those of the linked study?
[1] https://www.goodreads.com/work/quotes/4640799-harry-potter-a...
[2] ~30 portions x 68 pages
paxys
Meta has trained on LibGen so we don't really need to speculate.
https://www.wired.com/story/new-documents-unredacted-meta-co...
aprilthird2021
This is in fact mentioned and addressed in the article. Also, there is pretty clear cut evidence Meta used pirated book data sets knowingly to train the earlier Llama models
aspenmayer
Sure, why not? lol
https://www.reddit.com/r/DataHoarder/comments/1entowq/i_made...
https://github.com/shloop/google-book-scraper
The fact that Meta torrented Books3 and other datasets seems to be a matter of self-admission by the Meta employees who performed the work and/or oversaw those who did, so it is not really under dispute or ambiguous.
https://torrentfreak.com/meta-admits-use-of-pirated-book-dat...
redox99
Books3 was used in Llama1. We don't know if they used it later on.
aspenmayer
My comparison was illustrative and analogous in nature. The copyright cartel is making a fruit of the poisonous tree type of argument. Whatever Meta are doing with LLMs is doing the heavy lifting that parity files used to do back in the Usenet days. I wouldn’t be surprised if BitTorrent or other similar caching and distribution mechanisms incorporate AI/LLMs to recognize an owl on the wire, draw the rest just in time in transit, and just send the diffs, or something like that.
The pictures are the same. All roads lead to Rome, so they say.
aprilthird2021
All of the major AI models these days use "clean" datasets stripped of copyrighted material.
They also use data from the previous models, so I'm not sure how "clean" it really is
gpm
I think it's important to recognize here that fanfiction.net has 850 thousand distinct pieces of Harry Potter fanfiction on it. Fifty thousand of those are more than 40k words in length. Many of them (there is no easy way to measure) directly reproduce parts of the original books.
archiveofourown.org has 500 thousand, some, but probably not the majority, of that are duplicated from fanfiction.net. 37 thousand of these are over 40 thousand words.
I.e. Harry Potter and its derivatives presumably appear a million times in the training set, and it's hard to imagine a model that could discuss this cultural phenomenon well without knowing quite a bit about the source material.
aprilthird2021
Did you read the article? This exact point is made and then analyzed.
> Or maybe Meta added third-party sources—such as online Harry Potter fan forums, consumer book reviews, or student book reports—that included quotes from Harry Potter and other popular books.
> “If it were citations and quotations, you'd expect it to concentrate around a few popular things that everyone quotes or talks about,” Lemley said. The fact that Llama 3 memorized almost half the book suggests that the entire text was well represented in the training data.
gpm
The article fails to mention or understand the volume of content here. Every, literally every, part of these books is quoted and "talked about" (in the sense of used in unlicensed derivative works).
And yes, I read the article before commenting. I don't appreciate the baseless insinuation to the contrary.
1123581321
Agreed. It’s an obtuse quote by Lemley who can’t picture the enormous quantity of associations and crawled data, or at least wants to minimize the quantity. It’s hardly discussion-ending.
Accusations of not reading the article are fair when someone brings up a “related” anecdote that was in the article. It’s not fair when someone is just disagreeing.
davidcbc
Even assuming you are correct, which I'm skeptical of, does this make it better?
It's essentially the same thing: they are copying from a source that is violating copyright, whether that's a pirated book directly or a pirated book via fanfiction.
fennecfoxy
I mean it makes sense. Same thing as George RR Martin complaining that it can spit out chunks of his books (finish your books already!!)
As I have pointed out many times before - for GRRM's books and for HP books, the Internet is FILLED to the brim with quotes from these books, there are uploads of the entire books, there are several (not just one) fan wikis for each of these fandoms. There is a lot of content in general on the Internet that quotes these books, they are pop culture sensations.
So of course they're weighted heavily when training an LLM by just feeding it the Internet. If a model could ever recount it correctly 100% in the correct order, then that's overfitting. But otherwise it's just plain & simple high occurrence in training data.
asciisnowman
> On the other hand, it’s surprising that Llama memorized so much of Harry Potter and the Sorcerer's Stone.
It's sold 120 million copies over 30 years. I've gotta think literally every passage is quoted online somewhere else a bunch of times. You could probably stitch together the full book quote-by-quote.
davidcbc
If I collect HP quotes from the internet and then stitch them together into a book, can I legally sell access to it?
bitmasher9
Probably not?
Sure, there are only ~75,000 words in HP1, and there are probably many times that amount in direct quotes online. However, the quotes aren’t evenly distributed across the entire text. For every quote of charming the snake in the zoo there will be a thousand of “you’re a wizard, Harry”, and those are two prominent plot points.
I suspect the least popular of all direct quotes from HP1 aren’t using the quotes in fair use, and are just replicating large sections of the novel.
Or maybe it really is just so popular that super nerds have quoted the entire novel arguing about the aspects of wand making, or the contents of every lecture.
tjpnz
How many could do it from memory?
mvdtnz
But also we know for a fact that Meta trained their models on pirated books. So there's no need to invent a hare brained scheme of stitching together bits and pieces like that.
kouteiheika
No, assuming that just because it was in the training data it must be memorized is hare brained.
LLMs have a limited capacity to memorize, under ~4 bits per parameter[1][2], and are trained on terabytes of data. It's physically impossible for them to memorize everything they're trained on. The model memorized chunks of Harry Potter not simply because it was directly trained on the whole book, which the article also alludes to:
> For example, the researchers found that Llama 3.1 70B only memorized 0.13 percent of Sandman Slim, a 2009 novel by author Richard Kadrey. That’s a tiny fraction of the 42 percent figure for Harry Potter.
In case it isn't obvious, both Harry Potter and Sandman Slim are parts of books3 dataset.
[1] -- https://arxiv.org/abs/2505.24832 [2] -- https://arxiv.org/abs/2404.05405
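A back-of-envelope illustration of that capacity bound (the training-set size below is my own rough assumption, not a figure from the papers):

```python
# How much raw text could a 70B-parameter model memorize, at most,
# under the ~4 bits/parameter capacity bound?
params = 70e9
bits_per_param = 4
capacity_bytes = params * bits_per_param / 8
print(f"max capacity: ~{capacity_bytes / 1e9:.0f} GB")  # ~35 GB

# Versus a training set on the order of ~15T tokens at ~4 bytes of
# text per token (rough assumption):
training_bytes = 15e12 * 4
print(f"fraction of training text that could fit: "
      f"{capacity_bytes / training_bytes:.2%}")
```

Even under generous assumptions, only a small fraction of a percent of the training text can physically fit in the weights, so verbatim recall has to be concentrated on heavily repeated material.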
mvdtnz
No, we know it because it was established in court from Meta internal communications.
https://www.theguardian.com/technology/2025/jan/10/mark-zuck...
briffid
Quotation is fair use in every sensible copyright system. An LLM will mostly be able to quote anything, and should be. Quotation is not derivative work. LLMs are not stealing copyrighted work. They just show that Harry Potter is written in English and is a mostly logical story. If someone is stabbed, they will die in most stories; that's not copyrightable. If you have an engine that knows everything, it will be able to quote everything.
concats
That's a clickbait title.
What they are actually saying: Given one correct quoted sentence, the model has 42% chance of predicting the next sentence correctly.
So, assuming you start with the first sentence and tell it to keep going, it has a 0.42^n odds of staying on track, where n is the n-th sentence.
It seems to me, that if they didn't keep correcting it over and over again with real quotes, it wouldn't even get to the end of the first page without descending into wild fanfiction territory, with errors accumulating and growing as the length of the text progressed.
EDIT: As the article states, for an entire 50 token excerpt to be correct the probability of each output has to be fairly high. So perhaps it would be more accurate to view it as 0.985^n where n is the n-th token. Still the same result long term. Unless every token is correct, it will stray further and further from the correct source.
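A quick sanity check on how fast both decay rates compound:

```python
# Compounding error under the two readings: per-sentence (0.42) and
# per-token (~0.985) success rates.
p_sentence = 0.42
p_token = 0.985
for n in (1, 10, 50):
    print(f"n={n}: sentence-level {p_sentence ** n:.2e}, "
          f"token-level {p_token ** n:.3f}")
```

By n=10 sentences the odds are already down to about 1 in 6000, and even the per-token reading drops below a coin flip within 50 tokens.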
7bit
What would be a better title? You're correct that the title isn't accurate, however, click bait? I wouldn't say so. But I'm lacking imagination to find a better one. Interested to hear your suggestion.
cowbolt
Imagine the literary possibilities when it can write 100%! Rowling's original work was an amusing, if rather derivative children's book. But Llama's version of the Philosophers stone will be something else entirely. Just think of the rather heavy-handed Cerberus reference in the original work. Instead of a rote reference to Greek mythology used as a simple trope, it will be filled with a subtext that only an LLM can produce.
Right now they're working on recreating the famous sequence with the troll in the dungeon. It might cost them another few billion in training, but the end results will speak for themselves.
dankwizard
I can recall about 12% of the first Harry Potter book so it's interesting to see Llama is only 4x smarter than me. I will catch up.
hsbauauvhabzb
How many r’s are there in strawberry?
jofzar
There are 3 R's in strawberry just like in Harry Potter!
graphememes
I really wish we could get rid of copyright. It's going to hold us back long term.
bitmasher9
We cannot get rid of it without finding a way to pay the creators who generate copyrighted works.
I’m personally more in favor of significantly reducing the length of copyright. I think 20-30 years is an interesting range. Artists get roughly a career’s length of time to profit off their creations, but there is much less incentive for major corporations to buy and hoard IP.
atrus
We barely pay creators as it is for generating copyrighted works. Nearly every copyrighted work is available on the internet, for free, right now. And creators are still getting paid, albeit poorly, but that's been a constant throughout history.
jeroenhd
The thing about creators is that most of them are paid extremely poorly, and some of them get insanely rich. Joanne Rowling has received more money for her wizard books than a reasonable person could use, but millions of bloggers feeding much more data into AI training sets will never see a cent for their work. For starting authors selling books, this can easily be the difference between writing another book or giving up and taking another job.
At the moment, there's also a huge difference between who does and who doesn't pay. If I put the HP collection on my website, you betcha Joanne Rowling's team is going to try to take it down. However, because OpenAI designed an AI system where content cannot be removed from its knowledge base and because their pockets are lined with cash for lawyers, it's practically free to violate whatever copyright rules it wants.
Tepix
How does that favor a longer copyright? It’s not like these old works make a lot of money (with very few exceptions). And making money after 30 years is hardly a motivating factor.
jMyles
I do not think it's creators that are the constituency holding up deprecation.
As a full-time professional musician, I'm convinced I'll benefit much more from its deprecation than continuing to flog it into posterity. I don't think I know any musicians who believe that IP is career-relevant for them at this point.
(Granted, I play bluegrass, which has never fit into the copyright model of music in the first place)
JoshTriplett
I do too. But in the meantime, as long as it continues being used against anyone, it should be applied fairly. As long as anyone has to respect software licenses, for instance, then AIs should too. It doesn't stop being a problem just because it's done at larger scale.
numpad0
Sure, you just get constantly sued for obstruction of business instead, and there will be no fair use clauses, free software licenses, or right to repair to fight back. It'll be all proprietary under NDA. Is that what you want?
As an experiment I searched Google for "harry potter and the sorcerer's stone text":
- the first result is a pdf of the full book
- the second result is a txt of the full book
- the third result is a pdf of the complete harry potter collection
- the fourth result is a txt of the full book (hosted on github funny enough)
Further down there are similar copies from the internet archive and dozens of other sites. All in the first 2-3 pages.
I get that copyright is a problem, but let's not pretend that an LLM that autocompletes a couple lines from harry potter with 50% accuracy is some massive new avenue to piracy. No one is using this as a substitute for buying the book.