Not all tokens are meant to be forgotten
June 4, 2025
pixl97
> However, they tend to memorize unwanted information, such as private or copyrighted content,
I mean, humans don't forget copyrighted information. We just typically adjust it enough (some of the time) to avoid a copyright strike, while modifying it in some useful way.
We don't forget 'private' information either. We might not tell other people that information, but it still influences our thoughts.
The idea of a world where we force AI minds to forget vast amounts of information that humans deal with every day is concerning and dystopian to me.
squidbeak
I agree. As far as copyrighted and artistic works go, I've never fully understood what the objection is. If the work is being remixed, not copied, then surely it falls under fair use? Meanwhile, if it creates something new in an artist's style, it's only doing what talented imitators routinely do. There's the economic argument. But if that's accepted, then for fairness it would have to be extended to every other profession which stands to be wiped out by AI, which would be daft.
New works in familiar styles are something I can't wait for. The idea that the best Beethoven symphony hasn't been composed yet, or that the best Basquiat hasn't been painted yet, or that if the tech ever gets far enough, Game of Thrones might actually be done properly with the same actors, is a pretty mouthwatering prospect. And there are styles we haven't discovered yet, which AI might anticipate. How's it to do that without a full understanding of culture? Hobbling the delight it could bring generally for the sake of protected classes will just make the tech less human and a lot less exciting.
wat10000
If it's remixed then it would be a derivative work and you'd need permission from the original copyright holder, just like if you literally remixed a song, or made a movie based on a novel.
IMO the only reason there's even a question about whether LLMs can legally be trained on copyrighted works without permission is that the training is being done by (agents working on behalf of) rich people. If you or I scraped up every copyrighted work we could get our hands on without ever asking permission, trained an LLM on it, and then tried to sell access to the result? Just ask Aaron Swartz how that sort of thing goes, and his actions were orders of magnitude less extensive.
Humans don't forget copyrighted material but we also don't normally memorize it. It takes substantial time and effort to be able to reproduce copyrighted material with just your brain.
wizardforhire
Mind if I ask a few questions? What's your current address, DOB, SSN or NINO or equivalent, your full legal name, mother's maiden name, father's place of birth, mother's place of birth, country of origin? Do you drive? What's your license number? How about a bank? Could I have your account and routing number, as well as the answers to any security questions? How about investments? I'm gonna need your account numbers and passwords for those as well…
> As far as copyrighted and artistic works go, I've never fully understood what the objection is …
> But if that's accepted, then for fairness it would have to be extended to every other profession which stands to be wiped out by AI, which would be daft. …
> Hobbling the delight it could bring generally for the sake of protected classes will just make the tech less human and a lot less exciting.
So let me get this straight: you want to ruin the livelihoods of everyone so you can have a fancier toy to play with?
When your life is ruined and you can't make a living, you'll have the answers you desire and understand the objections to why you can't have fancier toys.
But here's the thing: with the way the world is going atm, not being able to make a living is going to be the least of the worries for you and everyone else who feels the way you do, if y'all get your way.
People don't like having their livelihoods taken away, and when you threaten the livelihoods of their children… people tend towards violence.
I really wish there was a more polite way to put this. Alas, what you're proposing is all-out war, and for what? A better Game of Thrones?
squidbeak
Violent artists with pitchforks, eh? Aside from their supposed predisposition to vengeful bloodlust, is there any other reason these protected classes should enjoy a different status to any other worker?
johnjreiser
I'd counter with an anecdote: I had a colleague who boasted about memorizing a classmate's SSN in college and would greet him by it when seeing him years later. Is the goal of AI to replicate the entirety of the human experience (including social pressures, norms, and shame), or to serve as a tool that complements human decision making?
While, yes, you can argue the slippery slope, it may be advantageous to flag certain training material as exempt. We as humans often make decisions without perfect knowledge, and "knowing more" isn't a guarantee of better outcomes, given the types of information consumed.
lmm
Knowing more might not improve your accuracy but it's not going to harm it. Forcibly forgetting true parts of your knowledge seems far more likely to have unintended consequences.
conception
Counterpoint: There are plenty of examples of breakthroughs from folks who were ignorant of the "right" way to go about it. A fresh take isn't always bad.
Dylan16807
I disagree. Actively fighting against your memory will slow you down in any context where some memorized idea is similar to what you're doing but you shouldn't be using the memorized idea.
lou1306
One obvious consequence: the model might still produce copyright infringement because it thinks its creative ideas are novel.
lynx97
The goal of AI is to make money. All the moralisation is very human, but also extremely naive.
BTW, I don't really understand what "social pressure" and "shame" have to do with your story. In my book, the person with a good memory isn't to blame. They're just demonstrating a security issue, which is a good thing.
falcor84
In that example, the mnemonist should be demonstrating the security issue to the government, and not to their friend. We have social taboos for this reason. As an extreme example, I wouldn't greet a person by their penis size after noticing it in the locker room - some information should still be considered private, regardless of how we came to obtain it.
Same with an LLM: when it has sensitive information in its weights, regardless of how it obtained it, I think we should apply pressure/shame/deletion/censorship (whatever you want to call it) to stop it from using that information in any future interactions.
There’s a related paper that Meta published a couple of days ago that is worth looking at:
> How much do language models memorize?
— https://arxiv.org/abs/2505.24832
— https://news.ycombinator.com/item?id=44171363
It shows that models are limited in how much they can memorise (roughly 3.6 bits per parameter), and that once this capacity is saturated, the model starts to generalise instead of memorising.
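To get a feel for the scale of that budget, here's a back-of-envelope sketch in Python. Only the ~3.6 bits/parameter figure comes from the paper's abstract; the model sizes and the 10 TB corpus below are made-up illustrations, not numbers from the paper.

    # Back-of-envelope estimate of total memorisation capacity using the
    # paper's headline figure of ~3.6 bits per parameter. Model sizes and
    # the 10 TB corpus are illustrative assumptions, not from the paper.

    BITS_PER_PARAM = 3.6  # capacity estimate from arxiv.org/abs/2505.24832

    def capacity_bytes(num_params: float) -> float:
        """Total memorisation capacity in bytes for a model of num_params."""
        return num_params * BITS_PER_PARAM / 8

    models = {
        "1B params": 1e9,
        "8B params": 8e9,
        "70B params": 70e9,
    }

    dataset_bytes = 10e12  # hypothetical 10 TB training corpus

    for name, params in models.items():
        cap = capacity_bytes(params)
        print(f"{name}: ~{cap / 1e9:.2f} GB capacity "
              f"({cap / dataset_bytes:.4%} of a 10 TB corpus)")

Even a 70B-parameter model comes out to only ~31.5 GB of raw capacity on this estimate, a tiny fraction of a web-scale corpus, which is why memorising any individual training example stops paying off once the capacity fills up.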