Benn Jordan's AI poison pill and the weird world of adversarial noise

Imnimo

Any new "defense" that claims to use adversarial perturbations to undermine GenAI training should have to explain why this paper does not apply to their technique: https://arxiv.org/pdf/2406.12027

The answer is, almost unfailingly, "this paper applies perfectly to our technique because we are just rehashing the same ideas on new modalities". If you believe it's unethical for GenAI models to train on people's music, isn't it also unethical to trick those people into posting their music online with a fake "defense" that won't actually protect them?

nyrikki

You are assuming input-transformation-based defenses in the image domain transfer to the music recognition domain, when we know they don't even transfer automatically to the speech recognition domain.

But 'protection' of any one song isn't the entire point. It takes only a fraction of a percent of corpus data to have persistent long-term effects on the final model, or to increase costs and review requirements for those stealing their content.

As most training is unsupervised, because of the cost of, and limited access to, quality human-labeled data, it wouldn't take much for even some obscure, limited-market older genres that still have active fan bases, like noise rock, to start filtering into recommendation engines and impacting user satisfaction.

Most of the speech protections just force attacks into the perceptible audio range; in lo-fi passages like those of trip hop, that would be undetectable without the false-positive rate going way up. With bands like Arab On Radar, Shellac, or The Oxes, it wouldn't be detectable at all.

But it is also like WAFs/AV software/IDS: the fact that it can't help with future threats is immaterial today. Any win against these leeches has some value.

Obviously any company intentionally applying even the methods in your linked paper to harvest protected images would be showing willful intent to circumvent copyright protections, and I am guessing most companies will just toss any file that they think has active protections, given how sensitive training is.

Most musicians also know that copyright only protects the rich.

tptacek

We talked to Nicholas Carlini about this attack (he's one of the authors) in what is one of my top 3 episodes of SCW:

https://securitycryptographywhatever.com/2025/01/28/cryptana...

jjulius

I am ignorant here, and this is a genuine question - is there any reason to assume that a paper solely about image mimicry can be blanket-applied, as OP is doing, to audio mimicry?

mk_stjames

To add, all the new audio models (partially) use diffusion methods that are exactly the same as those used on images - audio generation can be thought of as image generation over a spectrogram of an audio file.

For early experiments people literally took Stable Diffusion and fine-tuned it on labelled spectrograms of music snippets, then used the fine-tuned model to generate new images of spectrograms guided by text, and then turned those images back into audio via re-synthesis of the spectral image to a .wav.

Riffusion was one of the first to experiment with this, 2 years ago now: https://github.com/riffusion/riffusion-hobby
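
The round trip is easy to reproduce. Here's a minimal sketch in Python using librosa - an illustration of the spectrogram-as-image idea, not Riffusion's actual pipeline, and the filename and parameters are placeholders:

    import librosa

    # Load a short clip (hypothetical filename).
    y, sr = librosa.load("snippet.wav", sr=22050)

    # Forward: the spectrogram "image" a diffusion model would be trained on.
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048,
                                       hop_length=512, n_mels=256)

    # Inverse: Griffin-Lim re-estimates the phase the magnitude-only image
    # discarded, turning a (possibly model-generated) image back into audio.
    y_hat = librosa.feature.inverse.mel_to_audio(S, sr=sr, n_fft=2048,
                                                 hop_length=512)

The lossiness of that final phase re-estimation is a big part of why the early spectrogram models sounded lo-fi.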

The more advanced music generators out now, I believe, take more of a 'stems' approach, with a larger processing pipeline to increase fidelity and add vocal-track capability, but the underlying idea is the same.

Any adversarial attack that hides information in the spectrogram to fool the model into categorizing the track as something it is not is no different from the image adversarial attacks, for which mitigations have already been found.

Various forms of filtering for inaudible spectral information coupled with methods that destroy and re-synthesize/randomize phase information would likely break this poisoning attack.
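
A purification pass like that is only a few lines. A rough sketch, again assuming librosa (illustrative only, not any specific published cleaning pipeline; the noise level is made up):

    import numpy as np
    import librosa

    def purify(y, noise_level=0.002):
        # Keep only the STFT magnitudes, discarding the original phase...
        mag = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))
        # ...then let Griffin-Lim re-estimate a self-consistent phase from
        # scratch, destroying anything the attack encoded in phase.
        y_hat = librosa.griffinlim(mag, hop_length=512)
        # Light additive noise washes out sub-perceptual magnitude tweaks.
        return y_hat + noise_level * np.random.randn(len(y_hat))

Anything a defense hides in phase or in barely-audible spectral detail has to survive that round trip, which is a much harder design constraint.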

Imnimo

The short answer is that they are applying the same defense to audio as to images, and so we should expect that the same attacks will work as well.

More specifically, there are a few moving parts here - the GenAI model they're trying to defeat, the defense applied to data items, and the data cleaning process that a GenAI company may use to remove the defense. So we can look at each and see if there's any reason to expect things to turn out differently than they did in the image domain. The GenAI models follow the same type of training, and while they of course have slightly different architectures to ingest audio instead of images, they still use the same basic operations. The defenses are exactly the same - find small perturbations that are undetectable to humans but produce a large change in model behavior. The cleaning processes are not particularly image-specific, and translate very naturally to audio. It's stuff like "add some noise and then run denoising".
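
To make "small perturbation, large change in model behavior" concrete, the textbook example is a one-step FGSM-style perturbation. A minimal PyTorch sketch (the classifier, labels, and epsilon are placeholders; real tools like Glaze use fancier optimization, but the principle is the same):

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y_true, eps=0.003):
        # Compute the gradient of the loss with respect to the input itself.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y_true)
        loss.backward()
        # One tiny step (bounded by eps per sample) in the direction that
        # most increases the loss: imperceptible to a human, but enough to
        # shift what the model perceives.
        return (x + eps * x.grad.sign()).detach()

And you can see why "add some noise and then run denoising" works as a cleaning step: it moves every input a random distance, swamping the carefully optimized eps-sized nudge.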

Given all of this, it would be very surprising if the dynamics turned out to be fundamentally different just because we moved from images to audio, and the onus should be on the defense developers to justify why we should expect that to be the case.

pixl97

>find small perturbations that are undetectable to humans but produce a large change in model behavior.

What artists don't realize is that by doing this they are just improving the models relative to human capabilities. Adversarial techniques like, for example, making a stop sign look like something else will likely be weeded out of the model as model performance converges to average or above-average human performance.

jjulius

Thanks!

nickpadge

Some of the sibling comments had questions around purposefully releasing defenses which don’t work. I think Carlini’s (one of the paper authors) post can add some important context: https://nicholas.carlini.com/writing/2024/why-i-attack.html.

TLDR: Once these defenses are broken, all previously protected work is perpetually unprotected, so they are flawed at a foundational level.

Ignoring these arguments and pretending they don’t exist is pretty unethical.

nemomarx

I'm sure everyone involved wants the defense to work, so it seems a logical leap to say they know it doesn't and are doing this as a scheme?

pixl97

>so it seems a logical leap to say they know it doesn't and are doing this as a scheme?

In some of the earlier image protection articles the people involved seemed rather shady about the capabilities. Would have to do some HN searching for those articles.

But at the end of the day, everything will be a scheme if the end result is for humans to listen to it. You cannot make a subset of music that can be heard by humans (and actually sounds good) that cannot be prefiltered to be learned by AI. I've said the same thing about images, and the same will be true of audio, movies, actions in real life, et al.

These schemes will likely work for a few of the existing models, then fall apart quickly the moment a new model arrives. What is worse for the defense: audio quality for humans stays the same while GPU speeds and algorithms improve over time, meaning the time until a model beats a new defense will only shrink.

nemomarx

Right, but that just makes it a failed defense, not a scheme to dupe artists into false confidence. Maybe the result will be similar but I don't think the intent here is a con, it sounds pretty genuine.

janalsncm

I like Benn Jordan because he's clearly got a functional understanding of machine learning, but that's not his primary background. He comes from a music production background, so his focus is more practical and results-oriented.

It will be really interesting, as this knowledge percolates into more and more fields, to see what domain experts do with it. I see ML as more of a bag of tricks that can be applied to many fields.

dingnuts

>He comes from a music production background, so his focus is more practical and results-oriented

It's his art and his livelihood too, so it's also personal. These people want to steal his art and create a world full of soulless cheap muzak, while simultaneously putting him out of work.

Get 'em, Benn! I should go buy one of his albums.

visarga

Are you sure you mean "stealing"? As in deprive him of his own recordings?

I am curious if anyone has read Harry Potter in bootleg form from an LLM. I mean, LLMs are the worst tools for infringing - they are approximate, expensive, and slow, while copying is instant, perfect, and free. You can apply the same logic to other modalities.

Moreover, who's got the time to see someone else's AI shit when they can generate their own, perfectly customized to their liking? I personally generated a song about my cat and kid. It had zero commercial value but was fun for 2-3 people to listen to.

mitthrowaway2

I can steal a company's codebase without depriving them of their code.

I can steal an invention without wiping the inventor's memory.

These are other kinds of stealing, which deprive the creator not of the art itself but of the other benefits of having created it.

viraptor

See the AI-generated music channels on YouTube. They get lots of views, and a significant share of those views would otherwise be streams of an actual song. So yeah, they're taking money away from the artists with content learned from the artists.

kjkjadksj

They’ve been doing that since the recording industry developed the model in the 1920s or so. They would hire songwriters to make generic pop music with generic lyrics and keep it in the vault until they had some attractive young singer they wanted to use for marketing; then they'd hand that singer an album of these songs to sing. And the songs were sure to sell, because the American ear had been primed for those chord progressions for a long time, and the label filled all the air in the room with marketing for the singer, leaving people little option but to hear the latest carefully crafted earworm. It still happens today, maybe even more perfected, with psychological studies intersecting with music and marketing. The best musicians have never been and will never be a product of that machine. Seek out live music.

briandear

Soulless cheap Muzak already exists and has for a long time.

Any musician these days who thinks there is money in selling songs is delusional. Sad but true.

EvanAnderson

Not only that, but if you do have success "selling" songs, you increasingly run the risk of litigation from the interests who already "own" the corpus of existing work.

rcarmo

Benn is one of my fave subscriptions on YouTube--both for the (now more occasional) music gear stuff and for the in-depth music industry education. The fact that he has been hacking away at IP and AI stuff for ages is just icing on the cake.

dale_glass

All this stuff is snake oil, either already, or eventually.

There are new models showing up regularly. Civitai recognizes 33 image models at this point, and audio will also see multiple developments. A successful attack on one model isn't guaranteed to apply to another, let alone to one not yet invented. There's also a multitude of possible pre-processing methods, and combinations of them, for any piece of media.

There's also the difficulty of attacking a system that's not well documented. Not every model out there is open source and available for deep analysis.

And it's hard to attack something that doesn't yet exist, which means countermeasures will come up only after a model has already been successfully created. This is, I'm sure, of some academic interest, but the practical benefits seem approximately nil.

Since information is trivially stored, anyone having any trouble could just download the file today and sit on it for a year or two not doing anything at all, just waiting for a new model to show up.

ben_w

To the extent that the people making the models feel unburdened by the data being explicitly watermarked "don't use me", you are correct.

Seems like an awful risk to deliberately strip such markings. It's a kind of DRM, and breaking DRM is illegal in many countries.

dale_glass

But it's not intended as a watermark, it's an attempt at disruption. And with some models it simply doesn't work.

For instance, I've seen somebody experiment with Glaze (the image AI version of this). Glaze at high levels produces visible artifacts (see middle image: https://pbs.twimg.com/media/FrbJ9ZTacAAWQQn.jpg:large ).

It seems some models ignore it and produce mostly clean images on the output (looking like the last image), while others just interpret it as a texture, so the character is just wearing a funny patterned shirt. The intended result, meanwhile, is to fool the model into generating something other than the intended character.

propter_hoc

Benn has been one of my favorite electronic composers for almost 20 years. Probably my favorite track of his:

The Flashbulb - Parkways: https://youtu.be/C6pzg7I61FI

whimsicalism

adversarial noise is very popular in the media but imo is a complete dead end for the desired goals - representations do not transfer between different models this easily

dijksterhuis

adversarial noise [transferability] for image classification used to be very easy (no idea now, not been in the space for half a decade).

the [transferability] rates just drop off significantly for audio (always felt it was a similar vibe to RNN ‘vanishing gradients’)

edit — specifically mention transferability

KennyBlanken

Why did you link to blogspam and not the original video?

https://www.youtube.com/watch?v=xMYm2d9bmEA

emsign

Maybe because it would be the fourth dupe of my submission by then. ;D

throw_m239339

OP probably is Peter Kirn himself.

thomastjeffery

The problem is that copyright is the law of the land, and it demands our participation.

Because of that reality, every artist who wants to make money must either participate in it, or completely isolate themselves from it.

These models have become an incredible opportunity for giant corporations to circumvent the law. By training a model on a copyrighted work, you can launder that work into your own new work, and make money from it without sharing that money with the original artists. Obviously, this is an incredibly immoral end to copyright as we know it.

So what are we going to do about this situation? Are we really going to keep pretending that copyright can work? It wasn't even working before all the AI hype! Ever heard the words "starving artist"? Of course you have!

We need a better system than copyright. I'm convinced that no system at all (anarchy) would be a superior option at this point. If not now, then when?

visarga

> By training a model on a copyrighted work, you can launder that work into your own new work, and make money from it without sharing that money with the original artists.

Not sure if "you" refers to the model developers, the hosting company, or the end users. But let's look at each of them in turn:

- model development is a cost center, there is no profit yet

- model deployment brings little profit, they make cents per million tokens

- applying the model to your own needs - that is where the benefit goes.

So my theory is that the benefits follow the problem; they sit in the application layer. Have a need and you can benefit from AI; don't need it, no benefit. Like Linux: you've got to use it for something. And that usage, that problem, is personal. You can't sell your problems; they remain yours. It is hard to quantify how people benefit from AI - it could be for fun, for learning, for professional use, or for therapy.

Most gen-AI output is seen by one person exactly once. Think about that. It's not commercial; it's more like augmented imagination. Who's gonna pay for AI-generated stuff when it's so easy to make your own?

thomastjeffery

My point is that this entire situation has to be framed in the narrative that copyright demands it be framed in. It's "you", the participant in copyright.

When someone creates art, copyright says that there is a countable result we can refer to as their "work". Copyright also says that that artist has a monopoly over the distribution and sale of that work. The implication is that the way for an artist to get paid for their labor is for them to leverage the monopoly they have been granted, and negotiate a distribution scheme that involves paying them.

When an artist chooses to work outside the copyright model, that means they must predetermine part of their distribution negotiation. That might be the libertarian option (gratis distribution with no demands), or it might be the copyleft option, where the price is demanded, but also set to 0. The artist may find payment for their labor by other means, but that's challenging to do in an economy where copyright participants dominate.

visarga

I don't know about copyright, since for most artists the royalty revenues are not enough to live on. It seems like a failed system if the intent was to get royalty revenues.

depingus

Benn has a video about that too! His channel is pretty great.

https://www.youtube.com/watch?v=PJSTFzhs1O4

thomastjeffery

Indeed! I've definitely been a fan of his for a while, and I laud him for trying to make things work in a space where all the cards are stacked against him.

I do wish, though, that he would have introduced that perspective of the situation in this particular video. Leaving it out feels like making a video about learning to swim, set in the middle of the ocean.

constantcrying

IP is such a stupid concept. How does it make any sense that an artist could own the right to let people learn from his music? The idea of an artist getting to choose who can and can't learn from their song is patently absurd.

I hope that the adversarial attacks can be easily detected and circumvented, just like other IP protection measures have been subverted successfully.

charonn0

Exclusive rights over their published work encourage artists and inventors to publish their work, which is a clear benefit to society at large. The period of time it should remain exclusive, and the specific rights made exclusive, can be debated, but the utility of IP rights in general is obvious.

And generative AI is not a person in the first place, so I don't think the appeal to learning makes much sense here.

idle_zealot

> Exclusive rights over their published work encourage artists and inventors to publish their work

Do they? Please cite your studies.

connicpu

It was obvious enough to the founders of the USA to bake it into our constitution.

US Constitution, Article I, Section 8, Clause 8:

> [the United States Congress shall have power] To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.

barbazoo

People published their work for thousands of years before IP even existed. We don't know if the system we have is the best one we could have; I doubt it, though.

charonn0

Historically most artistic works, books, etc. were privately commissioned and one-of-a-kind. Publishing as we understand the idea came with the printing press and widespread literacy.

os2warpman

Some of the earliest accounts of proto-IP laws date from Ancient Greece, with protections for new works (particularly recipes and plays?!?) being granted to their creators for a set period of time.

Then you have guilds, trade marks, potter's marks, royal warrants, etc... all "IP" protections of their days.

Would open-source license violations be possible to penalize if not for intellectual property laws?

emsign

This! Benn Jordan hasn't published anything in months because of scrapers for AI modeling.

const_cast

> artist could own the right to let people learn from his music.

They don't, what's happening here is their music is being fed to a computer program in a for-profit venture.

This anthropomorphism of LLMs is concerning. What you're actually implying here is that you believe some computer programs should be awarded the same rights as humans. You can't just skip that like it's some kind of foregone conclusion. You have to defend it. And, it's not easy.

constantcrying

>They don't, what's happening here is their music is being fed to a computer program in a for-profit venture.

I believe that no artist has the right to tell anyone what to do with the art they have published. It does not matter what happens inside the algorithm with that art. Whether NNs (LLMs, by definition, are about language) learn like humans or not is totally irrelevant to my point.

kmeisthax

Benn Jordan is a musician who is probably one of the most critical of the current copyright regime in his space. For context, see https://www.youtube.com/watch?v=PJSTFzhs1O4

Copyright exists to enrich the interests of the publishers of a work, not the artists they funded. A long time ago, copyright was a sufficient legal tool to bring publishers to artists' heels, but no longer. Long copyright terms and the imbalance of power between different wealthy interests allowed publishers to usurp and alienate artists' ownership over their work. And the outsized amount of commercial interest in current generative AI tools comes down to the fact that publishers believe they can use them to strip what little ownership interest authors have left. What Benn is doing is looking for new tools to bring publishers to heel.

IP is fundamentally a social contract, subject to perpetual renegotiation through action and counter-action. If you told any game publisher in the early 2000s, during the height of the Napster Wars, that they'd be proudly allowing randos on the Internet to stream video of their games being played, they'd laugh in your face. But people did it anyway, and everyone in the games biz realized it's not worth fighting people who are adding to your game. Even Nintendo, notorious IP tightwads as they are, tried scraping micropennies off the top of streamers and realized it's a fool's errand.

The statement Benn is making is pretty clear. You can either...

- Negotiate royalties for, and purchase training data from, actual artists, who will then in exchange give you high-quality training data, or,

- Spend increasing amounts of time fighting to filter an increasingly polluted information ecosystem to have a model that only sorta kinda replicates the musical landscape of the late 2010s.

A lot of us are reflexively inclined to hate on anything "copyright-shaped" because of our experiences over the past few decades. Publishers wanted to go back to the days of copyright being a legal tool of arbitrary and capricious punishment. But that doesn't mean that everything that might fall afoul of copyright law is automatically good or that generative AI companies are trying to liberate creativity. They're trying to monopolize it, just like Web 2.0 "disintermediation" was turned into "here's five websites with screenshots of the other four". That's why so much money is being poured into these companies and why a surprisingly nonzero amount of copyright reformists also have deeply negative opinions of AI.

constantcrying

I am not against IP because it does or doesn't benefit artists. I am against the idea because it does not make sense. It gives people ownership and control over imaginary things; a song you create and publish isn't "yours". You do not get to decide what others do with it and how they use it.

I believe that artists see current IP laws critically; of course they do, as those laws directly impact how they finance themselves and how they bargain. But I do not care how good or bad the bargain is for the artist. IP laws should be abolished regardless of what artists want.

voidhorse

But that's precisely the problem—while in theory the ideal is good, it is impractical unless you fundamentally change the economic model.

Artists rely on some form of IP to help secure payment for their creative works. They need this payment to be able to afford their own subsistence so that they can continue to live and create.

In an alternative system, maybe you could abolish all forms of IP outright, but how will you do that under capitalism while sustaining (already impoverished) artists?

If you are against the principle of IP, you are essentially saying that an entire segment of capital should be deactivated, and effectively the only jobs remaining would be those of active service/tangible goods. In the age of digital media, basically everything is instantly and infinitely replicable, so you are effectively asking for a world in which it becomes rapidly impossible to make money off of any kind of digital good (music, literature, film, software, etc.) This has an obvious material consequence of disincentivizing creation of these works simply because if the creators need to earn wages in tangible good/service markets they have strictly less time to devote to the creation of creative works.

jeremyjh

So you are against paying artists and musicians for their work? You are just entitled to it since it exists?

constantcrying

Yes.

I also pirate every single book I read. Sometimes I buy them though.

I also pirate every single show I watch. I never buy them.

Music is a bit more difficult, so I pay for Spotify - but I wouldn't mind paying for the service even if Spotify had no rights to the songs and wasn't compensating the artists.

jmuguy

So... do you want someone to present you with evidence that paying people for their work is a good thing? We're getting to the point of arguing the color of the sky here.

yoyohello13

This has to be bait.

How can you possibly justify this? Do you propose professional artists/authors/musicians just shouldn't exist?

SCdF

Use your ears to learn.

visarga

> IP is such a stupid concept

It's been struggling since the internet became a thing. People have more content than they can consume. For any topic there are 1000 alternative sites, most of them free. Any new work competes against decades of backlog. Under this attention-scarcity regime, artists devolve into enshittification because they chase ad money, while royalties are a joke.

On the other hand, people stopped being passive consumers, we like to interact now. Online games, social networks, open source, wikipedia and scientific publication - they all run in a permissive mode. How could we do anything together if we all insisted on copyright protection?

We like to make most of our content ourselves, we don't need the old top-down model of content creation. We attach "reddit" to our searches because we value comments more than official sources. It's an interactive world where LLMs fit right in, being interactive and contextually adaptive.

delusional

This is such a radical take on IP rights and AI "learning" that I can only assume you're consciously choosing to misunderstand both.

On the off chance that you are not: IP rights do not cover "learning from" a source. What ML does is not in any way akin to human learning in methodology. When we call it learning, that's an analogy. You cannot argue a legal case from analogy alone.

constantcrying

I believe that a creator has no right to dictate what other people do with his creations.

I thought this was the most common anti-IP sentiment.

6P58r3MXJSLi

Isn't it the opposite: the author is the only one who has the right to dictate what other people do with his creation?

An extreme example: I do not want my code to be used in weapons guidance systems. Am I not expressing my rights as an author?

SamBam

"no right to dictate what other people do with his creations" seems a little radical -- i.e., you don't believe copyright should exist?

Do you think a movie's creator can dictate that I'm not allowed to use a pirated version of a movie to display for 100% profit at my 10,000 AMC movie theaters? Or a book's creator can dictate that I'm not allowed to copy their book, put my name on it, and sell it on Amazon for half-price?

If you agree that a creator can do those things, then you're already in a gray area between "they can dictate everything" and "they have no rights." At that point, you're arguing over the precise location of the line.

delusional

I'm glad I elaborated. A complete dismissal of all IP rights is a very fringe, very radical position. Most people have a more nuanced view of which IP rights ought to be protected and for how long.

You'll find many more people of the opinion that author's life plus 70 years is too long than you will find dismissing copyright entirely.

caconym_

In other words, (for instance) you believe in the right of massive corporations who control the lion's share of consumer distribution channels to, on an ongoing basis, scrape all authored content that is made publicly available in any form (paid or not) and sell it in a heavily discounted form, without the permission of authors and other rights holders and with no compensation to them, while freezing out the "official" published versions from their distribution channels entirely. More generally, you believe that creators should create for free, and that massive moneyed and powerful interests should reap the profits, even while those same creators toil in the mines to support their passions which do evidently have real value, though it is denied to them.

You think this will make the world better? For whom? Or worse, for whom? Or you place the highest importance on having a maximalist viewpoint that simply cannot be argued with, because being unassailably right in an abstract rhetorical framing is most important to you? Or you crave the elegance of such a position, reality and utility notwithstanding? Or you feel the need to rationalize your otherwise unfounded "belief" that piracy and/or training AI on protected IP should be allowed because you like it and are involved with it yourself? Or you think ASI is going to completely transform the world tomorrow, and whether we get Culture-style luxury gay space communism or something far darker, none of this will matter so we should eat, drink, and be merry today? Or some hybrid of that and a belief that we should actively strive toward and enable such a transition, and IP law stands in its way?

What is it? I've seen some version of all of these and frankly they are all childish nonsense (usually espoused by actual children). Are you a new species?

ImPleadThe5th

It makes me so uncomfortable that the relatively informed people on HN seem to equate human learning with AI learning.

andy99

Care to elaborate?

visarga

> What ML does is not in any way akin to human learning in methodology. When we call it learning, that's an analogy.

Of course it's different, but if we look closely, it is not copying. The model itself is smaller, sometimes 1000x smaller, than the training set. With billions of examples in the training set, the impact of any one of them is very small (de minimis).

If you try to replicate something closely with AI, it fails. If regurgitation were a huge problem we'd see lots of lawsuits over outputs, but most suits are over inputs (training). That suggests authors can't identify cases of infringement in the outputs.

delusional

I'm not sure I agree that most suits are about input. Most of the ones I've read have been related to the output of a model. Early worries about Copilot were about its tendency to regurgitate code verbatim. The WaPo suit was about verbatim output of segments of their articles.

stale2002

> You cannot argue a legal case from analogy alone.

Yes, you can, because the analogy directly applies. The technical details of how a computer learns vs. how a human learns don't really matter here; the difference is irrelevant.

The reason the analogy applies is that in both cases IP cannot really control how someone uses the work.

Just as IP law cannot prevent you from listening to music while standing on your head, it also cannot prevent you from training your models on it (while also standing on your head, let's say!).

Instead, IP law only prevents the publishing of copies of the IP.

So the point, and the analogy, stand.

delusional

I don't know what to tell you. You will find no judge willing to entertain your analogy without you justifying why it's useful in the particular case. You are going to have to explain the difference, and you are going to have to argue why those differences don't matter.

aezart

How do you propose that artists make a living?

constantcrying

They can work for their living like I do, or get a following/sponsors that pay for their creative output. The latter works quite well in the case of YouTube. YouTubers have generally proven that it is absolutely possible to commercialize an artistic output that is given away for free; removing IP laws would have essentially zero effect on them.

Nobody owes artists the ability to make a living just for being artists.

aezart

Art is labor. Just because the end result is an "idea" with no scarcity doesn't mean that time, energy, blood, sweat, and tears didn't go into making it. I consider profiting off someone else's art without compensating them for that work to basically be wage theft.

voidhorse

> They can work for their living like I do

Unless you have a service industry job or construct material goods, by your own arguments, I see no reason I should pay you either if you work on anything remotely related to intangibles like software.

I'm getting the sense that your perspective is colored more by some personal bias on what art is, what art making entails, and what art is good, rather than any sound logical principles around labor and economics, which is what any reasonable approach to IP should actually be based on.