Death by a Thousand Slops
126 comments
July 14, 2025 · raywatcher
Aurornis
The most notable thing about this article, in my opinion, is the increase in human-generated slop.
Everyone is talking about AI in the comments, but the article estimates only 20% of their submissions are AI slop.
The rest are from people who want a curl contribution or bug report for their resume. With all the talk about open source contributions as a way to boost your career or get a job, they have become a checklist item for many juniors looking for an edge. They don’t have the experience to know which contributions are valuable or correct; they just want something to put on their resume.
grishka
Reminds me of those "I updated your dependencies/build system version" and "I reformatted your code" kinds of PRs I got several times for my projects. Yeah, okay, you did this very trivial thing. But didn't you stop to think about the fact that if it's so trivial, there must be a reason I haven't done it myself? "It already works as is" is a valid reason too.
Aurornis
I often update README files or documentation comments and submit PRs when I find incorrect documentation.
I’ve had mixed results. Most maintainers are happy to receive a well-formatted update to their documentation. Some get angry at me for submitting non-code updates. It’s weird.
empiko
The human toll is everywhere. AI used for peer review effectively forces researchers to implement its suggestions between revisions; AI used by managers suggests bad solutions that engineers are forced to implement; etc. Effectively, the number of person-hours spent following whatever AI models suggest is increasing rapidly. Some of it might make sense, but uncomfortably many hours are burned in vain. There is a real productivity cost to the economy from command chains not being ready to filter out slop.
anon191928
This type of social moderation has existed for well over a decade, and FB had thousands of people hired for it. They were filtering LiveLeak-level or even worse content for years, with humans manually watching or flagging it. So nothing new.
bravetraveler
> hired
Do remember "we're" (hi, interjecting) talking about open source maintainers, we didn't all make curl or Facebook
meindnoch
My gut tells me that deciding the soundness of a vulnerability report is not in the same complexity class as deciding whether a video shows ISIS torture footage.
friedel
> but offer no real value
They could offer value, but just rarely, at least with the LLM/model/context they used.
> toll it takes to deal with these mind-numbing stupidities.
Could have a special area for submitting these where AI does the rejection letter and banning.
xg15
I think looking at one example is useful: https://hackerone.com/reports/2823554
What they did was:
1) Prompt an LLM for a generic description of potential buffer overflows in strcpy() and generic demonstration code for a buffer overflow (with no connection to curl or even OpenSSL at all).
2) Present some stack traces and grep results that show usage of strcpy() in curl and OpenSSL.
3) Simply claim that the strcpy() usages from 2) somehow indicate a buffer overflow, with no additional evidence.
4) When called out, just pretend that the demonstration code from 1) was the evidence, even though it's obviously just a textbook example and doesn't call any code from curl.
It's not that they found some potentially dangerous code in curl and didn't go all the way to prove an overflow, which could have at least some value.
The entire thing is just bullshit made to look like a vulnerability report. There is nothing behind it at all.
Edit: Oh, cherry on top: The demonstrator doesn't even use strcpy() - nor any other kind of buffer overflow. It tries to construct some shellcode in a buffer, then gives up and literally calls execve("/bin/sh")...
deepdarkforest
> The problem is in strcpy in the src files of curl.. have you seen the exploit code ??????
The worst part is that once they are asked for clarifications by the poor maintainers, they go on offense and become aggressive. Imagine the nerve of some people: using LLMs to try to gaslight an actual expert into believing they made a mistake, and then acting annoyed/angry when the expert asks normal questions.
meindnoch
>They could offer value, but just rarely, at least with the LLM/model/context they used.
Eating human excrement can also offer value in the form of undigested pieces of corn and other seeds. Are you interested?
ElFitz
Funnily enough, fecal transplants (Fecal Microbiota Transplants, FMT) are a thing, used to help treat a range of diseases. It’s even being investigated to help treat depression.
So…
ndepoel
> They could offer value, but just rarely, at least with the LLM/model/context they used.
Still a net negative overall, given that you have to spend a lot of effort separating the wheat from the chaff.
> Could have a special area for submitting these where AI does the rejection letter and banning.
So we'll just have one AI talking to another AI with an indeterminate outcome and nobody learns anything of value. Truly we live in the future!
javcasas
It can be better. On slop detection, shadowban the offender and have them discuss with two AI "maintainers", and after 30 messages reveal the ruse. Then ban.
leovingi
And it's not just vulnerability reports that are affected by this general trend. I use social media, X specifically, to follow a lot of artists, mostly for inspiration and because I find it fun to share some of the work other artists have created. But over the past year or so, the mental workload of figuring out whether a particular piece of art is AI-generated has become too much, and I've started leaning into the safe option of "don't share anything that seems even remotely suspicious unless I can verify the author".
The number of art posts I have shared with others has decreased significantly, to the point where I am almost certain some artists who have created genuine works simply get filtered out because their work "looks" like it could have been AI-generated... It's getting to the point where if I see anything that is AI, it's an instant mute or block, because there is nothing of value there - it's just noise clogging up my feed.
DaSHacka
Genuine question: if you can't tell, why does it matter?
leovingi
It's a fair question and one that I've asked myself as well.
I like to use the example of chess. I know that computers can beat human players and that there are technical advancements in the field that are useful in their own right, but I would never consistently watch a game of chess played between a computer and a human. Why? Because I don't care for it. To me, the fun and excitement is in seeing what a HUMAN can achieve, what a HUMAN can create - I apply the same logic to art as well.
As I'm currently learning how to draw myself, I know how difficult it is and seeing other people working hard at their craft to eventually produce something beautiful, after months and years of work - it's a shared experience. It makes me happy!
Seeing someone prompt an AI, wait half a minute and then post the result on social media does not, even if the end result is of reasonable quality.
rambambram
> As I'm currently learning how to draw myself, I know how difficult it is and seeing other people working hard at their craft to eventually produce something beautiful, after months and years of work - it's a shared experience. It makes me happy!
Today I learned: LLMs and their presence in society eventually force one into producing/crafting/making/creating for fun instead of consuming for fun.
All jokes aside, you got the solution here. ;)
ants_everywhere
All of today's active chess players learned by playing against the computer repeatedly.
So what the human is achieving in this case is having been trained by AI.
impossiblefork
But how can't you tell?
To me AI generated art without repeated major human interventions is almost immediately obvious. There are things it just can't do.
aDyslecticCrow
Much of what makes art fun is human effort and show of skill.
People post AI art to take credit for being a skilled artist, just like people posting others' art as their own. It's lame.
If I am to be a bit controversial among artists: we're exposed to so much good art today that most art posted online is "average" at best. (The bar is so high that it takes 20+ years for most to become above average.)
It's average even if a human posted it, but fun because a human spent effort making something cool. When an AI generates average art, it's... just average art. Scrolling Google Images to look at art is also pretty dull, because it's devoid of the human behind it.
latexr
To continue your point, following the human doing it is also infinitely more rewarding because you can witness their progress.
npteljes
One of the reasons why people react so badly to AI art is because they encounter it in a context that implies human art. Then the discovery becomes treachery, a breach of trust. Not too much unlike having sex lovingly, only to discover that there was no love at all. Or people being nice to someone, but not meaning it, and them finding this out.
It's about implications, and trust. Note how AI art is thriving on platforms where it's clearly marked as such. People can then go into it without the "hand-crafted" expectation and enjoy it fully for what it is. AI-enabling subreddits and Pixiv come to mind, for example.
meindnoch
An Olympic weightlifter doing a clean and jerk with 150 kg is worthy of my attention. A Komatsu forklift doing the same is not.
ta8645
> A Komatsu forklift doing the same is not ... [worthy of attention]
It is, if you're managing a warehouse; then it's a wonderful marvel. And it is a hidden benefit to everyone who receives cheaper products from that warehouse. Nobody cares if it's a human or the Komatsu doing the heavy lifting.
bit1993
A human artist puts in work and passion to create beautiful art from almost nothing. It brings them joy that their art brings someone joy. Every art piece has a story behind it. Sharing their art with others not only gives them the motivation to continue doing it and bless the world with more art, it also gives them feedback that yes, this art is liked by someone out there. This feedback loop is part of what creates healthy civilizations.
nnf
For the same reason dealing in counterfeit money matters — just because I can't tell it's fake doesn't mean the person I try to pay won't know or care. If your reputation is your currency, you don't want to damage it by promoting artwork that other people know is AI generated, so it's likely better to play it safe.
mort96
It's tantamount to sharing a forgery and not caring because you "can't tell".
disqard
> You still have not told us on which source code line the buffer overflow occurs.
> > hey chat, give this in a nice way so I reply on hackerone with this comment
> This looks like you accidentally pasted a part of your AI chat conversation into this issue, even though you have not disclosed that you're using an AI even after having been asked multiple times.
A sample of what they have to deal with. Source:
toshinoriyagi
The abuse of AI here blows my mind. Not just the use of AI to try to find a vulnerability in a widely-used repo, but the complete ignorance when using the AI.
"hey chat, give this in a nice way so I reply on hackerone with this comment" is not language used naturally. It virtually never precedes high-quality conversation between humans so you aren't going to get that. You would only say this when prompting an LLM (poorly at that) so you are activating weights encoding information from LLM slop in the training data.
EdwardDiego
> The length check only accounts for tmplen (the original string length), but this msnprintf call expands the string by adding two control characters (CURL_NEW_ENV_VAR and CURL_NEW_ENV_VALUE). This discrepancy allows an attacker ...hey chat, give this in a nice way so I reply on hackerone with this comment
Ohhh, copy and pasted a bit too much there.
Hendrikto
> Certainly! Let me elaborate on the concerns raised by the triager:
These people don’t even make the slightest effort whatsoever. I admire Daniel’s patience in dealing with them.
Reading these threads is infuriating. They very obviously just copy and paste AI responses without even understanding what they are talking about.
jgb1984
LLMs are a net negative on society on so many levels.
armchairhacker
You could charge a fee and give the money back if the report is wrong but seems well-intentioned.
I see the issue with this: payment platforms. Despite the hate, cryptocurrency seems like it could be a solution. But in practice, people won't take the time to set up a crypto wallet just to submit a bug report, and if crypto becomes popular, it may get regulations and middlemen like fiat (which add friction, e.g. chargebacks, KYC, revenue cuts).
However if more services use small fees to avoid spam it could work eventually. For instance, people could install a client that pays such fees automatically for trusted sites which refund for non-spam behavior.
jannes
This is probably something that the platform HackerOne should implement. It can't be addressed on the project level.
Aachen
Why?
I don't know if the link you posted answers the question, I get a blocked page ("You are visiting this page because we detected an unsupported browser"). You'd think a chromium-based browser would be supported but even that isn't good enough. I love open standards like html and http...
Edit: just noticed it goes to hackerone and not curl's own website. Of course they'd say curl can't solve payments on their own
latexr
> You could charge a fee and give the money back if the report is wrong but seems well-intentioned.
That idea was considered and rejected in the article:
> People mention charging a fee for the right to submit a security vulnerability (that could be paid back if a proper report). That would probably slow them down significantly sure, but it seems like a rather hostile way for an Open Source project that aims to be as open and available as possible. Not to mention that we don’t have any current infrastructure setup for this – and neither does HackerOne. And managing money is painful.
anthonyryan1
As the only developer maintaining a big bug bounty program, I believe they are all trending downward.
I've recently cut bounties to zero for all but the most severe issues, hoping to refocus the program on rewarding interesting findings instead of the low value reports.
So far it's done nothing to improve the situation, because nobody appears to read the rewards information before emailing. I think reading scope/rewards takes too much time per company for these low value reports.
I think that speaks volumes about how much time goes into the actual discoveries.
Open to suggestions to improve the signal-to-noise ratio from anyone who's made notable improvements to a bug bounty program.
Aachen
Similarly from a hacker's point of view, I also think vulnerability reporting is in a downwards spiral. Particularly the ones organised through a platform like this just aren't reaching the right people. It used to be pgp email to whoever needs to know of it and that worked great. I have no idea if it still would today for you guys, but from my point of view it's the only reliable way to reach a human who cares about the product and not someone whose job it is to refuse bounties. I don't want bounties, I've got a day job as security consultant for that, I'm just reporting what I stumble across. Chocolate and handwritten notes are nice, but primarily I want developers and sysadmins to fix their damn software
xg15
Putting on my tinfoil hat, I wonder if some of that slop might be coming from actual black-hat groups or state actors - who have an interest in making it harder to find and close real exploits.
Those people wouldn't care about the bounty, overwhelming the system would be the point.
silvestrov
> charging a fee [...] rather hostile way for an Open Source project that aims to be as open and available as possible
The most hostile is Apple, where you cannot expect any kind of feedback on bug reports. You are really lucky if you get any at all.
Getting good feedback is the most valuable thing ever. I wouldn't mind paying $5/year to make reports if I knew I would get feedback.
latexr
> You are really lucky if you get any kind of feedback from Apple.
Hard disagree. When you get feedback from Apple, it’s more often than not a waste of time. You are lucky when you get no feedback and the issue is fixed.
omnicognate
This is because Apple software is perfect by definition. Any perceived bug is an example of someone failing to use the software correctly. Bug reports are records of user incompetence, whose only purpose is to be ritually mocked in morale-enhancing genius confirmation sessions.
IsTom
You could require that submissions include an expletive or anything else that LLMs are sanitized to not produce. With how lazy these people are that ought to filter out at least some of them.
xg15
They are lazy up until they lose money if they don't do something. So if this was the only way to submit the reports, they'll find a way to prompt-hack the LLM to produce the expletive.
...or, just add it to the generated text themselves.
ChrisMarshallNY
> Maybe we need to drop the monetary reward?
That would likely fix some of it, but I suspect that you'd still get a lot, anyway, because people program their crawlers to hit everything, regardless of their relevance. Doesn't cost anything more, so why not? Every little hit adds to the coffers.
squigz
> Doesn't cost anything more, so why not? Every little hit adds to the coffers.
Uhh... How does it not cost more to hit everything vs specific areas? Especially when you consider the actual payout rate for such approaches, which cannot possibly be very high - every little hit does not add to the coffers, which means you have to be more selective about what you try.
ChrisMarshallNY
Spammers and scammers have been running “scattershot” campaigns for decades. Works well for them, I guess, as they still do it.
AI just allows them to be more effective.
yayitswei
Make it cost money to submit.
bla3
The "Possible routes forward" section in the linked post mentions this suggestion, and why the author doesn't love it.
cjs_ac
... and use the proceeds to increase the bounties paid to genuine bug reports.
komali2
"Submit deposit." They get the money back in all cases where the bug is determined not to be AI slop, including it not being a real bug, user error, etc. Otherwise, deposit gone.
caioluders
Make a private program with monetary rewards and a public program without. Invite only verified researchers.
spydum
Right? I thought the value of these vuln programs like hackerone and bugbounty would be that you could use the submitter's reputation to filter the noise. Don't want to accept low-quality submissions from new or low-experience reporters? Turn the knob up.
null
For all the discussions about the slopification of the internet, the human toll on open source maintainers isn’t really talked about. It's one thing to get flooded with bad reports; it's another to have to mentally filter AI-generated submissions designed to "sound correct" but offer no real value. Totally agree with the author mentioning the emotional toll it takes to deal with these mind-numbing stupidities.