Open source has a growing problem with LLM-generated issues
9 comments
November 10, 2025
lofties
Sidenote, but I love that in a GitHub issue discussing banning the use of LLMs, the GitHub interface asks if there's anything I'd like to fix with Copilot.
vineyardmike
I've seen an uptick in LLM-generated bug reports from coworkers. An employee of my company (but not someone I work with regularly) used one of the CLI LLMs to search through logs for errors, and then automatically cut (hundreds!) of bugs to (sometimes) the correct teams. Turns out it was the result of some manager's mandate to "try integrating AI into our workflow". The resulting email was probably the least professional communication I've ever sent, but the message was received.
The only solution I can see is a hard-no policy. If I think a bug is AI-generated, either by its content or by the reporter's reputation, I close it without any investigation. If you want it re-opened, you'll need to prove in person that it's genuine, with an educated, good-faith approach that involves independent efforts to debug.
> "If you put your name on AI slop once, I'll assume anything with your name on it is (ignorable) slop, so consider if that is professionally advantageous".
CGamesPlay
I make a lot of drive-by contributions, and I use AI coding tools. I submitted my first PR that is a cross between those two recently. It's somewhere between "vibe-coded" and "vibe-engineered", where I definitely read the resulting code, had the agent make multiple revisions, and deployed the result on my own infrastructure before submitting a PR. In the PR I clearly stated that it was done by a coding agent.
I can't imagine any policy against LLM code allowing this sort of thing, but I also imagine that if I didn't say "this was made by a coding agent", no one would ever know. So, should I just stop contributing, or start lying?
Blackthorn
They don't want your contribution, so don't disrespect them by trying to make it.
TingPing
I think it’s disrespectful of others to throw generated code their way. They become responsible for it and often donate their time.
dropbox_miner
There seems to be a bot that routinely creates huge LLM-generated issues on the containerd GitHub: https://github.com/containerd/containerd/issues/12496
And honestly, it's becoming annoying.
dropbox_miner
Curious to know if others are seeing a similar uptick in AI slop in issues or PRs for projects they are maintaining. If yes, how are you dealing with this?
Some of the software that I maintain is critical to the container ecosystem, and I'm an extremely paranoid developer who starts investigating any GitHub issue within a few minutes of it opening. Now, some of these AI slop issues have a way of "gaslighting" me into thinking that certain code paths are problematic when they actually are not. Lately, AI slop in issues and PRs has been taking up a lot of my time.
I think everything has a growing problem with LLM/AI-generated content: emails, blog posts, news articles, research papers, grant applications, business proposals, music, art, pretty much everything you can think of.
There's already more human-produced content in the world than anyone could ever hope to consume; we don't need more from AI.