Your vibe coded slop PR is not welcome
32 comments
October 28, 2025
jamesbelchamber
> “I am closing this but this is interesting, head over to our forum/issues to discuss”
I really like the way Discourse uses "levels" to slowly open up features as new people interact with the community, and I wonder if GitHub could build in a way of allowing people to only be able to open PRs after a certain amount of interaction, too (for example, you can only raise a large PR if you have spent enough time raising small PRs).
This could of course be abused and/or lead to unintended restrictions (e.g. a small change in lots of places), but that's also true of Discourse and it seems to work pretty well regardless.
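(A minimal sketch of what such a trust gate could look like as a repo-side check, assuming the public GitHub REST API. GitHub ships nothing like this built in; the owner/repo names and both thresholds below are purely illustrative.)

```python
# Sketch: reject oversized PRs from contributors with little prior history,
# loosely mirroring Discourse-style trust levels. Assumes the public GitHub
# REST API; OWNER/REPO and the thresholds are hypothetical.
import os
import requests

OWNER, REPO = "example-org", "example-repo"  # hypothetical project
API = "https://api.github.com"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

MAX_LINES_FOR_NEWCOMERS = 200   # "large PR" cutoff (illustrative)
TRUSTED_AFTER_MERGED_PRS = 3    # merged PRs needed before large PRs are allowed

def merged_pr_count(author: str) -> int:
    """Count the author's previously merged PRs in this repo."""
    q = f"repo:{OWNER}/{REPO} type:pr is:merged author:{author}"
    r = requests.get(f"{API}/search/issues", params={"q": q}, headers=HEADERS)
    r.raise_for_status()
    return r.json()["total_count"]

def passes_trust_gate(pr_number: int) -> bool:
    """Small PRs always pass; large PRs require earned trust."""
    r = requests.get(f"{API}/repos/{OWNER}/{REPO}/pulls/{pr_number}",
                     headers=HEADERS)
    r.raise_for_status()
    pr = r.json()
    size = pr["additions"] + pr["deletions"]
    if size <= MAX_LINES_FOR_NEWCOMERS:
        return True
    return merged_pr_count(pr["user"]["login"]) >= TRUSTED_AFTER_MERGED_PRS
```

The thresholds would need tuning per project, and as noted above a determined contributor could still game it with many small changes; the point is only that "earned" PR privileges are mechanically checkable.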
softwaredoug
Anyone else feel like we're cresting the LLM coding hype curve?
Like a recognition that there's value there, but we're passing the frothing at the mouth stage of replacing all software engineers?
alwa
It feels that way to me, too—starting to feel closer to maturity. Like Mr. Saffron here, saying “go ham with the AI for prototyping, just communicate that as a demo/branch/video instead of a PR.”
It feels like people and projects are moving from a pure “get that slop out of here” attitude toward more nuance, more confidence articulating how to integrate the valuable stuff while excluding the lazy stuff.
andai
> That said, there is a trend among many developers of banning AI. Some go so far as to say "AI not welcome here" find another project.
> This feels extremely counterproductive and fundamentally unenforceable to me. Much of the code AI generates is indistinguishable from human code anyway. You can usually tell a prototype that is pretending to be a human PR, but a real PR a human makes with AI assistance can be indistinguishable.
Isn't that exactly the point? Doesn't this achieve exactly what the whole article is arguing for?
A hard "No AI" rule filters out all the slop, and all the actually good stuff (which may or may not have been made with AI) makes it in.
When the AI assisted code is indistinguishable from human code, that's mission accomplished, yeah?
Although I can see two counterarguments. First, it might just be Covert Slop. Slop that goes under the radar.
And second, there might be a lot of baby thrown out with that bathwater. Stuff that was made in conjunction with AI, contains a lot of "obviously AI", but a human did indeed put in the work to review it.
I guess the problem is there's no way of knowing that? Is there a Proof of Work for code review? (And a proof of competence, to boot?)
jrochkind1
Well, but why say "No AI" when you're accepting that people will lie undetectably and that you're fine with the lying? Why not instead say "Only AI when you spend the time to turn it into a real, reviewed PR, which looks like X, Y, and Z", giving some actual tips on how to use AI acceptably? Which is what OP suggests.
darkwater
The title doesn't do justice to the content.
I really liked the paragraph about LLMs being "alien intelligence":
> Many engineers I know fall into 2 camps, either the camp that find the new class of LLMs intelligent, groundbreaking and shockingly good. In the other camp are engineers that think of all LLM generated content as “the emperor’s new clothes”, the code they generate is “naked”, fundamentally flawed and poison.
> I like to think of the new systems as neither. I like to think about the new class of intelligence as “Alien Intelligence”. It is both shockingly good and shockingly terrible at the exact same time.
> Framing LLMs as “Super competent interns” or some other type of human analogy is incorrect. These systems are aliens and the sooner we accept this the sooner we will be able to navigate the complexity that injecting alien intelligence into our engineering process leads to.
It's an analogy I find compelling. The way they produce code and the way you have to interact with them really does feel "alien", and when you start humanizing them you get emotional when interacting with them, which isn't right.
I mean, I do get emotional and frustrated even when good old deterministic programs misbehave and there's some bug to find and squash or work around, but LLM interactions can take that to a whole new level. So, we need to remember they are "alien".
andai
Some movements expected alien intelligence to arrive in the early 2020s. They might have been on the mark after all ;)
Bengalilol
Shouldn't there be guidelines for open source projects where it is clearly stipulated that code submitted for review must follow the project's code format and conventions?
deadbunny
As if people read guidelines. Sure, they're good to have so you can point to them when people violate them, but people (in general) will not read them before contributing.
c0wb0yc0d3r
This is the thought I always have whenever I see coding standards mentioned: not only should there be standards, they should be enforced by tooling. (A sketch of what that enforcement might look like follows below.)
Now, that being said, a person should feel free to do what they want with their own code. It's somewhat tough to justify the work of setting up that infrastructure on small projects, but AI PRs aren't likely to be a big issue for small projects either.
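(For what "enforced by tooling" might look like concretely, here is a minimal CI-gate sketch. The choice of black and ruff assumes a Python project and is purely illustrative; the point is that the machine, not a human reviewer, rejects convention violations.)

```python
# Sketch: fail CI when submitted code ignores the project's conventions.
# Assumes a Python project using black and ruff; substitute the project's
# own formatter and linter.
import subprocess
import sys

CHECKS = [
    ["black", "--check", "."],   # formatting must match, byte for byte
    ["ruff", "check", "."],      # lint rules encode the house conventions
]

def main() -> int:
    failed = False
    for cmd in CHECKS:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"convention check failed: {' '.join(cmd)}", file=sys.stderr)
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```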
portaouflop
In a perfect world people would read and understand contribution guidelines before opening a PR or issue.
Alas…
mattlondon
The way we do it is to use AI to review the PR before a human reviewer sees it. Obvious errors, inconsistent patterns, weirdness, etc. are flagged before the PR goes any further. "Vibe coded" slop usually gets caught, but "vibe engineered" surgical changes that adhere to common patterns and standards, and have tests etc., get seen by a real live human for their normal review.
It's not rocket science.
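(A rough sketch of that kind of pre-review gate, assuming the official openai Python client. The model name and prompt are placeholders, and a real setup would post the findings back to the PR rather than print them.)

```python
# Sketch: ask an LLM to triage a PR diff before a human reviewer sees it.
# Assumes the official openai Python client; the model name and the prompt
# wording are placeholders, not a recommendation.
import subprocess
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_diff(base: str = "origin/main") -> str:
    """Return the model's flags for obvious errors and inconsistent patterns."""
    diff = subprocess.run(
        ["git", "diff", base], capture_output=True, text=True, check=True
    ).stdout
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model
        messages=[
            {"role": "system", "content": (
                "You are a strict code reviewer. Flag obvious errors, "
                "patterns inconsistent with the surrounding code, missing "
                "tests, and anything that looks machine-generated but "
                "unreviewed. Be terse."
            )},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(triage_diff())
```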
jmpeax
[flagged]
colesantiago
I wouldn't call it "vibe coded slop"; the models are getting way better, and I can work with my engineers a lot faster.
I am the founder and a product person, so it helps in reducing the number of engineers needed at my business. We are currently doing $2.5M ARR and the engineers aren't complaining; in fact, it's the opposite: they are actually more productive.
We still prioritize architecture planning, testing, and having a CI, but code is getting less and less important in our team, so we don't need many engineers.
pards
> code is getting less and less important in our team, so we don't need many engineers.
That's a bit reductive. Programmers write code; engineers build systems.
I'd argue that you still need engineers for architecture, system design, protocol design, API design, tech stack evaluation & selection, rollout strategies, etc., and most of this has to be unambiguously documented in a format LLMs can understand.
While I agree that the value of code has decreased now that we can generate and regenerate code from specs, we still need a substantial number of experienced engineers to curate all the specs and inputs that we feed into LLMs.
didericis
> we can generate and regenerate code from specs
We can (unreliably) write more code in natural English now. At its core it’s the same thing: detailed instructions telling the computer what it should do.
HPsquared
Maybe the code itself is less important now, relative to the specification.
hansmayer
> and a product person
Tells me all I need to know about your ability for sound judgement on technical topics right there.
wycy
> the engineers aren't complaining, in fact it is the opposite, they are actually more productive.
More productive isn't the opposite of complaining.
colesantiago
I don't hear any either way.
blitzar
If an engineer complains in the woods and nobody is around to hear them, did they even complain at all?
theultdev
> reducing the number of needed engineers at my business
> code is getting less and less important in our team
> the engineers aren't complaining
Lays off engineers for AI trained off of other engineers' code, and says code is less important and the engineers aren't complaining.
colesantiago
Um, yes?
They can focus on other things that are more impactful in the business rather than just slinging code all day, they can actually look at design and the product!
Maximum headcount for engineers is around 7, no more than that now. I used to have 20, but with AI we don't need that many for our size.
theultdev
Yeah I'm sure they aren't complaining because you'll just lay them off like the others.
I don't see how you could think 7 engineers would love the workload of 20 engineers, extra tooling or not.
Have fun with the tech debt in a few years.
lawn
> so it helps in reducing the number of needed engineers at my business
> the engineers aren't complaining
You're missing a piece of the puzzle here, Mr business person.
colesantiago
I mean, our MRR and ARR are growing, so we must be doing something right.
oompydoompy74
Did you read the full article?
colesantiago
Of course I did, however:
> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".
Toby1VC
Nice Jewish word, mostly meant to mock. Why would I care what a plugin I don't even see in use has to say to my face (since I had to read this with all the interpretive potential and receptiveness available)? The same kind of inserted judgment that lingers, similar to "Yes, I will judge you if you use AI".
mattlondon
Which word? Slop? I think it is from medieval Old English, if that is the word you are referring to.
softskunk
There’s nothing wrong with judgment. Judging someone’s character based on whether they use generative “AI” is a valid practice. You may not like being judged, but that’s another matter entirely.
The essay is way more interesting than the title, which doesn't actually capture it.