Launch HN: mrge.io (YC X25) – Cursor for code review
90 comments · April 15, 2025
pyfon
There are a few of these already. Is this a land grab play, i.e. with investment get the big accounts then all the compliance ticks then dominate?
AI or conventional bots for PRs are neat though. Where I work we have loads of them checking all sorts of criteria. Most are rules-based, e.g. someone from this list must review if this folder changes. Kinda annoying when getting the PR in, but overall great for quality control. We're using an LLM for commenting on potential issues too. (Sorry, I don't have any influence to help them consider yours.)
alexchantavy
Been using this for https://github.com/cartography-cncf/cartography and am very happy, thanks for building this.
Automated review tools like this are especially important for an open source project because you have to maintain a quality bar to keep yourself sane but if you're too picky then no one from the community will want to contribute. AI tools are like linters and have no feelings, so they will give the feedback that you as a reviewer may have been hesitant to give, and that's awesome.
Oh, and on the product itself, I think it's super cool that it comes up with rules on its own to check for based on conventions and patterns that you've enforced over time. E.g. we use it to make sure that all function calls that pull from an upstream API are decorated with our standard error handler.
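For concreteness, a rule like that boils down to a convention you could sketch roughly like this (the names below are made up for illustration, not our actual code):

    import functools
    import logging

    logger = logging.getLogger(__name__)

    def handle_upstream_errors(func):
        # Hypothetical project-standard error handler for upstream API calls.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                logger.exception("upstream call %s failed", func.__name__)
                raise
        return wrapper

    @handle_upstream_errors
    def get_cloud_assets(session):
        # The convention: anything pulling from an upstream API carries the
        # decorator, so a review rule can flag calls that are missing it.
        return session.get("/v1/assets").json()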
pomarie
Thanks for sharing that Alex! Definitely love having an AI be the strict reviewer so that the human doesn't have to
justanotheratom
This is an awesome direction. Few thoughts:
It would be awesome if the custom rules were generalized on the fly from ongoing reviewer conversations. Imagine two devs quibbling about line length in a PR, and in a future PR the AI reminds everyone of this convention.
Would this work seamlessly with AI Engineers like Devin? I imagine so.
This will be very handy for solo devs as well. Even those who don't use coding copilots could benefit from an AI reviewer, provided it doesn't waste their time.
Maybe multiple AI models could review the PR at the same time, and over time we promote the ones whose feedback is accepted more often.
allisonee
Appreciate the feedback! We currently auto-suggest custom rules based on your comment history (and .cursorrules); continuing to suggest rules from ongoing history is now on the roadmap thanks to your suggestion!
On working with Devin: Yes, right now we're focused on code review, so whatever AI IDE you use would work. In fact, it might even be better with autonomous tools like Devin since we focus on helping you (as a human) understand the code they've written faster.
Interesting idea on multiple AI models -- we were also separately toying with the idea of having different personas (security, code architecture); will keep this one in mind!
justanotheratom
personas sound great!
8organicbits
Line length isn't something I'd want reviewed in a PR. Typically I'd set up a linter with relevant limits and defer to that, ideally using pre-commit testing or directly in my IDE. Line length isn't an AI feature; it's largely a solved problem.
justanotheratom
bad example, sorry.
pomarie
These are all amazing ideas. We actually already see a lot of solo devs using mrge precisely because they want something to catch bugs before code goes live—they simply don't have another pair of eyes.
And I absolutely love your idea of having multiple AI models review PRs simultaneously. Benchmarking LLMs can be notoriously tricky, so a "wisdom of the crowds" approach across a large user base could genuinely help identify which models perform best for specific codebases or even languages. We could even imagine certain models emerging as specialists for particular types of issues.
Really appreciate these suggestions!
rushingcreek
I love this idea. We experimented with building an AI coding agent that we showed to a small set of users and the most common feedback was confusion over what exactly the agent did. And so, I think that something like this can solve that problem, especially as AI performs increasingly complicated edits.
eqvinox
Threw a random PR at it… of the 11 issues it flagged, only 1 was appropriate, and that one was also caught by pylint :(
(mixture of 400 lines of C and 100 lines of Python)
It also didn't flag the one SNAFU that really broke things (which to be fair wasn't caught by human review either, it showed in an ASAN fault in tests)
allisonee
sorry to hear that it didn't catch all the issues! if you downvote/upvote or reply directly to the bot comment with @mrge-io <feedback>, we can improve it for your team.
We take all of this into consideration when improving our AI, and your direct replies will fine-tune comments for your repository only.
eqvinox
That's good to know, but — assuming my sample of size 1 isn't a bad outlier, I should really try a few more — there's another problem: I don't think we'd be willing to sink time into tuning a currently-free subscription service that can be yanked at any time. And I'm in a position to say it is highly unlikely that we'd pay for the service.
(We already have problems with our human review being too superficial; we've recently come to a consensus that we're letting too much technical debt slip in, in the sense of unnoticed design problems.)
Now the funny part is that I'm talking about a FOSS project with nVidia involvement ;D
But also: this being a FOSS project, people have opened AI-generated PRs. Poor AI-generated PRs. This is indirectly hurting the prospects of your product (by reputation). Might I suggest adding an AI generated PR detector, if possible? (It's not in our guidelines yet but I expect we'll be prohibiting AI generated contributions soon.)
allisonee
totally get where you're coming from -- many big open source repos have also been using it for a while and have seen some false positives, but have generally felt that the overall quality was worth it. would love to continue having you try it out, but also understand that maintaining a FOSS project is a ton of work!
if you have specific feedback on the pr--feel free to email at contact@mrge.io and i'll take a look personally and see if we can adjust anything for your repo.
nice idea on the fully AI-generated PRs! something on our roadmap is to better highlight PRs or chunks that were likely auto-generated. stay tuned!
dimal
Looks interesting. I’m a bit confused about how it knows the codebase and the custom rules interface. I generally have coding standards docs in the repo. Can it simply be made aware of those docs instead of requiring me to maintain two sets of instructions (one written one for humans, and one in the mrge interface for AI)? I could imagine that without being highly aware of a team’s standards, the usefulness of its review would be pretty poor. Getting general “best practices” type stuff wouldn’t be helpful.
bryanlarsen
It looks like graphite.dev has pivoted into this space too. Which is annoying, because I'm interested in graphite.dev's core non-AI product. Which appears to be stagnating from my perspective -- they still don't have gitlab support after several years.
pomarie
Yeah, noticed that too—what's the core graphite.dev feature you're interested in? PR stacking, by chance?
If that's it, we actually support stacked PRs (currently in beta, via CLI and native integrations). My co-founder, Allis, used stacked PRs extensively at her previous company and loved it, so we've built it into our workflow too. It's definitely early-stage, but already quite useful.
Docs if you're curious: https://docs.mrge.io/overview
bryanlarsen
Yes, stacked PRs and a rebase-only flow. Unfortunately we're a GitLab shop. Today's task is a particularly hairy review; it's too bad I can't try you out.
pomarie
Ah, totally get it—that’s frustrating. GitLab support is on our roadmap, so hopefully we can help you out soon.
In the meantime, good luck with that hairy review—hope it goes smoothly! If you're open to it, I'd love to reach out directly once GitLab support is ready.
catlover76
[dead]
justinl33
This is good. PR review has been completely neglected basically from day 0.
Did some self-research on Reddit about why (https://www.reddit.com/r/github/comments/1gtxqy6/comment/lxv...)
thuanao
It's been useful at our company. My only gripe is I'd like to run it locally. I don't want the feedback after I open a PR.
pomarie
Super useful, thanks for the feedback! We're definitely thinking of building something that would run the reviews in your IDE directly, before you push the code.
dyeje
I've been evaluating AI code review vendors for my org. We've trialed a couple so far. For me, taking the workflow out of GitHub is a deal breaker. I'm trying to speed things along, not upend my whole team's workflow. What's your take on that?
pomarie
Yeah, that's a totally legit point!
The good news with mrge is that it works just like any other AI code reviewer out there (CodeRabbit, Copilot for PRs, etc.). All AI-generated review comments sync directly back to GitHub, and interacting with the platform itself is entirely optional. In fact, several people in this thread mentioned they switched from Copilot or CodeRabbit because they found mrge's reviews more accurate.
If you prefer, you never need to leave GitHub at all.
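For anyone curious what "sync back to GitHub" looks like mechanically, a line-level PR comment is a single REST call. Here's a rough, simplified sketch with placeholder values (not our production code):

    import os
    import requests

    # Placeholders -- substitute your own repo, PR number, and token.
    OWNER, REPO, PR_NUMBER = "acme", "widgets", 42
    TOKEN = os.environ["GITHUB_TOKEN"]

    def post_review_comment(commit_sha, path, line, body):
        # Attach a comment to a specific changed line of a pull request.
        url = (f"https://api.github.com/repos/{OWNER}/{REPO}"
               f"/pulls/{PR_NUMBER}/comments")
        resp = requests.post(
            url,
            headers={
                "Authorization": f"Bearer {TOKEN}",
                "Accept": "application/vnd.github+json",
            },
            json={
                "commit_id": commit_sha,   # head commit the review ran against
                "path": path,              # file within the repo
                "line": line,              # line on the new side of the diff
                "side": "RIGHT",
                "body": body,              # the reviewer's comment text
            },
        )
        resp.raise_for_status()
        return resp.json()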
berrazuriz
maybe blar.io works. Worth a try
frabona
This is super well done - love the approach with cloud-based LSP and the focus on making reviews actually faster for humans.
pomarie
Thanks for the encouragement!
LinearEntropy
The call to action button says "Get Started for Free", while the pricing page lists $20/month.
Clicking the get started button immediately wants me to sign up with github.
Could you explain on the pricing page (or just to me) what the 'free' is? I'm assuming a trial of 1 month or 1 PR?
I'm somewhat hesitant to add any AI tooling to my workflows, but this is one of the use cases that makes sense to me. I'm definitely interested in trying it out; I just think it's odd that this isn't explained anywhere I could find.
allisonee
thanks for bringing this up! we're currently free (unlimited PRs) and will soon bill $20-$30 per active user (anyone who has committed to a PR) per month.
We'll try to make this clearer!
Hey HN, we’re building mrge (https://www.mrge.io/home), an AI code review platform to help teams merge code faster with fewer bugs. Our early users include Better Auth, Cal.com, and n8n—teams that handle a lot of PRs every day.
Here’s a demo video: https://www.youtube.com/watch?v=pglEoiv0BgY
We (Allis and Paul) are engineers who faced this problem when we worked together at our last startup. Code review quickly became our biggest bottleneck—especially as we started using AI to code more. We had more PRs to review, subtle AI-written bugs slipped through unnoticed, and we (humans) increasingly found ourselves rubber-stamping PRs without deeply understanding the changes.
We’re building mrge to help solve that. Here’s how it works:
1. Connect your GitHub repo via our GitHub app in two clicks (and optionally download our desktop app). GitLab support is on the roadmap!
2. AI Review: When you open a PR, our AI reviews your changes directly in an ephemeral and secure container. It has context into not just that PR, but your whole codebase, so it can pick up patterns and leave comments directly on changed lines. Once the review is done, the sandbox is torn down and your code deleted – we don’t store it for obvious reasons.
3. Human-friendly review workflow: Jump into our web app (it’s like Linear but for PRs). Changes are grouped logically (not alphabetically), with important diffs highlighted, visualized, and ready for faster human review.
The AI reviewer works a bit like Cursor in the sense that it navigates your codebase using the same tools a developer would—like jumping to definitions or grepping through code.
But a big challenge was that, unlike Cursor, mrge doesn’t run in your local IDE or editor. We had to recreate something similar entirely in the cloud.
Whenever you open a PR, mrge clones your repository and checks out your branch in a secure and isolated temporary sandbox. We provision this sandbox with shell access and a Language Server Protocol (LSP) server. The AI reviewer then reviews your code, navigating the codebase exactly as a human reviewer would—using shell commands and common editor features like "go to definition" or "find references". When the review finishes, we immediately tear down the sandbox and delete the code—we don’t want to permanently store it for obvious reasons.
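To make the LSP part concrete, here's a rough sketch of the kind of plumbing involved: spawn a language server over stdio and ask it for a definition via JSON-RPC. The server, paths, and positions below are placeholders, and this is a simplified illustration rather than our production setup:

    import json
    import subprocess

    # Spawn a language server over stdio (pyright-langserver is just an
    # example; any LSP-speaking server is driven the same way).
    proc = subprocess.Popen(
        ["pyright-langserver", "--stdio"],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    )

    def send(method, params, msg_id=None):
        # JSON-RPC messages are framed with a Content-Length header.
        msg = {"jsonrpc": "2.0", "method": method, "params": params}
        if msg_id is not None:
            msg["id"] = msg_id
        body = json.dumps(msg).encode()
        proc.stdin.write(f"Content-Length: {len(body)}\r\n\r\n".encode() + body)
        proc.stdin.flush()

    def recv():
        # Read one framed message back from the server.
        headers = b""
        while not headers.endswith(b"\r\n\r\n"):
            headers += proc.stdout.read(1)
        length = next(int(h.split(b":")[1]) for h in headers.split(b"\r\n")
                      if h.lower().startswith(b"content-length"))
        return json.loads(proc.stdout.read(length))

    def response_for(msg_id):
        # Servers interleave notifications (logs, diagnostics) with replies,
        # so skip messages until the one matching our request id arrives.
        while True:
            msg = recv()
            if msg.get("id") == msg_id:
                return msg

    send("initialize", {"processId": None, "capabilities": {},
                        "rootUri": "file:///workspace/repo"}, msg_id=1)
    response_for(1)
    send("initialized", {})
    send("textDocument/definition",
         {"textDocument": {"uri": "file:///workspace/repo/app.py"},
          "position": {"line": 10, "character": 4}}, msg_id=2)
    print(response_for(2))   # -> location(s) where the symbol is defined

The reviewer combines calls like this with plain shell commands (grep, ls, reading files) to build up context the same way a human poking around the repo would.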
We know cloud-based review isn't for everyone, especially if security or compliance requires local deployments. But a cloud approach lets us run SOTA AI models without local GPU setups, and provide a consistent, single AI review per PR for an entire team.
The platform itself focuses entirely on making human code reviews easier. A big inspiration came from productivity-focused apps like Linear or Superhuman, products that show just how much thoughtful design can impact everyday workflows. We wanted to bring that same feeling into code review.
That’s one reason we built a desktop app. It allowed us to deliver a more polished experience, complete with keyboard shortcuts and a snappy interface.
Beyond performance, the main thing we care about is making it easier for humans to read and understand code. For example, traditional review tools sort changed files alphabetically—which forces reviewers to figure out the order in which they should review changes. In mrge, files are automatically grouped and ordered based on logical connections, letting reviewers immediately jump in.
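As a deliberately naive illustration of the idea, here's a tiny sketch that groups changed files by top-level directory and surfaces the biggest group first; our actual grouping is based on logical connections between files, but this conveys the principle:

    from collections import defaultdict
    from pathlib import PurePosixPath

    def group_changed_files(paths):
        # Group changed files by top-level directory instead of presenting
        # one flat, alphabetically sorted list.
        groups = defaultdict(list)
        for p in paths:
            parts = PurePosixPath(p).parts
            groups[parts[0] if len(parts) > 1 else "(repo root)"].append(p)
        # Show the largest group first so reviewers start with the core of
        # the change rather than scattered one-line fixups.
        return sorted(groups.items(), key=lambda kv: -len(kv[1]))

    changed = ["api/routes.py", "README.md", "api/models.py",
               "web/src/App.tsx", "api/tests/test_routes.py"]
    for group, files in group_changed_files(changed):
        print(group, files)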
We think the future of coding isn’t about AI replacing humans—it’s about giving us better tools to quickly understand high-level changes, abstracting more and more of the code itself. As code volume continues to increase, this shift is going to become increasingly important.
You can sign up now (https://www.mrge.io/home). mrge is currently free while we're still early. Our plan for later is to charge closed-source projects on a per-seat basis, and to continue giving mrge away for free to open source ones.
We’re very actively building and would love your honest feedback!