Show HN: GitMCP is an automatic MCP server for every GitHub repo
46 comments
April 3, 2025
broodbucket
As someone who is obviously not the target audience, I feel like literally anything on this page that could point me to an explanation of what MCP is would be nice, while we're talking about what the landing page doesn't tell you. Even just one of the MCP mentions being a link to modelcontextprotocol.io would be fine.
Or maybe I'm so out of the loop it's as obvious as "git" is, I dunno.
fragmede
It’s fair to be curious, but at some point it’s also reasonable to expect people are capable of using Google to look up unfamiliar terms. I'm not gatekeeping—just, like, put in a bit of effort?
Threads like this work better when they can go deeper without rehashing the basics every time.
sdesol
After chatting with Sonnet and Gemini about the API directory, I put your questions to them. The first answer is quite informative.
1. "Simply change the domain from github.com or github.io to gitmcp.io and get instant AI context for any GitHub repository."
What does this mean? This is the core user interaction. You take a standard GitHub URL (e.g., github.com/owner/repo or owner.github.io/repo) and replace the domain with gitmcp.io (e.g., gitmcp.io/owner/repo or owner.gitmcp.io/repo). This new URL points to this tool, which then attempts to provide AI-related functionality for that repository.
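In code, that substitution is just a hostname rewrite. A minimal TypeScript sketch (illustrative only, not necessarily how the project itself implements it):

    // Rewrite a GitHub URL to its gitmcp.io equivalent, per the rule quoted above.
    function toGitMcpUrl(githubUrl: string): string {
      const url = new URL(githubUrl);
      if (url.hostname === "github.com") {
        // github.com/owner/repo -> gitmcp.io/owner/repo
        return `https://gitmcp.io${url.pathname}`;
      }
      const pages = url.hostname.match(/^([^.]+)\.github\.io$/);
      if (pages) {
        // owner.github.io/repo -> owner.gitmcp.io/repo
        return `https://${pages[1]}.gitmcp.io${url.pathname}`;
      }
      throw new Error("Not a github.com or github.io URL");
    }

    console.log(toGitMcpUrl("https://github.com/idosal/git-mcp"));
    // -> https://gitmcp.io/idosal/git-mcp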
How does it work?
- URL Parsing: The tool parses the gitmcp.io URL to extract the GitHub repository's owner and name.
- Documentation Retrieval: It fetches the documentation from the specified GitHub repository (primarily looking for llms.txt or README.md).
- Semantic Indexing: It processes the documentation, chunks it, and creates vector embeddings for each chunk.
- MCP Server: It exposes this indexed documentation through an MCP server.
How can I understand how it works? By examining the code, we've determined that it:
- Fetches documentation from GitHub.
- Chunks the documentation.
- Creates vector embeddings.
- Stores the embeddings in Upstash Vector.
- Provides a search interface via MCP.
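A rough TypeScript sketch of that pipeline, just to make the steps concrete. Assumptions: the @upstash/vector client, docs fetched from raw.githubusercontent.com, and Upstash's hosted embeddings for brevity (the limitations noted further down say the project uses a custom embedding function, so treat the embedding step as a stand-in). None of this is GitMCP's actual code.

    import { Index } from "@upstash/vector";

    // Hypothetical sketch, not the project's real implementation.
    const index = new Index({
      url: process.env.UPSTASH_VECTOR_REST_URL!,
      token: process.env.UPSTASH_VECTOR_REST_TOKEN!,
    });

    // 1. Fetch documentation (simplified: llms.txt, then README.md, from the default branch).
    async function fetchDocs(owner: string, repo: string): Promise<string> {
      for (const file of ["llms.txt", "README.md"]) {
        const res = await fetch(`https://raw.githubusercontent.com/${owner}/${repo}/HEAD/${file}`);
        if (res.ok) return res.text();
      }
      throw new Error("No llms.txt or README.md found");
    }

    // 2. Chunk the documentation (naive fixed-size chunks).
    function chunkText(text: string, maxChars = 2000): string[] {
      const chunks: string[] = [];
      for (let i = 0; i < text.length; i += maxChars) chunks.push(text.slice(i, i + maxChars));
      return chunks;
    }

    // 3 + 4. Embed and store (here Upstash embeds the raw text server-side).
    async function indexRepo(owner: string, repo: string): Promise<void> {
      const chunks = chunkText(await fetchDocs(owner, repo));
      await index.upsert(
        chunks.map((text, i) => ({
          id: `${owner}/${repo}#${i}`,
          data: text,
          metadata: { owner, repo, text },
        }))
      );
    }

    // 5. The search that the MCP server's tool would expose.
    async function searchDocs(query: string) {
      return index.query({ data: query, topK: 5, includeMetadata: true });
    }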
Requirements, limitations, constraints?
- Requires a GitHub repository with either an llms.txt or README.md file.
- Limited to those two file types. It won't process other documentation formats.
- Relies on a custom embedding function, which may not be as accurate as state-of-the-art embedding models.
- Uses Upstash Vector and Redis, so it's dependent on those services.
- Assumes the client (your AI assistant) understands the MCP protocol.
Full conversation below:
https://app.gitsense.com/?chat=64b98b48-5498-428e-b644-c6297...
Edit:
Gemini Pro does a good job of summarizing the code. I did feed it more files though since Pro is free.
https://app.gitsense.com/?chat=978d5943-f082-479d-90f4-f746d...
kiitos
I appreciate that! Now maybe they could update the readme accordingly! ;)
john2x
Is this the new LMGTFY?
ianpurton
Some context.
1. Some LLMs support function calling. That means they are given a list of tools with descriptions of those tools.
2. Rather than answering your question in one go, the LLM can say it wants to call a function.
3. Your client (developer tool etc) will call that function and pass the results to the LLM.
4. The LLM will continue and either complete the conversation or call more tools (functions).
5. MCP is gaining traction as a standard way of adding tools/functions to LLMs.
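As pseudocode-ish TypeScript, the loop in steps 2-4 looks roughly like this (callModel and runTool are hypothetical placeholders, not any particular SDK):

    type ToolCall = { name: string; args: Record<string, unknown> };
    type ModelReply = { text?: string; toolCalls: ToolCall[] };

    // Placeholders for a real LLM API and your own tool implementations.
    declare function callModel(messages: unknown[], tools: unknown[]): Promise<ModelReply>;
    declare function runTool(call: ToolCall): Promise<string>;

    async function chat(userMessage: string, tools: unknown[]): Promise<string> {
      const messages: unknown[] = [{ role: "user", content: userMessage }];
      while (true) {
        const reply = await callModel(messages, tools);   // step 2: model may ask for a tool
        if (reply.toolCalls.length === 0) {
          return reply.text ?? "";                        // step 4: conversation is complete
        }
        for (const call of reply.toolCalls) {
          const result = await runTool(call);             // step 3: client runs the function
          messages.push({ role: "tool", name: call.name, content: result });
        }
      }
    }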
GitMCP
I haven't looked too deeply but I can guess.
1. It will have a bunch of API endpoints that the LLM can call to look at your code, probably stuff like get_file, get_folder, etc.
2. When you ask the LLM, for example, "Tell me how to add observability to the code", it can make calls to get the code and start to look at it.
3. The LLM can keep on making calls to GitMCP until it has enough context to answer the question.
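A hypothetical sketch of what such a tool could look like on the server side, using the MCP TypeScript SDK. The tool name, parameters, and stdio transport here are my guesses for illustration, not GitMCP's actual ones:

    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { z } from "zod";

    const server = new McpServer({ name: "repo-docs-sketch", version: "0.0.1" });

    // Guessed "get_file" tool: the LLM calls it with a path and gets the file contents back.
    server.tool(
      "get_file",
      { owner: z.string(), repo: z.string(), path: z.string() },
      async ({ owner, repo, path }) => {
        const res = await fetch(`https://raw.githubusercontent.com/${owner}/${repo}/HEAD/${path}`);
        return { content: [{ type: "text", text: await res.text() }] };
      }
    );

    // Stdio transport for brevity; GitMCP itself is hosted and served remotely.
    await server.connect(new StdioServerTransport());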
Hope this helps.
liadyo
We built an open source remote MCP server that can automatically serve documentation from every GitHub project. Simply replace github.com with gitmcp.io in the repo URL and you get a remote MCP server that serves and searches the documentation from that repo (llms.txt, llms-full.txt, README.md, etc.). Works with github.io as well. Repo here: https://github.com/idosal/git-mcp
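If you want to poke at one of these endpoints from code rather than from an IDE config, something like this should work (a quick sketch using the MCP TypeScript SDK's SSE client; the URL just follows the pattern above):

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

    // Connect to the remote server for one repo and list the tools it exposes.
    const client = new Client({ name: "gitmcp-probe", version: "0.0.1" });
    await client.connect(new SSEClientTransport(new URL("https://gitmcp.io/idosal/git-mcp")));
    console.log(await client.listTools());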
nlawalker
>searches the documentation from this repo (llms.txt, llms-full.txt, readme.md, etc)
What does etc include? Does this operate on a single content file from the specified GitHub repo?
the_arun
But why would we need an MCP server for a github repo? Sorry, I am unable to understand the use case.
scosman
It’s one of my favourite MCP use cases. I have cloned projects and used the file browser MCP for this, but this looks great.
It allows you to ask questions about how an entire system works. For example, the other day: "This GitHub action requires the binary X. Is it in the repo, downloaded, built on deploy, or something else?" Or: "What tools does this repo use to implement full text search? Give me an overview."
liadyo
It's very helpful when working with a specific technology/library, and you want to access the project's llms.txt and readme, search the docs, etc. from within the IDE using the MCP client. Check it out, for example, with the langgraph docs: https://gitmcp.io/#github-pages-demo It really improves the development experience.
qainsights
Same here. Can't we just give the repo URL in Cursor/Windsurf to use the search tool to get the context? :thinking:
adusal
As an example, some repositories have huge documents (in some cases a few MBs) that agents won't process today. GitMCP offers semantic search out of the box.
jwblackwell
Yeah, this is one fundamental reason I don't see MCP taking off. The few real use cases will just be built natively into the tools.
hobofan
Yes, they could be, but then you rely 100% on the client tools doing a good job of it, which they aren't always good at, and they also have to reinvent the wheel on what are becoming essentially commodity features.
E.g. one of the biggest annoyances for me with Cursor was external documentation indexing, where you hand it the website of a specific library and it crawls and indexes that. That feature has been completely broken for me (always aborting with a crawl error). Now with an MCP server, I can just use one that specializes in this kind of documentation indexing, where I also have the ability to tinker with it if it breaks, and then I can use it in all my agentic coding tools that need it (which also lets me move more work to background/non-IDE workflows).
cruffle_duffle
MCP servers present a structured interface for accessing something and (often) return a structured result.
You tell the LLM to visit your GitHub repository via HTTP and it gets back… unstructured, unfocused content not designed with an LLM's context window in mind.
With the MCP server the LLM can initiate a structured interface request and get back structured replies… so instead of HTML (or text extracted from HTML) it gets JSON or something more useful.
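For a concrete picture, an MCP tool reply is a small typed payload rather than a page of markup. Roughly like this (the shape follows MCP's tool-result format; the text content below is made up):

    // What a documentation-search tool might hand back to the model.
    const toolResult = {
      content: [
        {
          type: "text",
          text: "Top matches for \"full text search\":\n1. docs/search.md: how search is wired up\n2. README.md: search quickstart",
        },
      ],
    };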
cgio
Is HTML less structured than JSON? I thought that with LLMs the particular schema of the structure matters less than the structure itself.
SkyPuncher
One case I've found valuable is dropping in a reference to a PR that's relevant to my work.
I’ll tell it to look at that PR to gain context about what was previously changed.
ramoz
Right, because agents can utilize git natively.
If this is for navigating/searching github in a fine-grained way, then totally cool and useful.
pcwelder
Why not have a single MCP server that takes in the repo path or URL in the tool call args? Changing the config in Claude Desktop is painful every time.
liadyo
Yes! The generic form is also supported of course. https://gitmcp.io/docs does exactly that: https://github.com/idosal/git-mcp?tab=readme-ov-file#usage
vessenes
I agree - I'd like that option as well.
qwertox
That is a complex webserver. https://github.com/idosal/git-mcp/tree/main/api
What about private repos in, let's say, GitLab or Bitbucket instances, or something simpler?
A Dockerfile could be helpful to get it running locally.
liadyo
Yes, this is a fully remote MCP server, so the need for SSE support makes the implementation quite complex. The MCP spec was updated to use HTTP streaming, but clients don't support it yet.
TechDebtDevin
Gemini does, I believe. On my list of todos is to add this to my fork of mcp-go.
vessenes
+1 for this, I'm so so tired of writing my MCP code in python.
fzysingularity
While I like the seamless integration with GitHub, I’d imagine this doesn’t fully take advantage of the stateful nature of MCP.
A really powerful git repo x MCP integration would automatically set up the GitHub repo's library/environment and let you interact with that library, making it stateful and significantly more powerful.
xena
How do I opt out for my repos?
creddit
How does this differ from the reference Github MCP server?
https://github.com/modelcontextprotocol/servers/tree/main/sr...
EDIT: Oh wait, lol, I looked closer and it seems the difference is that the server runs on your server instead, which is like the single most insane thing I can think of someone choosing to do when the reference GitHub MCP server exists.
creddit
This literally looks like spyware to me. Crazy.
eagleinparadise
Getting "@ SSE error: undefined" in Cursor for a repo I added. Is there also not a way to force a MCP server to be used? Haiku doesn't pick it up in Cursor.
adusal
The error usually isn't an issue since the agent can use the tools regardless. It's a by-product of the current implementation's serverless nature and SSE's limitations. We are looking into alternative solutions.
kiitos
> Simply change the domain from github.com or github.io to gitmcp.io and get instant AI context for any GitHub repository.
What does this mean? How does it work? How can I understand how it works? The requirements, limitations, constraints? The landing page tells me nothing! Worse, it doesn't have any links or suggestions as to how I could possibly learn how it works.
> Congratulations! The chosen GitHub project is now fully accessible to your AI.
What does this mean??
> GitMCP serves as a bridge between your GitHub repository's documentation and AI assistants by implementing the Model Context Protocol (MCP). When an AI assistant requires information from your repository, it sends a request to GitMCP. GitMCP retrieves the relevant content and provides semantic search capabilities, ensuring efficient and accurate information delivery.
MCP is a protocol that defines a number of concrete resource types (tools, prompts, etc.) -- each of which has very specific behaviors, semantics, etc. -- and none of which are identified by this project's documentation as what it actually implements!
Specifically, what aspects of the MCP are you proxying here? Specifically, how do you parse a repo's data and transform it into whatever MCP resources you're supporting? I looked for this information and found it nowhere.