MCP: An in-depth introduction
26 comments · May 13, 2025 · epistasis
nylonstrung
MCP is a kitchen sink of anti-patterns. There's no way it's not forgotten in a year, just like Langchain will be
auggierose
The superpower of MCP is that it lets you hook up arbitrary tools using a flat-rate plan like Claude Pro. That alone will make sure it stays.
danjc
MCP Clients need to support auth (and probably the spec needs to have a broader set of options for auth) - this is going to be a major blocker for adoption.
MacsHeadroom
What makes you say that?
danjc
Most providers don't support auth in their client implementations yet, which means it's only good for calling into public data. Private enterprise data is where there's huge value.
TZubiri
I feel like I need the opposite, a cursory view, or at least a definition.
Most of the material on MCP is either too specific or too in depth.
WTF is it?! (Other than a dependency by Anthropic)
kristopolous
look at the client implementation here: https://modelcontextprotocol.io/quickstart/client
that's the missing piece in most of these descriptions.
You send off a description of the tools, the model decides if it wants to use one, then you run it with the args, send the result back into the context and loop.
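Something like this, as a rough Python sketch (call_model and the tool schema here are placeholders, not any particular SDK's API):

    # Sketch of the generic tool-calling loop; call_model() is a stand-in for
    # whatever LLM API you use, and the tool schema is purely illustrative.
    tools = [{
        "name": "get_weather",
        "description": "Look up the current weather for a city",
        "parameters": {"city": "string"},
    }]

    def get_weather(city):
        return f"Sunny in {city}"  # pretend implementation

    def call_model(messages, tools):
        # Stand-in for a real LLM call: it asks for the tool once, then
        # answers using whatever the tool returned.
        if any(m["role"] == "tool" for m in messages):
            return {"content": messages[-1]["content"], "tool_call": None}
        return {"content": None,
                "tool_call": {"name": "get_weather", "args": {"city": "Oslo"}}}

    messages = [{"role": "user", "content": "What's the weather in Oslo?"}]
    while True:
        reply = call_model(messages, tools)   # 1. model sees the tool descriptions
        if reply["tool_call"] is None:        # 2. no tool wanted: we're done
            print(reply["content"])
            break
        call = reply["tool_call"]
        result = get_weather(**call["args"])  # 3. run the tool with the model's args
        messages.append({"role": "tool",      # 4. feed the result back into context
                         "name": call["name"], "content": result})
        # ...and loop so the model can use the result or call another tool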
fhd2
I found that the other day and finally got what MCP is. Kinda just a convenience layer for hooking up an API via good "old" tool use.
Unless I'm missing something major, it's only marginally more convenient than hooking up tool calls yourself against, say, an OpenAPI spec. The power is probably in the hype around it more than in its technical merits.
nylonstrung
Except in practice it is far less convenient because it constantly breaks, with terrible error handling
troupo
It's a vibe-coded protocol that lets LLMs query external tools.
You write a wrapper ("MCP server") over your docs/apis/databases/sites/scripts that exposes certain commands ("tools"), and you can instruct models to query your wrapper with these commands ("calling/invoking tools") and expect responses in a certain format that they can then use.
That is it.
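For concreteness, a minimal wrapper looks roughly like this with the official Python SDK's FastMCP helper (the tool and its data are made up; check the SDK docs for the exact API):

    # Minimal "MCP server" exposing one made-up internal lookup as a tool,
    # using the official Python SDK's FastMCP helper (pip install mcp).
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("orders")

    @mcp.tool()
    def get_order_status(order_id: str) -> str:
        """Look up the status of an order in our internal system."""
        return f"Order {order_id}: shipped"  # pretend database lookup

    if __name__ == "__main__":
        mcp.run()  # speaks MCP over stdio; a client like Claude Desktop launches it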
Why vibe-coded? Because instead of bi-directional WebSockets, the protocol uses unidirectional server-sent events (SSE): you have to send requests to a separate endpoint and then listen on the SSE stream hoping for an answer. Authentication is also essentially non-existent.
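From the client side, that split-channel HTTP+SSE transport looks roughly like this (a bare-bones httpx sketch against a made-up server URL; a real client also does an initialize handshake first):

    # Rough sketch of the older HTTP+SSE transport from the client side:
    # requests are POSTed to one endpoint, answers arrive on a separate stream.
    # The URL is made up and the initialize handshake is skipped for brevity.
    import json
    import httpx

    with httpx.Client(base_url="https://mcp.example.com") as http:
        with http.stream("GET", "/sse") as events:   # the one-way answer channel
            post_url = None
            for line in events.iter_lines():
                if not line.startswith("data:"):
                    continue
                data = line[len("data:"):].strip()
                if post_url is None:
                    post_url = data                  # server announces where to POST
                    http.post(post_url, json={"jsonrpc": "2.0", "id": 1,
                                              "method": "tools/list"})
                else:
                    print(json.loads(data))          # ...then hope the reply shows up here
                    break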
nylonstrung
I see zero reason they couldn't have used standard WebSockets and made it simpler and more robust.
Awful case of "not invented here" syndrome.
I'm personally interested in whether WebTransport could be the basis for something better.
aryehof
A standard protocol that allows many different applications to provide context to many different LLMs.
Conversely, it allows many different LLMs to get context from many different applications using a standard protocol.
It addresses an m×n integration problem.
esafak
It's an API to expose tools to LLMs.
jredwards
Or... it's a tool to expose APIs to LLMs.
repeekad
Functions that an LLM can use in its reasoning are called "tools", so the former is probably more correct, in the sense that an API can be used to provide the LLM with tools.
aryehof
It also supports Resources and Prompts, not just Tools.
jredwards
https://youtu.be/74c1ByGvFPE?si=S-5oBO8ptL_7WmQ9
I like this succinct explanation.
mdaniel
This is a VFAQ https://hn.algolia.com/?q=what+is+mcp
But to save you the click & read: it's OpenAPI for LLMs
shepherdjerred
OpenAPI for LLMs is such a good way to describe it!
TZubiri
"“MCP is an open protocol that standardizes how applications provide context to LLMs, what’s the problem?”"
We are already off to a bad start. "Context" has a meaning specific to LLMs, and everyone who works with LLMs knows what it means: the context is the text that is fed as input to the LLM at runtime, including the current message (the user prompt) as well as the previous messages and the LLM's responses.
So we don't need to read any further; we can ignore this article, and MCPs by extension. YAGNI.
lolinder
This is a really shallow dismissal, and I say that as someone who is outspokenly critical of MCP [0].
As you yourself say, the context is the text that is fed as input at runtime to an LLM. This text could just always come from the user as a prompt, but that's a pretty lousy interface to try to cram everything that you might want the model to know about, and it puts the onus entirely on the user to figure out what might be relevant context. The premise of the Model Context Protocol (MCP) is overall sound: how do we give the "Model" access to load arbitrary details into "Context" from many different sources?
This is a real problem worth solving and it has everything to do with the technical meaning of the word "context" in this context. I'm not sure why you dismiss it so abruptly.
jredwards
Well, that's the worst take I've seen all week, and it's Friday.
Agent LLMs are able to retrieve additional context and MCP servers give them specific, targeted tools to do so.
andes314
For anyone confused, you can play with MCP for free on usetexture.com
jredwards
There are thousands of ready-made MCP servers hosted on https://smithery.ai
> But even after a few hours of reading about what MCP is and working through an example, it can be confusing to follow exactly what is happening when and where. What does the LLM do? What does the MCP server do? What does the MCP client do? Where does data flow, and where are choices made?
Yeah, MCP is the worst-documented technology I have ever encountered. I understand APIs for calling LLMs, I understand tool-calling APIs. Yet I have read so much about MCP and have zero fucking clue, except vague marketing speak. Or code that has zero explanation. What an amateur effort.
I've given up, I don't care about MCP. I'll use tool calling APIs as I currently do.