
MCP explained without hype or fluff

thembones

Good point about the M x N problem reduction, but this glosses over a critical limitation. While MCP does turn integration complexity from M x N to M + N for the protocol layer, authentication and authorization remain stubbornly M x N problems.

Each MCP server still needs to handle auth differently depending on what it's connecting to. A GitHub MCP server needs GitHub tokens, a database server needs database credentials, an email server needs SMTP auth, etc. The client application now has to manage and securely store N different credential types instead of implementing N different integrations.

So yes, the protocol complexity is reduced, but the real operational headache (managing secrets, handling token refresh, dealing with different auth flows) just gets moved around rather than solved. In some ways this might actually be worse since you now have N different processes that each need their own credential management instead of one application handling it all.

This doesn't make MCP useless, but the "M x N to M + N" framing undersells how much complexity remains in the parts that actually matter for production deployments.
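The "N different credential types" point can be sketched concretely. This is a hypothetical client-side registry (the server names and env vars are illustrative, not from any real MCP client): even with the protocol unified, the client still has to implement one auth mechanism per distinct credential kind.

```python
# Hypothetical credential registry on the client side. MCP unifies the wire
# protocol, but each server still brings its own secret type, storage need,
# and refresh logic.
SERVER_CREDENTIALS = {
    "github":   {"kind": "oauth_token", "env": "GITHUB_TOKEN"},
    "postgres": {"kind": "dsn",         "env": "DATABASE_URL"},
    "email":    {"kind": "smtp_login",  "env": "SMTP_PASSWORD"},
}

def credential_kinds(registry):
    """Each distinct kind is a separate auth flow the client must support."""
    return sorted({entry["kind"] for entry in registry.values()})

# Three servers -> three distinct auth mechanisms to manage and store.
print(credential_kinds(SERVER_CREDENTIALS))
```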

nilslice

> Each MCP server still needs to handle auth differently depending on what it's connecting to.

Setting aside expected criticism about this being some middleware layer, but we’ve launched a solution to this problem:

An MCP “SSO”, where you install and auth your MCP servers into profiles (collections of servers), which we virtualize into a single MCP server with only a single OAuth flow — simplifying the experience substantially for both the user of the MCP servers and the clients connecting to them.

https://docs.mcp.run/blog/2025/05/14/mcp-sso/

soulofmischief

If PGP had evolved with better ergonomics, the world would be so different today. I should just be able to use one key or certificate everywhere, with a web of trust to help providers decide whether my key is authentic.

nurettin

I imagine this will speed up the convergence of all servers towards OAuth and TOTP.

cruffle_duffle

You still have different tokens for every site.

I think this problem is inherent to connecting to a bunch of different providers. Unless all the providers were the same company, or had to register directly with a single company and proxy through it; but even then you've just moved the problem.

heisenbit

If the MCP server is running on the user's client side and only the LLM is remote, then possibly one can leverage the existing authentication infrastructure between the enterprise IdP, the browser, the MCP server, and the enterprise target sites?

jononor

Seems like you have identified a potential business need! If some component could simplify the auth similarly, that would presumably be very valuable. Could be an open source project, or a startup (or both).

hansmayer

The overload of GenAI-related postings on here almost makes me look back with nostalgia at the period when most of the posts were about some SQLite optimisation/use-case/weird trick....

explorigin

Can I interest you in a new Javascript framework?

hnlmorg

I’d forgotten about the 2048 era. That was indeed one of the better HN phases.

Edit: that was 11 years ago?!? Wow.

hansmayer

Why yes, especially if it is "elegant", "easy to use" and of course "optimised for developer experience" :)

rglover

You may dig what I've built, then [1][2].

[1] https://cheatcode.co/joystick

[2] https://github.com/cheatcode/joystick

dvh

Water found on Mars!

thm

Eternal September

virtouspapaya

In that era there were probably loads of people nostalgic about weird C memory use-cases not being talked about as much.

fortyseven

This is a bit of hyperbole. Step back and look at the list of articles on the front page. You're going to see a lot of different things. It's not the overwhelming flood of GenAI content that you're describing here. We've got one guy who wrote his own music player for iOS. We've got this other guy who's optimizing his OCR code. Another article talks about infrared contact lenses. And yeah, there's going to be more AI stuff than there used to be, because that's what's going on right now in the field. But it's far from the only thing. Not by a long shot. The only reason you and I are conversing in this AI-adjacent thread is because we both clicked on the link. The only difference between us is that I was actually interested in what this article had to say, but you're clearly not. And that's completely fair! But you make it sound like you're starved for non-AI content when there's a whole wealth of it on the front page. C'mon.

hansmayer

On a deeper level, the complaint is about the meaninglessness of those many posts. It is supposed to be revolutionary tech, but every week, every week we are bombarded with these vague and mostly, it seems, incremental "improvements". People who post this stuff should hold off a bit and let us know when there is a real breakthrough. Or when the VC investors stop pumping on the order of 200B USD into the magic-oracle industry, only to generate a total of 10B pre-tax income for all the major AI companies combined.

tra3

Yes, please bring back the blockchain posts. /s

(hmm, generative ai blockchain?...)

threetonesun

We had Bored Ape, but what we really wanted was Infinite Monkeys.

scubbo

I keep waiting for someone to break character and admit that this is all an extended trolling campaign. People are actually connecting these autocompletes to APIs and giving them credentials to take impactful external actions? Y'all are _insanely_ trusting.

tra3

I'm with you, but it makes for extremely impressive demos. And surprisingly useful day to day improvements.

It's not like you can't do this manually (or automatically) but MCP makes things like this so much easier:

> Check JIRA against my org-mode and identify any tasks that I worked on that haven't been reflected in either system.

Undoubtedly there's an incredible amount of hype, but there's a reason for it. I prefer MCP tools that are read only for now.
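The "read-only tools for now" policy above can be sketched as a simple allowlist filter applied to a server's advertised tool list before it ever reaches the model. The tool names here are hypothetical, not from any real JIRA or org-mode MCP server:

```python
# Sketch of a read-only policy: only tools on an explicit allowlist are
# exposed to the model; anything that writes is silently dropped.
READ_ONLY = {"jira_search_issues", "jira_get_issue", "org_list_tasks"}

def filter_tools(tools):
    """Keep only tools whose names appear on the read-only allowlist."""
    return [t for t in tools if t["name"] in READ_ONLY]

advertised = [
    {"name": "jira_search_issues", "description": "Search JIRA issues"},
    {"name": "jira_create_issue",  "description": "Create a JIRA issue"},
    {"name": "org_list_tasks",     "description": "List org-mode TODO items"},
]

safe = filter_tools(advertised)
print([t["name"] for t in safe])  # the create/write tool is gone
```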

throwanem

Do you maintain a newsletter or take subscriptions in some other fashion? This is a refreshingly low-BS take and those are hard to come by, and I would be interested especially in Emacs integrations.

karthink

> I would be interested especially in Emacs integrations.

gptel can use MCP server tools in Emacs by integrating with the mcp.el package; here's a demo: https://i.imgur.com/WTuhPuk.mp4.

mcp.el: https://github.com/lizqwerscott/mcp.el

Relevant gptel README section (you'll have to unfold the details block): https://github.com/karthink/gptel?tab=readme-ov-file#model-c...

Yoric

Sounds useful. However, I'd rather put deterministic code in control of the LLM than the LLM in control of deterministic code. And that's even before prompt injections.

notepad0x90

you don't trust or untrust algorithms. like you said, they're just autocompletes. you quantify the risk and see if you have an appetite for it. any sane engineer won't just let an LLM loose, they'll build guardrails around it. If the solution involved untrusted external/human input the risk is much higher, but for a private system, there is only so much paranoia you can throw around before it turns into anthropomorphizing algorithms.

scubbo

Leaving aside the fact that anthropomorphization is a perfectly valid discursive shorthand - yes, exactly. LLMs have an insanely-high risk profile to be granted access to anything without a human-in-the-loop.

> any sane engineer won't just let an LLM loose, they'll build guardrails around it

Sure seem to be plenty of insane engineers around these days. And, worse, plenty of them with good marketing teams that can convince non-engineers that their systems are "safe" and "reliable".

nythroaway048

Can we not just point LLMs at OpenAPI documents and achieve the same result? All of the example functions in the article look like very very basic REST endpoints.

hn_throwaway_99

Exactly. We already have lots of standards for defining APIs (OpenAPI, GraphQL, SOAP if I'm showing my age, etc. etc.) Part of my original "wow this is magic" moment with AI came when OpenAI released some of their plugins and showed how you could just point it at an API spec and the LLM could just figure out, on its own, how to use it.

So one real beauty of AI is that it is so good at taking "semi structured" data and structuring it. So perhaps I'm missing something, but I don't see how MCP benefits you over existing API documentation formats. It seems like an "old way" of thinking, where we always wanted to define these interoperation contract formats and protocols, but a huge point of AI is you shouldn't really need any more protocols to start with.

Again, I don't know all the ins and outs of MCP, so I'm happy to be corrected. It's just that whenever I see examples like in the article, I'm always left wondering what benefit MCP gives you in the first place.

vkazanov

Well, one benefit is the precision and focus of the protocol that can be used to train/finetune LLMs.

More focused training -> more reliable understanding in LLMs.

Zigurd

I hear you but what exactly about MCP is more precise or training-friendly than other approaches? I can think of at least one way that it isn't: MCP doesn't provide an API sandbox the way an Apigee or Mulesoft API documentation page could.

hn_throwaway_99

I understand what you're saying, but I'm still not clear why any of this should be necessary or is a benefit for LLMs. Another commenter mentioned that MCP saves tokens and is more compact. So what? Then just have the LLM do a one-time pass of a more verbose spec to summarize/minify it.

Any human brainspace needed to even think about MCP just seems like it goes against the whole raison d'être of AI in that it can synthesize and use disparate information much faster, more efficiently, and cheaper than a human can.

floatrock

Don't forget HATEOAS if we're listing prior art of self-discoverable APIs!

s900mhz

You can, most MCP servers are just wrappers around existing SDKs or even rest endpoints.

I think it all comes down to discovery. MCP has a lot of natural language written into each of its "calls", allowing the LLM to understand context.

MCP is also not stateless, but to keep it short: I believe it's just a way to make these tools more discoverable for the LLM. MCP doesn't do much that you can't with other options. It just makes it easier on the LLM.

That’s my take as someone who wrote a few.

Edit: I like to think of them as RPC for LLMs.
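The discovery point is visible in the shape of a `tools/list` result, which per the MCP spec pairs every tool with a natural-language `description` and a JSON Schema for its inputs; the `get_weather` tool here is a made-up example:

```python
# Shape of an MCP "tools/list" result: each tool carries prose for the model
# plus a JSON Schema for its arguments. The description is what gives the
# LLM the context to decide when and how to call it.
tools_list_result = {
    "tools": [
        {
            "name": "get_weather",
            "description": "Get the current weather for a city. "
                           "Use this whenever the user asks about weather.",
            "inputSchema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ]
}

def describe_for_model(result):
    """Flatten the discovery payload into the text the model actually sees."""
    return [f'{t["name"]}: {t["description"]}' for t in result["tools"]]
```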

pcwelder

OpenAPI definitions are verbose and exhaustive. In MCPs you can remove a lot of extra material, saving tokens.

For example in [1], the whole `responses` schema can be eliminated. The error texts can instead be surfaced when they appear. You also don't need duplicate json/xml/url-encoded input formats.

Secondly, a whole lot of complexity is eliminated: arbitrary data can't be sent and received. Finally, the tool outputs are prompts to the model too, so you can leverage the output for better accuracy, which you can't do with general-purpose APIs.

[1] https://github.com/swagger-api/swagger-petstore/blob/master/...
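The trimming being described can be sketched side by side. Both structures below are illustrative (a cut-down petstore-style operation, not the real file from [1]), using serialized length as a rough proxy for prompt-token cost:

```python
import json

# An OpenAPI-style operation: response schemas and duplicate content types
# all ride along, even though the model never needs them up front.
openapi_op = {
    "operationId": "getPetById",
    "parameters": [{"name": "petId", "in": "path",
                    "schema": {"type": "integer"}}],
    "responses": {
        "200": {"content": {
            "application/json": {"schema": {"$ref": "#/components/schemas/Pet"}},
            "application/xml":  {"schema": {"$ref": "#/components/schemas/Pet"}},
        }},
        "404": {"description": "Pet not found"},
    },
}

# The leaner MCP-style tool it could become: just a name, prose for the
# model, and an input schema. Errors surface as text when they happen.
mcp_tool = {
    "name": "getPetById",
    "description": "Fetch a pet by its id. Errors come back as text.",
    "inputSchema": {
        "type": "object",
        "properties": {"petId": {"type": "integer"}},
        "required": ["petId"],
    },
}

# Serialized length as a crude stand-in for tokens spent in the prompt.
print(len(json.dumps(openapi_op)), "vs", len(json.dumps(mcp_tool)))
```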

hn_throwaway_99

So why can't the LLM just take the verbose OpenAPI spec, summarize it and remove the unnecessary boilerplate and cruft (do that once), and only use the summarized part in the prompt?

0x457

There is probably an MCP for that

ath_ray

author of the article here.

You can use OpenAPI as well. With MCP, however, there's an AI-native aspect: it reifies patterns that show up repeatedly when building integrations (Tools, Prompts, Resources, etc.), which helps both building and adoption. It's a different layer of abstraction. There are some things, like Sampling, that I cannot easily find an OpenAPI equivalent of.

I definitely barely scratched the surface with my example, but it's true that most MCP Servers I have seen and used are basic REST endpoints exposed as tool calls.

That said, the MCP server layer has some design considerations as well, since it's a different layer of abstraction from a REST API. You may not want to expose all the API endpoints, or you may want to encode specific actions in a way that is better understood by an LLM rather than an application that parses OpenAPI.

hnlmorg

That’s basically what we did before MCP. And what (for example) langchain does.

It’s great to have a standard way to integrate tools but I can’t say I have much love for MCP specifically.

striking

The docs are often pretty wrong. It's nice to formalize the glue in a server.

m3kw9

Why is there so much explaining to do for MCPs? There seems to be something seriously wrong with the way Anthropic is marketing it. It looks like the entire world is confused as to what it is.

rglover

It's being made out to be something bigger/more important than what it is to create hype and investment interest. Is it incredibly useful? Yes. But it's not aliens landing on the front lawn of the White House offering us anti-gravity tech.

If they said what it really was (see my other comment in this thread [1]), they couldn't leverage it to make more money/get more investors.

[1] https://news.ycombinator.com/item?id=44065739

crystal_revenge

To parent's point, your summary:

> basically RAG with a bit of sugar on top

Is not correct. MCP can work very well with a RAG system, providing a standard way to add context to a model call, but it doesn't itself do any retrieval.

Over the years there have been a huge variety of ways information such as tool use, RAG context, and other prompting information has been communicated to the model (very often using some ad hoc approach). MCP seeks to clarify and standardize how that information is communicated to and from the model. This, as the poster points out, allows you to reuse tools, RAG, etc. with any supporting model, rather than hacking them together to work with each one individually.

Previously you would have had to come up with your own way to add the retrieved metadata from RAG to the model, use the vendor-specific method of tool calling, and then write your own method of tool dispatch once a tool call has been returned.

rglover

> Is not correct. MCP can work very well with a RAG system, providing a standard way to add context to a model call, but itself doesn't do any Retrieval.

That's a misrepresentation of what I said. I didn't say that MCP replaces RAG, just that it's essentially a RAG system with some syntax sugar on top (which your response confirms).

It's great that it adds some standardization to the process of implementing RAG, but under the hood that's the engine of MCP.

Zaheer

I found myself trying to explain MCP the other day. The simplest way I could put it for another developer:

MCP is a standardized set of API endpoints that makes it easier for LLM's to discover and operate with all the other regular APIs you have.

mannyv

It's cgi-bin for AI.

egorfine

I think it's a beautiful comparison on many levels.

ok123456

Waiting for the AI /cgi-bin/phf.

dvt

MCP is bloated AI hype that basically solves nothing (the Langchain of 2025!). Typical tooling on top of tooling and the quintessential case of a problem looking for a solution. It's absolute garbage from just about any standpoint: architectural, security, elegance, etc. But my main point is that it solves nothing and there's nothing novel here. It's APIs talking to APIs that talk to other APIs. Wow, groundbreaking!

I genuinely believe that there will be (and potentially already are) use-cases when it comes to AI agents, but we really need to step back and re-think the whole thing. In the middle of writing a blog post about this, but I really do think genAI is a dead-end and that no one really wants to chill out for a second and solve the hard stuff:

    - Needle in a haystack accuracy
    - Function calling (and currying/chaining) reliability
    - Non-chat UI paradigm (the chat-box UI is a dead-end)
    - Context culling (ignoring non-relevant elements)
    - Data retrieval given huge contexts (RAG is just not good enough)
    - Robotics
    - Semantic inference
Like, I get it, it's hard to come up with new ways of solving some of these (or bringing them up from ~50% to 90% accuracy), but no one's going to use an AI agent when it confidently fakes data, forgets important stuff, or makes you sit there and tweak a prompt for 30 minutes.

kanwisher

MCP is pretty awesome for linking tools to LLMs with neither knowing about the other in advance. Coolest is linking a decompiler with MCP, so the LLM can request decompilation of specific functions and give explanations of the assembly code: https://www.youtube.com/watch?v=u2vQapLAW88

beernet

MCP is as revolutionary as JSON.

Still, it's funny to see numerous hyped GenAI start-ups with bad monetary traction jump on the bandwagon and proclaim MCP as the latest revolution (after RAG, Agents, you name it)... All of these are simply tools which add zero value by themselves. Looking forward to the VC wake-up calls.

emehex

As in: "JSON was a huge deal, and this could also be a huge deal" or "Just use JSON"?

bootsmann

Has anyone found good resources about dealing with authentication in MCP, especially about managing the oauth tokens locally?

cruffle_duffle

I haven’t really figured it out yet either. I think this protocol is pretty early in terms of support and tooling. They really seem to want you to connect locally and edit config files still.

Hopefully soon this stuff will get ironed out because yeah, auth is super important.

huhkerrf

I feel like there's something wrong with me for not understanding the big leap with MCP and the proponents aren't helping.

I saw a tweet stream that said something like "if you think MCP is the same as REST, you're not thinking big enough" followed by a bunch of marketing speak that gave off LinkedIn web3 influencer vibes. I saw a blog post that says MCP is better because it bridges multiple protocols. Okay, and?

I really want to get this, but I don't know how to square "LLMs are hyper intelligent" with "LLMs can't figure out OpenAPI documentation."

doug_durham

REST doesn't provide the documentation or the semantics of the interface. MCP is the API definition along with text on why to use it, when to use it, and how to use it. That is what an LLM needs to consume it; the documentation is a requirement. I have developed many MCP servers; they are real and they provide me real value in my work every day.

hansmayer

They are, like much of the GenAI-hype, simply a solution looking for a problem. That's why they need to explain it so much - it's more of a desperate convincing really...

0x457

With how big some OpenAPI docs are, the problem isn't "LLMs can't figure out OpenAPI documentation", it's that consuming OpenAPI documentation is extremely resource intensive. I've seen OpenAPI docs that make client generators choke just by being large.

MCPs don't solve this entirely either; that's why GitHub's MCP has the ability to offload modules, so as not to confuse the LLM with all the options.

thekodols

I know it’s in active discussion on GH, but I wish clients would support (non-text) UIs from MCP servers rather soon. It would 10x the power of these chat extensions.

anthonypasq

can you give an example of what you're talking about? do you mean not having a "chatbot" UI and instead sending a camera feed to your mcp client or something?

thekodols

Here's someone experimenting with what I mean: https://github.com/idosal/mcp-ui

PKop

Is this sort of like how, when the iPhone touch-screen came out, it allowed for dynamic regeneration of UI for each specific app, instead of "hard-coding" hardware inputs/buttons as the one interface to all apps? So here, AI can dynamically generate a context-dependent UI on the fly that can be interacted with, influenced by user input, API responses, etc.?