A critical look at MCP
347 comments
May 10, 2025
lolinder
ComplexSystems
DeepSeek's documentation has a different problem, which is that there are spelling errors and weird grammatical constructions everywhere:
"DeepSeek API does NOT constrain user's rate limit. We will try out best to serve every request. However, please note that when our servers are under high traffic pressure, your requests may take some time to receive a response from the server. During this period, your HTTP request will remain connected, and you may continuously receive contents in the following formats..."
The documentation is still mostly easy to read, so it doesn't *really* matter, but I always thought this was bizarre. I mean, I get the language barrier reading manuals from Chinese products off of Amazon or whatever, but this is a company that does nothing but work with language all day long, and even at one point had the world's leading English-speaking language model. Shouldn't they be able to produce professional-looking documentation without spelling and grammatical errors?
fakedang
I've seen documents that were applications by CCP-affiliated provincial government bodies (things like detailed studies for loan applications to international banks), and trust me, the DeepSeek documentation is miles ahead of that. These were official government documents sent from one government agency to an international agency.
ComplexSystems
This has fascinated me for years. I'll just re-link this comment of mine from a few years ago: https://news.ycombinator.com/item?id=37544019#37548278.
This was about Amazon products rather than government documentation, but the point is the same. I'll just quote the relevant part:
> The people who make these products have to spend millions and millions of dollars setting up factories, hiring people, putting things into production, etc. But somehow they don't have a budget for a bilingual college student intern to translate a bunch of copy to English better than "using this product will bring a great joy." Why?
> I will make a super strong claim: ChatGPT can now do nearly perfect mass translations of this stuff for free, in theory simultaneously increasing translation quality and reducing costs. Despite this, for whatever reason, I predict that the average translation quality on Amazon won't improve within the next few years.
My super strong claim has so far been correct. Just go on Amazon.com and click just about anything. For instance, here's a random blanket: https://www.amazon.com/dp/B07MR4FSPT
"OPTIMUM GIFT: All people can use this flannel fleece blanket in Coach、Office、Bed、Study, etc. Reversible softness offers all seasons warmth. INTIMATE SERVICE: If you have any questions, please contact us. it is our pleasure to serve you."
How does a human being in this situation somehow invent the phrase "OPTIMUM GIFT?" "Optimum" is a fairly advanced English word. Maybe you'd expect, I dunno, "GREAT GIFT" or "BEST GIFT"? And "INTIMATE SERVICE?"
And once again, we now have magic English-speaking computers that can do this all for us - for free - and China has unanimously decided "nah, screw that. We'd rather go with INTIMATE SERVICE."
comradesmith
What’s the problem? Can you point out a specific thing you would change from that quote?
lolinder
Are you a native English speaker?
"does NOT constrain user's rate limit" should be "does NOT rate limit incoming requests" or similar.
"We will try out best" should be "our best".
"when our servers are under high traffic pressure" is at least grammatical, but it's awkward. Normally you'd say "when our servers are dealing with high load" or something similar.
"your requests may take some time to receive a response from the server" is again grammatical but also awkward. "Our response times may be slower" would be more natural.
The last sentence is also awkward but the whole thing would need to be restructured, which is too much for an HN comment.
Basically: everything about this screams English as a second language. Which does mean that it's unlikely to have been LLM generated, because from what I've seen DeepSeek itself does a pretty good job with English!
rrr_oh_man
I'd just shorten it:
DeepSeek API does NOT have rate limits.
However, when our servers are under high traffic,
your requests may take some time. During this period,
you will continuously receive the following responses:
albert_e
Maybe the first sentence? I am guessing something like "DeepSeek API does not enforce any rate limit on users." would be more appropriate.
_Constraining the rate 'limit'_ seems like incorrect usage - but it is an easy mistake to make in a first draft. Review should have caught it.
827a
Similarly strange and incorrect grammatical constructions are found in the English translations for Game Science's hit game Black Myth: Wukong. My expectations for, say, the assembly manual for a bookshelf are pretty different from those for a game or an AI model and service costing tens of millions of dollars in development (or more).
Heck, they could literally pay any native English speaker to take their English-ish translations and regionalize them; you don’t even need to know Chinese to fix those paragraphs. Why is this such a common problem with the English China exports? Is it cultural? Are they so disconnected from the west that they don’t realize?
A great counter-example is NetEase's Marvel Rivals; their English translations are fantastic, and even the dev interviews with their Chinese development team are fantastically regionalized. They make a real effort to appeal to English audiences.
meindnoch
That's just standard Chinglish.
k__
That text would be cut to at least half its length by an editor.
ljm
Sometimes I wonder if I have ADHD or if it's induced by the content, because I can spend hours soaking up interesting literature and putting my weird thoughts down onto paper but I can barely make it a few words through LLM-driven drivel.
It's crazy seeing bots posting AITA rage bait on Reddit that always follows the same pattern: some inter-personal conflict that escalates to a wider group: "I told my husband I wasn't into face-sitting and now all my colleagues are saying I should sit on his face to keep the peace."
That is one thing, but using the same LLM to drive your tech specs, knowing it can say a whole lot of shit the 'author' isn't aware of because they never read it, and treating that as fucking normal... is worrying.
stuaxo
Yeah it's unreadable for me.
There's been a trend of posting LLM slop about tech subjects, and it angers me. I don't know why someone would want to waste people's time like that.
Even worse - I've come across an AI slop site that masquerades as dev information, with just plain wrong information.
DonHopkins
I let the domain "micropolisonline.com" expire, which I was using for the old OpenLaszlo/Flash Python/SWIG/C++ client/server based version of open source SimCity, and somebody took it over and replaced it with AI generated claptrap, stealing a lot of my own and others' images without any credit. It even promises the source code, but doesn't actually link to it, just has promises and placeholders.
It totally misrepresents what Micropolis is, which was based on the original SimCity classic, and confuses it with all the subsequent versions of SimCity and other made-up stuff. And it never mentions the GPL-3 license, EA's license and restrictions on the use of their SimCity trademark, or Micropolis's license to use their trademark. I have no idea what the point of it is.
https://micropolisonline.com/source-code/
>How to Access the Source Code: For those eager to explore the Micropolis Online Source Code, it is available on our dedicated GitHub repository. Visit [Link] to access the repository, where you can browse the code, contribute to ongoing projects, or initiate your own.
The source code is actually not at [Link] but at:
https://github.com/SimHacker/MicropolisCore
Not even so much as a link to my demo!
https://www.youtube.com/watch?v=8snnqQSI0GE
They could be in some legal jeopardy, since they didn't mention or link to the Micropolis GPL License or the Micropolis Public Name License, which they may be violating.
https://github.com/SimHacker/MicropolisCore/blob/main/Microp...
https://github.com/SimHacker/MicropolisCore/blob/main/Microp...
They have a "Meet the Team" page that mentions nobody, just hand waves about "we" and the community. They couldn't even bother to generate generic-looking fake profiles of non-existent people. Suffice it to say I never heard back from anyone after using the "Contact Us" page.
They even have a cute little Terms and Conditions page with their very own license, which doesn't allow anyone to do to them what they did to me, and is not particularly GPL-v3 compatible:
https://micropolisonline.com/terms-conditions/
>License to Use Micropolis Online
>Unless otherwise stated, Micropolis Online and/or its licensors own the intellectual property rights for all material on Micropolis Online. All intellectual property rights are reserved. You may view and/or print pages from micropolisonline.com for your own personal use subject to restrictions set in these terms and conditions.
>You must not:
>Republish material from micropolisonline.com Sell, rent, or sub-license material from micropolisonline.com Reproduce, duplicate, or copy material from micropolisonline.com
They also claim all rights to all user created content:
>By displaying Your Content, you grant Micropolis Online a non-exclusive, worldwide irrevocable, sub-licensable license to use, reproduce, adapt, publish, translate, and distribute it in any and all media.
Kind of ironic for an LLM to go around stealing people's content, then telling them that not only can't anyone copy it back, but it owns the rights to everything anyone else may contribute in the future.
glimps
I get the distinct feeling the spec was created by an LLM too. As with the docs, all the evidence hints at it.
Makes for a great IPO pitch to tell investors that most of your product was already created by averaging out the most likely outcome.
benatkin
The DeepSeek documentation seems to be better. It looks to be quickly thrown together but not bad. I’m not sure what that says about LLMs writing documentation.
clbrmbr
Certainly a shame if true, there are some really sharp folks at Anthropic and this is an important building block in the emerging ecosystem.
teaearlgraycold
In my experience AI startups are AI maximalists. They use AI for everything they can. AI meeting summarizations, AI search (Perplexity), AI to write code and contracts, AI to perform SEO, AI to recruit candidates, etc. So I 100% believe they would use AI to write specs.
runlaszlorun
Seems like many are dreading our near future. Not I, I can't wait to see how this all plays out...
jes5199
someone is going to write an MCP adaptor that lets Claude use OpenAPI and then we can forget that MCP was a thing
otabdeveloper4
This. Endure a couple months and this madness ends.
cruffle_duffle
How would that even work?
whatever1
So many bullet points in the documentation!
jerf
It had not occurred to me that the AI coding vendors are basically positively motivated to themselves produce code that is not documented. They want code that is comprehensible to AIs but actively not comprehensible to humans. Then you need their AIs to manipulate it.
AI code as the biggest "lock you in the box" in programming history. That takes rather a lot of the luster out of it....
They'd better be right that they can get to the point that they can fully replace programmers in about two years, otherwise following this siren song will, well, demonstrate why I chose "siren song" as my metaphor. If AI code produces big piles of code that are simply incomprehensible to humans, but then the AIs can't handle it either, they'll crash out their own market by the rather disgusting mechanism of killing all their customers, precisely because the customers consumed their service.
never_inline
To be honest I don't think they have any plans either.
walterbell
Self Alignment™
meander_water
I can't say whether the original spec was written with AI assistance, but having a cursory look through the commit history [0] it doesn't look like they're just blatantly auto-generating the docs. The git history indicates that they do think about the spec and manually update the docs as the spec changes.
[0] https://github.com/modelcontextprotocol/modelcontextprotocol...
never_inline
I don't write perfect English. Far from it. But I'd prefer broken English any day over default LLM verbiage. It seems so unnatural and factitious. I always have this in my prompts: "Be succinct and use simple English sentences".
hirsin
In the same way that crypto folks speedran "why we have finance regulations and standards", LLM folks are now speedrunning "how to build software paradigms".
The concept they're trying to accomplish (expose possibly remote functions to a caller in an interrogable manner) has plenty of existing examples in DLLs, gRPC, SOAP, IDL, dCOM, etc, but they don't seem to have learned from any of them, let alone be aware that they exist.
Give it more than a couple months though and I think we'll see it mature some more. We just got their auth patterns to use existing rails and concepts, just have to eat the rest of the camel.
ethbr1
> Give it more than a couple months though and I think we'll see it mature some more.
Or like the early Python ecosystem, mistakes will become ossified at the bottom layers of the stack, as people rapidly build higher level tools that depend on them.
Except unlike early Python, the AI ecosystem community has no excuse, BECAUSE THERE ARE ALREADY HISTORICAL EXAMPLES OF THE EXACT MISTAKES THEY'RE MAKING.
volemo
Could you throw on a couple of examples of calcified early mistakes of Python? GIL is/was one, I presume?
achierius
CPython in particular exposes so much detail about its internal implementation that other implementations essentially have to choose between compatibility and performance. Contrast this with, say, JavaScript, which is implemented according to a language standard and which, despite the many issues with the language, is still implemented by three distinct groups, all reasonably performant, yet all by and large compatible.
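A couple of the classic CPython-only idioms illustrate the point (a sketch; `sys.getrefcount` is real CPython API, but the dependence described in the comments is the implementation detail, not a language guarantee):

```python
import sys

# sys.getrefcount exposes CPython's internal reference count. Alternative
# implementations like PyPy don't use refcounting, so code keyed to these
# values is tied to CPython's internals, not to the Python language.
x = object()
count = sys.getrefcount(x)  # >= 2: the variable plus the call's own temporary reference

# Another classic: relying on refcounting for prompt finalization. On
# CPython the file object is closed as soon as it becomes unreachable;
# on a tracing-GC implementation it may stay open until a collection runs.
def read_first_line(path):
    f = open(path)
    return f.readline()  # no explicit close: works "fine" on CPython only
```

Programs full of patterns like the second one are part of why alternative implementations face the compatibility-versus-performance tradeoff described above.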
Timwi
Static functions (len, map/filter,...) that should have been methods on objects.
Doxin
Possibly packaging too? though lately that has improved to the point where I'd not really consider it ossified at all.
fullstackchris
> early mistakes of Python?
Python.
worldsayshi
I guess there's an incentive to quickly get a first version out the door so people will start building around your products rather than your competitors.
And now you will outsource part of the thinking process. Everyone will show you examples when it doesn't work.
FridgeSeal
Hey there, expecting basic literacy or comprehension out of a sub-industry seemingly dedicated to minimising human understanding and involvement is a bridge too far.
Clearly if these things are problems, AI will simply solve them, duhhh.
/s
brabel
You joke, but with the right prompt, I am almost certain that an LLM would've written a better spec than MCP. Like others said, there are many protocols that can be used as inspiration for what MCP tries to achieve, so LLMs should "know" how it should be done... which is definitely NOT by using SSE and a freaking separate "write" endpoint.
wunderwuzzi23
Your comment reminds me that when I first wrote about MCP it reminded me of COM/DCOM and how this was a bit of a nightmare, and we ended up with the infamous "DLL Hell"...
Let's see how MCP will go.
https://embracethered.com/blog/posts/2025/model-context-prot...
TheOtherHobbes
It's a classic Worse is Better situation.
Most users don't care about the implementation. They care about the way that MCP makes it easier to Do Cool Stuff by gluing little boxes of code together with minimal effort.
So this will run ahead because it catches developer imagination and lowers cost of entry.
The implementation could certainly be improved. I'm not convinced websockets are a better option because they're notorious for firewall issues, which can be showstoppers for this kind of work.
If the docs are improved there's no reason a custom implementation in Go or Arm assembler or whatever else takes your fancy shouldn't be possible.
Don't forget you can ask an LLM to do this for you. God only knows what you'll get with the current state of the art, but we are getting to the point where this kind of information can be explored interactively with questions and AI codegen, instead of being kept in a fixed document that has to be updated manually (and usually isn't anyway) and hand coded.
baxtr
To this date I have not found a good explanation what an MCP is.
What is it in old dev language?
mondrian
It's a read/write protocol for making external data/services available to a LLM. You can write a tool/endpoint to the MCP protocol and plug it into Claude Desktop, for example. Claude Desktop has MCP support built-in and automatically queries your MCP endpoint to discover its functionality, and makes those functions available to Claude by including their descriptions in the prompt. Claude can then instruct Claude Desktop to call those functions as it sees fit. Claude Desktop will call the functions and then include the results in the prompt, allowing Claude to generate with relevant data in context.
Since Claude Desktop has MCP support built-in, you can just plug off the shelf MCP endpoints into it. Like you could plug your Gmail account, and your Discord, and your Reddit into Claude Desktop provided that MCP integrations exist for those services. So you can tell Claude "look up my recent activity on reddit and send a summary email to my friend Bob about it" or whatever, and Claude will accomplish that task using the available MCPs. There's like a proliferation of MCP tools and marketplaces being built.
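The discovery-then-invoke loop described above can be sketched as JSON-RPC traffic. This is a toy, non-compliant dispatcher: `tools/list` and `tools/call` are real MCP method names, but the registry shape and field layout here are simplified for illustration.

```python
import json

# Hypothetical tool registry; a real MCP server would register actual handlers.
TOOLS = {
    "get_weather": {
        "description": "Look up current weather for a city",
        "inputSchema": {"type": "object", "properties": {"city": {"type": "string"}}},
        "fn": lambda args: f"Sunny in {args['city']}",
    },
}

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC 2.0 request the way an MCP server roughly would."""
    method, params = request["method"], request.get("params", {})
    if method == "tools/list":
        result = {"tools": [
            {"name": name, "description": t["description"], "inputSchema": t["inputSchema"]}
            for name, t in TOOLS.items()
        ]}
    elif method == "tools/call":
        result = {"content": TOOLS[params["name"]]["fn"](params["arguments"])}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# The client (e.g. Claude Desktop) first discovers what the server offers...
listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
# ...includes those descriptions in the prompt, and later relays the model's call:
reply = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
                "params": {"name": "get_weather", "arguments": {"city": "Oslo"}}})
```

The result of the call is then fed back into the prompt, which is the whole loop in a nutshell.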
fendy3002
If you know JSON-RPC: it's a JSON-RPC wrapper exposed for AI use and discovery.
If you know REST / http request:
it's a single-endpoint-only API, partitioned/routed by a single "type" or "method" parameter, with a somewhat different specification, for AI.
krackers
Wasn't the point of REST supposed to be runtime discoverability though? Of course REST in practice just seems to be json-rpc without the easy discoverability which seems to have been bolted on with Swagger or whatnot. But what does MCP do that (properly implemented) REST can't?
kaoD
In a nutshell: RPC with builtin discoverability for LLMs.
jimmySixDOF
Old dev language is deterministic; with an LLM in the loop, the language is now stochastic.
jgalt212
It is amazing that we used to prize determinism, but now it's like determinism is slowing me down. I mean, how do you even write test cases for LLM agents? Do you have another LLM judge the results as close enough, or not close enough?
_raz
A RPC standard that plays nicely with LLMs?
victorbjorklund
self documenting API
matchagaucho
Also missing in these strict, declarative protocols is a reliance on latent space, and the semantic strengths of LLMs.
Is it sufficient to put an agents.json file in the root of the /.well-known web folder and let agents just "figure it out" through semantic dialogue?
This forces the default use of HTTP as Agent stdio.
northern-lights
also called Vibe Designing.
DonHopkins
I agree they should learn from DLLs, gRPC, SOAP, IDL, dCOM, etc.
But they should also learn from how NeWS was better than X-Windows because instead of a fixed protocol, it allowed you to send executable PostScript code that runs locally next to the graphics hardware and input devices, interprets efficient custom network protocols, responds to local input events instantly, implements a responsive user interface while minimizing network traffic.
For the same reason the client-side Google Maps via AJAX of 20 years ago was better than the server-side Xerox PARC Map Viewer via http of 32 years ago.
I felt compelled to write "The X-Windows Disaster" comparing X-Windows and NeWS, and I would hate if 37 years from now, when MCP is as old as X11, I had to write about "The MCP-Token-Windows Disaster", comparing it to a more efficient, elegant, underdog solution that got out worse-is-bettered. It doesn't have to be that way!
https://donhopkins.medium.com/the-x-windows-disaster-128d398...
It would be "The World's Second Fully Modular Software Disaster" if we were stuck with MCP for the next 37 years, like we still are to this day with X-Windows.
And you know what they say about X-Windows:
>Even your dog won’t like it. Complex non-solutions to simple non-problems. Garbage at your fingertips. Artificial Ignorance is our most important resource. Don’t get frustrated without it. A mistake carried out to perfection. Dissatisfaction guaranteed. It could be worse, but it’ll take time. Let it get in your way. Power tools for power fools. Putting new limits on productivity. Simplicity made complex. The cutting edge of obsolescence. You’ll envy the dead. [...]
Instead, how about running and exposing sandboxed JavaScript/WASM engines on the GPU servers themselves, that can instantly submit and respond to tokens, cache and procedurally render prompts, and intelligently guide the completion in real time, and orchestrate between multiple models, with no network traffic or latency?
They're probably already doing that anyway, just not exposing Turing-complete extensibility for public consumption.
Ok, so maybe Adobe's compute farm runs PostScript by the GPU instead of JavaScript. I'd be fine with that, I love writing PostScript! ;) And there's a great WASM based Forth called WAForth, too.
https://news.ycombinator.com/item?id=34374057
It really doesn't matter how bad the language is, just look at the success and perseverance of TCL/Tk! It just needs to be extensible at runtime.
NeWS applications were much more responsive than X11 applications, since you download PostScript code into the window server to locally handle input events, provide immediate feedback, translate them to higher level events or even completely handle them locally, using a user interface toolkit that runs in the server, and only sends high level events over the network, using optimized application specific protocols.
You know, just what all web browsers have been doing for decades with JavaScript and calling it AJAX?
Now it's all about rendering and responding to tokens instead of pixels and mouse clicks.
Protocols that fix the shape of interaction (like X11 or MCP) can become ossified, limiting innovation. Extensible, programmable environments allow evolution and responsiveness.
Speed run that!
cmrdporcupine
It reminds me a bit of LSP, which feels to me like a similar speed-run and a pile of assumptions baked in which were more parochial aspects of the original application... now shipped as a standard.
And yeah, sounds like it's explicitly a choice to follow that model.
airspresso
Yes the authors openly acknowledge [0] to be inspired by LSP.
_QrE
Agreed with basically the entire article. Also happy to hear that someone else was as bewildered as me when they visited the MCP site and they found nothing of substance. RFCs can be a pain to read, but they're much better than 'please just use our SDK library'.
dlandis
Agree... this is an important blog post. People need to press pause on MCP in terms of adoption... it was simply not designed with a solid enough technical foundation to make it suitable as an industry standard. People are hyped about it, kind of like they were for LangChain and many other projects, but they are gradually going to realize (after diving into implementations) that it's not actually what they were looking for. It's basically a hack thrown together by a few people, and there are tons of questionable decisions, with websockets being just one example of a big miss.
__loam
The Langchain repo is actually hilariously bad if you ever go read the source. I can't believe they raised money with that crap. Right place right time I guess.
jtms
Yeah agree. I spent a few hours looking at the langchain repo when it first hit the scene and could not for the life of me understand what value it actually provided. It (at least at the time) was just a series of wrappers and a few poorly thought through data structures. I could find almost no actual business logic.
stuaxo
My first surprise on it:
I made an error trying with aws bedrock where I used "bedrock" instead of "bedrock-runtime".
The native library will give you an error back.
Langchain didn't try and do anything, just kept parsing the json and gave me a KeyError.
I was able to get a small fix, but was surprised they have no error like ConfigurationError that goes across all their backends at all.
The best I could get them to add was ValueError and worked with the devs to make the text somewhat useful.
But I was pretty surprised; I'd expect a badly configured endpoint to be the kind of thing that happens relatively often when setting stuff up for the first time.
worldsayshi
Isn't that what a lot of this is about? It's a blue ocean and everyone is full of FOMO.
oxidant
I wish there was a clear spec on the site but there isn't https://modelcontextprotocol.io/specification/2025-03-26
It seems like half of it is Sonnet output and it doesn't describe how the protocol actually works.
For all its warts, the GraphQL spec is very well written https://spec.graphql.org/October2021/
9dev
I didn’t believe you before clicking the link, but hot damn. That reads like the ideas I scribbled down in school about all the cool projects I could build. There is literally zero substance in there. Amazing.
svachalek
I thought it was just me. When I first saw all the hype around MCP I went to go read this mess and still have no idea what MCP even is.
Spivak
And then you read the SDK code, and the bewilderment doesn't stop at the code quality: the organization, the complete failure to use existing tools to solve their problems. It's an absolute mess for a spec that's like 5 JSON schemas in a trench coat.
_raz
Glad to hear it; I also thought I was alone :)
keithwhor
On MCP's Streamable HTTP launch I posted an issue asking if we should just simplify everything for remote MCP servers to plain HTTP requests.
https://github.com/modelcontextprotocol/modelcontextprotocol...
MCP as a spec is really promising; a universal way to connect LLMs to tools. But in practice you hit a lot of edge cases really quickly. To name a few; auth, streaming of tool responses, custom instructions per tool, verifying tool authenticity (is the server I'm using trustworthy?). It's still not entirely clear (*for remote servers*) to me what you can do with MCP that you can't do with just a REST API, the latter being a much more straightforward integration path.
If other vendors do adopt MCP (OpenAI and Gemini have promised to) the problem they're going to run into very quickly is that they want to do things (provide UI elements, interaction layers) that go beyond the MCP spec. And a huge amount of MCP server integrations will just be lackluster at best; perhaps I'm wrong -- but if I'm { OpenAI, Anthropic, Google } I don't want a consumer installing Bob's Homegrown Stripe Integration from a link they found on 10 Best MCP Integrations, sharing their secret key, and getting (A) a broken experience that doesn't match the brand or worse yet, (B) credentials stolen.
keithwhor
Quick follow up:
I anticipate alignment issues as well. Anthropic is building MCP to make the Anthropic experience great. But Anthropic's traffic is fractional compared to ChatGPT - 20M monthly vs 400M weekly. Gemini claims 350M monthly. The incentive structure is all out of whack; how long are OpenAI and Google going to let an Anthropic team (or even a committee?) drive an integration spec?
Consumers have barely interacted with these things yet. They did once, with ChatGPT Plugins, and it failed. It doesn't entirely make sense to me that OpenAI is okay to do this again but let another company lead the charge and define the limitations of the end user experience (because that's what the spec ultimately does: dictates how prompts and function responses are transported), when the issue wasn't the engineering effort (ChatGPT's integration model was objectively more elegant) but a consumer experience issue.
The optimistic take on this is the community is strong and motivated enough to solve these problems as an independent group, and the traction is certainly there. I am interested to see how it all plays out!
Scotrix
OpenAI takes a back seat and waits until something stable/usable comes out of it and gains traction, then takes it over. The classic old playbook: let others make the mistakes and profit from it…
rco8786
> It's still not entirely clear (for remote servers) to me what you can do with MCP that you can't do with just a REST API,
Nothing, as far as I can tell.
> the latter being a much more straightforward integration path.
The (very) important difference is that the MCP protocol has built in method discovery. You don't have to 'teach' your LLM about what REST endpoints are available and what they do. It's built into the protocol. You write code, then the LLM automatically knows what it does and how to work with it, because you followed the MCP protocol. It's quite powerful in that regard.
But otherwise, yea it's not anything particularly special. In the same way that all of the API design formats prior to REST could do everything a REST API can do.
toonalfrink
In other words, MCP is RESTful (as in, it has HATEOAS) and "REST" is not
angusturner
I’m really glad to see people converging on this view because I feel a bit insane for not understanding all the hype.
Like, yeah, we need a standard way to connect LLMs with tools etc, but MCP in its current state is not a solution.
_raz
After I published the blog post, I ended up doing a similar thing the other day. https://github.com/modelcontextprotocol/modelcontextprotocol...
From reading your issue, I'm not holding my breath.
It all kind of seems too important to fuck up
keithwhor
In the grand scheme of things I think we are still very early. MCP might be the thing which is why I'd rather try and contribute if I can; it does have a grassroots movement I haven't seen in a while. But the wonderful thing about the market is that incentives, e.g. good customer experiences that people pay for, will probably win. This means that MCP, if it remains the focal point for this sort of work, will become a lot better regardless of whether or not early pokes and prods by folks like us are successful or not. :)
mattw1810
MCP should just have been stateless HTTP to begin with. There is no good reason for almost any of the servers I have seen to be stateful at the request/session level: either the server carries the state globally, or it works fine with a session identifier of some sort.
taocoyote
I don't understand the logistics of MCP interactions. Can anyone explain why they aren't stateless. Why does a connection need to be held open?
mattw1810
I think some of the advanced features around sampling from the calling LLM could theoretically benefit from a bidirectional stream.
In practice, nobody uses those parts of the protocol (it was overdesigned and hardly any clients support it). The key thing MCP brings right now is a standardized way to discover & invoke tools. This would’ve worked equally well as a plain HTTP-based protocol (certainly for a v1) and it’d have made it 10x easier to implement.
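The "plain HTTP plus a session identifier" shape being argued for here can be sketched like this. It's a hypothetical design sketch, not the actual MCP transport: each request is an independent POST, and any per-session state lives server-side keyed by an id the client echoes back, exactly like a cookie-backed web app.

```python
import json
import uuid

# Server-side state keyed by session id; nothing depends on a live connection.
SESSIONS: dict = {}

def handle_post(headers: dict, body: str) -> tuple:
    """One stateless request/response cycle; returns (response headers, body).
    The 'Mcp-Session-Id' header name is illustrative."""
    session_id = headers.get("Mcp-Session-Id") or str(uuid.uuid4())
    state = SESSIONS.setdefault(session_id, {"calls": 0})
    request = json.loads(body)
    state["calls"] += 1  # stateful bookkeeping lives in the store, not the socket
    response = {"jsonrpc": "2.0", "id": request["id"],
                "result": {"echo": request["method"], "calls": state["calls"]}}
    return {"Mcp-Session-Id": session_id}, json.dumps(response)

hdrs, _ = handle_post({}, '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}')
# The client replays the returned session id on its next, fully independent request:
hdrs2, body2 = handle_post({"Mcp-Session-Id": hdrs["Mcp-Session-Id"]},
                           '{"jsonrpc": "2.0", "id": 2, "method": "tools/list"}')
```

Because every request is self-contained, this shape drops straight into an existing Express/FastAPI app and behind any load balancer.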
brumar
Sampling is to my eyes a very promising aspect of the protocol. Maybe its implementation is lagging behind because it's too far from the previous mental model of tool use. I am also fine if the burden is on the client side if it enables a good DX on server side. In practice, there would be much more servers than clients.
brabel
> This would’ve worked equally well as a plain HTTP-based protocol
With plain HTTP you can quite easily "stream" both the request's and the response's body: that's an HTTP/1.1 feature called chunked transfer encoding (the message body is not just one byte array; it's split into "chunks" that can each be received in sequence). I really don't get why people think you need WS (or ffs SSE) for "streaming". I've implemented a chat using just good old HTTP/1.1 with chunking. It's actually a perfect use case, so it suits LLMs quite well.
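For the curious, this is what chunked transfer encoding actually puts on the wire (framing per the HTTP/1.1 spec; the encoder/decoder below is a minimal sketch that ignores chunk extensions and trailers):

```python
# HTTP/1.1 chunked body framing: each chunk is "<hex length>\r\n<data>\r\n",
# and "0\r\n\r\n" terminates the body. A server can emit chunks as tokens
# arrive; the client reads them incrementally over one ordinary response.

def encode_chunked(parts):
    """Frame an iterable of byte strings as a chunked HTTP/1.1 body."""
    out = b""
    for part in parts:
        out += f"{len(part):x}\r\n".encode() + part + b"\r\n"
    return out + b"0\r\n\r\n"

def decode_chunked(body: bytes):
    """Unframe a chunked body back into its parts (no extensions/trailers)."""
    parts, rest = [], body
    while True:
        size_line, rest = rest.split(b"\r\n", 1)
        size = int(size_line, 16)
        if size == 0:
            return parts
        parts.append(rest[:size])
        rest = rest[size + 2:]  # skip the chunk data plus its trailing \r\n
```

In practice you'd let your HTTP library do this (e.g. returning a generator body from a WSGI/ASGI app), but the framing itself is this simple.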
0x457
Well, the point is to provide context; it's easier to do if the server has state.
For example, you have an MCP client (let's say it's Amazon Q CLI), and you have an MCP server for executing commands over SSH. If the connection is maintained between MCP client and server, then the MCP server can keep the SSH connection alive.
Replace the SSH server with anything else that has state: a browser, for example (now your AI assistant can also have 500 open tabs).
lo0dot0
I don't claim to have a lot of experience with this, but my intuition tells me that a connection that ends after the request needs to be reopened for the next request. Which is more efficient, keeping the session open or closing it, depends on the usage pattern, how much memory the session consumes, and so on.
mattw1810
This is no different from a web app though, there’s no obvious need to reinvent the wheel. We know how to do this very very well: the underlying TCP connection remains active, we multiplex requests, and cookies bridge the gap for multi-request context. Every language has great client & server support for that.
Instead we ended up with a protocol that fights with load balancers and can in most cases not just be chucked into say an existing Express/FastAPI app.
That makes everything harder (& cynically, it creates room for providers like Cloudflare to create black box tooling & advertise it as _the_ way to deploy a remote MCP server)
ycombinatrix
That's not "stateful" for the purposes of correctness. Reusing a tcp stream doesn't make a protocol stateful.
jes5199
I recently wrote an MCP server, in node, after trying and failing to get the official javascript SDK to work. I agree with the criticisms — this is a stunningly bad specification, perhaps the worst I have seen in my career. I don’t think the authors have actually tried to use it.
Trying to fix “you must hold a single connection open to receive all responses and notifications” by replacing it with “you must hold open as many connections as you have long-running requests, plus one more for notifications” is downright unhinged, and from reading the spec I’m not even sure they know that’s what they are asking clients to do
dend
Just to add one piece of clarification - the comment around authorization is a bit out-of-date. We've worked closely with Anthropic and the broader security community to update that part of MCP and implement a proper separation between resource server (RS) and authorization server (AS) when it comes to roles. You can see this spec in draft[1] (it will be there until a new protocol version is ratified).
[1]: https://modelcontextprotocol.io/specification/draft/basic/au...
lolinder
What percentage of the MCP spec is (was?) LLM output?
It's setting off all kinds of alarm bells for me, and I'm wondering if I'm on to something or if my LLM-detector alarms are miscalibrated.
dend
Can only speak for the authorization spec, where I am actively participating - zero. The entire spec was written, reviewed, re-written, and edited by real people, with real security backgrounds, without leaning into LLM-based generation.
_raz
Idk, I'm kind of agnostic and ended up throwing it in there.
Regurgitating the OAuth draft doesn't seem that useful imho, and why am I forced into it if I'm using HTTP? There seem to be plenty of use cases where an unattended thing would like to interact over HTTP, where we usually use other things besides OAuth.
It all probably could have been replaced by:
- The Client shall implement OAuth2
- The Server may implement OAuth2
dend
For local servers this doesn't matter as much. For remote servers - you won't really have any serious MCP servers without auth, and you want to have some level setting done between client and servers. OAuth 2.1 is a good middle ground.
That's also where, with the new spec, you don't actually need to implement anything from scratch. Server issues a 401 with WWW-Authenticate, pointing to metadata for authorization server locations. Client takes that and does discovery, followed by OAuth flow (clients can use many libraries for that). You don't need to implement your own OAuth server.
vlovich123
Bearer tokens work elsewhere and imho are drastically simpler than OAuth
_kidlike
I know it's not auth-related, but the main MCP "spec" says that it was inspired by LSP (language server protocol). Wouldn't something like HATEOAS be more apt?
aristofun
This is part of a bigger problem. Nearly all of AI is done by mathematicians, (data) scientists, students, and amateur enthusiasts, not by professional software engineers.
This is why nearly everything looks like a one-weekend pet project by the standards of software engineering.
doug_durham
Speak for yourself. I see the majority of work being done by professional software engineers.
aristofun
Any popular examples to support your claim?
My claim is supported by the posted article and many points there, for example. Another example is my own experience working with the Python ecosystem, and AI/ML libraries in particular. With rare exceptions (like pandas) it is mostly garbage from a DevX perspective (in comparison, of course).
But I admit my exposure is very limited. I don't work in the AI area professionally (which is another example of my point, btw, lol)
fleischhauf
pytorch, tensorflow, numpy: there are quite a few examples. AI/ML has been steadily commoditized, so it's far from only being developed by mathematicians. Hence every high school student and his mother has an AI startup now. (And I'm not even mad, it's actually very exciting to see what people come up with nowadays)
tdullien
As a trained mathematician with 20+ years shipping software products, I object to this.
A lot of AI work is done by people that are "dash-shaped": broad, but with no depth anywhere.
Then there's a few I-shaped people that drive research progress, and a few T-shaped people that work on the infrastructure that allows the training runs to go through.
But something like a protocol will certainly be designed by a dash, not an I or a T, because those are needed to keep the matrices multiplying.
punkpeye
I am the founder of one of the MCP registries (https://glama.ai/mcp/servers).
I somewhat agree with author’s comments, but also want to note that the protocol is in the extremely early stages of development, and it will likely evolve a lot over the next year.
I think that no one (including me) anticipated just how much attention this would get straight out of the gate. When I started working on the registry, there were fewer than a few dozen servers. Then suddenly a few weeks later there were a thousand, and the numbers just kept growing.
However, lots and lots of those servers do not work. The majority of my time has gone into trying to identify servers that work (using various automated tests). All of this is in large part because MCP got picked up by the mainstream AI audience before the protocol reached any maturity.
Things are starting to look better now though. We have a few frameworks that abstract the hard parts of the protocol. We have a few registries that do a decent job surfacing servers that work vs those that do not. We have a dozen or so clients that support MCPs, etc. All of this in less than half a year is unheard of.
So yes, while it is easy to find flaws in MCP, we have to acknowledge that all of it happened in a super short amount of time – I cannot even think of a comparison to make. If the velocity remains the same, MCP's future is very bright.
For those getting started, I maintain a few resources that could be valuable:
* https://github.com/punkpeye/awesome-mcp-servers/
ethical_source
> I somewhat agree with author’s comments, but also want to note that the protocol is in the extremely early stages of development, and it will likely evolve a lot over the next year.
And that's why it's so important to spec with humility. When you make mistakes early in protocol design, you live with them FOREVER. Do you really want to live with a SSE Rube Goldberg machine forever? Who the hell does? Do you think you can YOLO a breaking change to the protocol? That might work in NPM but enterprise customers will scream like banshees if you do, so in practice, you're stuck with your mistakes.
jes5199
they already did though. the late-2024 version and the early-2025 version have completely incompatible SSE rube goldberg machines
punkpeye
Just focusing on worst-case scenarios tends to spread more FUD than move things forward. If you have specific proposals for how the protocol could be designed differently, I’m sure the community would love to hear them – https://github.com/orgs/modelcontextprotocol/discussions
ethical_source
The worst case scenario being, what, someone implementing the spec instead of using the SDK and doing it in a way you didn't anticipate? Security and interoperability will not yield to concerns about generating FUD. These concerns are important whether you like them or not. You might as well be whispering that ill news is an ill guest.
At the least, MCP needs to clarify things like "SHOULD rate limit" in more precise terms. Imagine someone who is NOT YOU, someone who doesn't go to your offsites, someone who doesn't give a fuck about your CoC, implementing your spec TO THE LETTER in a way you didn't anticipate. You going to sit there and complain that you obviously didn't intend to do the things that weird but compliant server is doing? You don't have a recourse.
The recent MCP annotations work is especially garbage. What the fuck is "read only"? What's "destructive"? With respect to what? And hoo boy, "open world". What the fuck? You expect people to read your mind?
What would be the point of creating GH issues to discuss these problems? The kind of mind that writes things like this isn't the kind of mind that will understand why they need fixing.
rco8786
Agree with basically all of this.
The actual protocol of MCP is…whatever. I’m sure it will continue to evolve and mature. It was never going to be perfect out of the gate, because what is?
But the standardization of agentic tooling APIs is mind bogglingly powerful, regardless of what the standard itself actually looks like.
I can write and deploy code and then the AI just... immediately knows how to use it. Something you have to experience yourself to really get it.
_raz
Kind of my fear exactly. We are moving so fast that MCP might create and cement a transport protocol that could take years or decades to get rid of for something better.
Kind of reminds me of the browser wars during the 90s, where everyone tried to run the fastest and created splits in standards and browsers that we didn't really get rid of for a good 20 years or more. IE11 was around for far too long.
punkpeye
I think that transport is a non-issue.
Whatever the transport evolves to, it is easy to create proxies that convert from one transport to another, e.g. https://github.com/punkpeye/mcp-proxy
As an example, every server that you see on Glama MCP registry today is hosted using stdio. However, the proxy makes them available over SSE, and could theoretically make them available over WS, 'streamable HTTP', etc
Glama is just one example of doing this, but I think that other registries/tools will emerge that will effectively make the transport the server chooses to implement irrelevant.
nylonstrung
Do you think WebTransport and HTTP3 could provide better alternatives for transport?
bongodongobob
Then have a convo with all your devs to stop spamming the glory of MCP all over the damn place. Have some patience and finish writing and testing it first.
mrcsharp
> "In HTTP+SSE mode, to achieve full duplex, the client sets up an SSE session to (e.g.) GET /sse for reads. The first read provides a URL where writes can be posted. The client then proceeds to use the given endpoint for writes, e.g., a request to POST /a-endpoint?session-id=1234. The server returns a 202 Accepted with no body, and the response to the request should be read from the pre-existing open SSE connection on /sse."
This just seems needlessly complicated. Performing writes on one endpoint and reading the response on another just seems so wrong to me. An alternative could be that the "client" generates a session ID at the start of the chat and makes HTTP calls to the server, passing that ID in a query string or header. Then, the response is sent back normally instead of just sending a 202.
What benefit is SSE providing here? Let the client decide when a session starts/ends by generating IDs and let the server maintain that session internally.
nitely
> What benefit is SSE providing here? Let the client decide when a session starts/ends by generating IDs and let the server maintain that session internally.
The response is generated asynchronously, instead of within the HTTP request/response cycle, and sent over SSE later. But emulating WS with HTTP requests+SSE seems very iffy, indeed.
mrcsharp
Well, with SSE the server and client are both holding an HTTP connection open for a relatively long period of time. If the server is written in a language that supports async paradigms, then an HTTP request that needs async IO will use about the same amount of resources anyway. And when the response is finished, that connection is closed and its resources are freed, whereas SSE will keep them for much longer.
nitely
Yes, and the client may make multiple requests, and if they all take long to process you may end up with a lot of open connections at the same time (at least on HTTP/1), so there is a point to fast HTTP requests + SSE instead of slow requests (and no SSE). Granted, if the server is HTTP/2 the requests can share the same connection, but then it'd be similar to just using WS for this usage. Also, this allows you to queue the work and process it either sequentially or concurrently.
By async I meant a process that may take longer than you are willing to do within the request/response cycle, not necessarily async IO.
Kiro
What benefit are WebSockets providing? It's the same. You send something to one endpoint and need to listen for a response in another handler.
justanotheratom
It is indeed quite baffling why MCP is taking off, but facts are facts. I would love to be enlightened as to how MCP is better than an OpenAPI spec for an existing server.
simonw
My theory is that a lot of the buzz around MCP is actually buzz around the fact that LLM tool usage works pretty well now.
OpenAI plugins flopped back in 2023 because the LLMs at the time weren't reliable enough for tool usage to be anything more than interesting-but-flawed.
MCP's timing was much better.
fhd2
I'm still having relatively disastrous results compared to just sending pre curated context (i.e. calling tools deterministically upfront) to the model.
Doesn't cover all the use cases, but for information retrieval stuff, the difference is night and day. Not to mention the deterministic context management approach is quite a bit cheaper in terms of tokens.
visarga
I find letting the agent iterate search leads to better results. It can direct the search dynamically.
runekaagaard
I think a lot of it is timing, and also that it's a pretty low bar to write your first MCP server:
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("Basic Math Server")

    @mcp.tool()
    def multiply(a: int, b: int) -> int:
        return a * b

    mcp.run()
If you have a large MCP server with many tools the amount of text sent to the LLM can be significant too. I've found that Claude works great with an OpenAPI spec if you provide it with a way to look up details for individual paths and a custom message that explains the basics. For instance https://github.com/runekaagaard/mcp-redmine
_raz
That's kind of my point: the protocol's complexity is hidden in the Python SDK, making it feel easy... but you're taking on large tech debt
practal
The difficult part is figuring out what kind of abstractions we need MCP servers / clients to support. The transport layer is really not important, so until that is settled, just use the Python / TypeScript SDK.
pixl97
I mean isn't this the point of a lot of, if not most successful software? Abstracting away the complexity making it feel easy, where most users of the software have no clue what kind of technical debt they are adopting?
Just think of something like microsoft word/excel for most of its existence. Seems easy to the end user, but attempting to move away from it was complex, the format had binary objects that were hard to unwind, and interactions that were huge security risks.
hirsin
This is one of the few places I think it's obvious why MCP provides value - an OpenAPI document is static and does no lifting for the LLM, forcing the LLM to handle all of the call construction and correctness on its own. MCP servers reduce LLM load by providing abstractions over concepts, with basically the same benefits we get by not having to write assembly by hand.
In a literal sense it's easier, safer, faster, etc for an LLM to remember "use server Foo to do X" than "I read a document that talks about calling api z with token q to get data b, and I can combine three or four api calls using this http library to...."
acchow
I believe gp is saying the MCP "tools/list" endpoint should return dynamic, but OpenAPI-format, content.
Not that the list of tools and their behavior should be static (which would be much less capable)
tedivm
I'm not saying MCP is perfect, but it's better than OpenAPI for LLMs for a few reasons.
* MCP tools can be described simply and without a lot of text. OpenAPI specs are often huge. This is important because the more context you provide an LLM the more expensive it is to run, and the larger model you need to use to be effective. If you provide a lot of tools then using OpenAPI specs could take up way too much for context, while the same tools for MCP will use much less.
* LLMs aren't actually making the calls; it's the engine driving them. When an LLM wants to make a call, it responds with a block of text that the engine catches and uses to run the command. This allows LLMs to work like they're used to: figuring out text to output. This has benefits of its own: fewer tokens to output than a big JSON blob is cheaper.
* OpenAPI specs are static, but MCP allows for more dynamic tool usage. This can mean that different clients can get different specs, or that tools can be added after the client has connected (possibly in response to something the client sent). OpenAPI specs aren't nearly that flexible.
This isn't to say there aren't problems. I think the transport layer could use some work, as OP said, but if you play around in their repo you can see websocket examples, so I wouldn't be surprised if that was coming. Also, the idea that "interns" are the ones making the libraries is an absolute joke, as the FastMCP implementation (which was turned into the official spec) is pretty solid. The mixture of hyperbole with some reasonable points really ruins this article.
smartvlad
If you look at the actual raw output of tools/list call you may find it surprisingly similar to the OpenAPI spec for the same interface. In fact they are trivially convertible to each other.
Personally I find OpenAPI spec being more practical since it includes not just endpoints with params, but also outputs and authentication.
Know all that from my own experience plugging dozens of APIs to both MCP/Claude and ChatGPT.
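To illustrate the similarity, a tools/list result looks roughly like this (field names per the MCP spec; the multiply tool and its schema are invented for the example):

```python
# Approximate shape of a tools/list JSON-RPC response. inputSchema is
# plain JSON Schema, which is why conversion to and from an OpenAPI
# operation is nearly mechanical.
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "multiply",
            "description": "Multiply two integers",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "a": {"type": "integer"},
                    "b": {"type": "integer"},
                },
                "required": ["a", "b"],
            },
        }]
    },
}
```

What OpenAPI adds on top, as noted above, is response schemas and auth; what MCP adds is the ability to change this list mid-session.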
9dev
> OpenAPI specs are static, but MCP allows for more dynamic tool usage.
This is repeated everywhere, but I don’t get it. OpenAPI specs are served from an HTTP endpoint, there’s nothing stopping you from serving a dynamically rendered spec depending on the client or the rest of the world?
armdave
What does it mean that "different clients can get different specs"? Different in what dimension? I could imagine this makes creating repeatable and reliable workflows problematic.
tedivm
Using MCP you can send "notifications" to the server, and the server can send back notifications including the availability of new tools.
So this isn't the same as saying "this user agent gets X, this gets Y". It's more like "this client requested access to X set of tools, so we sent back a notification with the list of those additional tools".
This is why I do think websockets make more sense in a lot of ways here, as there's a lot more two-way communication than you'd expect in a typical API. This communication is also very session-based, which is another thing that doesn't fit most OpenAPI specs, which assume a more REST-like stateless setup.
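For concreteness, the "new tools" notification mentioned above is just a JSON-RPC notification (method name per the MCP spec; this is a sketch of the message shape, not SDK code):

```python
# A JSON-RPC notification has no "id", so no response is expected.
# On receipt, a client re-issues tools/list to pick up the new tool set.
list_changed = {
    "jsonrpc": "2.0",
    "method": "notifications/tools/list_changed",
}
```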
Brainlag
Why is it baffling? Worse is better! Look at PHP: why did anyone ever take that prank of a programming language seriously?
jacob019
The consensus around here seems to be that the protocol itself is fine, but the transport is controversial.
Personally, even the stdio transport feels suboptimal. I mostly write python and startup time for a new process is nontrivial. Starting a new process for each request doesn't feel right. It works ok, and I'll admit that there's a certain elegance to it. It would be more practical if I were using a statically compiled language.
As far as the SSE / "Streamable HTTP" / websockets discussion, I think it's funny that there is all this controversy over how to implement sockets. I get that this is where we are, because the modern internet only supports a few protocols, but at the network level you can literally just open up a socket and send newline-delimited JSON-RPC messages in both directions at full duplex. So simple, and no one even thinks about it. Why not support the lowest-level primitive first? There are many battle-tested solutions for exposing sockets over higher-level protocols, websockets being one of them. I like the Unix Philosophy.
Thinking further, the main issue with just using TCP is the namespace. It's similar to when you have a bunch of webservers and nginx or whatever takes care of the routing. I use domain sockets for that. People often just pick a random port number, which works fine too as long as you register it with the gateway. This is all really new, and I'm glad that the creators, David and Justin, had the foresight to have a clean separation between transport and protocol. We'll figure this out.
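A sketch of that newline-delimited JSON-RPC framing, which is essentially what MCP's stdio transport already does, and which works over any byte stream (TCP socket, Unix domain socket, or pipe):

```python
# One JSON-RPC message per line; the newline is the only framing needed.
import json

def frame(msg: dict) -> bytes:
    return (json.dumps(msg) + "\n").encode()

def unframe(data: bytes) -> list[dict]:
    return [json.loads(line) for line in data.splitlines() if line.strip()]

wire = frame({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
msgs = unframe(wire)
# msgs[0]["method"] == "tools/list"
```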
ximus
> Starting a new process for each request doesn't feel right.
I think there is a misunderstanding of how stdio works. The process can be long running and receive requests via stdio at any time. No need to start one for each request.
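The long-running model can be sketched with `cat` standing in for a server (purely illustrative; a real MCP server would reply with JSON-RPC results rather than echoing its input):

```python
# One child process serves many requests over its stdin/stdout, so there
# is no per-request process startup cost.
import json
import subprocess

proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)
replies = []
for i in (1, 2):
    proc.stdin.write(json.dumps({"jsonrpc": "2.0", "id": i,
                                 "method": "ping"}) + "\n")
    proc.stdin.flush()
    replies.append(json.loads(proc.stdout.readline()))
proc.stdin.close()
proc.wait()
# The same process handled both requests; replies[1]["id"] == 2.
```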
jacob019
Sure, but that is not what I'm seeing in FastMCP proxy mode, and it can only talk with the parent process. You make a good point though, stdio is similar to tcp sockets and cross-platform without namespace issues, but can only support one client per process. I could use socat if I want it to talk sockets.
> the documentation is poorly written (all LLM vendors seem to have an internal competition in writing confusing documentation).
This is almost certainly because they're all using LLMs to write the documentation, which is still a very bad idea. The MCP spec [0] has LLM fingerprints all over it.
In fact, misusing LLMs to build a spec is much worse than misusing them to avoid writing good docs because when it comes to specifications and RFCs the process of writing the spec is half the point. You're not just trying to get a reasonable output document at the end (which they didn't get anyway—just try reading it!), you're trying to figure out all the ways your current thinking is flawed, inadequate, and incomplete. You're reading it critically and identifying edge cases and massaging the spec until it answers every question that the humans designing the spec and the community surrounding it have.
Which means in the end the biggest tell that the MCP spec is the product of LLMs isn't that it's somewhat incoherent or that it's composed entirely of bullet lists or that it has that uniquely bland style: it's that it shows every sign of having had very little human thought put into it relative to what we'd expect from a major specification.
[0] https://modelcontextprotocol.io/specification/2025-03-26