Show HN: Open-Source MCP Server for Context and AI Tools
6 comments
· March 14, 2025
fudged71
Very cool.
How does it work when multiple installed MCP servers have overlapping functionality? Are MCP servers going to ship competing prompts, each claiming to be the best choice for OCR, etc.?
dlevine
I have been playing around with MCP, and one of its current shortcomings is that it doesn't support OAuth. This means that credentials need to be hardcoded somewhere. Right now, a lot of MCP servers appear to run locally, but there is no reason they couldn't be run as a service in the future.
There is a draft specification for OAuth in MCP, and hopefully this is supported soon.
socrateslee
For the OAuth part, an access_token is all an MCP server needs. Users could complete an OAuth authorization flow, either in the settings or through the chatbot, and let the MCP server handle storage of the access_token.
For remote MCP servers, storing the access_token is a very common practice. For locally hosted MCP servers, how to manage a pile of secret keys is still an open problem.
knowaveragejoe
There are remotely run MCP server options out there, such as mcp.run and glama.ai.
johnjungles
This is pretty cool!
I too am working on effortless MCP servers for other developers using Cursor and Windsurf. There's so much out there on MCP, but it turns out a lot of MCP servers don't "just work." A lot of people have been porting APIs directly, but you actually need to put more thought into it, because people don't memorize the UUIDs required to make API calls. Memory is a good approach, but I'm wary of the recall aspect and how it could cause tool calls with bad inputs.
I built https://skeet.build where anyone can try out MCP for Cursor and Windsurf. We approached it with brute-force, thoughtful design and a lot of trial and error.
We did this because of a pain point I experienced as an engineer: dealing with Jira and Linear, updating Slack, and all that friction. I noticed I was copying and pasting a lot into Cursor, so I spent time building this app.
Mostly for workflows that I like:
* Start a PR with a summary of what I just did
* Slack or comment to Linear/Jira with a summary of what I pushed
* Pull this issue from Sentry and fix it
* Find a bug and create a Linear issue to fix it
* Pull this Linear issue and do a first pass
* Pull in this Notion doc with a PRD, then create an API reference for it based on this code
* Pull Postgres or MySQL schemas for rapid model development
Everyone seems to go for the hype, but ease of use, practical and pragmatic developer workflows, and high-quality, polished MCP servers are what we're focused on.
Lmk what you think!
pdf2calguy
sweet product
Large Language Models (LLMs) are powerful, but they’re limited by fixed context windows and outdated knowledge. What if your AI could access live search, structured data extraction, OCR, and more—all through a standardized interface?
We built the JigsawStack MCP Server, an open-source implementation of the Model Context Protocol (MCP) that lets any AI model call external tools effortlessly.
Here’s what it unlocks:
- Web Search & Scraping: Fetch live information and extract structured data from web pages.
- OCR & Structured Data Extraction: Process images, receipts, invoices, and handwritten text with high accuracy.
- AI Translation: Translate text and documents while maintaining context.
- Image Generation: Generate images from text prompts in real time.
Instead of stuffing prompts with static data or building custom integrations, AI models can now query MCP servers on demand—extending memory, reducing token costs, and improving efficiency.
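Under the hood, MCP is JSON-RPC 2.0, so an on-demand tool invocation is just a small message from the client. A rough sketch of what such a request looks like; the tool name and arguments here are hypothetical for illustration, not the JigsawStack server's actual schema:

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool name and arguments, for illustration only.
msg = make_tool_call(1, "web_search", {"query": "model context protocol"})
```

The server replies with a result keyed to the same `id`, so the model only pays tokens for the data it actually asked for, rather than carrying everything in the prompt.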
Read the full breakdown here: https://jigsawstack.com/blog/jigsawstack-mcp-servers
If you’re working on AI-powered applications, try it out and let us know how it works for you.