Show HN: Cloud-Ready Postgres MCP Server
81 comments
· March 30, 2025
saberience
Just for everyone here, the code for "building an MCP server" is importing the standard MCP package for TypeScript, Python, etc., then writing as little as 10 lines of code to define something as an MCP tool.
Basically, it's not rocket science. I also built MCP servers for MySQL, Twilio, Polars, etc.
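For example, with the official Python SDK's FastMCP helper, a complete server with one tool is roughly this much code (an illustrative sketch, not pg-mcp's actual code):

    # Minimal MCP server sketch using the official Python SDK's FastMCP helper
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo")

    @mcp.tool()
    def add(a: int, b: int) -> int:
        """Add two numbers."""
        return a + b

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default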
spennant
I built this to fill a specific need. It's not really made to talk to Claude Desktop (although it can). It's built to take multiple connections from agents that bring their own LLM (via API key or whatever) and provide context to that LLM. Yes, it does import the standard Python MCP package; however, it does quite a few non-standard things in order to achieve the multi-tenancy. I encourage you to look at the code.
koakuma-chan
What is multi-tenancy?
spennant
Just a fancy way of saying that multiple agents (with their own LLMs) can connect concurrently, and also that pg-mcp can connect to multiple Postgres servers simultaneously.
yroc92
User isolation in a single data store. Basically, many customers sharing a single database securely without sharing their data.
runako
From HN guidelines:
> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
We are hackers here. Building is good. Sharing is good. All this is true even if you personally know how to do what is being shared, and it is easy for you. I promise you that for every post shared here there are people who do not think what is posted is easy.
brulard
I think we need to hear exactly things like that. This is what I was wondering: why is every MCP project such big news? Isn't it just a few lines of code?
runako
Is this really “big news” or is it a GitHub link titled “Show HN”?
Is there a glitzy corporate PR page trying to sell something, or is this just code for people to read?
Did Ars Technica breathlessly cover it, or did a random hacker post and share something they worked on?
If it’s the work of a random hacker not promoted by media outlets, who benefits from negative comments about that person’s work?
Is it possible that there are at least some people who read this site who know less about the topics covered than you do, and so might find this interesting or useful?
When you post something, will it help you to improve if people post non-constructive negative feedback? Will dismissive comments like these make you more or less likely to show your work publicly?
Just food for thought…
asdev
5% hackers, 95% FANG employees who think this is meaningless
esafak
You do have to implement every function that you want to expose.
1zael
This is wild. Our company has like 10 data scientists writing SQL queries on our DB for business questions. I can deploy pg-mcp for my organization so everyone can use Claude to answer whatever is on their mind? (e.g. "show me the top 5 customers by total sales")
sidenote: I'm scared of what's going to happen to those roles!
clusterfook
Yep gonna be easy
Q: show me the top 5 customers by total sales
A: System.Data.Odbc.OdbcException (0x80131937): ERROR [57014] ERROR: canceling statement due to statement timeout;
Q: Why do I get this error
A: Looks like it needs an index, let me create that for you. Done. Rerunning query.
could not close temporary statistics file "pg_stat_tmp/global.tmp": No space left on device
Q: Why this error
A: 429 Too Many Requests
Rub hands... great next 10 years to be a backend dev.
curious_cat_163
That’s a good example of a worst-case scenario. This is why we would still need humans loitering about.
The question is, do they still need 10? Or would 2 suffice? How about 5?
This does not need to be a debate about the absolutes.
clusterfook
You will need 2. BUT, and here is the rub. https://en.wikipedia.org/wiki/Jevons_paradox
I am hoping Jevon will keep employing me. He has been a great boss for the last 25 years TBH.
brulard
I have to say I've had very good results creating and optimizing quite complex queries with Sonnet. But letting an LLM run them on its own in production is quite a different beast.
fullstackchris
and the next 10 after that, and the next 10 after that, and...
otabdeveloper4
Probably nothing. "Expose the database to the pointy-haired boss directly, as a service" is an idea as old as computing itself. Even SQL itself was originally an iteration of that idea. Every BI system (including Power BI and Tableau) was supposed to be that.
It doesn't work because the PHB doesn't have the domain knowledge and doesn't know which questions to ask. (No, it's never as simple as group-by and top-5.)
jaccola
I would say SQL still is that! My wife had to learn some SQL to pull reports in some non-tech finance job 10 years ago. (I think she still believes this is what I do all day…)
I suppose this could be useful in that it prevents everyone in the company having to learn even the basics of SQL which is some barrier, however minimal.
Also the LLM will presumably be able to see all the tables/fields and ‘understand’ them (with the big assumption that they are even remotely reasonably named) so English language queries will be much more feasible now. Basically what LLMs have over all those older attempts is REALLY good fuzziness.
I see this being useful for some subset of questions.
spennant
It won’t be that easy. First off, most databases in the wild are not well documented. LLMs benefit from context, and if your tables/columns have non-intuitive or non-descriptive names, the SQL may not even work. Second, you might benefit from an LLM fine-tuned on writing code and/or an intelligent Agent that checks for relevancy and ambiguity in user input prior to attempting to answer the question. It would also help if the agent executed the query to see how it answered the user’s question. In other words “reasoning”… pg-mcp simply exposes the required context for Agents to do that kind of reasoning.
nickdichev
The COMMENT command will finally be useful :)
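For anyone who hasn't used it: COMMENT attaches descriptions to tables and columns, which introspection via pg_catalog can then surface to the model. A quick sketch with asyncpg (the table and column names here are made up):

    import asyncio
    import asyncpg

    async def main():
        conn = await asyncpg.connect("postgresql://localhost/mydb")
        # Descriptions land in pg_catalog (pg_description) and can be
        # read back by schema-introspection tools like pg-mcp's.
        await conn.execute("COMMENT ON TABLE orders IS 'One row per customer order'")
        await conn.execute("COMMENT ON COLUMN orders.total_cents IS 'Order total in US cents'")
        await conn.close()

    asyncio.run(main())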
dinfinity
Then let the AI first complete the documentation by looking at the existing documentation, querying the DB (with pg-mcp), etc.
Do human reviewing and correcting of the updated documentation. Then ensure that the AI knows that the documentation might still contain errors and ask it to do the 'actual' work.
moltar
There are LLM SQL benchmarks. [1] And the state-of-the-art solution is still only at 77% accuracy. Would you trust that?
flappyeagle
Yes. Ask it to do it 10 times and pick the right answer
pclmulqdq
That only works if you assume the failure cases are uncorrelated. Spoiler alert: they are not.
Kiro
There are hundreds of text-to-SQL companies and integrations already. What's different about this that makes you react like that?
romanovcode
Those companies will be dead once this goes mainstream. Why pay a 3rd-party company when you can ask an LLM to create graphs and analyses of whatever you want? Pair it with scheduled tasks and I really don't see any value in those SaaS products.
slt2021
there are a lot of nuances in Business Analytics. You maybe can get away with GenAI for naive questions like "Who are my top 5 customers?", but that's not the type of insight usually needed. Most companies already know their top 5 customers by heart and these don't change a lot.
Nuanced BI analytics can have a lot of toggles and filters and drilldowns, like compare sales of product A in category B subcategory C, but only for stores in regions X,Y and that one city Z during time periods T1, T2. and out of these sales, look at sales of private brand vs national brand, and only retail customers, but exclude purchases via business credit card or invoiced.
with every feature in a DB (of which there could be thousands), the number of permutations and dimensions grows very quickly.
What's probably going to happen is that simple questions will be self-served via GenAI, but more advanced usage will still need intervention by a specialist. So we would see some improvement in productivity, but people will not lose jobs. Perhaps the number of jobs could even increase due to increased demand for analytics, as often happens with increased efficiency/productivity (Jevons paradox).
Kiro
Those companies and integrations are already using LLMs. That's the whole point. I'm only talking about LLM products, many of which are free and open source. This has been mainstream for years.
a-dub
is that true? i'd like that, but i get the sense that this mcp stuff is more oriented around programming assistant and agent applications.
i suppose the desktop app can use it, but how good is it for this general purpose "chat with the database for lightweight analytics" use case? is it worth the trouble of dealing with some electron app to make it work?
sshine
> i get the sense that this mcp stuff is more oriented around programming assistant and agent applications
Agents will become a ubiquitous part of the user interface that is currently the chat.
So if you bother with a website or an electron app now, MCP will just add more capabilities to what you can control using agents.
a-dub
yeah, i understand the premise. my question revolves around how well it actually works today for bi style applications. specifically, how close is it to being something that you can just drop in as a smart query and plotting interface rather than a bi stack that is built around something like tableau.
when i've read through documentation for mcp servers, it seems like the use cases they've mostly been focused on are improving effectiveness of programming assistants by letting them look at databases associated with codebases they're looking to modify.
i understand that these things are meant to be generic in nature, but you never really know if something is fit for purpose until it's been used for that purpose. (at least until agi, i suppose)
slt2021
didn't Tableau (and some other BI solutions) have this feature out of the box?
risyachka
So you will ask "What is our churn?", get a random result, and then turn your whole marketing strategy around a wrong number?
That's cute.
fulafel
From docker-compose:

    ports:
      - "8000:8000"

This will cause Docker to expose this to the internet and even helpfully configure an allow rule in the host firewall, at least on Linux.
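Binding to loopback instead keeps it off public interfaces (assuming nothing outside the host needs to reach it):

    ports:
      - "127.0.0.1:8000:8000"

rubslopes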
Good catch.
OP, exposing your application without authentication is a serious security risk!
Quick anecdote: Last week, I ran a Redis container on a VPS with an exposed port and no password (rookie mistake). Within 24 hours, the logs revealed someone attempting to make my Redis instance a slave to theirs! The IP traced back to Tencent, the Chinese tech giant... Really weird. Fortunately, there was nothing valuable stored in it.
acheong08
> The IP traced back to Tencent, the Chinese tech giant... Really weird.
They're a large cloud provider in Asia like Amazon AWS or Microsoft Azure. I doubt such a tech company would make it that obvious when breaking the law.
rubslopes
I didn't know that, thank you.
spennant
I made a few assumptions about the actual deployer and their environment that I shouldn’t have… I’ll need to address this. Thanks!
tudorg
This is great, I like in particular that there are extension plugins. I'll be looking at integrating this in the Xata Agent (https://github.com/xataio/agent) as custom tooling.
spennant
Xata.io looks very interesting!!! I was thinking about building an intelligent agent for pg-mcp as my next project, but it looks like you did a lot of the hard work already. When thinking about the "AI Stack" I usually separate concerns like this:
    UI <--> Agent(s) <--> MCP Server(s) <--> Tools/Resources
               |
             LLM(s)
tudorg
That's very similar to what we are thinking as well, and we'd like to separate the Agent tools into an MCP server as well as use MCP for custom tools.
scottpersinger
Where's the pagination? How does a large query here not blow up my context:
https://github.com/stuzero/pg-mcp/blob/main/server/tools/que...
spennant
It's coming...
jillesvangurp
Is there more to MCP than being a simple Remote Procedure Call framework that allows AI interactions to include function calls driven by the AI model? The various documentation pages are a bit hand wavy on what the protocol actually is. But it sounds to me that RPC describes all/most of it.
doug_durham
The biggest contribution is the LLM-compatible metadata that describes the tool and its arguments. It is trivial to adopt. In Python you can use FastMCP to add a decorator to a function, and as long as that function returns a JSON string you are in business. The decorator extracts the arguments and doc strings and presents them to the LLM.
jillesvangurp
What makes a spec LLM-compatible? I've thrown a lot of different things at gpt o1 and it generally understands them better than I do: OpenAPI specifications, unstructured text, log output, etc.
spennant
Indeed. Anything you do with MCP can be done in more traditional ways.
ahamilton454
I don’t understand the advantage of having the transport protocol be HTTP/SSE rather than stdio, especially in this case when it’s literally running locally.
spennant
The use case for pg-mcp is server deployment - local running is just for dev purposes. HTTP/SSE enables multiple concurrent connections and network access, which stdio can't provide.
runako
I'm still trying to grok MCP, would be awesome if you could include usage examples in the doc.
Good luck!
saberience
It's not complicated at all.
All it does is expose methods as "tools", which are then surfaced to your LLM with a name, description, and input parameters.
E.g. Name: "MySqlTool", Description: "Allows arbitrary MySQL queries to the XYZ database", Parameters: "string: sqlToExecute"
The MCP client (e.g. Claude Desktop, Claude Code) is configured to talk to an MCP server via stdio or SSE, and calls a method like "tools/list"; the server just sends back a list (in JSON) of all the tools: names, descriptions, params.
Then, if the LLM gets a query that calls for e.g. a web search or web scraping, it just outputs a tool-use token and stops inferencing. The client then calls that tool via stdio/SSE (JSON-RPC) on the MCP server, which runs the method and returns the result; the result is added to the LLM's message history, and inferencing runs again from the beginning.
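For example, a tools/list response is roughly this shape (per the MCP spec; the values are made up):

    {
      "jsonrpc": "2.0",
      "id": 1,
      "result": {
        "tools": [
          {
            "name": "MySqlTool",
            "description": "Allows arbitrary MySQL queries to the XYZ database",
            "inputSchema": {
              "type": "object",
              "properties": { "sqlToExecute": { "type": "string" } },
              "required": ["sqlToExecute"]
            }
          }
        ]
      }
    }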
runako
I think people who have been building with LLMs have a different view on what is complicated vs not :-)
It may be easy for you to configure, but you dropped some acronyms in there that I would have to look up. I have definitely not personally set up anything like this.
bavell
It's basically a simple rpc server, there's nothing complicated going on...
mparis
+1
My first foray into using MCP was via Claude Desktop. Would be great if you packaged your tool such that one could add it with a few lines in their ‘~/Library/Application Support/Claude/claude_desktop_config.json’
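Presumably something like this (a guess: Claude Desktop speaks stdio, so an SSE server like pg-mcp would need a stdio bridge such as the mcp-remote npm package):

    {
      "mcpServers": {
        "pg-mcp": {
          "command": "npx",
          "args": ["mcp-remote", "http://localhost:8000/sse"]
        }
      }
    }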
jamestimmins
Same here. Tonight I added Whatsapp to Claude Desktop via Luke Harries' https://github.com/lharries/whatsapp-mcp. Very solid intro into how it all works.
romanovcode
It's very simple and this is actually good example.
1. You add this MCP to your DB (make sure it is securely connected to your AI of choice of course)
2. Ask anything about your data, ask to make graphs, ask to make scheduled tasks, ask to analyze queries and show optimizations and so on.
3. Profit, literally. No need to pay BI companies thousands each month.
teaearlgraycold
The main thing that made MCP hard for me to understand at first is that it's both transport-agnostic (so no leveraging semantic HTTP) and an async task-management protocol as well as a tool-use protocol. The name itself is also poorly chosen; I would call it Tool Use Protocol.
Think of each MCP implementer as an agent's input/output device.
oulipo
Nice!
What I'd be looking for is an MCP server that I can run in "biz/R&D exploration mode", e.g.:
- assume I'm working on a replica (shared among all R&D engineers)
- they can connect and make read-only queries to the replica for the company data
- they have a temporary read-write schema just for their current connection so they can have temporary tables and caches
- temporary data is deleted when they close the session
How could you make a setup like that so that when using your MCP server, I'm not worried about the model / users modifying the data, but only doing their own private queries/tables?
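On the Postgres side I imagine it's roughly this (a sketch; the role and database names are made up, and Postgres already drops temp tables at session end):

    import asyncio
    import asyncpg

    # Hypothetical setup for a shared replica: a read-only role that can
    # still create per-session temporary tables.
    SETUP = """
    CREATE ROLE rnd_reader LOGIN PASSWORD 'changeme';
    -- TEMPORARY lets the role create temp tables; each session gets its
    -- own pg_temp schema, dropped automatically on disconnect.
    GRANT CONNECT, TEMPORARY ON DATABASE replica_db TO rnd_reader;
    GRANT USAGE ON SCHEMA public TO rnd_reader;
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO rnd_reader;
    """

    async def main():
        conn = await asyncpg.connect("postgresql://admin@localhost/replica_db")
        await conn.execute(SETUP)  # asyncpg runs multi-statement scripts when no args are given
        await conn.close()

    asyncio.run(main())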
revskill
Every time i see a cloud API_KEY required, i'm off.
Jaxkr
Well the key it wants is for Anthropic and you can’t run those models locally.
spennant
The pg-mcp server doesn’t need an API key. Those variables are only needed to run the example claude-client. You can write your own agent to talk to whatever LLM you like.
Hey HN,
I built pg-mcp, a Model Context Protocol (MCP) server for PostgreSQL that provides structured schema inspection and query execution for LLMs and agents. It's multi-tenant and runs over HTTP/SSE (not stdio).
Features:
- Supports multiple database connections from multiple agents
- Schema Introspection: Returns table structures, types, indexes, and constraints, enriched with descriptions from pg_catalog (for well-documented databases).
- Read-Only Queries: Controlled execution of queries via MCP.
- EXPLAIN Tool: Helps smart agents optimize queries before execution.
- Extension Plugins: YAML-based plugin system for Postgres extensions (supports pgvector and postgis out of the box).
- Server Mode: Spin up the container and it's ready to accept connections at http://localhost:8000/sse
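A minimal client sketch with the Python MCP SDK, pointed at that endpoint (connection details are illustrative):

    import asyncio
    from mcp import ClientSession
    from mcp.client.sse import sse_client

    async def main():
        # Open the SSE transport, start a session, and list available tools
        async with sse_client("http://localhost:8000/sse") as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()
                print([tool.name for tool in tools.tools])

    asyncio.run(main())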