Show HN: Self-updating MCP server for official pip, uv, poetry and conda docs
17 comments · July 23, 2025
rgovostes
keminghe
Thank you so much for pointing that out, I just updated the docker instructions in the README and on DockerHub:
```shell
# Pin to commit hash for production security
# Get current hash from: https://hub.docker.com/r/keminghe/py-dep-man-companion/tags
docker pull keminghe/py-dep-man-companion@sha256:2c896dc617e8cd3b1a1956580322b0f0c80d5b6dfd09743d90859d2ef2b71ec6  # 2025-07-22 release example

# Or use latest for development
docker pull keminghe/py-dep-man-companion:latest
```
WhatsName
The demo is not convincing. I rarely find myself migrating between package managers, and if I did, I would expect Claude Code to ace this task without MCP help.
keminghe
Appreciate the feedback. I will make it my todo to try out your suggestion of comparing Claude Code with and without MCP to measure the quantitative difference.
imcritic
I currently have a poor internet connection and am not able to view the demo, but having read the description of the project, I still don't understand what it is or does, or which problems and tasks it solves.
keminghe
This is great feedback. I just improved the README and DockerHub overview to be more clear:
"Stop getting out-of-date Python package manager commands from your AI. Cross-reference latest official pip, poetry, uv, and conda docs with auto-updates."
imcritic
It didn't get much clearer. What am I supposed to do to achieve that? If I query an AI and it gives me outdated instructions, how is your server supposed to help? I suppose I'm meant to direct the AI's pipeline to use the MCP protocol as a next step and query a locally running instance of your server to improve the final answer..? The solution sounds quite ad hoc: instead of fixing the problem where it is (in the AI's knowledge base), you suggest correcting its results by making it query a server each user is supposed to run locally? That sounds like the wrong approach to me. I honestly doubt many people would bother working with AI that way, especially given that AI is a paid service.
What I think would be great is either you hosting a central server permanently available to public and somehow convincing major AI service providers to query your servers for solving that narrow scope of tasks, or rather do something similar for open source models available on Hugging Face or something.
keminghe
You're absolutely right about the root cause being outdated AI knowledge bases/training data. I agree, my solution doesn't address that directly.
Where this actually shines is with local LLMs (Ollama, etc) - smaller models, no API costs, fully offline, and the AI gets fresh docs without waiting months for model retraining cycles. Your point about convincing major providers to integrate something like Dash (https://kapeli.com/dash) would definitely be the ideal solution though.
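(For readers wondering how a local model would reach the server: MCP clients are typically pointed at it via a JSON config. The entry below is a sketch only — the server name "py-dep-docs" and the stdio-over-docker invocation are assumptions on my part; check the project README for the actual launch arguments.)

```json
{
  "mcpServers": {
    "py-dep-docs": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "keminghe/py-dep-man-companion@sha256:2c896dc617e8cd3b1a1956580322b0f0c80d5b6dfd09743d90859d2ef2b71ec6"
      ]
    }
  }
}
```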
I definitely hear you on the broader ecosystem approach. Anything you've been working on in the same space?
mrbonner
Even though your demo is not that helpful, I appreciate you sharing this. It opened my eyes to other ideas for providing better documentation to coding agents. I believe the current RAG-based approach for coding agents is not the most optimal solution.
keminghe
Thank you. Yes, half of the core value prop is the self-updating automation, and the other half is the traditional fuzzy + full-text search capabilities that disrupt the embedding-centric RAG paradigm. Plus, Tantivy (Rust-based) is fast.
keminghe
Genuinely curious: what are aspects of the demo you find less helpful? I want to improve it.
dcreater
This is superseded by Context7, no?
hobofan
In terms of providing the widest coverage, I think Dash[0], which has been in the offline documentation space for a long time, has everyone beat.
keminghe
Appreciate you opening my eyes to this. Dash is indeed comprehensive, and a much bigger initiative. I wonder how it's handling the documentation staleness issue? New docs are published every minute.
hobofan
For the package manager ecosystems it supports, it relies on the projects' auto-generated docs and then builds the docsets from those. I guess it does that step in a cached, on-demand way. That way it can provide docs for all packages and package versions.
E.g. for Rust: a crate is published to crates.io -> that triggers an automatic docs build on docs.rs -> Dash clients can then pull docsets through a proxy that builds them from the static HTML bundles generated for docs.rs.
vanous
Dash doesn't seem to be open source, plus it's a subscription model. Nothing against that, but OP's solution is MIT licensed.
> Docker `:latest` tag guarantees you always get current docs without manual updates.
The docs should probably be pinned to the version of the tool you have installed. Aside from that, pinning to a specific container hash (not tag) allows you to audit it and trust that malicious instructions haven’t been introduced into the docs.
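For anyone wanting to do the pinning: a rough sketch using standard Docker CLI commands to resolve the mutable tag to an immutable content digest you can audit (the `<digest-from-above>` placeholder is obviously yours to fill in):

```shell
# Resolve the :latest tag to its content digest
docker pull keminghe/py-dep-man-companion:latest
docker image inspect --format '{{index .RepoDigests 0}}' \
    keminghe/py-dep-man-companion:latest
# -> keminghe/py-dep-man-companion@sha256:...

# Then pin runs to that digest rather than the mutable tag
docker run -i --rm keminghe/py-dep-man-companion@sha256:<digest-from-above>
```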