Clojure MCP
51 comments
May 25, 2025 · brucehauman
pkphilip
Is there a possibility that you could do a short video with a demonstration of your workflow using AI with REPL in Clojure?
TheSmoke
flappy bird demo but for clojure mcp!
rads
You might see this and think, "great, another hyped up vibe coding tool". If Clojure is about simplicity and understanding your code deeply (with the end goals of long-term maintenance and reliability), why would you need this?
When working with Clojure, I've been using LLMs primarily for two use cases:
1. Search
2. Design feedback
The first case is obvious to anyone who's used an LLM chat interface: it's often easier to ask an LLM for the answer than a traditional search engine.

The second case is more interesting. I believe the design of a system is more important than the language being used. I'd rather inherit a well-designed codebase in some other language over a poorly designed Clojure codebase any day. Due to the values of Clojure embedded in the language itself and the community that surrounds it, Clojure programmers are naturally encouraged to think first, code second.
The problem I've run into with the second case is that it often takes too much effort for me to get the context into the LLM for it to answer my questions in detail. As a result, I tend to reach for LLMs when I have a general design question that I can then translate into Clojure. Asking it specific questions about my existing Clojure code has felt like more effort than it's worth, so I've actually trained myself to make things more generic when I talk to the LLM.
This MCP with Claude Code seems like the tipping point where I can start asking questions about my code, not just asking for general design feedback. I hooked this up to a project of mine where I recently added multi-tenancy support (via an :app-id key), which required low-level changes across the codebase. I asked the following question with Claude Code and the Clojure MCP linked here:
> given that :app-id is required after setup, are there any places where :app-id should be checked that is missing?
It actually gave me some good feedback on specific files and locations in my code for about 10 seconds of effort. That said, it also cost me $0.48. This might be the thing that gets me to subscribe to a Claude Max plan...
didibus
I think you need to try the 3rd way:
3. Agentic Coding (aka Vibe Coding)
This is what clojure-mcp with Claude Desktop lets you try. Or you can try Amazon Q CLI (there is a free tier: https://aws.amazon.com/q/developer/pricing/). Not Clojure specific.

You need to find a workflow to leverage it. There are two approaches.
1. Developer Guided
Here you set up the project and basic project structure. Add the dependencies you want to use, set up your src and test folders, and so on.

Then you start creating the namespaces you want, but you don't implement them; just create the `(ns ...)` form with a doc-string that describes it. You can also start adding the public functions you want for its API. Don't implement those either. Just add a signature and doc-string.
Then you create the test namespace for it. Create a `deftest` for the functions you want to test, and add `(testing ...)` forms, but don't add the bodies; just write the test descriptions.
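A minimal sketch of what those stubs might look like (the namespace and function names here are made up for illustration):

```clojure
;; src/myapp/inventory.clj -- implementation stub: doc-strings and signatures only
(ns myapp.inventory
  "Tracks per-product stock levels. All functions are pure and take the
  inventory map as their first argument.")

(defn add-stock
  "Returns `inventory` with `qty` units added for `product-id`."
  [inventory product-id qty])

;; test/myapp/inventory_test.clj -- test skeleton: descriptions only, no bodies
(ns myapp.inventory-test
  (:require [clojure.test :refer [deftest testing]]
            [myapp.inventory :as inv]))

(deftest add-stock-test
  (testing "adding stock to an unseen product creates its entry")
  (testing "adding zero units leaves the inventory unchanged"))
```

The AI then gets a tightly scoped job: make the named test cases pass without changing the public signatures.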
Now you tell the AI to fill in the implementation of the tests and namespace so that all described test cases pass and to run the test and iterate until it all does.
Then ask the AI to code review itself, and iterate on the code until it has no more comments.
Mention security, exception handling, logging, and so on as you see fit, if you explicitly call those concerns it'll work on them.
Rinse and repeat. You can add your own tests to be more sure, and also test things out and ask it to fix.
2. Product Guided
Here you pretend to be the Product Manager. You create a project and start adding markdown files in it that describe the user stories, the features of the app/service, and so on.

Then you ask AI to generate a design specification. You review that, and have it iterate on it until you like it.
Then you ask AI to break down a delivery plan, and a test plan to implement it. Review and iterate until you like it.
Then you ask AI to break up the delivery into milestones, and to create a breakdown of tasks for the first milestone. Review and iterate here.
Then you ask AI to implement the first task, with tests. Review and iterate. Then the next, and so on.
rads
I'm skeptical because I don't think generating the Clojure code is the hard part. These ideas seem more like wishful thinking than actual productivity improvements with the current state of tech.
Developer guided: For the projects I'm currently working on, the understanding is the most difficult part, and writing the code is a way for me to check my understanding as I go. I do use LLMs to generate code when I feel like it can save me time, such as setting up a new project or scaffolding tests, but I think there are diminishing returns the larger and/or more complex the project is. Furthermore, I work on code that other people (or LLMs) are meant to understand, so I value code that is consistent and concise.
Product guided: Even with meat-based agents (i.e., humans), there's a limit to how many Jira tickets I can write and junior engineers I can babysit, and this is one of the worst parts of the job to begin with. Furthermore, junior engineers often make mistakes, which means I need to have my own understanding to fix the issues. That said, getting feedback from experienced colleagues is invaluable, and that's what I'm currently simulating with LLMs.
ransom1538
This right here. There is a large segment of developers that have not hooked up indexed code, e.g. "where I can start asking questions about my code". They treat it like a search engine. What you want is for the LLM to INDEX your code. Then you can see the real power of LLMs. Spoiler alert: it is amazing.
stingraycharles
And to make the point concrete: indexing in this case means storing the code in a vector database and letting the LLM query it on demand.
It’s really powerful.
tosh
or even better (esp with highly expressive languages): just slurp the whole codebase, no vector db needed
danw1979
The point in the readme about using this with Claude Desktop to avoid API charges is a top tip for any coding-through-MCP setup with Claude.
I’m not a Clojure user but I’ve set up the Jetbrains MCP server and Claude Desktop has been working really well with it. I added a system prompt telling Claude to inspect the CLAUDE.md (used with Claude Code normally) whenever using Jetbrains to get project context, which is also working out really well.
tosh
REPL-driven development seems like a natural fit with current agent concepts
nb: Bruce also did figwheel, a hot/live reloading tool for Clojurescript
worldsayshi
I would like to see how to easily get regression tests from REPL driven development.
In general I wonder how to quickly set up mocks and such with chatbot integrations.
diggan
> how to easily get regression tests from REPL driven development
Usually it's a matter of renaming "comment" to "deftest" and moving some things around. Not sure how/why it would be different when using an LLM compared to doing it manually.
rads
There is often a misunderstanding that “REPL driven development” means typing things into the REPL prompt interactively and accumulating state that’s not easy to reproduce. Writing code down in comment blocks as you go is the solution to that, as you mentioned. With that technique I have no fear of restarting my REPL and getting back to where I was.
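For anyone unfamiliar with the technique, a small hypothetical example of a `comment` scratch pad and its promotion to a regression test:

```clojure
(ns myapp.slug
  (:require [clojure.string :as str]
            [clojure.test :refer [deftest is]]))

(defn slugify [s]
  (-> s str/lower-case (str/replace #"\s+" "-")))

;; REPL scratch pad: evaluated form-by-form during development,
;; but never runs on namespace load
(comment
  (slugify "Hello World") ;; => "hello-world"
  )

;; the same experiment, renamed and wrapped in assertions
(deftest slugify-test
  (is (= "hello-world" (slugify "Hello World"))))
```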
worldsayshi
> LLM compared to doing it manually.
When testing against an LLM you probably want to mock its responses for tests. Which is quite doable by copy-pasting its responses. I'm just curious what the "best" workflow is.
pona-a
I'm not sure if you mean having the LLM interact with the REPL. The REPL requires a lot of discipline from the developer to keep track of its state. LLMs seem to be far worse at this sort of long-term state tracking than most humans. It would likely keep forgetting things, calling undefined functions, or mistaking its own interfaces.
diggan
> REPL requires a lot of discipline from the developer to keep track of its state
More discipline than what is required when trying to keep track of all of that in your head? Personally, I use a REPL because I'm lazy and dumb, and the REPL helps me keep track and makes sure I don't make stupid mistakes. If I wanna know what value a thing has, I just ask the REPL, instead of trying to retrace some logic by reading the code again.
dkarl
What I don't like about building systems at the REPL is that when I investigate a possibility and it works out, I have to identify the code that's actually needed, clean it up, and turn my ad-hoc tests in the REPL into unit tests. There's a trade-off between quick experimentation and the mess I create in search of immediate feedback.
I think an LLM could help with that. Ideally, I could say, "Please add the widget-muckle-frobnicate function from the current REPL session to the codebase. Fix or flag any shortcuts I took in its implementation, such as hardcoding configuration and naming functions 'foo' or 'bar', and add unit tests that include the examples I tried in the REPL as well as property-based tests to exercise edge cases."
tosh
I agree, it’s tricky, but agents are also great at inspecting stuff
right now the advice in the readme makes a lot of sense: use git (or similar) for checkpointing
a bit like climbing with ropes and anchors
pydry
The REPL requires less discipline if you combine it with TDD. It also helps if there's an "abandon ship, tear down and set up the right state from scratch" red button.
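In Clojure, that red button is commonly `clojure.tools.namespace.repl/refresh`, which unloads and reloads changed namespaces so stale REPL state gets rebuilt from the source files:

```clojure
;; requires org.clojure/tools.namespace on the classpath
(require '[clojure.tools.namespace.repl :refer [refresh]])

;; tear down loaded namespaces and reload them from the files on disk
(refresh)
```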
cldwalker
https://www.youtube.com/watch?v=F61YWNapxJg is a cool demo of this. I'm curious how well this repl assistant approach will work with larger codebases
makizar
Here's a video with a demo: https://www.youtube.com/watch?v=F61YWNapxJg
mark_l_watson
Looks good and fills a useful niche. Unfortunately for me, I only use Gemini, OpenAI, and local models and this is configured for Claude. I set a calendar reminder to see if other platforms are supported in the future.
Off topic, but I have found Google Jules to be useful for Clojure development.
tosh
I don't think the MCP is inherently tied to Claude, afaiu it should work with any agent/client that has MCP support.
Bruce recommends/mentions Claude Desktop specifically because it makes it easy to try out Clojure MCP without having to worry about token cost.
whalesalad
It appears to have support for Gemini and OpenAI, https://github.com/bhauman/clojure-mcp?tab=readme-ov-file#ll...
ihodes
If there were an MCP to connect to, say, a running Chrome tab with your frontend running on it, that would allow an agent to both visually inspect and interact with the webpage as well as look at the network and console tabs, etc. That would be hugely helpful. Is there something like that today?
chrismustcode
The playwright MCP
ihodes
Very cool! It seems to allow access to the page itself, but not to the network or console tabs, correct?
fjalahdbtnt
What the hell is the point of a desktop app for a chatbot? I thought there were apis for this sort of thing.
If you need to run code locally, why bother paying google to run claude?
> I highly recommend using it with Claude Desktop to start. It's prettier and there are no api charges!
???? How does this make sense to anyone
didibus
I'm not sure what you are asking exactly?
Say you have a Clojure project, it's in a folder on your computer where you likely cloned or initialized a git repo.
Now you want to leverage an agentic LLM that can connect to clojure-mcp so you can prompt the LLM to make real edits to your project source files, folder structure, resources, documentation, etc.
Your options are kind of limited:

- Amazon Q CLI
- Claude Code CLI
- OpenAI Codex CLI
Those are the best. Then you have the IDE based ones, like Cursor, Windsurf, Copilot agent mode (in public preview currently), and so on.
What they are saying, though, is that Claude Desktop also supports MCP, and can be used without incurring API charges.
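For context, Claude Desktop reads its MCP servers from a `claude_desktop_config.json` file. A sketch of what an entry might look like; the exact command and args for clojure-mcp depend on your setup, so check the project README:

```json
{
  "mcpServers": {
    "clojure-mcp": {
      "command": "/bin/sh",
      "args": ["-c", "clojure -X:mcp :port 7888"]
    }
  }
}
```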
Honestly, the in-IDE ones, for me, are not very good; you really don't need this stuff tied up inside an editor. I prefer the CLIs personally, but I can see how you could just as easily run Claude Desktop as a sidebar rather than needing something inside your editor.
roenxi
And present state isn't even that important. In 2-4 years we'll be 2 hardware generations in the future, people can buy hardware tailored to 2024-25 models and VRAM will be creeping up (or mitigations for low VRAM found). The models something uses today don't tell us much about what a project will look like in 3 years. None of the current crop of leading models are going to last that long. A project might easily be looking at the medium term not the present.
dotemacs
> Your options are kind of limited: Amazon Q CLI, Claude Code CLI, OpenAI Codex CLI
Ampcode has a CLI, which is their agent using Claude 4.
Google also came out with Jules a few days ago.
There's aider, with which you can use whichever LLM you'd like.
I'm pretty sure that there are others...
didibus
Aider does not have MCP support yet. Neither does Jules I believe.
Ampcode I heard of, but I also heard it's very expensive, same for Devin. I also don't know if either of them support MCP.
I'm sure there are others, of varying quality, but realistically, the options you'd want to use are the ones I listed I think.
P.S.: I'd been looking for alternatives by the way, something that lets me use OpenAI models, I've yet to try it but heard good things about: https://block.github.io/goose/
throwaway314155
> Your options are kind of limited: Amazon Q CLI, Claude Code CLI, OpenAI Codex CLI
Out of curiosity, which option do you go for?
didibus
For clojure-mcp, you really should try just Claude Desktop. That's because clojure-mcp provides all the tools you need already, reading files, running shell commands, running code in the REPL, running tests, listing directories, linting code, etc.
The others I listed above come with a lot of tools baked in, and I'm not sure if they could interfere, like the LLM might prefer one that's bundled to using the clojure-mcp ones.
Otherwise I use Amazon Q CLI, because it is the cheapest of the bunch. I'd say Claude Code CLI is the other I'd use personally.
Until you've tried using an LLM assistant fully hooked into a stateful REPL, you can't speculate. The experience is fantastic, as the feedback on the code being developed arrives earlier and is tighter.
The LLM agent will often write the code for a function, immediately follow it up with several smoke-testing expressions, then eval the whole thing in one go, function and tests.
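As a hypothetical illustration, the kind of block an agent might send to the REPL in one shot, definition followed by smoke tests:

```clojure
(defn parse-price
  "Parses a string like \"$12.50\" into a double; nil if malformed."
  [s]
  (when-let [[_ digits] (re-find #"^\$(\d+(?:\.\d+)?)$" s)]
    (Double/parseDouble digits)))

;; smoke-test expressions eval'd immediately after the definition
(parse-price "$12.50") ;; => 12.5
(parse-price "12.50")  ;; => nil (missing $)
(parse-price "$abc")   ;; => nil
```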
It will creatively set up test harnesses to enable it to exercise code while it's being developed (think HTTP endpoints, starting and stopping servers, mocking).
And it goes on from there. It's an experience, and I submit to the reader that they try it sooner rather than later, because it's an extremely effective workflow and it's AWESOME!