Show HN: Pipelex – Declarative language for repeatable AI workflows
12 comments
· October 28, 2025
novoreorx
Recently I've been working on a project that lets users analyze and rate bid documents against a tender document. The general logic is quite similar to the cv-job_offer example in Pipelex. The challenge I ran into is that both the tender and the bid documents are very large: it's impossible to fit even a single one into the LLM context, let alone both. So I had to design a workflow that extracts structured information and evaluation criteria from the tender doc into predefined data models, and then dispatches multiple tasks to evaluate each bid doc against those data models.
I'm wondering whether this kind of scenario (essentially, the input documents are just too big) can be handled in Pipelex. In my understanding, a DSL is good for its high-level abstraction and readability, but it trades away flexibility and expressive power. How can Pipelex users iterate on their pipelines to meet complex needs when the business logic inevitably grows complex?
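For reference, here's roughly the shape of what I'm doing today in plain Python, not Pipelex; all names are illustrative, and call_llm() is a placeholder for an actual model client:

```python
import json
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    description: str

def call_llm(instruction: str, text: str) -> str:
    """Placeholder for a real LLM call (OpenAI, Anthropic, ...)."""
    raise NotImplementedError

def extract_criteria(tender_text: str, chunk_size: int = 20_000) -> list[Criterion]:
    """Map step: walk the oversized tender doc chunk by chunk and
    accumulate evaluation criteria into a predefined data model."""
    criteria: list[Criterion] = []
    for start in range(0, len(tender_text), chunk_size):
        chunk = tender_text[start:start + chunk_size]
        raw = call_llm(
            "List the evaluation criteria in this tender excerpt "
            "as a JSON array of {name, description} objects.",
            chunk,
        )
        criteria.extend(Criterion(**item) for item in json.loads(raw))
    return criteria

def evaluate_bid(bid_text: str, criteria: list[Criterion]) -> dict[str, str]:
    """Dispatch step: one small scoring task per criterion, so no single
    call needs the whole tender and the whole bid in context at once.
    In practice the bid itself is also chunked or pre-summarized."""
    return {
        c.name: call_llm(f"Rate this bid against: {c.description}", bid_text)
        for c in criteria
    }
```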
clafferty
Declarative workflows are such a good idea, fantastic, and I love the AI-first principles: pipeline creation and editing can be done with AI too.
The declarative style keeps the workflow detail at a high enough level to iterate super quick - love that. More important to me is that it’s structured and seems like it would be more testable (I see validation in your docs).
Zooming in on the pipe/agent steps, I can't quite see whether you can leverage MCP as a client and make tool calls. Can you confirm? If not, what's your solution for working with APIs in the middle of a pipeline?
Also a quick question: declarative workflows won't change the fact that LLM output is non-deterministic, so we can't always be guaranteed that the output from prior steps is correct. What tools or techniques are you using or recommending to measure the reliability of output from prior steps? I'm thinking of how you might measure at the step level to help prioritise which prompts need refinement or optimisation. Is this a problem you expect to own in Pipelex or one to be solved elsewhere?
Great job guys, your approach looks like the right way to solve this problem and add some reliability to this space. Thanks for sharing!
lchoquel
Hi Clafferty, providing an MCP server was a no-brainer and we have the first version available. But you're right, using MCP as a client is a question we have started asking ourselves too. We haven't had the time to experiment yet, so no definitive answer. For now, we have a type of pipe called PipeFunc which can call a Python function, and so potentially any kind of tool under the hood. That is really a makeshift solution, though, and we're eager to get your point of view and discuss with the community to get it right.
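To illustrate, a PipeFunc just points at a regular Python function, so an API call in the middle of a pipeline can look like the sketch below; the endpoint and function name are made up, and only the plain-function idea is real, not any particular wiring:

```python
import json
import urllib.request

def fetch_exchange_rate(currency: str) -> dict:
    """A plain Python function that a PipeFunc step could call mid-pipeline.
    The endpoint is a placeholder; any HTTP client or SDK works here."""
    url = f"https://api.example.com/rates/{currency}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```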
Many companies are working on evals, and we will have a strategy for integrating them with Pipelex. What we already have is modularity: you can test each pipe separately or test a whole workflow, which is pretty convenient. Better yet, we have the "conceptual" level of abstraction: the code is the documentation. So you don't need any additional work to explain to an eval system what was expected at each workflow step: it's already written into it. We even plan an option (typically for debug mode) that checks that every input and output complies semantically with what was intended and expected.
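As a rough illustration of what testing one pipe's output in isolation can look like (a generic pydantic sketch; JobMatch and the sample dict are made up, not our actual API):

```python
from pydantic import BaseModel, Field

# Illustrative only: JobMatch stands for the structured concept that a
# single pipe declares as its output.
class JobMatch(BaseModel):
    score: int = Field(ge=0, le=100)
    rationale: str

# Validate one step's raw output on its own; pydantic raises if the
# structure or bounds drift, which is what makes each pipe testable.
match = JobMatch.model_validate({"score": 87, "rationale": "strong skills overlap"})
assert match.rationale
```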
Thanks a lot for your feedback! It's a lot of work so greatly appreciated.
novoreorx
Love the concept. I saw similar ideas in BAML [1]. What do you think are the differences and advantages of Pipelex over it?
lchoquel
Hi novoreorx, the biggest difference is that in Pipelex workflows, we express the logic in our high-level language rather than in Python or TypeScript. This makes it easier for tech and non-tech people, like domain experts, to collaborate. It's also great for collaboration with AI: the declarative approach and the high level of abstraction mean that LLMs have all the context to understand what's going on in the workflow. Also, we don't use Pipelex for "autonomous agents": our workflows are meant to be repeatable and deterministic, like a tool. A tool that can be used by agents through our MCP server, though.
RoyTyrell
Sorry, I guess I'm not fully understanding what this is exactly. Would you describe it as a low-code/no-code agent generator? So if you define requirements via a Pipelex "config" file, Pipelex will generate a Python-based agent?
lchoquel
Hi RoyTyrell, I guess you could call it low-code, or a new kind of no-code where natural language is in the mix. But no, Pipelex does not generate a Python-based agent: the Pipelex script is interpreted at runtime.
hartem_
A declarative DSL is a really interesting approach, especially since you're exposing it directly to users. There are some applications where throwing the dice in production, by having an LLM as part of the runtime, is not an option.
lchoquel
Yes! Clearly, introducing LLMs into the mix raises the dice-throwing problem. The point of view we chose is: how do you orchestrate the collaboration between AI, software, and people? Our aim of repeatable workflows drove us away from building autonomous agents and towards a design where the software is in command of the orchestration. Then humans and AI can discuss "what you want to do" and have the software run it, using AI where it's needed.
ronaldgumo
Very cool. Declarative + agent-first is the right direction, and I love the "Dockerfile for AI reasoning" analogy. Excited to try composing Pipelex with Codiris workflows.
Waiting for a partnership to propose to our users.
lchoquel
Thanks, Ronald! Yes, very interested in discussing integrations. Pipelex is super modular and open by design, so it should be a breeze.
cranberryturkey
Does Pipelex expose any kind of API I can build around?
We’re Robin, Louis, and Thomas. Pipelex is a DSL and a Python runtime for repeatable AI workflows. Think Dockerfile/SQL for multi-step LLM pipelines: you declare steps and interfaces; any model/provider can fill them.
Why this instead of yet another workflow builder?
- Declarative, not glue code: you state what to do; the runtime figures out how.
- Agent-first: each step carries natural-language context (purpose, inputs/outputs with meaning) so LLMs can follow, audit, and optimize. Our MCP server enables agents to run pipelines but also to build new pipelines on demand.
- Open standard under MIT: language spec, runtime, API server, editor extensions, MCP server, n8n node.
- Composable: pipes can call other pipes, created by you or shared in the community.
Why a domain-specific language?
- We need context, meaning, and nuance preserved in a structured syntax that both humans and LLMs can understand
- We need determinism, control, and reproducibility that pure prompts can't deliver
- Bonus: editors, diffs, semantic coloring, easy sharing, search & replace, version control, linters…
How we got there:
Initially, we just wanted to solve every use-case with LLMs but kept rebuilding the same agentic patterns across different projects. So we challenged ourselves to keep the code generic and separate from use-case specifics, which meant modeling workflows from the relevant knowledge and know-how.
Unlike existing code/no-code frameworks for AI workflows, our abstraction layer doesn't wrap APIs; it transcribes business logic into a structured, unambiguous script executable by software and AI. Hence the "declarative" aspect: the script says what should be done, not how to do it. It's like a Dockerfile or SQL for AI workflows.
Additionally, we wanted the language to be LLM-friendly. Classic programming languages hide logic and context in variable names, functions, and comments: all invisible to the interpreter. In Pipelex, these elements are explicitly stated in natural language, giving AI full visibility: it's all logic and context, with minimal syntax.
Then, we didn't want to write Pipelex scripts ourselves, so we dogfooded: we built a Pipelex workflow that writes Pipelex workflows. It's available in the MCP server and the CLI: "pipelex build pipe '…'" runs a multi-step, structured generation flow that produces a validated workflow, ready to execute with "pipelex run". Then you can iterate on it yourself or with any coding agent.
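A typical session looks roughly like this (the brief is just an example, and exact arguments may differ; see the docs):

```
# generate a validated workflow from a one-line brief
pipelex build pipe "Extract the key clauses from a contract and flag risky ones"

# execute the generated workflow
pipelex run
```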
What’s included: Python library, FastAPI and Docker, MCP server, n8n node, VS Code extension.
What we’d like from you
1. Build a workflow: did the language work for you or against you?
2. Try the agent/MCP workflows and the n8n node: how's the usability?
3. Suggest new kinds of pipes and other AI models we could integrate.
4. We're looking for OSS contributors to the core library, and for people to share pipes with the community.
Known limitations
- Connectors: Pipelex doesn't integrate with "your apps"; we focus on the cognitive steps, and you can integrate through code/API or using MCP or n8n
- Visualization: we need to generate flow-charts
- Pipe builder: still buggy
- Run it yourself: we don't yet provide a hosted Pipelex API; it's in the works
- Cost-tracking: we only track LLM costs, not image generation or OCR costs yet
- Caching and reasoning options: not supported yet
Links
- GitHub: https://github.com/Pipelex/pipelex
- Cookbook: https://github.com/Pipelex/pipelex-cookbook
- Starter: https://github.com/Pipelex/pipelex-starter
- VS Code extension: https://github.com/Pipelex/vscode-pipelex
- Docs: https://docs.pipelex.com
- Demo video (2 min): https://youtu.be/dBigQa8M8pQ
- Discord for support and sharing: https://go.pipelex.com/discord
Thanks for reading. If you try Pipelex, tell us exactly where it hurts; that's the most valuable feedback we can get.