Launch HN: Lucidic (YC W25) – Debug, test, and evaluate AI agents in production
34 comments · July 30, 2025
iancarroll
I do feel frustrated with the current state of evaluations for long-lived sessions with many tool calls -- by default OpenAI's built-in eval system seems to rate chat completions that end with a tool call as "bad" because the tool call response is only in the next completion.
But our stack is in Go, and it has been tough to see so many observability tools focus on Python rather than offering an agnostic endpoint proxy like Helicone does.
AbhinavX
We're working on that right now and would love to hear your opinions (if you're interested, you can send us an email at team@lucidic.ai).
IgorBlink
Looks great! Debugging agents is a huge pain for me, and this actually looks useful. Love the time travel and trajectory clustering ideas. Bookmarked to try it soon.
AbhinavX
Awesome--let us know what you think!
jauhar_
Congrats on the launch! On a tangential note, is this work open source, or do you have a technical report you could share? I am especially interested in your results on the clustering methods for surfacing behavioural patterns. Thanks!
AbhinavX
We're new to the open-source scene, so we don't have anything published yet, but we plan to in the future. A basic overview of how we do clustering: we condense stateful information -> create a state embedding -> create tags -> cluster based on the distance of tags + embeddings.
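Here's a rough, illustrative sketch of that flow (not our production code; assumes sentence-transformers and scikit-learn, and the threshold/weights are made up):

    # Illustrative sketch of the clustering flow described above.
    import numpy as np
    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.metrics.pairwise import cosine_distances

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def condense(state: dict) -> str:
        # Condense stateful info (memory, last tool, context) into one string.
        return " | ".join(f"{k}={v}" for k, v in sorted(state.items()))

    def jaccard_dist(a: set, b: set) -> float:
        return 1 - len(a & b) / max(len(a | b), 1)

    def cluster_states(states: list[dict], tags: list[set], alpha: float = 0.5):
        emb = model.encode([condense(s) for s in states])   # state embeddings
        d_emb = cosine_distances(emb)                       # embedding distance
        n = len(states)
        d_tag = np.array([[jaccard_dist(tags[i], tags[j]) for j in range(n)]
                          for i in range(n)])               # tag distance
        dist = alpha * d_emb + (1 - alpha) * d_tag          # blended distance
        return AgglomerativeClustering(
            n_clusters=None, distance_threshold=0.4,
            metric="precomputed", linkage="average",
        ).fit_predict(dist)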
srameshc
I am not an expert, but I am building enough agents, and I don't understand how this tool can be integrated with an existing system. Is it like an APM for agents, if I understand correctly?
AbhinavX
The way it's integrated (it's explained more in the docs) is by installing the Python/TypeScript SDK and writing "lai.init()" at the top of your code. Then we capture all LLM calls and tools from integrated providers (similar to LLM ops platforms). If you want to manually add more information, you can add decorators, "lai.create_step"/"lai.create_event" logs, etc.
We then take all this information you give us and transform it in the backend, e.g. grouping together similar nodes, running an agent to evaluate a session, or finding the root cause of a session failure.
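To give a sense of the shape (the exact names and signatures are in the docs; the import alias and kwargs below are illustrative):

    # Illustrative only -- check the docs for the real signatures.
    import lucidicai as lai   # assumed package name; the docs alias it as "lai"

    lai.init()  # one line at the top; auto-captures LLM and tool calls

    # Optional manual logging; these kwargs are made up for illustration.
    lai.create_step(state="checkout_page", action="fill_payment_form")
    lai.create_event(description="payment tool returned HTTP 402")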
majdalsado
I'm looking into a tool like this for my startup. Why should I use this over Langfuse or Helicone?
AbhinavX
Langfuse and Helicone work well for traditional LLM operations, but we discovered that AI agents require fundamentally different tooling. Here are some examples.
First, while LLMs simply respond to prompts, agents often get stuck in behavioral loops where they repeat the same actions; to address this, we built a graph visualization that automatically detects when an agent reaches the same state multiple times and groups these occurrences together, making loops immediately visible.
Second, our evaluations are much more tailored to AI agents. LLM ops evaluations usually occur at a per-prompt level (e.g. hallucination, QA correctness), which makes sense for those use cases, but agent evaluations are usually per session or run. What this means is that usually a single prompt in isolation didn't cause an issue; some upstream memory issue or previous action caused the current tool call to fail. So we spent a lot of time creating a way for you to create a rubric. Then, to evaluate the rubric without context overload, we created an agentic pipeline with tools like viewing rubric examples, the ability to zoom “in and out” of a session, referencing previous examples, etc.
Third, time traveling and clustering of similar responses. LLM debugging is straightforward because prompts are stateless and independent from one another, but agents maintain complex state through tools, context, and memory management. We solved this by creating “time travel” functionality that captures the complete agent state at any point, allowing developers to modify variables like context or tool availability, replay from that exact moment, simulate that 20-30 times, and group similar responses together (with our clustering algorithm).
Fourth, agents exhibit far more non-deterministic behavior than LLMs because a single tool call can completely change their trajectory; to handle this complexity, we developed workflow trajectory clustering that groups similar execution paths together, helping developers identify patterns and edge cases that would be impossible to spot in traditional LLM systems.
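To make the time-travel idea in the third point concrete, here's a rough conceptual sketch (run_agent_from() and embed() are hypothetical stand-ins for your agent and an embedding model, not our SDK):

    # Snapshot a state, tweak it, replay N times, then group similar outcomes.
    import numpy as np

    def resimulate(snapshot: dict, patch: dict, n: int = 30) -> list:
        outcomes = []
        for _ in range(n):
            state = {**snapshot, **patch}       # e.g. swap tool availability
            outcomes.append(run_agent_from(state))  # hypothetical replay hook
        return outcomes

    def group_outcomes(outcomes, embed, threshold: float = 0.9):
        # Greedy grouping by cosine similarity of outcome embeddings.
        groups = []
        for o in outcomes:
            v = embed(o)                        # hypothetical embedding fn
            for g in groups:
                rep = g["vec"]
                sim = np.dot(v, rep) / (np.linalg.norm(v) * np.linalg.norm(rep))
                if sim > threshold:
                    g["members"].append(o)
                    break
            else:
                groups.append({"vec": v, "members": [o]})
        return groups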
witnessme
Love the UX. From a value POV, I have yet to see/experience how it differs from competitors. P.S. I currently use Braintrust and Opik.
henriquegodoy
Nice, I think y'all are on the right path betting on evals, but please make your UI less "generic".
simonw
How does Lucidic define the term "AI agent"?
AbhinavX
Colloquially, AI agents are just while loops with LLM calls and tool calls. More specifically, what distinguishes an agent from LLM pipelines is that its next step is determined dynamically (based on the output of the previous one) so the execution path isn’t fixed. The boundary between complex LLM chaining and agents is pretty fuzzy, but we support both.
Haha also our whole backend is in Django :)
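For what it's worth, here's that while loop in miniature (llm() is a hypothetical chat wrapper, not any particular provider's API):

    # A minimal agent: loop until the LLM stops requesting tools.
    def agent(task: str, tools: dict, max_steps: int = 20) -> str:
        history = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            reply = llm(history)                 # hypothetical LLM call
            if reply.get("tool") is None:        # no tool requested: done
                return reply["content"]
            history.append({"role": "assistant", "content": str(reply)})
            result = tools[reply["tool"]](**reply["args"])  # dynamic next step
            history.append({"role": "tool", "content": str(result)})
        return "max steps reached"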
iskhare
You say your rubric approach is “better than llm as a judge.” Can you please elaborate on what makes you say that?
AbhinavX
LLM-as-a-judge for agents usually suffers from context overload: even with a really good evaluation prompt, LLMs hallucinate because there is just too much information to ingest. So we created an agentic pipeline to do evaluations on rubrics, which gets better results and doesn't miss intricacies due to overloaded context.
ehsanu1
I'm reading: the difference is that this is an agent as a judge rather than an LLM as a judge, paired with more structured judging parameters. Is that right? Is the agent just a loop over each criterion, or is it also reflecting somehow on its judging or similar?
KaseyZhang
Congrats on the launch - would be great to read more about the clustering approach you're taking
SkylerJi
Looks cool. What do you mean by clustering similar responses? Usually LLM outputs are a bit different; would those be clustered together, or is it exact text similarity?
barapa
Excited to try this
Hi HN, we’re Abhinav, Andy, and Jeremy, and we’re building Lucidic AI (https://dashboard.lucidic.ai), an interpretability tool to help observe and debug AI agents.
Here is a demo: https://youtu.be/Zvoh1QUMhXQ.
Getting started is easy, with just one line of code: call lai.init() in your agent code and log into the dashboard. You can see traces of each run, cumulative trends across sessions, built-in or custom evals, and grouped failure modes. Call lai.create_step() with any metadata you want (memory snapshots, tool outputs, stateful info) and we'll index it for debugging.
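Roughly, it looks like this (the import alias and parameter names here are illustrative; see the quickstart for the real signatures):

    import lucidicai as lai   # assumed package name

    lai.init()  # one line; LLM and tool calls are captured automatically

    memory_snapshot = {"cart": ["usb-c cable"], "step": 4}  # illustrative data
    last_tool_result = "402 Payment Required"

    # Attach whatever metadata you want indexed for debugging.
    lai.create_step(
        memory=memory_snapshot,
        tool_output=last_tool_result,
        state="awaiting_checkout",
    )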
We did NLP research at Stanford AI Lab (SAIL), where we worked on creating an AI agent (with fine-tuned models and DSPy) to solve math olympiad problems (focusing on AIME/USAMO), and we realized debugging these agents was hard. The last straw was when we built an e-commerce agent that could buy items online. It kept failing at checkout, and every one-line change (tweaking a prompt, switching to Llama, adjusting tool logic) meant another 10-minute rerun just to see if we hit the same checkout page.
At this point, we were all like, this sucks, so we set out to improve agent interpretability with better debugging, monitoring, and evals.
We started by listening to users who told us traditional LLM observability platforms don't capture the complexity of agents. Agents have tools, memories, events, not just input/output pairs. So we automatically transform OTel (and/or regular) agent logs into interactive graph visualizations that cluster similar states based on memory and action patterns. We heard that people wanted to test small changes even with the graphs, so we created “time traveling,” where you can modify any state (memory contents, tool outputs, context), then re-simulate 30–40 times to see outcome distributions. We embed the responses, cluster by similarity, and show which modifications lead to stable vs. divergent behaviors.
Then we saw people running their agent 10 times on the same task, watching each run individually, and wasting hours looking at mostly repeated states. So we built trajectory clustering on similar state embeddings (like similar tools or memories) to surface behavioral patterns across mass simulations.
We then use that to create a force-directed layout that automatically groups the similar paths your agent took, displaying states as nodes, actions as edges, and failure probability as color intensity. The clusters make failure patterns obvious; you see trends across hundreds of runs, not individual traces.
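As a toy illustration of that graph (made-up data; assumes networkx and matplotlib, not our actual renderer):

    import networkx as nx
    import matplotlib.pyplot as plt

    # (state, next_state, action) triples pooled across runs, plus per-state
    # failure rates; all values here are invented for illustration.
    edges = [("search", "cart", "add_item"), ("cart", "checkout", "pay"),
             ("checkout", "checkout", "retry_pay")]  # self-loop = repeated state
    fail_rate = {"search": 0.05, "cart": 0.1, "checkout": 0.7}

    G = nx.DiGraph()
    for src, dst, action in edges:
        G.add_edge(src, dst, label=action)

    pos = nx.spring_layout(G, seed=42)  # force-directed layout
    nx.draw(G, pos, with_labels=True,
            node_color=[fail_rate[n] for n in G.nodes], cmap=plt.cm.Reds)
    nx.draw_networkx_edge_labels(G, pos,
                                 edge_labels=nx.get_edge_attributes(G, "label"))
    plt.show()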
Finally, when people saw our observability features, they naturally wanted evaluation capabilities. So we developed a way for people to make their own evals, called "rubrics": you define specific criteria, assign weights to each criterion, and set score definitions, giving you a structured way to measure agent performance against your exact requirements.
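For example, a rubric might look something like this (field names illustrative, not our exact schema):

    rubric = {
        "name": "checkout-agent",
        "criteria": [
            {"criterion": "reaches checkout page", "weight": 0.5,
             "scores": {1: "never reaches it", 3: "reaches it after retries",
                        5: "reaches it directly"}},
            {"criterion": "no redundant tool calls", "weight": 0.3,
             "scores": {1: "loops on the same tool", 5: "every call advances state"}},
            {"criterion": "memory stays consistent", "weight": 0.2,
             "scores": {1: "contradicts earlier context", 5: "fully consistent"}},
        ],
    }

    # Weighted session score under hypothetical per-criterion judgments:
    judged = {"reaches checkout page": 3, "no redundant tool calls": 5,
              "memory stays consistent": 4}
    score = sum(c["weight"] * judged[c["criterion"]] for c in rubric["criteria"])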
To evaluate these criteria, we used our own platform to build an investigator agent that reviews your criteria and evaluates performance much more effectively than traditional LLM-as-a-judge approaches.
To get started, visit dashboard.lucidic.ai and https://docs.lucidic.ai/getting-started/quickstart. You can use it for free for 1,000 event and step creations.
We look forward to your thoughts! And don't hesitate to reach out at team@lucidic.ai.