
Show HN: Klarity – Open-source tool to analyze uncertainty/entropy in LLM output


14 comments · February 3, 2025

We've open-sourced Klarity - a tool for analyzing uncertainty and decision-making in LLM token generation. It provides structured insights into how models choose tokens and where they show uncertainty.

What Klarity does:

- Real-time analysis of model uncertainty during generation
- Dual analysis combining log probabilities and semantic understanding
- Structured JSON output with actionable insights
- Fully self-hostable with customizable analysis models

The tool works by analyzing each step of text generation and returns a structured JSON:

- uncertainty_points: array of {step, entropy, options[], type}
- high_confidence: array of {step, probability, token, context}
- risk_areas: array of {type, steps[], motivation}
- suggestions: array of {issue, improvement}
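To give a concrete sense of the shape, an illustrative report might look like this (the field names are the ones above; the values here are made up for illustration, not real output):

```python
# Illustrative only: field names from the list above, values invented.
report = {
    "uncertainty_points": [
        {"step": 12, "entropy": 2.8, "options": ["Paris", "Lyon", "Nice"], "type": "competing_tokens"},
    ],
    "high_confidence": [
        {"step": 3, "probability": 0.97, "token": "France", "context": "The capital of"},
    ],
    "risk_areas": [
        {"type": "factual_claim", "steps": [12, 13], "motivation": "several near-equal candidates"},
    ],
    "suggestions": [
        {"issue": "ambiguous entity", "improvement": "add the country to the prompt"},
    ],
}
```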

Currently supports Hugging Face Transformers (more frameworks coming). We tested extensively with Qwen2.5 (0.5B-7B) models, but it should work with most HF LLMs.

Installation is simple: `pip install git+https://github.com/klara-research/klarity.git`
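The raw signal behind the analysis is the per-step token distribution during generation. For reference, here is a plain-transformers sketch of extracting it (this is not Klarity's internal code, just the underlying idea with one of the Qwen2.5 sizes mentioned above):

```python
# Sketch of the raw signal (per-step next-token entropy) using plain
# transformers; not Klarity's implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-0.5B-Instruct"  # one of the tested sizes
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tok("The capital of France is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=16, do_sample=False,
                     return_dict_in_generate=True, output_scores=True)

for step, scores in enumerate(out.scores):        # one score tensor per generated token
    probs = torch.softmax(scores[0], dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum().item()
    top_p, top_id = probs.max(dim=-1)
    print(f"step {step}: '{tok.decode(top_id.item())}' "
          f"p={top_p.item():.2f} H={entropy:.2f} nats")
```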

We are building open-source interpretability/explainability tools to visualize & analyze attention maps, saliency maps, etc., and we want to understand your pain points with LLM behaviors. What insights would actually help you debug these black-box systems?

Links:

- Repo: https://github.com/klara-research/klarity
- Our website: https://klaralabs.com

deoxykev

The fundamental challenge of using log probabilities to measure LLM certainty is the mismatch between how language models process information and how semantic meaning actually works. Current models analyze text token by token: fragments that don't necessarily align with complete words, let alone complex concepts or ideas.

This creates a gap between the mechanical measurement of certainty and true understanding, much like mistaking the map for the territory or confusing the finger pointing at the moon with the moon itself.

I've done some work before in this space, trying to come up with different useful measures from the logprobs, such as measuring Shannon entropy over a sliding window, or even bzip compression ratio as a proxy for information density. But I didn't find anything semantically useful or reliable to exploit.
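In rough terms, those two proxies looked like this (a sketch of the idea, not the exact code):

```python
# Sketch of the two proxies above: windowed entropy over generation steps,
# and a bzip2 compression ratio as a crude information-density proxy.
import bz2
import numpy as np

def windowed_entropy(step_entropies, window=16):
    """Mean next-token entropy over a trailing window of generation steps."""
    h = np.asarray(step_entropies, dtype=float)
    return np.array([h[max(0, i - window + 1):i + 1].mean() for i in range(len(h))])

def bzip_ratio(text: str) -> float:
    """Compressed/raw byte ratio; lower means more redundant text."""
    raw = text.encode("utf-8")
    return len(bz2.compress(raw)) / len(raw)
```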

The best approach I found was just multiple choice questions. "Does X entail Y? Please output [A] True or [B] False." Then measure the logprobs of the next token, which should be `[A` (90%) or `[B` (10%). Then we might make a statement like: the LLM thinks there is a 90% probability that X entails Y.
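A minimal sketch of that setup (the prompt wording and model here are just placeholders, and how the options tokenize is tokenizer-dependent, so check that for your model):

```python
# Sketch of multiple-choice logprob probing; prompt format and model are
# placeholders, and option tokenization varies by tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = ('Does "it is raining heavily" entail "the ground is wet"? '
          'Answer [A] True or [B] False.\nAnswer: [')
ids = tok(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    next_logits = model(ids).logits[0, -1]        # logits for the token after "["
probs = torch.softmax(next_logits, dim=-1)

a_id = tok.encode("A", add_special_tokens=False)[0]
b_id = tok.encode("B", add_special_tokens=False)[0]
p_a, p_b = probs[a_id].item(), probs[b_id].item()
print(f"P(entails) ~= {p_a / (p_a + p_b):.2f}")   # renormalized over the two options
```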

siliconc0w

It seems like it would be easy to upgrade existing benchmarks to include uncertainty as a dimension. Then if a model is less certain it could maybe spend more time reasoning or route to a bigger model.
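As a sketch, such a gate could be as simple as thresholding mean entropy (everything here is hypothetical: the helper names and the threshold are made up):

```python
# Hypothetical routing gate; helper names and threshold are invented.
def answer_with_routing(prompt, small_llm, big_llm, entropy_threshold=2.5):
    answer, step_entropies = small_llm.generate_with_entropy(prompt)  # assumed helper
    if sum(step_entropies) / len(step_entropies) > entropy_threshold:
        return big_llm.generate(prompt)   # uncertain: escalate to the bigger model
    return answer
```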

KTibow

Why does the example code use a base model to generate the analysis input?

mrciffa

In the example I'm using the instruction-tuned version of Qwen2.5-7B to generate the insights.

kurisufag

this seems neat but you really need to work on commit messages other than "update code". it makes it harder to get a bearing on the codebase.

mrciffa

Oh damn, you are right. It's my first open-source project and I didn't think about it.

dleeftink

You'll get there! Even if a commit doesn't have particulars, just try to include the reason for making a change.

wruza

Not all people (and/or not in all development phases) granulate commits to something easily describable that is not “update code”. Having mass changes or flow of consciousness style refactorings in a single commit is absolutely normal.

An author doesn’t need to please a repo reader until they see a good reason to do so.

andreakl

Very interesting approach!! What models are you currently considering integrating?

mrciffa

We want to integrate reasoning models as a next step because we see a lot of value in better understanding CoT behaviour (DeepSeek R1 & Co).

andreakl

Okay, thanks, that sounds great. Have you also thought about extending the scope beyond language models?

thomastjeffery

On your website, "Learn More" links to a meeting invite? That's... a decision...

I think most people clicking that button would be better served by scrolling down, but that's not made very obvious.
