
Show HN: Min.js style compression of tech docs for LLM context

iandanforth

I applaud this effort, but the "Does it work?" section answers the wrong question. Anyone can write a trivial doc compressor and show a graph saying "The compressed version is smaller!"

For this to "work" you need to have a metric that shows that AIs perform as well, or nearly as well, as with the uncompressed documentation on a wide range of tasks.

marv1nnnnn

I totally agree with your critique. To be honest, it's hard even for me to evaluate. What I did was select several packages that current LLMs fail to handle (they're in the sample folder: `crawl4ai`, `google-genai` and `svelte`) and try some tricky prompts to see if it works. But even that evaluation is hard. The LLM could hallucinate. I would say it works most of the time, but there are always a few runs that fail to deliver. I actually prepared a comparison: cursor vs cursor + internet vs cursor + context7 vs cursor + llm-min.txt. But I thought it was too stochastic, so I didn't put it here. I'll consider adding it to the repo as well.

ricardobeat

> But even that evaluation is hard. The LLM could hallucinate. I would say it works most of the time, but there are always a few runs that fail to deliver

You can use success rate % over N runs for a set of problems, which is something you can compare to other systems. A separate model does the evaluation. There are existing frameworks like DeepEval that facilitate this.
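
As a rough sketch of what that could look like (the runner and judge below are placeholders for whatever agent and evaluator model you actually wire in, not an existing API):

  # Success rate over N runs per context variant; run_agent_task() and
  # judge_output() are stand-ins for your agent runner and a separate
  # evaluator model (e.g. a DeepEval metric or an LLM-as-judge call).
  from statistics import mean

  def run_agent_task(task: str, context: str) -> str:
      # Run the coding agent on `task` with the given context variant
      # ("none", "full_docs" or "llm_min") and return its output.
      raise NotImplementedError

  def judge_output(task: str, output: str) -> bool:
      # Decide, via a separate model or real tests, whether the output solves the task.
      raise NotImplementedError

  TASKS = ["task_01.md", "task_02.md"]  # problems targeting the library
  N = 10                                # repeats per task, since results are stochastic

  def success_rate(context: str) -> float:
      return mean(
          1.0 if judge_output(t, run_agent_task(t, context)) else 0.0
          for t in TASKS
          for _ in range(N)
      )

  for context in ("none", "full_docs", "llm_min"):
      print(context, f"{success_rate(context):.0%}")

That gives you one number per variant that you can track across runs and compare against other retrieval setups.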

rybosome

To be honest with you, it being stochastic is exactly why you should post it.

Having data is how we learn and build intuition. If your experiments showed that modern LLMs were able to succeed more often when given the llm-min file, then that’s an interesting result even if all that was measured was “did the LLM do the task”.

Such a result would raise a lot of interesting questions and ideas, such as the possibility of SKF increasing the model's ability to apply new information.

timhigins

> The LLM could hallucinate

The job of any context retrieval system is to retrieve the relevant info for the task so the LLM doesn't hallucinate. Maybe build a benchmark based on less-known external libraries with test cases that can check the output is correct (or with a mocking layer to know that the LLM-generated code calls roughly the correct functions).
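
To sketch the mocking idea (the `somelib` module and its API here are made up; the point is only to check which calls the generated code makes):

  # Execute the LLM-generated snippet against a fake module and assert it
  # called roughly the right functions, without touching the real library.
  import sys
  from unittest import mock

  generated_code = """
  import somelib
  client = somelib.Client(api_key="X")
  client.fetch("https://example.com")
  """

  fake = mock.MagicMock(name="somelib")
  sys.modules["somelib"] = fake          # mocking layer: shadow the import
  exec(generated_code, {})               # run the generated code
  fake.Client.assert_called_once()                # constructed a client?
  fake.Client.return_value.fetch.assert_called()  # and called fetch() on it?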

SparkyMcUnicorn

It's also missing the documentation part. Without additional context, method/type definitions with a short description will only go so far.

Cherry-picking a tiny example: this wouldn't capture the fact that Cloudflare Durable Objects can only have one alarm at a time, and each set overwrites the old one. The model will happily architect something with a single object, expecting to be able to set a bunch of alarms on it. Maybe I'm wrong and this tool would document it correctly in a description. But this is just a small example.

For much of a framework or library, maybe this works. But I feel like (in order for this to be most effective) the proposed spec possibly needs an update to include a little more context.

I hope this matures and works well. And there's nothing stopping me from filling in gaps with additional docs, so I'll be giving it a shot.

enjoylife

Was going to point this out too. One suggestion would be to try this on libraries with recent major semver bumps. See if the compressed docs do better on the backwards-incompatible changes.

rco8786

Yeah, I was disappointed to see that they just punted (or opted not to show?) on benchmarks.

gk1

92% reduction is amazing. I often write product marketing materials for devtool companies and load llms.txt into whatever AI I’m using to get accurate details and even example code snippets. But that instantly adds 60k+ tokens, which, at least in Google AI Studio, annoyingly slows things down. I’ll be trying this.

Edit: After a longer look, this needs more polish. In addition to the key question raised by someone else about quality, there are signs of rushed work here. For example, the critical llm_min_guideline.md file, which tells the LLM how to interpret the compressed version, was lazily copy-pasted from an LLM response without even removing the LLM's commentary:

"You are absolutely right! My apologies. I was focused on refining the detail of each section and overlooked that key change in your pipeline: the Glossary (G) section is no longer part of the final file..."

Doesn't exactly instill confidence.

Really nice idea. I hope you keep going with this as it would be a very useful utility.

marv1nnnnn

Oof, you nailed it. Thanks for the sharp eyes on llm_min_guideline.md. That's a clear sign of me pushing this out too quickly to get feedback on the core concept, and I didn't give the supporting docs the attention they deserve. My bad. Cleaning that up, and generally adding more polish, is a top priority. Really appreciate you taking the time to look deeper and for the encouragement to keep going. It's very helpful!

ricardobeat

Wait, are you also using an LLM to respond on Hacker News?

thegeomaster

What is absolutely essential to present here, but is missing, is a rigorous evaluation of task completion effectiveness between an agent using this format vs the original format. It has to be done on a new library which is guaranteed not to be present in the training set.

As it stands, there is nothing demonstrating that this lossy compression doesn't destroy essential information that an LLM would need.

I also have a gut feeling that the average LLM will actually have more trouble with the dense format + the instructions to decode it than a huge human-readable file. Remember, LLMs are trained on internet content, which contains terabytes of textual technical documentation but 0 bytes of this ad-hoc format.

I am happy to be proven wrong on both points (LLMs are also very unpredictable!), but the burden of proof for an extravagant scheme like this lies solely on the author.

marv1nnnnn

Agree. Actually, this approach isn't even possible without the birth of reasoning LLMs. In my tests, reasoning LLMs perform much better than non-reasoning LLMs at interpreting the compressed file. Those LLMs are really good at understanding abstraction.

thegeomaster

My point still stands --- the reasoning tokens being consumed to interpret the abstracted llms.txt could have been used for solving the problem at hand.

Again, I'm not saying the solution doesn't work well (my intuition on LLMs has been wrong enough times), but it would be really helpful/reassuring to see some hard data.

ricardobeat

I'm a little disappointed. Was excited to try this, and it seemed to work initially. But then I gave it a real website to scrape, and it always hangs after only parsing ~10 out of 50+ pages, before even getting to the compression step.

Then I decided to try and switch to the local mode, and after ~an hour of figuring out how to build a markdown version of the docs I needed, hit the "object has no attribute 'generate_from_text'" error, as someone else also reported [1].

So I cloned the source and started to look around, and the method really doesn't exist, even though it's called from main.py. A comment above it says "Assuming LLMMinGenerator has a method to process raw text" and I immediately felt the waft of vibe coding... this is all a mirage. I saw a long README and assumed it was real, but that was probably written by an LLM as well. It would have been obvious from the 'IntegratedKnowledgeManifest_SKF' and 'GenerationTimestamp' keys in the 'SKF format' definition: the former makes no sense, and neither has any reason to be this verbose when the goal is compression.

ianbicking

This mentions an SKF format for knowledge representation... but looking it up, I'm assuming it was invented just for this project?

Which is fine, but is there a description of the format distinct from this particular use? (I'm playing around with these same knowledge representation and compression ideas but for a different domain, so I'm curious about the ideas behind this format)

dmos62

I would really like a benchmark showing that AIs can use this. Just the possibility that an AI can understand the compressed format nearly as well as the original excites me. How did you come up with the format?

marv1nnnnn

Honestly, it's really funny. I had the initial idea and then brainstormed with Gemini 2.5 Pro a lot, letting it design the system. (And in the prompt I told it to think like Jeff Dean and John Carmack.) But most versions failed. Then I somehow realized I couldn't let it design from scratch; after seeing all those versions, I gave Gemini a structure I thought was reasonable and efficient, let it polish based on that, and it worked much better.

dmos62

That's a pretty cool approach!

revicon

We've done some experimentation with Claude Code and taken to creating a "vendor" folder under the "docs" section of each of our repos and just pulling down the README file for every library we use. Then when I'm prompting Claude to figure something out, I'll remind it to go check "docs/vendor/awesomelib" or whatever, and it does a fine job of checking the docs out before it starts formulating an answer.

This has done wonders for improving our results when working with TanStack Start or shadcn/ui or whatever.
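
The fetch step is nothing fancy; roughly something like this (the raw-README URLs are placeholders, point them at whatever you actually vendor):

  # Pull each library's README into docs/vendor/<name>/README.md.
  import pathlib, urllib.request

  LIBS = {
      "tanstack-start": "https://raw.githubusercontent.com/TanStack/router/main/README.md",
      "shadcn-ui": "https://raw.githubusercontent.com/shadcn-ui/ui/main/README.md",
  }

  for name, url in LIBS.items():
      dest = pathlib.Path("docs/vendor") / name / "README.md"
      dest.parent.mkdir(parents=True, exist_ok=True)
      with urllib.request.urlopen(url) as resp:
          dest.write_bytes(resp.read())
      print("vendored", dest)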

I guess there are pieces of this that would be helpful to us, but there's too much setup work for me to mess with it right now; I don't feel like generating a Gemini API key, installing Puppeteer, etc.

I already have all the docs pulled down, but reducing the number of tokens used for my LLM to pull up the doc files I'm referencing is interesting.

Is there a command-line tool anyone has had luck with that just trims down a .md file but still leaves it in a state the LLM can understand?
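
Something like this rough heuristic pass is what I have in mind (the patterns and thresholds are arbitrary):

  # Strip HTML comments and badge lines, truncate long fenced code blocks,
  # and collapse runs of blank lines. Heuristics only; tune to taste.
  import re, sys

  text = open(sys.argv[1]).read()
  text = re.sub(r"<!--.*?-->", "", text, flags=re.S)            # HTML comments
  text = re.sub(r"^\[!\[.*\]\(.*\)\s*$", "", text, flags=re.M)  # badge lines

  def shorten(m):
      lines = m.group(0).splitlines()
      return m.group(0) if len(lines) <= 12 else "\n".join(lines[:12] + ["```"])

  text = re.sub(r"```.*?```", shorten, text, flags=re.S)        # long code fences
  text = re.sub(r"\n{3,}", "\n\n", text)                        # blank-line runs
  sys.stdout.write(text)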

TheTaytay

I’ve been creating a doc for each of my primary libs (using Claude Code of course). I like your vendor/readme idea. Do you find Claude going and reading more docs if it needs to?

infogulch

I wonder how this compares to KBLaM [1], which also has a preprocessing step to prepare a large amount of reference material for direct access by LLMs. One obvious difference is that it has a modified attention mechanism they call "rectangular attention". The paper was posted on HN a few times, but it hasn't generated any discussion yet.

[1]: Introducing KBLaM: Bringing plug-and-play external knowledge to LLMs | https://www.microsoft.com/en-us/research/blog/introducing-kb...

eric-burel

You'd want to make this tool domain-specific: language, type of docs, perhaps targeting a specific documentation framework/format that is common and standardized enough. I don't buy a content-agnostic summarization method, though I recognize it could be better than nothing. Also: benchmark it or it doesn't exist.

obviyus

I recently upgraded a project from Remix to React Router 7, but unfortunately all AI assistants still try to "fix" my code with the Remix imports/conventions. I've had to add a bunch of custom rules, but that doesn't seem to be enough.

This seems super useful though. I'll try it out with the RR7 docs and see how well it works.

jsmith99

I'm also using RR7, and Gemini 2.5 Pro just refused to believe that I could import Link from react-router. It ignored my instructions and went down a rabbit hole in Copilot agent mode, deeper and deeper, trying every possible package name (none of which were installed). I've now created a Copilot instructions file into which I've copied most of the RR7 migration docs.

corytheboyd

FWIW, this sounds like a great use case for some rules files. I’ve only worked with Cursor and Roo, but they both support them.

This of course only works for the “stop recommending X” part of your problem, but maybe something like the project here helps fill in with up-to-date documentation too?

Both Cursor and Roo also support URL context additions, which downloads the page and converts it to a machine readable format to include in context. I throw documentation links into my tasks with that all the time, which works out because I know that I am going to be sanity checking generated code against documentation anyway.

cluckindan

Maybe they have read that RR and Remix are now the same thing.

claar

This project creates a "compact, machine-optimized format designed for efficient AI parsing rather than human readability" called "Structured Knowledge Format (SKF)".

It's not obvious to me that this is a good idea. LLMs are trained on human-readable text.

The author notes that non-reasoning LLMs struggle with these SKFs. Maybe that's a hint that human-readable summaries would perform better? Just a guess.

Or perhaps a vector store?
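
In its simplest form that could just be chunk-and-retrieve over the original docs, something like this (the docs path and paragraph chunking are naive placeholders):

  # Embed doc chunks once, then pull only the most relevant ones into context.
  import numpy as np
  from sentence_transformers import SentenceTransformer

  model = SentenceTransformer("all-MiniLM-L6-v2")
  chunks = open("docs/full_docs.md").read().split("\n\n")   # naive chunking
  chunk_vecs = model.encode(chunks, normalize_embeddings=True)

  def retrieve(query: str, k: int = 5) -> list[str]:
      q = model.encode([query], normalize_embeddings=True)[0]
      scores = chunk_vecs @ q                                # cosine similarity
      return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

  print("\n\n".join(retrieve("how do alarms work on a Durable Object?")))

That keeps the text human-readable and only spends tokens on the parts that matter for the current task.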

fcoury

Does it work with any technical doc? I see the CLI claims it's Python-specific?

  > $ llm-min --help
  
  Usage: llm-min [OPTIONS]
  
  Generates LLM context by scraping and summarizing documentation for Python libraries.

dmos62

There's a sample for svelte in the repo.

fcoury

Guess I missed it. Thank you.