Show HN: Llm-benchmark – Benchmarks LLM-optimized code across multiple providers

thomasfromcdnjs

I built it because sometimes I have a function in a codebase that needs to be a lot faster. You just pass in that function and choose some models, and it asks each LLM to optimize that function for performance.

Then you'll have fn1.original.js, fn1.openai.o3.js, and fn1.gemini.2.5.js, and it runs a benchmark over all of them and gives you the results.

Useful for me!