I made my VM think it has a CPU fan
wbenny.github.io
Ask HN: What Are You Working On? (June 2025)
Modelling API rate limits as diophantine inequalities
vivekn.dev
Reverse Engineering the Microchip CLB
mcp-clb.markomo.me
Show HN: Octelium – FOSS Alternative to Teleport, Cloudflare, Tailscale, Ngrok
github.com
Revisiting Knuth's "Premature Optimization" Paper
probablydance.com
4-10x faster in-process pub/sub for Go
github.com
Bitcoin's Security Budget Issue: Problems, Solutions and Myths Debunked
budget.day
Many ransomware strains will abort if they detect a Russian keyboard installed (2021)
krebsonsecurity.com
Using the Internet without IPv4 connectivity
jamesmcm.github.io
Cell Towers Can Double as Cheap Radar Systems for Ports and Harbors (2014)
spectrum.ieee.org
The Medley Interlisp Project: Reviving a Historical Software System [pdf]
interlisp.org
Loss of key US satellite data could send hurricane forecasting back 'decades'
theguardian.com
Why Go Rocks for Building a Lua Interpreter
zombiezen.com
Honda Joins Space Race by Launching Successful Reusable Rocket
forbes.com
Personal care products disrupt the human oxidation field
science.org
ZeroRISC Gets $10M Funding, Says Open-Source Silicon Security Inevitable
eetimes.com
Show HN: Sharpe Ratio Calculation Tool
fundratios.com
Show HN: A tool to benchmark LLM APIs (OpenAI, Claude, local/self-hosted)
llmapitest.com
China Dominates 44% of Visible Fishing Activity Worldwide
oceana.org
Raymond Laflamme (1960-2025)
scottaaronson.blog
Show HN: Rust -> WASM, K-Means Color Quantization Crate for Image-to-Pixel-Art
github.com
I recently built a small open-source tool to benchmark different LLM API endpoints — including OpenAI, Claude, and self-hosted models (like llama.cpp).
It runs a configurable number of test requests and reports two key metrics:
• First-token latency (ms): how long it takes for the first token to appear
• Output speed (tokens/sec): overall generation throughput
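For anyone curious how those two numbers are actually obtained, here is a minimal, self-contained sketch (not the tool's real code; BASE_URL, API_KEY and MODEL are placeholders you would supply) that measures both against any OpenAI-compatible streaming endpoint using plain requests:

import json
import time
import requests

# Placeholders; point these at whichever provider or proxy you want to test.
BASE_URL = "https://api.openai.com/v1"
API_KEY = "sk-..."
MODEL = "gpt-4o-mini"

def benchmark_once(prompt: str) -> dict:
    """Stream one completion and time the two metrics described above."""
    start = time.perf_counter()
    first_token_at = None
    chunks = 0
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": MODEL, "stream": True,
              "messages": [{"role": "user", "content": prompt}]},
        stream=True,
        timeout=120,
    )
    for line in resp.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        payload = line[len(b"data: "):]
        if payload.strip() == b"[DONE]":
            break
        choices = json.loads(payload).get("choices") or []
        if choices and choices[0].get("delta", {}).get("content"):
            if first_token_at is None:
                first_token_at = time.perf_counter()
            chunks += 1  # count stream chunks as a rough proxy for tokens
    gen_time = time.perf_counter() - (first_token_at or start)
    return {
        "first_token_latency_ms":
            (first_token_at - start) * 1000 if first_token_at else None,
        "tokens_per_sec": chunks / gen_time if chunks and gen_time > 0 else 0.0,
    }

print(benchmark_once("Explain TCP slow start in two sentences."))

The actual tool repeats this for a configurable number of requests and aggregates the results, but the core measurement is no more complicated than the above.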
Demo: https://llmapitest.com/
Code: https://github.com/qjr87/llm-api-test
The goal is to provide a simple, visual, and reproducible way to evaluate performance across different LLM providers, including the growing number of third-party “proxy” or “cheap LLM API” services.
It supports:
• OpenAI-compatible APIs (official + proxies)
• Claude (via Anthropic)
• Local endpoints (custom/self-hosted)
You can also self-host it with docker-compose. The config is clean; adding a new provider only requires a simple plugin-style addition.
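The repo's real plugin interface may differ; as a rough illustration of what "plugin-style addition" means here, a provider registry could be as small as this (the Provider protocol, register() and the class names are assumptions made for the example):

import time
import requests
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Result:
    first_token_latency_ms: float | None
    tokens_per_sec: float

class Provider(Protocol):
    name: str
    def run(self, prompt: str) -> Result: ...

PROVIDERS: dict[str, Provider] = {}

def register(provider: Provider) -> None:
    """New providers only need to implement run() and register themselves."""
    PROVIDERS[provider.name] = provider

class OpenAICompatible:
    """Covers the official API and any proxy that speaks the same protocol."""
    name = "openai-compatible"

    def __init__(self, base_url: str, api_key: str, model: str):
        self.base_url, self.api_key, self.model = base_url, api_key, model

    def run(self, prompt: str) -> Result:
        start = time.perf_counter()
        resp = requests.post(
            f"{self.base_url}/chat/completions",
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"model": self.model,
                  "messages": [{"role": "user", "content": prompt}]},
            timeout=120,
        )
        elapsed = time.perf_counter() - start
        tokens = resp.json().get("usage", {}).get("completion_tokens", 0)
        # Non-streaming call, so there is no true first-token latency here.
        return Result(first_token_latency_ms=None,
                      tokens_per_sec=tokens / elapsed if elapsed > 0 else 0.0)

register(OpenAICompatible("https://api.openai.com/v1", "sk-...", "gpt-4o-mini"))
for name, provider in PROVIDERS.items():
    print(name, provider.run("Say hello in one word."))

The idea is that each provider only translates a common request shape into its own HTTP call; running N requests and reporting the metrics stays shared.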
Would love feedback, PRs, or even test reports from the APIs you're using. I'm especially interested in how some lesser-known services compare.