Repairable Flatpack Toaster
kaseyhou.com
Comparing Fuchsia components and Linux containers [video]
fosdem.org
Lawrence of Arabia, Paul Atreides, and the Roots of Frank Herbert's Dune (2021)
reactormag.com
Hacking the Xbox 360 Hypervisor Part 2: The Bad Update Exploit
icode4.coffee
Another Conflict Between Privacy Laws and Age Authentication–Murphy v Confirm ID
blog.ericgoldman.org
Ask HN: Who is hiring? (March 2025)
Apple's Software Quality Crisis
eliseomartelli.it
The power of interning: making a time series database smaller
gendignoux.com
One Logo, Three Companies
estilofilos.blogspot.com
SQLite-on-the-Server Is Misunderstood: Better at Hyper-Scale Than Micro-Scale
rivet.gg
The Golden Age of Japanese Pencils, 1952-1967 (2022)
notes.stlartsupply.com
Launch HN: Cuckoo (YC W25) – Real-time AI translator for global teams
An Attempt to Catch Up with JIT Compilers
arxiv.org
Show HN: Agents.json – OpenAPI Specification for LLMs
github.com
How the U.K. broke its own economy
theatlantic.com
Show HN: Knowledge graph of restaurants and chefs, built using LLMs
theophilecantelob.re
Chrome Returns 206 when the Server Returns 403
aoli.al
Show HN: Sonauto API – Generative music for developers
sonauto.ai
Ask HN: What less-popular systems programming language are you using?
The weird Hewlett Packard FreeDOS option (2022)
blog.tmm.cx
Keeling Labs (YC W23) Is Hiring an ML Engineer for Climate Tech (Los Angeles)
keelinglabs.com
Ask HN: Who wants to be hired? (March 2025)
Go-attention: A full attention mechanism and transformer in pure Go
github.com
I've been slightly annoyed by how the Speculative Decoding paper has gotten all the credit for the technique; I first learned about it from a paper more than a year older, Shallow Aggressive Decoding [1].
That paper introduces the same method but applies it to grammatical error correction, where the "draft" output is just the input itself. The Speculative Decoding paper tries to emphasize the differences from its own method, arguing that it is more general: it covers more domains, allows the draft to come from a smaller model, and extends the technique to support sampling.
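For readers who haven't seen either paper, here is a minimal sketch of the shared idea (score a cheap draft in one forward pass and accept the longest prefix the model agrees with), assuming a hypothetical `model.greedy_all` API; it is illustrative only and not taken from either paper's code.

    # Minimal sketch of draft-and-verify greedy decoding.
    # `model.greedy_all(context, draft_ids)` is an assumed API that returns
    # the model's argmax prediction at every draft position in one pass.
    def draft_verify_decode(model, prompt_ids, draft_ids, max_new_tokens=128):
        """Decode by verifying a draft in parallel instead of token by token.

        For grammatical error correction the draft is simply the input
        sentence, since most tokens are expected to be copied unchanged.
        """
        output = []
        while draft_ids and len(output) < max_new_tokens:
            context = prompt_ids + output
            preds = model.greedy_all(context, draft_ids)
            # Accept the longest prefix where the model agrees with the draft.
            n = 0
            while n < len(draft_ids) and preds[n] == draft_ids[n]:
                n += 1
            output.extend(draft_ids[:n])
            if n == len(draft_ids):
                break  # whole draft accepted; continue with ordinary decoding
            # On the first disagreement the model's own token comes for free;
            # reusing the rest of the draft as the next guess is a
            # simplification of the re-alignment the papers describe.
            output.append(preds[n])
            draft_ids = draft_ids[n + 1:]
        return output

The payoff is that every accepted run of draft tokens costs a single forward pass rather than one pass per token.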
All of that is great and deserves its own paper, but it doesn't earn credit for inventing the method or the right to rename it, especially since the authors were aware of Shallow Aggressive Decoding before uploading their first draft.
[1]: https://arxiv.org/abs/2106.04970