That fractal that's been up on my wall for 12 years
chriskw.xyz
Improving performance of rav1d video decoder
ohadravid.github.io
Fast Allocations in Ruby 3.5
railsatscale.com
Launch HN: WorkDone (YC X25) – AI Audit of Medical Charts
Show HN: SQLite JavaScript – extend your database with JavaScript
github.com
A South Korean grand master on the art of the perfect soy sauce
theguardian.com
Adventures in Symbolic Algebra with Model Context Protocol
stephendiehl.com
Planetfall
somethingaboutmaps.wordpress.com
Show HN: DockFlow – Switch between multiple macOS Dock layouts instantly
dockflow.appitstudio.com
Why I Built My Own Audio Player
nexo.sh
The scientific “unit” we call the decibel
lcamtuf.substack.com
Social media platforms: what's wrong, and what's next
scottgoci.com
Show HN: Whenish – Plan Group Events in iMessages
apps.apple.com
MCP explained without hype or fluff
blog.nilenso.com
Four years of sight reading practice
sandrock.co.za
Benchmarking Crimes Meet Formal Verification
microkerneldude.org
Show HN: Curved Space Shader in Three.js (via 4D sphere projection)
github.com
Everything’s a bug (or an issue)
bozemanpass.com
Near-infrared spatiotemporal color vision enabled by upconversion contact lenses
cell.com
The Philosophy of Byung-Chul Han (2020)
newintrigue.com
Free-Threaded Python Library Compatibility Checker
ft-checker.com
I'm curious: in image generation, flow matching is said to outperform diffusion, so why do these language models still start from diffusion instead of jumping straight to flow matching?