AGENTS.md – Open format for guiding coding agents
agents.md
Copilot broke audit logs, but Microsoft won't tell customers
pistachioapp.com
How to Draw a Space Invader
muffinman.io
How we exploited CodeRabbit: From simple PR to RCE and write access on 1M repos
research.kudelskisecurity.com
Pre-Sputnik Earth-Orbit Glints
overcomingbias.com
Tiny microbe challenges the definition of cellular life
nautil.us
D2 (text to diagram tool) now supports ASCII renders
d2lang.com
Monoid-Augmented FIFOs, Deamortised
pvk.ca
Emacs as your video-trimming tool
xenodium.com
How to Scale Your Model: How to Think About GPUs
jax-ml.github.io
Physically Based Rendering in Filament
google.github.io
Without the futex, it's futile
h4x0r.org
Candle Flame Oscillations as a Clock
cpldcpu.com
Rails Charts Using ECharts from Apache
github.com
The Value of Hitting the HN Front Page
mooreds.com
How Figma’s multiplayer technology works (2019)
figma.com
Show HN: OpenAI/reflect – Physical AI Assistant that illuminates your life
github.com
Why Semantic Layers Matter (and how to build one with DuckDB)
motherduck.com
Custom telescope mount using harmonic drives and ESP32
svendewaerhert.com
The calculation under "Quiz 2: GPU nodes" is incorrect, to the best of my knowledge. There aren't enough ports per GPU and/or per switch (less the crossbar connections) to fully realize the theoretically possible 450GB/s, which is why 3.2TB/s of internode bandwidth is what's offered on all of the major cloud providers and the reference systems. If it were 3.6TB/s, this would produce internode bottlenecks in any distributed ring workload.
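The arithmetic behind the claim can be sanity-checked in a few lines. This is a back-of-the-envelope sketch, assuming 8 GPUs per node and taking the 450GB/s per-GPU peak and 3.2TB/s offered aggregate figures from the comment above; none of these constants come from the article itself.

```python
# Assumptions (hypothetical, mirroring the comment's figures):
# 8 GPUs per node, 450 GB/s theoretical peak per GPU,
# 3.2 TB/s aggregate internode bandwidth as offered in practice.

GPUS_PER_NODE = 8
PER_GPU_PEAK_GB_S = 450       # theoretical per-GPU bandwidth, GB/s
OFFERED_NODE_TB_S = 3.2       # offered internode bandwidth, TB/s

# If every GPU realized its full 450 GB/s simultaneously:
theoretical_node_tb_s = GPUS_PER_NODE * PER_GPU_PEAK_GB_S / 1000
print(theoretical_node_tb_s)  # 3.6 TB/s, the figure the quiz implies

# Per-GPU share under the aggregate actually offered:
per_gpu_effective_gb_s = OFFERED_NODE_TB_S * 1000 / GPUS_PER_NODE
print(per_gpu_effective_gb_s)  # 400.0 GB/s, below the 450 GB/s peak
```

Under these assumptions each GPU effectively gets 400GB/s of internode bandwidth, not the full 450GB/s, which is the gap the comment is pointing at.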
Shamelessly: I’m open to work if anyone is hiring.