AV1: A Modern, Open Codec
netflixtechblog.com
BMW PHEV: When EU engineering becomes a synonym for "unrepairable" (EV Clinic)
evclinic.eu
NeurIPS best paper awards 2025
blog.neurips.cc
Trick users and bypass warnings – Modern SVG Clickjacking attacks
lyra.horse
CUDA-l2: Surpassing cuBLAS performance for matrix multiplication through RL
github.com
Brussels writes so many laws
siliconcontinent.com
The Ofcom Files, Part 4: Ofcom Rides Again
prestonbyrne.com
Multivox: Volumetric Display
github.com
Transparent leadership beats servant leadership
entropicthoughts.com
State of AI: An Empirical 100T Token Study with OpenRouter
openrouter.ai
StardustOS: Library operating system for building light-weight Unikernels
github.com
Thoughts on Go vs. Rust vs. Zig
sinclairtarget.com
Why are 38 percent of Stanford students saying they're disabled?
reason.com
How elites could shape mass preferences as AI reduces persuasion costs
arxiv.org
CSS now has an if() conditional function
caniuse.com
Help, My Java Object Vanished (and the GC Is Not at Fault)
arraying.de
Show HN: Onlyrecipe 2.0 – I added all features HN requested – 4 years later
onlyrecipeapp.com
We gave 5 LLMs $100K to trade stocks for 8 months
aitradearena.com
What is better: a lookup table or an enum type?
cybertec-postgresql.com
PyTogether: Collaborative lightweight real-time Python IDE for teachers/learners
github.com
Fighting the age-gated internet
wired.com
I ignore the spotlight as a staff engineer
lalitm.com
Converge (YC S23) is hiring a martech expert in NYC
runconverge.com
I think my favorite of the bunch is the "Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?" paper. It's easy to read, gets its point across intuitively and quickly, and the point is interesting and relevant to a lot of people.
About the Superposition paper - this is close to what I've been thinking about over the past week. My hunch is that concepts or choices held in "superposition" are harder for a fully differentiable neural net to reason about. For example, if there's a "green" vs. "purple" choice to be made, the net can't fully commit to either (especially if they're 50-50) and has to reason about both simultaneously, which is difficult in a nonlinear manifold space. Discretizing to tokens (via a non-differentiable argmax) forces a choice, and that lets the net reason about a single concept on its own, more easily.
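To make the intuition concrete, here's a minimal sketch (my own toy example, not from the paper) contrasting the soft, mixed state a differentiable pipeline passes downstream with the one-hot state you get after an argmax commit. The "green"/"purple" logits are hypothetical:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax.
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical two-concept choice: index 0 = "green", index 1 = "purple".
logits = np.array([0.01, 0.0])   # nearly a 50-50 split

# Fully differentiable path: downstream layers receive this mixed
# distribution and must effectively process both concepts at once.
mixed_state = softmax(logits)    # roughly [0.5025, 0.4975]

# Token discretization: argmax (non-differentiable) commits to one concept,
# so everything downstream sees a single, unambiguous choice.
choice = int(np.argmax(logits))
one_hot = np.eye(2)[choice]      # [1., 0.] -- "green" only
```

The non-differentiability of argmax is exactly why it can't live inside the gradient path, which is the trade-off the superposition framing points at.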