The Visual World of 'Samurai Jack'
animationobsessive.substack.com
The Princeton INTERCAL Compiler's source code
esoteric.codes
Root shell on a credit card terminal
stefan-gloor.ch
How to post when no one is reading
jeetmehta.com
Is "The Phoenician Scheme" Wes Anderson's Most Emotional Film?
newyorker.com
Gabon longs to cash in on sacred hallucinogenic remedy
phys.org
How Can AI Researchers Save Energy? By Going Backward
quantamagazine.org
The Zach Attack Scratch 'N Solve Puzzle Pack
coincidence.games
Writing your own C++ standard library part 2
nibblestew.blogspot.com
Cinematography of “Andor”
pushing-pixels.org
Show HN: MBCompass – Android Compass App
github.com
HeidiSQL Available Also for Linux
heidisql.com
What works (and doesn't) selling formal methods
galois.com
TPDE: A Fast Adaptable Compiler Back-End Framework
arxiv.org
In POSIX, you can theoretically use inode zero
utcc.utoronto.ca
Nitrogen Triiodide (2016)
fourmilab.ch
A new generation of Tailscale access controls
tailscale.com
Show HN: Agno – A full-stack framework for building Multi-Agent Systems
github.com
How Generative Engine Optimization (GEO) rewrites the rules of search
a16z.com
The Rise of Judgement over Technical Skill
notsocommonthoughts.com
Show HN: Moon Phase Algorithms for C, Lua, Awk, JavaScript, etc.
github.com
There seems to have been a lot of progress here, but there's also an elephant in the room: RNNs will _always_ have worse memory than self-attention, since the latter always has complete access to the full context. We pay for that in other ways, but the unstated hypothesis behind RNNs seems to be that, in the long run, their memory will be "good enough" and their other performance benefits will eventually prevail. I'm not convinced that humanity will ever sink resources into optimizing this family of models comparable to what has gone into making Transformers practical at the scale they run today.
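To make the memory contrast concrete, here is a minimal NumPy sketch (dimensions, weight names, and the toy sequential form are all illustrative, not any particular model's architecture): the RNN must squeeze the entire history into one fixed-size state vector, while causal self-attention keeps every past token directly addressable at the cost of O(T·d) memory.

```python
# Illustrative sketch only: fixed-size recurrent state vs. full-context attention.
import numpy as np

d = 8            # hidden / model dimension (arbitrary for illustration)
T = 128          # sequence length
rng = np.random.default_rng(0)
x = rng.standard_normal((T, d))          # toy token embeddings

# --- RNN: the entire past is compressed into one d-sized vector ---
Wh, Wx = rng.standard_normal((d, d)), rng.standard_normal((d, d))
h = np.zeros(d)
for t in range(T):
    h = np.tanh(Wh @ h + Wx @ x[t])      # memory cost O(d), independent of T
# anything the model wants to recall later must survive inside `h`

# --- Causal self-attention: every past token stays directly accessible ---
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv
scores = (Q @ K.T) / np.sqrt(d)
mask = np.triu(np.ones((T, T), dtype=bool), k=1)   # block attention to future tokens
scores[mask] = -np.inf
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ V                        # memory cost O(T*d): the full context is kept
```

The design trade-off the comment is pointing at falls out of the last lines: the RNN's per-step cost is constant, but recall is bounded by whatever fits in `h`; attention never forgets within the window, but pays for it in compute and memory that grow with the context length.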