Heretic: Automatic censorship removal for language models
github.com
Only three kinds of AI products work
seangoedecke.com
Brimstone: ES2025 JavaScript engine written in Rust
github.com
AirPods libreated from Apple's ecosystem
github.com
Garbage Collection Is Useful
dubroy.com
Running the "Reflections on Trusting Trust" Compiler
research.swtch.com
Anthropic's report smells a lot like bullshit
djnn.sh
Measuring the doppler shift of WWVB during a flight
greatscottgadgets.com
PgFirstAid: PostgreSQL function for improving stability and performance
github.com
Vintage Large Language Models
owainevans.github.io
Production-Grade Container Deployment with Podman Quadlets – Larvitz Blog
blog.hofstede.it
Iran begins cloud seeding operations as drought bites
arabnews.com
The Internet Is No Longer a Safe Haven
brainbaking.com
Maybe you’re not trying
usefulfictions.substack.com
Diamonds and Lasers: Thermal Management for Chips
spectrum.ieee.org
IDEmacs: A Visual Studio Code clone for Emacs
codeberg.org
Run Nix Based Environments in Kubernetes
flox.dev
Dissecting Flock Safety: The Cameras Tracking You Are a Security Nightmare [video]
youtube.com
Things that aren't doing the thing
strangestloop.io
UK's first small nuclear power station to be built in north Wales
bbc.com
The talk focuses for a bit on having pure data from before a given cutoff date, but it doesn't consider that the data available from before that time may be subject to strong selection bias, driven by what's interesting to people doing scholarship or archival work after that date. E.g., have we disproportionately digitized the notes/letters/journals of figures whose ideas gained traction after their death?
The article makes a comparison to financial backtesting. If you build a dataset from the historical prices of stocks that are _currently_ in the S&P 500, then even if you only use price data from before time t, models trained on that data will expect prices to go up and companies to never die, because they've only ever seen the price history of the firms that survived.
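Not from the article, just a quick toy simulation of that survivorship effect (all names and parameters here are made up for illustration): simulate firms as zero-drift random walks, drop the ones that "die", and compare average returns over all firms vs. only the survivors.

    # Hypothetical sketch: survivorship bias in a backtest universe.
    # Firms follow a zero-drift random walk; a firm "dies" if its price
    # falls below a ruin level (analogous to delisting). We compare the
    # mean total return over all firms vs. only the survivors
    # (analogous to "stocks currently in the index").
    import random

    random.seed(0)

    def simulate_firm(n_years=20, drift=0.0, vol=0.2, ruin_level=0.3):
        price, path = 1.0, [1.0]
        for _ in range(n_years):
            price *= 1.0 + random.gauss(drift, vol)
            path.append(price)
            if price < ruin_level:
                return path, False   # failed / delisted
        return path, True            # survived to the present

    firms = [simulate_firm() for _ in range(10_000)]

    def mean_total_return(paths):
        return sum(p[-1] / p[0] - 1.0 for p in paths) / len(paths)

    all_paths = [p for p, _ in firms]
    survivor_paths = [p for p, ok in firms if ok]

    print("all firms:      ", round(mean_total_return(all_paths), 3))
    print("survivors only: ", round(mean_total_return(survivor_paths), 3))
    # Even with zero drift, the survivor-only sample shows clearly
    # positive average returns: the data never includes the firms
    # that died along the way.

Same idea as the digitized-archives point above: conditioning the dataset on what exists (or is interesting) *today* quietly bakes the future into the "historical" sample.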