I want everything local – Building my offline AI workspace
instavm.io
Ultrathin business card runs a fluid simulation
github.com
Tor: How a military project became a lifeline for privacy
thereader.mitpress.mit.edu
Jim Lovell, Apollo 13 commander, has died
nasa.gov
Efrit: A native elisp coding agent running in Emacs
github.com
M5 MacBook Pro No Longer Coming in 2025
macrumors.com
Ask HN: How can ChatGPT serve 700M users when I can't run one GPT-4 locally?
The surprise deprecation of GPT-4o for ChatGPT consumers
simonwillison.net
Astronomy Photographer of the Year 2025 shortlist
rmg.co.uk
How we replaced Elasticsearch and MongoDB with Rust and RocksDB
radar.com
Build durable workflows with Postgres
dbos.dev
Fire hazard of WHY2025 badge due to 18650 Li-Ion cells
wiki.why2025.org
Json2dir: a JSON-to-directory converter, a fast alternative to home-manager
github.com
Unmasking the Sea Star Killer
biographic.com
Apple's history is hiding in a Mac font
spacebar.news
Getting good results from Claude Code
dzombak.com
Poltergeist: File watcher with auto-rebuild for any language or build system
github.com
Overengineering my homelab so I don't pay cloud providers
ergaster.org
A robust, open-source framework for Spiking Neural Networks on low-end FPGAs
arxiv.org
HRT's Python fork: Leveraging PEP 690 for faster imports
hudsonrivertrading.com
Open SWE: An open-source asynchronous coding agent
blog.langchain.com
GPU-rich labs have won: What's left for the rest of us is distillation
inference.net
I don't understand the point of spiking in the context of computer hardware.
Your energy cost is a function of the activity factor, i.e., how many 0-to-1 transitions you have per cycle.
If you want to be efficient, the correct thing to do is keep most node voltages unchanged.
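The activity-factor point can be made concrete with the standard CMOS dynamic-power relation P = α·C·V²·f. A minimal sketch, with made-up illustrative numbers rather than measurements:

```python
# Back-of-envelope CMOS switching power: P = alpha * C * V^2 * f,
# where alpha is the activity factor (fraction of nodes toggling per cycle).
# All values below are illustrative, not taken from any real chip.
def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Dynamic (switching) power of a population of CMOS nodes."""
    return alpha * c_farads * v_volts**2 * f_hz

p_busy = dynamic_power(0.2, 1e-9, 0.8, 2e9)  # -> 0.256 W
p_idle = dynamic_power(0.1, 1e-9, 0.8, 2e9)  # -> 0.128 W

# Halving the activity factor halves switching power, all else equal.
assert abs(p_busy - 2 * p_idle) < 1e-12
```

This is why "keep most voltages unchanged" is the efficiency lever: capacitance, voltage, and frequency held fixed, power scales linearly with how often nodes actually flip.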
What makes more sense to me is something like mixture-of-experts routing, but where you only update the activated experts. Stockfish does something similar with NNUE, its efficiently updatable neural network for evaluating board positions, which partially updates the network as positions change.
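The Stockfish-style trick can be sketched with a toy first layer where the input features are sparse and binary: instead of recomputing the full matrix product after a move, you add and subtract only the weight columns for the features that toggled. This is a hypothetical illustration of the incremental-update idea, not Stockfish's actual NNUE code:

```python
# Toy NNUE-style incremental update: accumulator = W @ features.
# Weights and feature indices here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))   # hypothetical first-layer weights
features = np.zeros(16)
features[[2, 5, 9]] = 1.0          # sparse binary inputs (piece-square-like)

acc = W @ features                 # full forward pass, done once

# A "move" toggles a couple of features. Rather than recomputing
# W @ features from scratch, patch the accumulator incrementally.
features[5] = 0.0                  # feature removed
features[11] = 1.0                 # feature added
acc = acc - W[:, 5] + W[:, 11]     # O(columns touched), not O(all inputs)

# The incrementally updated accumulator matches a full recomputation.
assert np.allclose(acc, W @ features)
```

The cost of the update scales with how many features changed, not with the size of the input, which is the same "only touch what's active" intuition as routing updates to activated experts.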