The fix for machine unlearning in vector databases turns out to be conceptually simple, but it requires changing the semantics of retrieval.
Standard FAISS-style indices store vectors and compute:
argmaxᵢ ⟨q, vᵢ⟩
If you insert −v, nothing happens. It’s just another point. For a query at or near v, the original vector still scores highest and remains rank-1.
This isn’t a bug—it’s a consequence of selection-based retrieval.
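For concreteness, a minimal brute-force sketch of the selection semantics (Rust, matching the Vec<f32> notation used later in the post; names are illustrative, not FAISS’s actual API):

    // Selection-based retrieval: return the index of the vector with
    // the highest inner product against the query.
    fn argmax_retrieve(index: &[Vec<f32>], q: &[f32]) -> Option<usize> {
        index
            .iter()
            .enumerate()
            .map(|(i, v)| (i, dot(q, v)))
            .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
            .map(|(i, _)| i)
    }

    fn dot(a: &[f32], b: &[f32]) -> f32 {
        a.iter().zip(b).map(|(x, y)| x * y).sum()
    }

    fn main() {
        let v = vec![1.0, 0.0];
        let mut index = vec![v.clone(), vec![0.0, 1.0]];
        // "Delete" v by inserting its negation: the argmax is unchanged,
        // because ⟨v, v⟩ is still the largest score for q = v.
        index.push(v.iter().map(|x| -x).collect());
        assert_eq!(argmax_retrieve(&index, &v), Some(0)); // still rank-1
    }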
If instead you store (vector, weight) pairs and evaluate:
φ(q) = Σᵢ wᵢ · K(q, vᵢ)
you get a different object entirely: a field, not a selection. Now inserting the same vector with w = −1 causes destructive interference. The contribution cancels. The attractor disappears.
Deletion becomes O(1) append-only (add the inverse), not a structural rebuild.
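A sketch of the weighted form under the same assumptions, taking K to be the inner product (the post leaves the kernel generic; the cancellation itself is kernel-independent, since (v, +1) and (v, −1) contribute w·K and −w·K):

    // Field-based store: an append-only list of (vector, weight) pairs.
    struct FieldStore {
        entries: Vec<(Vec<f32>, f32)>,
    }

    impl FieldStore {
        fn insert(&mut self, v: Vec<f32>, w: f32) {
            self.entries.push((v, w));
        }
        // O(1) deletion: append the inverse weight instead of
        // rebuilding the index.
        fn delete(&mut self, v: Vec<f32>, w: f32) {
            self.insert(v, -w);
        }
        // phi(q) = sum_i w_i * K(q, v_i), with K = inner product here.
        fn field(&self, q: &[f32]) -> f32 {
            self.entries
                .iter()
                .map(|(v, w)| w * v.iter().zip(q).map(|(a, b)| a * b).sum::<f32>())
                .sum()
        }
    }

    fn main() {
        let v = vec![1.0, 0.0];
        let mut store = FieldStore { entries: Vec::new() };
        store.insert(v.clone(), 1.0);
        assert_eq!(store.field(&v), 1.0);
        store.delete(v.clone(), 1.0); // append (v, -1): destructive interference
        assert_eq!(store.field(&v), 0.0); // exact cancellation, phi -> 0
    }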
FAISS-style: Vec<Vec<f32>> → argmax (selection)
Weighted form: Vec<(Vec<f32>, f32)> → Σ (field)
We validated this on 100k vectors:
• FAISS: target stays rank-1 after “deletion”
• Field-based model: exact cancellation (φ → 0), target unretrievable
The deeper point is that this isn’t a trick; it’s a semantic separation.
• FAISS implements a selection operator over discrete points.
• The weighted version implements a field operator where vectors act as kernels in a continuous potential.
• Retrieval becomes gradient ascent to local maxima (sketched below).
• Deletion becomes destructive interference that removes attractors.
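To make the attractor picture concrete, assume a Gaussian kernel K(q, v) = exp(−‖q − v‖² / 2σ²); that’s one choice, not necessarily the paper’s. Gradient ascent on φ is then a mean-shift-style update, and an appended (v, −1) zeroes v’s gradient contribution exactly:

    // Gradient ascent on phi under an assumed Gaussian kernel,
    // whose gradient in q is K(q, v) * (v - q) / sigma^2.
    fn ascend(
        entries: &[(Vec<f32>, f32)],
        mut q: Vec<f32>,
        sigma: f32,
        lr: f32,
        steps: usize,
    ) -> Vec<f32> {
        for _ in 0..steps {
            let mut grad = vec![0.0; q.len()];
            for (v, w) in entries {
                let d2: f32 = q.iter().zip(v).map(|(a, b)| (a - b).powi(2)).sum();
                let k = (-d2 / (2.0 * sigma * sigma)).exp();
                for ((g, a), b) in grad.iter_mut().zip(&q).zip(v) {
                    *g += w * k * (b - a) / (sigma * sigma);
                }
            }
            for (qi, g) in q.iter_mut().zip(&grad) {
                *qi += lr * g;
            }
        }
        q
    }

    fn main() {
        let v = vec![1.0, 0.0];
        let mut entries = vec![(v.clone(), 1.0)];
        let q = ascend(&entries, vec![0.5, 0.5], 1.0, 0.5, 200);
        println!("with attractor:     {:?}", q); // climbs toward [1.0, 0.0]
        entries.push((v, -1.0)); // destructive interference at v
        let q2 = ascend(&entries, vec![0.5, 0.5], 1.0, 0.5, 200);
        println!("after cancellation: {:?}", q2); // gradient is exactly zero
        assert_eq!(q2, vec![0.5, 0.5]); // the start point never moves
    }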
This shifts deletion from structural (modify index, rebuild, filter) to algebraic (append an inverse element). You get append-only logs, reversible unlearning, and auditable deletion records. The negative weight is the proof.
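One possible record layout for that log (assumed for illustration, not taken from the paper): the signed weight in the record is the audit evidence, and appending (v, +1) again reverses the unlearning.

    // Append-only unlearning log; a record's position in the log is
    // its sequence number, and its signed weight is the audit trail.
    struct LogRecord {
        vector: Vec<f32>,
        weight: f32, // +1.0 = learned, -1.0 = erased
    }

    // Rebuild phi(q) by replaying the log (K = inner product again).
    fn replay_field(log: &[LogRecord], q: &[f32]) -> f32 {
        log.iter()
            .map(|r| r.weight * r.vector.iter().zip(q).map(|(a, b)| a * b).sum::<f32>())
            .sum()
    }

    fn main() {
        let v = vec![1.0, 0.0];
        let log = vec![
            LogRecord { vector: v.clone(), weight: 1.0 },  // learned
            LogRecord { vector: v.clone(), weight: -1.0 }, // erased (the audit record)
            LogRecord { vector: v.clone(), weight: 1.0 },  // unlearning reversed
        ];
        assert_eq!(replay_field(&log, &v), 1.0);
    }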
Implication: current vector DBs can’t guarantee GDPR/CCPA erasure without reconstruction. Field-based retrieval can—provably.
Paper with proofs: https://github.com/nikitph/bloomin/blob/master/negative-vect...