Hotline for modern Apple systems
github.com
Turner, Bird, Eratosthenes: An eternal burning thread
cambridge.org
Starlink in the Falkland Islands – A national emergency situation?
openfalklands.com
The PS2's backwards compatibility from the engineer who built it (2020)
freelansations.medium.com
VSCode's SSH agent is bananas
fly.io
Value-Based Deep RL Scales Predictably
arxiv.org
Implementing a Game Boy emulator in Ruby
sacckey.dev
Why gold loves arsenic (2021)
mining.com
25 Years Ago, Joan Didion Kept a Diary. It's About to Become Public
nytimes.com
A brief history of code signing at Mozilla
hearsum.ca
Asahi Linux lead developer Hector Martin resigns from Linux kernel
lkml.org
A better (than Optional) maybe for Java
github.com
A colorful Game of Life
colorlife.quick.jaredforsyth.com
U.K. orders Apple to let it spy on users' encrypted accounts
washingtonpost.com
Visual explanations of mathematics (2020)
agilescientific.com
Station of despair: What to do if you get stuck at the end of the Tokyo Chuo Rapid Line
soranews24.com
Show HN: ExpenseOwl – Simple, self-hosted expense tracker
github.com
Show HN: A website that heatmaps your city based on your housing preferences
theretowhere.com
Ghostwriter – use the reMarkable2 as an interface to vision-LLMs
github.com
Stop using zip codes for geospatial analysis (2019)
carto.com
Show HN: Mandarin Word Segmenter with Translation
mandobot.netlify.app
Why LLMs still have problems with OCR
runpulse.com
Three-nanite: Unreal Nanite in Three.js
github.com
Can someone explain what distillation is, exactly? I keep seeing people posting comments here and elsewhere about how DeepSeek “distilled” OpenAI outputs to train their new model. How could that even work? Wouldn’t you need to ask millions of questions to get enough data to train a whole other LLM? Or am I just uneducated about this topic?
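For context, here is a minimal sketch of classic knowledge distillation in the teacher-student sense: a small "student" model is trained to match the output distribution of a larger "teacher" model rather than hard labels. The model sizes, temperature, and optimizer settings below are illustrative only, not any particular lab's recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy "teacher" and "student": the teacher is the larger model whose output
# distribution the student learns to imitate. Sizes are made up for illustration.
teacher = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 10))
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution

for step in range(100):
    x = torch.randn(64, 16)              # stand-in for a batch of inputs/prompts
    with torch.no_grad():
        teacher_logits = teacher(x)      # "asking the teacher questions"
    student_logits = student(x)

    # KL divergence between the softened teacher and student distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

When people talk about distilling a model through its public API, they usually mean something looser than this: sampling a large number of prompt/response pairs from the teacher and fine-tuning the student on that generated text, so the intuition that it takes a very large number of queries is roughly right.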