LLVM-MOS – Clang LLVM fork targeting the 6502
llvm-mos.org
Windows drive letters are not limited to A-Z
ryanliptak.com
ETH Zurich: Digital Design and Computer Architecture (227-0003-10L), Spring 2025
safari.ethz.ch
Program-of-Thought Prompting Outperforms Chain-of-Thought by 15% (2022)
arxiv.org
ESA Sentinel-1D delivers first high-resolution images
esa.int
Migrating Dillo from GitHub
dillo-browser.org
CachyOS: Fast and Customizable Linux Distribution
cachyos.org
A Second Look at Geolocation and Starlink
potaroo.net
Don't push AI down our throats
gpt3experiments.substack.com
Notes on Shadowing a Hospitalist
humaninvariant.substack.com
RetailReady (YC W24) Is Hiring Associate Product Manager
ycombinator.com
GitHub to Codeberg: My Experience
eldred.fr
Show HN: Real-time system that tracks how news spreads across 200k websites
yandori.io
The Thinking Game Film – Google DeepMind Documentary
thinkinggamefilm.com
There is No Quintic Formula [video]
youtube.com
Show HN: Fixing Google Nano Banana Pixel Art with Rust
github.com
Modern cars are spying on you. Here's what you can do about it
apnews.com
Langjam Gamejam: Build a programming language then make a game with it
langjamgamejam.com
Zigbook Is Plagiarizing the Zigtools Playground
zigtools.org
Paul Hegarty's updated CS193p SwiftUI course released by Stanford
cs193p.stanford.edu
Interesting selection of models for the "instruction count vs. accuracy" plot. Curious when that was done and why those models were chosen. How well do ChatGPT 5/5.1 (and the codex/mini/nano variants), Gemini 3, Claude Haiku/Sonnet/Opus 4.5, recent Grok models, Kimi 2 Thinking, etc. (this generation of models) do?