DeepSeek-v3.2: Pushing the frontier of open large language models [pdf]
huggingface.co
What Will Enter the Public Domain in 2026?
publicdomainreview.org
India orders smartphone makers to preload state-owned cyber safety app
reuters.com
AI agents find $4.6M in blockchain smart contract exploits
red.anthropic.com
Reverse math shows why hard problems are hard
quantamagazine.org
Arcee Trinity Mini: US-Trained MoE Model
arcee.ai
Last Week on My Mac: Losing confidence
eclecticlight.co
Ghostty compiled to WASM with xterm.js API compatibility
github.com
Ask HN: Who is hiring? (December 2025)
Tested: 1981 Datsun 280ZX Turbo
caranddriver.com
Codex, Opus, Gemini try to build Counter-Strike
instantdb.com
Cartographers have been hiding illustrations inside Switzerland’s maps (2020)
eyeondesign.aiga.org
Google, Nvidia, and OpenAI
stratechery.com
John Giannandrea to Retire from Apple
apple.com
Instagram chief orders staff back to the office five days a week in 2026
businessinsider.com
Cloud-Init on Raspberry Pi OS
raspberrypi.com
Around The World, Part 27: Planting trees
frozenfractal.com
10 years of writing a blog nobody reads
flowtwo.io
Ask HN: Who wants to be hired? (December 2025)
Durin is a library for reading and writing the DWARF debugging format
github.com
Mozilla's latest quagmire
rubenerd.com
Looks like a slightly weaker version of Qwen 30B-A3B, which makes sense because it is slightly smaller. If they can keep that efficiency going into the large one, it'll be sick.
Trinity Large [will be] a 420B-parameter model with 13B active parameters. Just perfect for a large RAM pool @ q4.
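A quick back-of-envelope check on that sizing (a sketch in Python; the ~4.5 bits/weight for q4 and the 400 GB/s bandwidth figure are illustrative assumptions, not published Trinity specs):

    # Rough memory/throughput math for a 420B-total / 13B-active MoE at ~q4.
    # bits_per_weight and mem_bandwidth_gbs are assumed values for illustration.
    total_params = 420e9
    active_params = 13e9
    bits_per_weight = 4.5          # 4-bit weights plus ~0.5 bit of scale/zero-point overhead

    weights_gb = total_params * bits_per_weight / 8 / 1e9   # full model held in RAM
    active_gb = active_params * bits_per_weight / 8 / 1e9   # bytes streamed per token

    mem_bandwidth_gbs = 400        # assumed: a workstation-class unified-memory pool
    tokens_per_sec = mem_bandwidth_gbs / active_gb

    print(f"weights in RAM:    ~{weights_gb:.0f} GB")       # ~236 GB
    print(f"read per token:    ~{active_gb:.1f} GB")        # ~7.3 GB
    print(f"rough decode rate: ~{tokens_per_sec:.0f} tok/s")

The MoE appeal is exactly this split: you need enough RAM to hold all 420B weights (~236 GB at q4, hence the "large RAM pool"), but each decoded token only streams the ~13B active parameters, so decode speed looks more like a 13B dense model's.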