Things Zig comptime won't do
matklad.github.io
Gemma 3 QAT Models: Bringing AI to Consumer GPUs
developers.googleblog.com
Crows can recognize geometric regularity
phys.org
TikZJax: Embedding LaTeX Drawings in HTML
tikzjax.com
Find the Odd Disk
colors2.alessandroroussel.com
Show HN: "Is This Tech Dead?" A snarky autopsy engine for your dead frameworks
isthistechdead.com
Show HN: Keep your PyTorch model in VRAM by hot swapping code
github.com
Decomposing Transactional Systems
transactional.blog
Falsify: Hypothesis-Inspired Shrinking for Haskell (2023)
well-typed.com
The appeal of serving your web pages with a single process
utcc.utoronto.ca
New Proof Settles Decades-Old Bet About Connected Networks
quantamagazine.org
Which year: guess which year each photo was taken
whichyr.com
Show HN: I built an AI that turns GitHub codebases into easy tutorials
github.com
Demystifying decorators: They don't need to be cryptic
thepythoncodingstack.com
FurtherAI (YC W24) Is Hiring Software and AI Engineers
ycombinator.com
The Joy of Linux Theming in the Age of Bootable Containers
blues.win
Jagged AGI: o3, Gemini 2.5, and everything after
oneusefulthing.org
The movie mistake mystery from "Revenge of the Sith"
fxrant.blogspot.com
Healthy soil is the hidden ingredient
nature.com
Sonic Heritage - the sounds of the world's most famous sights
citiesandmemory.com
Home galleries are hiding in plain sight across Canada
cbc.ca
The OP looks like good work, but it's definitely not a quick read. The authors claim theoretical breakthroughs that enable:
* a data-free LLM quantization method that outperforms all prior data-free approaches, including NF4 (the general idea of data-free quantization is sketched just below this list); and
* an optimal way of choosing non-uniform per-layer quantization levels that meet a given compression constraint in the "medium bitwidth" regime (a toy version of that allocation problem is sketched further down).
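To unpack the first bullet: "data-free" means the quantizer is built from the weights alone, with no calibration inputs. Here is a minimal round-to-nearest sketch with a fixed non-uniform codebook, loosely in the NF4 spirit; the codebook spacing, function names, and per-tensor absmax scale are my own assumptions for illustration, not the paper's method:

    # Illustrative only: round-to-nearest, data-free quantization with a fixed
    # non-uniform codebook. NF4 proper derives its 16 levels from normal
    # quantiles; the tanh spacing here is a hypothetical stand-in.
    import numpy as np

    LEVELS = np.tanh(np.linspace(-2.0, 2.0, 16))  # 16 non-uniform levels in (-1, 1)

    def quantize_data_free(weights):
        # "Data-free": scale and codebook come from the weights alone, no calibration data.
        scale = float(np.abs(weights).max()) or 1.0   # per-tensor absmax scale (guard all-zero)
        normalized = weights / scale                  # map weights into [-1, 1]
        idx = np.abs(normalized[..., None] - LEVELS).argmin(axis=-1)  # nearest codebook entry
        return idx.astype(np.uint8), scale

    def dequantize(idx, scale):
        return LEVELS[idx] * scale

    w = np.random.randn(128, 128).astype(np.float32)
    idx, scale = quantize_data_free(w)
    print("quantization MSE:", float(np.mean((w - dequantize(idx, scale)) ** 2)))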
They demonstrate improved accuracy-compression trade-offs on popular LLMs.
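The second bullet concerns an allocation problem of roughly this shape: pick a bit-width per layer so that a size-weighted average stays within a budget while some error measure is minimized. The 4**-bits error proxy (uniform-quantization MSE shrinks about 4x per extra bit) and the brute-force search below are only stand-ins, not the authors' optimal method:

    # Illustrative only: brute-force per-layer bit-width search under an
    # average-bits budget, with a crude error proxy.
    import itertools
    import numpy as np

    def allocate_bitwidths(layer_sizes, candidate_bits, budget_avg_bits):
        sizes = np.asarray(layer_sizes, dtype=float)
        best, best_err = None, float("inf")
        for combo in itertools.product(candidate_bits, repeat=len(layer_sizes)):
            bits = np.asarray(combo, dtype=float)
            if sizes @ bits / sizes.sum() > budget_avg_bits:  # compression constraint
                continue
            err = float(sizes @ (4.0 ** -bits))               # size-weighted error proxy
            if err < best_err:
                best, best_err = combo, err
        return best, best_err

    # Four layers of different sizes, 2/3/4-bit candidates, 3-bit average budget.
    print(allocate_bitwidths([4e6, 4e6, 16e6, 16e6], [2, 3, 4], 3.0))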
Thank you for sharing this on HN.