Write the post you wish you'd found
gilesthomas.com
DeepSeek Open Source FlashMLA – MLA Decoding Kernel for Hopper GPUs
github.com
Making any integer with four 2s
eli.thegreenplace.net
Tokio and Prctl = Nasty Bug
kobzol.github.io
Defragging my old Dell's UEFI NVRAM
artemis.sh
Ask HN: What are you working on? (February 2025)
Show HN: Jq-Like Tool for Markdown
github.com
Partnering with the Shawnee Tribe for Civilization VII
civilization.2k.com
European word translator: an interactive map showing words in over 30 languages
ukdataexplorer.com
Sublinear Time Algorithms
people.csail.mit.edu
It is not a compiler error. It is never a compiler error (2017)
blog.plover.com
WhiteSur: macOS-like theme for GTK desktops
github.com
Orchid's nutrient theft from fungi shows photosynthesis-parasitism continuum
phys.org
Vietnamese Graphic Design
vietgd.com
OpenAI Researchers Find That AI Is Unable to Solve Most Coding Problems
futurism.com
But good sir, what is electricity?
lcamtuf.substack.com
Purely Functional Sliding Window Aggregation Algorithm
byorgey.github.io
Pollution from Big Tech's data centre boom costs US public health $5.4B
ft.com
Mascotbot: Real-Time, Engaging Avatar SDK for AI Agents
mascot.bot
O3-mini simulated scikit calculations
emsi.me
Show HN: Benchmarking VLMs vs. Traditional OCR
getomni.ai
Adding Mastodon Comments to Your Blog
beej.us
Arai, Agui, Nakajima DCT compression algorithm
leetarxiv.substack.com
The first time I used L1 regularization to recover a sparsely sampled signal’s DCT coefficients, my mind was blown. If you’ve never tried it out, you should; it’s pretty crazy! The power of priors and choosing the appropriate inductive bias…
The classic learning example is to take a couple of sine waves at high frequencies, add them together, and sample the sum at a very low rate (so the signal frequencies sit well above the Nyquist frequency for that sampling rate), or better yet just randomly at a handful of points. Then turn reconstruction into an optimization problem: your model parameters are the DCT coefficients of the signal. Run the coefficients through the inverse transform to recover a time-domain signal, compute a reconstruction error only at the locations where you actually have observed data, and add an L1 penalty term to push the coefficients toward sparsity.
It’s quite beautiful! There are all sorts of fun, more advanced applications, but it’s quite awesome just to see it happen before your own eyes!
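If it helps to see it concretely, here’s a minimal sketch of that setup in Python. It uses SciPy’s DCT and scikit-learn’s Lasso as a stand-in for the L1-penalized least-squares problem described above; the grid size, frequencies, sample count, and penalty weight are all just illustrative choices, not anything from the original post.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Dense "ground truth" we pretend we can't fully observe:
# the sum of two high-frequency sine waves on an N-point grid.
N = 1024
t = np.arange(N)
x = np.sin(2 * np.pi * 97 * t / N) + np.sin(2 * np.pi * 211 * t / N)

# Observe only a small random handful of samples.
m = 150
idx = np.sort(rng.choice(N, size=m, replace=False))
y = x[idx]

# Time-domain DCT basis: column k of Phi is the k-th inverse-DCT basis
# vector, so any full-length signal can be written as Phi @ c.
Phi = idct(np.eye(N), axis=0, norm='ortho')
A = Phi[idx, :]                  # keep only the rows we actually observed

# L1-penalized least squares: fit the observed samples while pushing
# most DCT coefficients to zero (squared error + alpha * ||c||_1).
lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=100_000)
lasso.fit(A, y)
c = lasso.coef_

x_hat = Phi @ c                  # reconstruct the whole time-domain signal
print(f"nonzero coefficients: {np.count_nonzero(c)} / {N}")
print(f"reconstruction RMSE:  {np.sqrt(np.mean((x_hat - x) ** 2)):.4f}")
```

Swap the Lasso call for cvxpy or your favorite proximal solver if you prefer the exact basis-pursuit formulation; the idea is the same.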
My personal signals & systems knowledge is pretty novice-level, but the number of times that trick has come in handy for me is remarkable… even just as a slightly more intelligent data imputation technique, it can be pretty awesome! The prof who taught it to me worked at Bell Labs for a while, so it felt a bit like a guru sharing secrets, even though it’s a well-documented technique.