Reverse engineering GitHub Actions cache to make it fast
blacksmith.sh
Cerebras launches Qwen3-235B, achieving 1.5k tokens per second
cerebras.ai
Cops say criminals use a Google Pixel with GrapheneOS – I say that's freedom
androidauthority.com
Manticore Search: Fast, efficient, drop-in replacement for Elasticsearch
github.com
20 years of Linux on the Desktop (part 4)
ploum.net
Geocities Backgrounds
pixelmoondust.neocities.org
The Surprising gRPC Client Bottleneck in Low-Latency Networks
blog.ydb.tech
Qwen3-Coder: Agentic coding in the world
qwenlm.github.io
SQL Injection as a Feature
idiallo.com
Reversing a Fingerprint Reader Protocol (2021)
blog.th0m.as
QuestDB (YC S20) Is Hiring a Technical Content Lead
questdb.com
AI groups spend to replace low-cost 'data labellers' with high-paid experts
ft.com
Extending Emacs with Fennel (2024)
andreyor.st
SDR42E1 modulates Vitamin D absorption and cancer pathogenesis
frontiersin.org
When Is WebAssembly Going to Get DOM Support?
queue.acm.org
Rescuing two PDP-11s from a former British Telecom underground shelter (2023)
forum.vcfed.org
Checking Out CPython 3.14's remote debugging protocol
rtpg.co
Mathematics for Computer Science (2024)
ocw.mit.edu
I'm Unsatisfied with Easing Functions
davepagurek.com
More than you wanted to know about how Game Boy cartridges work
abc.decontextualize.com
Brave blocks Microsoft Recall by default
brave.com
AI coding agents are removing programming language barriers
railsatscale.com
Show HN: Header-only GIF decoder in pure C – no malloc, easy to use
I built it because sometimes I have a function in a codebase that needs to be a lot faster. You pass in that function and choose some models, and it asks each LLM to performance-optimize it.
You then end up with fn1.original.js, fn1.openai.o3.js, and fn1.gemini.2.5.js, and it runs a benchmark over all of them and gives you the results.
Useful for me!
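
A minimal sketch of what the benchmark step described above might look like. It assumes each generated variant (fn1.original.js, fn1.openai.o3.js, fn1.gemini.2.5.js) is a CommonJS module that exports the function directly, and it uses a made-up numeric workload; the post specifies neither, so this is illustration only, not the tool's actual harness:

  // bench.js -- hypothetical harness: time each generated variant of fn1
  const { performance } = require("node:perf_hooks");

  const variants = [
    "./fn1.original.js",   // the untouched function
    "./fn1.openai.o3.js",  // o3's optimized version
    "./fn1.gemini.2.5.js", // Gemini 2.5's optimized version
  ];

  for (const path of variants) {
    const fn1 = require(path);                 // assumed export shape: module.exports = fn1
    const start = performance.now();
    for (let i = 0; i < 100_000; i++) fn1(i);  // assumed workload: repeated numeric calls
    console.log(`${path}: ${(performance.now() - start).toFixed(1)} ms`);
  }

With real input data in place of the dummy loop, comparing the printed timings is all the "results" step needs to be.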