Building Bluesky Comments for My Blog
natalie.sh
AWS Restored My Account: The Human Who Made the Difference
seuros.com
How to Sell if Your User is not the Buyer
writings.founderlabs.io
Laptop Support and Usability (LSU): July 2025 Report from the FreeBSD Foundation
github.com
Monte Carlo Crash Course: Quasi-Monte Carlo
thenumb.at
New AI Coding Teammate: Gemini CLI GitHub Actions
blog.google
We replaced passwords with something worse
blog.danielh.cc
Arm Desktop: x86 Emulation
marcin.juszkiewicz.com.pl
SUSE Donates USD 11,500 to the Perl and Raku Foundation
perl.com
Lithium Reverses Alzheimer's in Mice
hms.harvard.edu
GoGoGrandparent (YC S16) Is Hiring Back End and Full-Stack Engineers
Budget Car Buyers Want Automakers to K.I.S.S
thedrive.com
An LLM does not need to understand MCP
hackteam.io
The Whispering Earring (Scott Alexander)
croissanthology.com
More shell tricks: first class lists and jq
alurm.github.io
Claude Code IDE integration for Emacs
github.com
Hopfield Networks Is All You Need (2020)
arxiv.org
Cracking the Vault: How we found zero-day flaws in HashiCorp Vault
cyata.ai
Let's stop pretending that managers and executives care about productivity
baldurbjarnason.com
Leonardo Chiariglione: “I closed MPEG on 2 June 2020”
leonardo.chiariglione.org
I did pretty much the same thing some 10 years ago, but only for my own images, and this creator overcame a problem that I could not. I ingested a collection of images, read the metadata, and used it to place the images on a map like this. However, I could not figure out how to determine facing. This project has an arrow indicator to show it; I ended up setting it manually. I would love to know how they determined this, because I really cannot figure it out. Does image metadata contain more information than it used to? Is there a way to use an AI agent to analyze the light/angles/streets/landmarks to approximate it?
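One possible answer to the metadata question: the EXIF standard defines a `GPSImgDirection` tag (GPS sub-IFD tag 17) that stores the compass heading the camera was pointing, and many modern smartphones populate it from the magnetometer. Older cameras rarely wrote it, which may be why it wasn't available ten years ago. A minimal sketch of reading it, assuming Pillow is installed and that the function name is just illustrative:

```python
from PIL import Image

GPS_IFD = 0x8825        # EXIF pointer to the GPS sub-IFD
GPS_IMG_DIRECTION = 17  # GPSImgDirection: heading in degrees (0-359.99)

def camera_heading(path):
    """Return the compass heading the camera faced, or None if not recorded."""
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(GPS_IFD)       # empty dict when no GPS data present
    direction = gps.get(GPS_IMG_DIRECTION)
    return float(direction) if direction is not None else None
```

Note that the companion tag `GPSImgDirectionRef` says whether the value is relative to true north (`"T"`) or magnetic north (`"M"`), so a map overlay may need to check it too.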