What we talk about when we talk about sideloading
f-droid.org
Tips for stroke-surviving software engineers
blog.j11y.io
ChatGPT's Atlas: The Browser That's Anti-Web
anildash.com
EuroLLM: LLM made in Europe built to support all 24 official EU languages
eurollm.io
Tinkering is a way to acquire good taste
seated.ro
uBlock Origin Lite on the Apple App Store
apps.apple.com
Wacl – A Tcl Distribution for WebAssembly
github.com
Generative AI Image Editing Showdown
genai-showdown.specr.net
Gluing and framing a 9000-piece jigsaw
river.me
Keeping the Internet fast and secure: introducing Merkle Tree Certificates
blog.cloudflare.com
The AirPods Pro 3 flight problem
basicappleguy.com
Why do some radio towers blink?
jeffgeerling.com
Fil-C: A memory-safe C implementation
lwn.net
Apple will phase out Rosetta 2 in macOS 28
developer.apple.com
Mapping the off-target effects of every FDA-approved drug in existence
owlposting.com
Nvidia takes $1B stake in Nokia
cnbc.com
Falcon: A Reliable, Low Latency Hardware Transport
dl.acm.org
Using AI to negotiate a $195k hospital bill down to $33k
threads.com
We need a clearer framework for AI-assisted contributions to open source
samsaffron.com
One thing that worked for me, with one particularly big database, was to use borg instead of restic. Most of the database was historical data that rarely changes, so each mysqldump file is almost identical to the previous one except for the new and recently modified rows. That is where borg's deduplication and compression shine: the new dump shares most of its blocks with the old one, so I could keep several days of backups while using little extra space in the borg repository. I was then able to rclone that borg repository to the S3 Intelligent-Tiering class, which let me keep long-term backups cheaply, since most of the data transparently ends up in Glacier-class storage.
Of course, it is not a general solution, but knowing what the data is and how it changes may let you take more efficient approaches than what is usually recommended.
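For anyone who wants to try it, here is a minimal sketch of that pipeline in Python. It assumes an already-initialized borg repository (borg init), an rclone remote configured for S3, and MySQL credentials available via ~/.my.cnf; the database name, paths, and remote name are placeholders I made up, not details from the actual setup.

    #!/usr/bin/env python3
    # Minimal sketch of the mysqldump -> borg -> rclone pipeline.
    # All names below (appdb, /backups/..., s3:my-bucket/borg) are
    # hypothetical placeholders.
    import subprocess
    from datetime import date
    from pathlib import Path

    DB_NAME = "appdb"                      # hypothetical database name
    DUMP_PATH = Path("/backups/dump.sql")  # reuse the same path each day so
                                           # borg sees mostly-identical data
    BORG_REPO = "/backups/borg-repo"       # assumes `borg init` was run here
    RCLONE_REMOTE = "s3:my-bucket/borg"    # hypothetical rclone S3 remote

    def run(cmd, **kwargs):
        """Run a command, echoing it first and failing loudly on error."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True, **kwargs)

    # 1. Dump the database. Since most rows are historical and rarely
    #    change, today's dump differs from yesterday's only in the
    #    modified rows.
    with DUMP_PATH.open("wb") as out:
        run(["mysqldump", "--single-transaction", DB_NAME], stdout=out)

    # 2. Archive into borg. Unchanged chunks are stored only once across
    #    archives, so each daily archive adds little to the repository.
    run(["borg", "create", "--compression", "zstd",
         f"{BORG_REPO}::dump-{date.today()}", str(DUMP_PATH)])

    # 3. Mirror the repository to S3 Intelligent-Tiering, letting AWS move
    #    rarely-accessed chunks toward Glacier-class storage transparently.
    run(["rclone", "sync", "--s3-storage-class", "INTELLIGENT_TIERING",
         BORG_REPO, RCLONE_REMOTE])

Note that borg's content-defined chunking tolerates inserted rows shifting the rest of the file, which is why the deduplication holds up even when the dump isn't byte-identical from day to day.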