Why I Wrote the Beam Book
happihacking.com
Cloud Run GPUs, now GA, makes running AI workloads easier for everyone
cloud.google.com
Just how bad are we at treating age-related diseases?
ladanuzhna.xyz
DiffX – Next-Generation Extensible Diff Format
diffx.org
Cockatoos have learned to operate drinking fountains in Australia
science.org
Cord didn't win. What now?
jg.gg
Ask HN: Has anybody built search on top of Anna's Archive?
A critical look at NetBSD’s installer
eerielinux.wordpress.com
Writing a postmortem: an interview exercise I like (2017)
danielputtick.com
Consider Knitting
journal.stuffwithstuff.com
The Sky's the limit: AI automation on Mac
taoofmac.com
Depot (YC W23) is hiring an enterprise support engineer (UK/EU)
ycombinator.com
How to Read a Novel
adjacentpossible.substack.com
Ask HN: Startup getting spammed with PayPal disputes, what should we do?
Designing better file organization around tags, not hierarchies (2017)
nayuki.io
What if you could do it all over?
newyorker.com
Click-V: A RISC-V emulator built with ClickHouse SQL
github.com
A deep dive into self-improving AI and the Darwin-Gödel Machine
richardcsuwandi.github.io
Decentralization Hidden in the Dark Ages
bionicmosquito.blogspot.com
Deep learning gets the glory, deep fact checking gets ignored
rachel.fast.ai
I am curious how the last algorithm can be an order of magnitude faster than the sorting-based one. There is no benchmark data, and ideally there would be numbers for several mesh sizes, since mesh size affects the timing a lot (fitting in cache vs. going to RAM).
I work on https://github.com/elalish/manifold, which operates on triangular meshes, and one of the slowest operations we currently have is halfedge pairing, so I am interested in making it faster. We already use a parallel merge sort for the stable sort; switching to a parallel radix sort, which works well on random distributions, is not helping, and I think we are currently bandwidth bound. If building an edge list for each vertex can improve cache locality and reduce bandwidth, that would be very interesting.
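To make the comparison concrete, here is a rough, hypothetical C++ sketch of the two strategies: pairing halfedges by stable-sorting an undirected edge key (roughly what we do today, written sequentially here for brevity), and the per-vertex bucketing that a per-vertex edge list would allow. The names Halfedge, PairBySort, and PairByVertexBuckets are made up for illustration; this is not Manifold's actual code.

    #include <algorithm>
    #include <tuple>
    #include <vector>

    // Illustrative sketch only; not Manifold's real data structures.
    struct Halfedge {
      int startVert;  // tail of the directed edge
      int endVert;    // head of the directed edge
      int paired;     // index of the opposite halfedge, filled in below
    };

    // Strategy 1: sort-based pairing. In a manifold mesh every undirected
    // edge appears exactly twice, once per direction, so stable-sorting
    // halfedge indices by (min vert, max vert, direction) puts each
    // halfedge right next to its partner.
    void PairBySort(std::vector<Halfedge>& he) {
      std::vector<int> idx(he.size());
      for (int i = 0; i < (int)he.size(); ++i) idx[i] = i;

      auto key = [&](int i) {
        const Halfedge& h = he[i];
        return std::make_tuple(std::min(h.startVert, h.endVert),
                               std::max(h.startVert, h.endVert),
                               h.startVert > h.endVert);
      };
      // A real implementation would use a parallel stable sort here.
      std::stable_sort(idx.begin(), idx.end(),
                       [&](int a, int b) { return key(a) < key(b); });

      for (size_t k = 0; k + 1 < idx.size(); k += 2) {
        he[idx[k]].paired = idx[k + 1];
        he[idx[k + 1]].paired = idx[k];
      }
    }

    // Strategy 2: the per-vertex edge-list idea. Bucket each halfedge under
    // the smaller of its two vertices with a counting pass, then match
    // partners inside each bucket. Buckets are tiny (average valence ~6),
    // so the matching loop stays in cache instead of streaming the whole
    // edge array through another global sort.
    void PairByVertexBuckets(std::vector<Halfedge>& he, int numVert) {
      std::vector<int> offset(numVert + 1, 0);
      for (const Halfedge& h : he)
        ++offset[std::min(h.startVert, h.endVert) + 1];
      for (int v = 0; v < numVert; ++v) offset[v + 1] += offset[v];  // prefix sum

      std::vector<int> bucket(he.size());
      std::vector<int> cursor(offset.begin(), offset.end() - 1);
      for (int i = 0; i < (int)he.size(); ++i)
        bucket[cursor[std::min(he[i].startVert, he[i].endVert)]++] = i;

      // Inside vertex v's bucket, the halfedge (v -> w) pairs with (w -> v).
      for (int v = 0; v < numVert; ++v) {
        for (int a = offset[v]; a < offset[v + 1]; ++a) {
          int i = bucket[a];
          if (he[i].startVert != v) continue;  // handle each edge once, from its forward copy
          for (int b = offset[v]; b < offset[v + 1]; ++b) {
            int j = bucket[b];
            if (he[j].startVert == he[i].endVert && he[j].endVert == v) {
              he[i].paired = j;
              he[j].paired = i;
              break;
            }
          }
        }
      }
    }

The bucketed version replaces one global O(n log n) stable sort with a counting pass plus a tiny O(valence^2) search per vertex, which is where any bandwidth savings would have to come from; whether that wins in practice is exactly the kind of thing the missing benchmarks would show.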