Cloudflare Radar: AI Insights
radar.cloudflare.com
Making Minecraft Spherical
bowerbyte.com
"Turns out Google made up an elaborate story about me"
bsky.app
Effective learning: Twenty rules of formulating knowledge (1999)
supermemo.com
A Review of Nim 2: The Good and Bad with Example Code
miguel-martin.com
Git for Music – Using Version Control for Music Production (2023)
grechin.org
Preserving Order in Concurrent Go Apps: Three Approaches Compared
destel.dev
AI enters the grant game, picking winners
science.org
Cloudflare Search Engine Market Share 2025Q2
radar.cloudflare.com
Zfsbackrest: Pgbackrest style encrypted backups for ZFS filesystems
github.com
Show HN: Simple modernized .NET NuGet server reached RC
github.com
We should have the ability to run any code we want on hardware we own
hugotunius.se
Tetris is NP-hard even with O(1) rows or columns [pdf]
martindemaine.org
UK's largest battery storage facility at Tilbury substation
nationalgrid.com
India's billion-dollar e-waste empire
restofworld.org
Telli (YC F24) is hiring engineers, designers, and interns (on-site in Berlin)
hi.telli.com
Lewis and Clark marked their trail with laxatives
offbeatoregon.com
C++: Strongly Happens Before?
nekrozqliphort.github.io
(Did skim a bit, heading out)
While most of the research is commendable, I think this one goes in from the wrong starting point.
Unified memory has become a thing (Apple machines, Nvidia AI machines like the GH200, recent AMD "AI" machines), and as people are aware, AI workloads (similar to DB workloads) are bandwidth bound, which is why we often use 4-bit and 8-bit values today. To become compute bound you would need to do more expensive work than graphics shaders, and that is not common in DB queries.
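As a back-of-envelope illustration of the bandwidth-bound point (the bandwidth figures below are rough, assumed numbers for illustration, not measurements): a full column scan cannot finish faster than bytes moved divided by memory bandwidth, so shrinking the value width buys a proportional speedup no matter which side of the bus the compute sits on.

```python
def scan_time_ms(rows, bytes_per_value, bandwidth_bytes_per_s):
    """Lower bound on a bandwidth-bound column scan: bytes moved / bandwidth."""
    return rows * bytes_per_value / bandwidth_bytes_per_s * 1e3

N = 1_000_000_000  # hypothetical 1B-row column

# Rough, assumed peak-bandwidth figures (illustrative only).
bandwidths = {"dual-channel DDR5 CPU": 80e9, "GH200-class HBM3": 3000e9}
widths = {"fp32": 4, "int8": 1}

for bw_name, bw in bandwidths.items():
    for w_name, w in widths.items():
        print(f"{bw_name}, {w_name}: >= {scan_time_ms(N, w, bw):.2f} ms")
```

The same arithmetic is why 8-bit values are popular: the scan moves 4x fewer bytes than fp32, so the bandwidth floor drops 4x on CPU and GPU alike.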
So, the focus of research should be:
A: How do queries in these setups compare to simply running on unified-memory machines? Is there enough of a win for discrete GPUs to trounce the complexity? (The GH200 perf advantage seems to partially answer this, since IIRC it's unified.)
B: What is the overhead of firing off query operations vs. just running on-CPU? Is query compilation overhead noticeable if queries are mostly novel and non-cached?
C: For keeping it on the GPU, are there options today for streaming directly to the GPU, bypassing RAM / the host entirely?