Gemini 2.5 Flash
developers.googleblog.com
An intro to DeepSeek's distributed file system
maknee.github.io
Decreased CO2 during breathwork: emergence of altered states of consciousness
nature.com
Marching Events: What does iCalendar have to do with ray marching?
pwy.io
SQLite Transactions and Virtual Tables
misfra.me
Show HN: AgentAPI – HTTP API for Claude Code, Goose, Aider, and Codex
github.com
Mux (YC W16) is hiring engineering managers for video at scale
mux.com
Milwaukee M18 Battery Reverse Engineering
quagmirerepair.com
Shell-secrets – GPG-encrypted environment variables
github.com
What my stroke taught me (2017)
nautil.us
Google is illegally monopolizing online advertising tech, judge rules
nytimes.com
There are two types of dishwasher people
theatlantic.com
Unauthenticated Remote Code Execution in Erlang/OTP SSH
nvd.nist.gov
UniK3D: Universal Camera Monocular 3D Estimation – Luigi Piccinelli
lpiccinelli-eth.github.io
Discord's face scanning age checks 'start of a bigger shift'
bbc.com
Stainless steel strengthened: Twisting creates submicron 'anti-crash wall'
techxplore.com
N-Params vs. Single Param
carlos-menezes.com
A cute proof that makes e natural
poshenloh.com
Show HN: val – An arbitrary precision calculator language
github.com
Our quantum assembly parser got updated to the QASM 3.0 spec
arxiv.org
Abstract: "Despite the surge of interest in autonomous scientific discovery (ASD) of software artifacts (e.g., improved ML algorithms), current ASD systems face two key limitations: (1) they largely explore variants of existing codebases or similarly constrained design spaces, and (2) they produce large volumes of research artifacts (such as automatically generated papers and code) that are typically evaluated using conference-style paper review with limited evaluation of code. In this work we introduce CodeScientist, a novel ASD system that frames ideation and experiment construction as a form of genetic search jointly over combinations of research articles and codeblocks defining common actions in a domain (like prompting a language model). We use this paradigm to conduct hundreds of automated experiments on machine-generated ideas broadly in the domain of agents and virtual environments, with the system returning 19 discoveries, 6 of which were judged as being both at least minimally sound and incrementally novel after a multi-faceted evaluation beyond that typically conducted in prior work, including external (conference-style) review, code review, and replication attempts. Moreover, the discoveries span new tasks, agents, metrics, and data, suggesting a qualitative shift from benchmark optimization to broader discoveries."
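The abstract's core framing, ideation as a genetic search over combinations of research articles and domain codeblocks, can be illustrated with a toy sketch. Everything below is invented for illustration (the article names, codeblock names, fitness function, and operators are hypothetical stand-ins); the paper's actual CodeScientist pipeline constructs and runs real experiments rather than scoring tuples.

```python
import random

# Hypothetical toy model: an "idea" is a (research article, codeblock combo)
# pair, and a simple genetic loop mutates and selects ideas by fitness.
ARTICLES = ["agent-memory", "self-reflection", "tool-use", "curriculum"]
CODEBLOCKS = ["prompt-llm", "run-env", "score-trajectory", "log-metrics"]


def fitness(idea):
    # Stand-in for experiment construction + multi-faceted evaluation:
    # deterministically reward distinct codeblocks plus one favored article.
    article, blocks = idea
    return len(set(blocks)) + (1 if article == "tool-use" else 0)


def random_idea(rng):
    return (rng.choice(ARTICLES), tuple(rng.sample(CODEBLOCKS, 2)))


def mutate(idea, rng):
    # Perturb either the article or the codeblock combination.
    article, blocks = idea
    if rng.random() < 0.5:
        article = rng.choice(ARTICLES)
    else:
        blocks = tuple(rng.sample(CODEBLOCKS, 2))
    return (article, blocks)


def genetic_search(generations=20, pop_size=8, seed=0):
    rng = random.Random(seed)
    population = [random_idea(rng) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, refill by mutating survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [
            mutate(rng.choice(survivors), rng)
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=fitness)


best = genetic_search()
print(best)
```

The sketch captures only the search-loop shape; in the paper each candidate's "fitness" comes from actually building and evaluating an experiment, including code review and replication attempts.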