Cloudflare outage on November 18, 2025 post mortem
blog.cloudflare.com
Multimodal Diffusion Language Models for Thinking-Aware Editing and Generation
github.com
I made a down detector for down detector
downdetectorsdowndetector.com
Even Realities Smart Glasses: G2
evenrealities.com
Show HN: Browser-based interactive 3D Three-Body problem simulator
trisolarchaos.com
Pebble, Rebble, and a path forward
ericmigi.com
I wrote a Pong game in a 512-byte boot sector
akshatjoshi.com
Bluetooth Channel Sounding: The Next Leap in Bluetooth Innovation
embedded.com
Gemini 3 Pro Model Card [pdf]
storage.googleapis.com
The code and open-source tools I used to produce a science fiction anthology
compellingsciencefiction.com
Mojo-V: Secret Computation for RISC-V
github.com
Cloudflare Global Network experiencing issues
cloudflarestatus.com
Bret Victor: The Future of Programming (2013) [video]
youtube.com
Strace-macOS: A clone of the strace command for macOS
github.com
Exploring the Limits of Large Language Models as Quant Traders
nof1.ai
A Rigorous Approach to the Algorithmic Composition of Iannis Xenakis (2009) [pdf]
monoskop.org
OrthoRoute – GPU-accelerated autorouting for KiCad
bbenchoff.github.io
I am stepping down as the CEO of Mastodon
blog.joinmastodon.org
Google boss says AI investment boom has 'elements of irrationality'
bbc.com
I just want working RCS messaging
wt.gd
> To resolve this, we propose a parallel multimodal diffusion framework, MMaDA-Parallel, that enables continuous, bidirectional interaction between text and images throughout the entire denoising trajectory.
> [...] (ParaRL), a novel strategy that applies semantic rewards along the trajectory to enforce cross-modal consistency.
(emphasis mine)
This sounds really cool. The fact that one generation "attends" to the other is really interesting. I'm curious whether this would hold for other modalities. I'm thinking of coding-specific applications, where things can change once something is generated. My hunch is that coding would benefit a lot from this approach, because the "manual" way of writing code often resembles diffusion more than autoregressive generation (that is, we often edit something here, then because we did that we have to import something, then change something there, then that leads to further changes, etc.).
For now, coding seems to benefit a lot from <thinking> -> <coding> -> <env_feedback> -> <reflexion> -> <thinking> -> <coding>, but at a glance that loop looks shoehorned onto autoregressive generation... GPT-5 in particular seems to be better at this, with multiple "tool calls" interleaved in its thinking sessions. I wonder if this would get better with the parallel denoising approach proposed here, where both thinking and coding are done in parallel and one can "attend" to the other. Add some feedback (linters, compilers, LSPs, tests, etc.) and this could go places. If it works.
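To make that last part concrete, here's a toy sketch of the loop I'm imagining. This is not the paper's method and not a real model; every name in it (the masked-token streams, the stand-in denoiser, the fake linter) is invented purely to illustrate "refine thinking and code in parallel, inject feedback mid-trajectory":

```python
# Toy sketch only: two token streams ("thinking" and "code") are refined in
# parallel, each step conditioned on the current state of both, with external
# feedback (a stand-in for a linter/compiler/tests) re-masking suspect code
# tokens between denoising steps. Nothing here is from MMaDA-Parallel.
import random

MASK = "<mask>"

def denoise_step(thinking, code):
    """Stand-in for a joint denoiser: fill a few masked positions per stream.
    In a real model, both streams would cross-attend to each other here."""
    def fill(seq, vocab):
        masked = [i for i, tok in enumerate(seq) if tok == MASK]
        for i in random.sample(masked, k=min(2, len(masked))):
            seq[i] = random.choice(vocab)
        return seq
    thinking = fill(thinking, ["plan", "need import", "loop over items", "return result"])
    code = fill(code, ["import os", "def f():", "    return 1", "print(f())"])
    return thinking, code

def external_feedback(code):
    """Stand-in for linter/compiler feedback: re-mask tokens it rejects
    (here, a call that appears before its definition exists)."""
    return [MASK if tok == "print(f())" and "def f():" not in code else tok
            for tok in code]

def parallel_generate(steps=8, length=6):
    thinking = [MASK] * length
    code = [MASK] * length
    for _ in range(steps):
        thinking, code = denoise_step(thinking, code)
        code = external_feedback(code)  # feedback lands mid-trajectory, not at the end
        if MASK not in thinking and MASK not in code:
            break
    return thinking, code

if __name__ == "__main__":
    t, c = parallel_generate()
    print("thinking:", t)
    print("code:    ", c)
```

The point is just the shape of the loop: both streams refine together, and feedback gets injected between denoising steps instead of only after a full generation pass.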