We Diagnosed and Fixed the 2023 Voyager 1 Anomaly from 15B Miles Away [video]
12 comments
· April 18, 2025
bhouston
From the surface, it seems to me like a sensible thing to do. I've written my own decent assembler for a toy CPU in a few days, and it would probably be even faster now in the age of agentic coding.
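For scale, the core of such a thing can be tiny. A minimal sketch in Python, assuming a made-up 16-bit ISA (4-bit opcode, 12-bit operand); every mnemonic and encoding here is invented and has nothing to do with Voyager's actual hardware:

    # Toy assembler for a hypothetical 16-bit ISA:
    # 4-bit opcode | 12-bit operand, three instructions total.
    OPCODES = {"LOAD": 0x1, "ADD": 0x2, "JUMP": 0x3}

    def assemble(lines):
        words = []
        for line in lines:
            line = line.split(";")[0].strip()   # drop comments and blanks
            if not line:
                continue
            mnemonic, operand = line.split()
            words.append((OPCODES[mnemonic] << 12) | (int(operand, 0) & 0xFFF))
        return words

    print([hex(w) for w in assemble(["LOAD 0x10", "ADD 1 ; incr", "JUMP 0x10"])])
    # ['0x1010', '0x2001', '0x3010']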
mystified5016
Well, it was a totally bespoke CPU, and we don't have any working models on Earth.
Writing an assembler for a bespoke CPU is one thing; many of us have done it as a toy project. But the stakes are a bit different here: you'd have to mathematically prove that your assembler and disassembler are absolutely 100% correct. When your only working model is utterly irreplaceable, and irrecoverable upon error, development probably takes a lot more resources.
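Short of a formal proof, a small bespoke ISA has one saving grace: the encoding space is small enough to check exhaustively rather than argue about. A sketch of the idea, again with an invented toy ISA rather than anything resembling the real FDS instruction set:

    # Exhaustive round-trip check: a small bespoke ISA means the whole
    # encoding space fits in a loop, so you can test every single word.
    OPCODES = {"LOAD": 0x1, "ADD": 0x2, "JUMP": 0x3}
    MNEMONICS = {v: k for k, v in OPCODES.items()}

    def assemble(text):
        mnemonic, operand = text.split()
        return (OPCODES[mnemonic] << 12) | int(operand, 0)

    def disassemble(word):
        op = MNEMONICS.get(word >> 12)
        return f"{op} {word & 0xFFF:#x}" if op else None   # None = illegal

    for word in range(1 << 16):             # all 65,536 possible words
        text = disassemble(word)
        if text is not None:
            assert assemble(text) == word, f"mismatch at {word:#06x}"
    print("assemble(disassemble(w)) == w for every legal word")

That only establishes the round trip in one direction, for every legal word; the reverse direction, plus a real ISA with addressing modes and macros, is where the effort actually goes.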
october8140
Yes.
jebarker
Puts things into perspective. I often wonder how so many people survive without a UI debugger, since command-line debugging seems too clunky to me.
thadk
“Hello world” takes on new dimensions in this context.
RamRodification
void explore()
metalman
and serious latency
freefaler
Pff... and here I am, debugging a stupid bug from 0.00001 miles away for the third day.
ordu
I'm a little surprised by their approach. I mean, it did work, which is cool and is the most important thing. Still, I can't stop thinking that I wouldn't have slept until I'd written an assembler and a disassembler. Judging by the presentation, they had no assembler or disassembler for several months and just lived with that.
An asm/disasm pair can help find typos in listings, find xrefs, or even do some static analysis to check for the classes of mistakes they knew they could make. It wouldn't replace any of the manual work they did, but it could add some confidence on top of it. Maybe they wouldn't have ended up with 50/50 priors for success, but with something like 90/10.
Strange. Do I underestimate the complexity of writing an asm/disasm pair?
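For example, even a dumb xref pass over a disassembled listing would flag every instruction that touches a given address, which is the kind of cross-check I mean. A sketch with an invented listing format (xrefs_to is just a name I made up):

    # Toy xref pass over a disassembled listing: list every instruction
    # whose operand references a given address.
    def xrefs_to(listing, target):
        hits = []
        for addr, line in enumerate(listing):
            mnemonic, operand = line.split()
            if int(operand, 0) == target:
                hits.append((addr, line))
        return hits

    listing = ["LOAD 0x10", "ADD 0x1", "JUMP 0x10"]
    for addr, line in xrefs_to(listing, 0x10):
        print(f"{addr:#06x}: {line}")
    # 0x0000: LOAD 0x10
    # 0x0002: JUMP 0x10

Run against a listing of the real code before and after each hand patch, something like that would catch a whole class of transcription slips.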