Code and Trust: Vibrators to Pacemakers
42 comments
· July 6, 2025
wcunning
That’s actually the standard in automotive and industrial applications — likelihood of failure vs consequences of failure: set the “acceptable” risk low and show proof that you’re not any higher than that level. Medical devices actually have a much stricter “contributes in any way to any patient harm” risk analysis.
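Roughly, that scheme works like this (a toy sketch; the ratings and threshold here are made up, not from any actual standard):

    # Toy likelihood x consequence risk matrix, each rated 1-5.
    ACCEPTABLE = 6  # hypothetical "acceptable" risk threshold

    def risk_score(likelihood, severity):
        return likelihood * severity

    # You must show evidence each failure mode scores at or below it.
    print(risk_score(1, 5) <= ACCEPTABLE)  # rare but severe -> True
    print(risk_score(3, 4) <= ACCEPTABLE)  # needs mitigation -> False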
fjfaase
I know someone who developed medical devices, not as critical as pacemakers, and he kind of boasts that he probably killed (as in caused the premature death of) some people, but also extended the lives of many, many more.
kitd
Tbh, it's the same kind of survival economics as invasive treatments like surgery anyway. No doctor can or should guarantee 100% survival.
guzik
Had an interesting encounter recently. Was coming back from a festival and got talking to this woman who mentioned she has to be careful about her health. Turns out she has a pacemaker. Since I'm in the medical device space (much lower risk category though), I was curious about her experience. What struck me was how much she knew about her device: the technical specs, failure modes, battery life, everything. Makes total sense when you think about it (if your life depends on a piece of tech sitting in your chest, you'd probably want to understand it inside out too).
MangoToupe
> To be honest, my threshold is quite low, I think in 1 year we will be able to take any piece of code and audit it with `claude`, and I will put my life on the line using it.
Bold! I wonder what leads to this sort of confidence.
> Radical transparency is the only answer. I am convinced that open code, specs and processes must be a requirement going forward.
Yes. Transparency is the only foundation of trust.
jackdoe
> Bold! I wonder what leads to this sort of confidence.
honestly just skimming through MISRA C:2004
> Rule 17.4 (required): Array indexing shall be the only allowed form of pointer arithmetic. Array indexing is the only acceptable form of pointer arithmetic, because it is clearer and hence less error prone than pointer manipulation.
I don't think compliance with such rules really solves things, and in some cases it just introduces complexity, e.g. there is also a rule against ever using recursion: Rule 16.2 (required): Functions shall not call themselves, either directly or indirectly
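For a feel of what Rule 16.2 compliance does in practice, here's a sketch (in Python rather than C, to match the article's code; the function names are mine):

    # Mirrors the mathematical definition, but violates MISRA C:2004
    # Rule 16.2 (no recursion, directly or indirectly).
    def fib_recursive(n):
        if n <= 1:
            return n
        return fib_recursive(n - 1) + fib_recursive(n - 2)

    # Rule-compliant rewrite: bounded iteration, no recursion.
    def fib_iterative(n):
        a, b = 0, 1  # F(0), F(1)
        for _ in range(n):
            a, b = b, a + b
        return a

The rewrite is arguably safer (no stack growth) but no longer mirrors the definition, which is the kind of complexity trade-off I mean.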
but also, it is easier to judge if a book is good than to write a good book
> Yes. Transparency is the only foundation of trust.
teenagers and corporations disagree :)
earnestinger
Reasons why:
A corporation does not care about life per se; it looks at the impact on profit.
A corporation also has the option to sue an offender into oblivion (via thoroughly written contracts).
darkwater
(I'm going philosophical here)
> Yes. Transparency is the only foundation of trust.
If there is transparency, there is no need to trust. You can verify yourself, and if you are verifying yourself, you are not trusting. Trust means believing what someone tells you without following all the trails.
Now, transparency, openness and trails to follow are GOOD, and they should always be there. Because if you don't trust, you can check everything yourself. Or because if you forgot something, or start from scratch, you can go back in time and learn what happened and why and who did what, and have a picture in your mind pretty close to actual reality.
Now, we can argue that after a few iterations where you did check someone's or something's output completely, thanks to its transparency, you build trust in it and will not check it in depth anymore. But you could also trust someone based on the outcome alone and not the internal procedure. If the outcome was aligned with the promises and it's good enough for you, you end up trusting that person anyway.
pbhjpbhj
Transparency pairs with a regulatory environment(*). I can trust a company because they have to have their ingredients tested, and aren't allowed to include poisons under UK regulations -- I don't have to verify (I'm not a biologist of any sort, I couldn't). Since the company can be held to account through their transparency, I can assume (!) they are not being evil.
Now, it depends what is on the line; even an open-hardware device could have had a routine built into a chip that seeks to set the battery on fire. Do you decap and check the silicon? Decompile all the firmware and inspect every routine?
For almost anything it's impossible to make a thorough check by yourself.
Transparency means the provider enables such actions though. That builds trust.
* regulatory environment includes the possibility that you can find a person and enact violence on them.
tzs
> This program will vibrate with increasing frequency using the Fibonacci numbers 2, 3, 5, 8, 13, 21, 34..
No it won't. The sequence it follows is 2, 1, 1, 1, ...
After spotting that I was curious if LLMs would also spot it. I asked Perplexity this and gave it the code:
> This is Python code someone posted for a hypothetical vibrator. They said it will vibrate with increasing frequency following Fibonacci numbers: 2, 3, 5, 8, ...
> Does the frequency really increase following that sequence?
It correctly found the actual sequence, explained why the code fails, and offered a fix.
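For concreteness, here's a minimal reconstruction of the bug as described in this thread (the article's exact code isn't reproduced here, so the details are guesses): the loop feeds fib its own previous output instead of an incrementing index.

    import time

    def fib(n):
        # Standard recursive Fibonacci (illustrative; the article's
        # exact definition isn't shown in this thread).
        if n <= 1:
            return n
        return fib(n - 1) + fib(n - 2)

    frequency = 2
    for _ in range(5):
        print(frequency)            # stand-in for vibrate(frequency)
        time.sleep(0.1)
        frequency = fib(frequency)  # the bug: fib of its own last output

    # Prints 2, 1, 1, 1, 1 -- not 2, 3, 5, 8, ...
    # The intent needs a separate counter: n += 1; frequency = fib(n)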
maxbond
> The sequence it follows is 2, 1, 1, 1, ...
I just ran it, and as long as you start counting at 1, their code works fine, outputting `1, 1, 2, 3, 5, ...`. (Not sure why they say the Fibonacci sequence starts with 2, that's odd.)
amluto
One can reasonably debate where the Fibonacci sequence starts. Their fib implementation is unquestionably broken for negative inputs, but fortunately they don’t supply negative inputs.
But try reading the code that calls fib. It’s so outrageously wrong that it’s fairly easy to read it and pretend they wrote something sensible. Never mind that there isn’t actually a straightforward single-line fix - next_fib would be an entirely different beast from fib.
If they had started with frequency = 4, the real effect of the code would have been to send a couple pulses and then to spend very rapidly (worse than exponentially) increasing time and stack space trying to compute fib.
maxbond
I had missed that, thanks. That is whack. (The input to fibonacci() is its last output.)
numpad0
this doesn't need an if statement, just 3 variables: static int nm2 = 0, nm1 = 1, n; int f_next(void) { n = nm1 + nm2; nm2 = nm1; nm1 = n; return n; }
The Fibonacci sequence is defined as "F(0) = 0, F(1) = 1, F(n) = F(n-1) + F(n-2)". The author implemented F(1) as a special case, but it doesn't need to be. That's the weird part.
modderation
It's even more fun when you extend it to negative integers, reals, and the complex plane!
Matt Parker (Stand-up Maths) delves into this in a very approachable manner: https://www.youtube.com/watch?v=ghxQA3vvhsk
kh_hk
Weird, after the fix the vibrator sleeps longer the longer it is used. Also, at some point it bursts into flames
Onavo
Now ask the AI to write a formal verification program to prove it :)
jackdoe
ah, i fixed it, i wasn't actually thinking about the code when writing it, didn't even try it :facepalm: absolute brainfart haha. I guess it makes the transparency point even stronger
amaterasu
I'm trivialising, but a lot of software in medical devices is turning a GPIO pin on/off in response to another pin, then announcing that it did so. The piece missing from the article is that the assumed probability of software/firmware (or anything really) failing is 1.0. Everything is engineered around the assumption that things (_especially_ software) WILL fail, and around minimising the consequences when they do. LLMs writing the code will happen soon; it's GPIO pin control, after all. LLMs proving the code is as safe as possible, and that they have thought about the failure modes, will take a while.
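Something like this, for the trivial case (a MicroPython-flavored sketch, not any real device's firmware; the pin numbers are made up):

    from machine import Pin  # MicroPython

    sense = Pin(4, Pin.IN)   # input line we respond to
    drive = Pin(5, Pin.OUT)  # output line we control

    def tick():
        level = sense.value()          # read the input pin
        drive.value(level)             # set the output pin to match
        print("drive set to", level)   # announce that we did so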
mrheosuper
> How do we get to a point where we `trust` it?
You don't, just as you don't trust code written by a human. The code needs to be audited, line by line.
AstralStorm
This is the case for formal proofs. You can even generate V code from them, much like seL4 is written.
xwolfi
Audited by whom?
AlotOfReading
Redundant verification processes. Think swiss cheese model. No system is perfect, so you layer independently imperfect systems to avoid catastrophic failures slipping through the cracks.
ulf-77723
Great article! I think it always depends on which kind of code is being used in which industry. Anything related to your life will need to be guarded by humans: critical infrastructure, medical devices.
If I think about anything which might not directly impact human life, AI code is OK for me.
Getting to the point where the majority trusts generated code the way we trust our e-mail spam filter may be difficult. „The machine will probably do the right thing, no need to interact.“
gnfargbl
I'm surprised not to see a mention of formal methods in the article. I know these are kind of the "nuclear fusion" of Computer Science, but so were neural nets until relatively recently.
I would have guessed that AI ought to be pretty good at converting code into formally verifiable forms. Is anyone working on that?
fjfaase
Vibrators might have killed more people than pacemakers, because whenever you use a vibrator it could introduce pathogens into places where they could lead to a fatal infection or cause cancer. There are many more people using vibrators than there are people with pacemakers installed, and for vibrators there are weaker requirements with respect to safety and proper use.
elric
In contrast, I think vibrators and dildos save lives. Now you may think I'm being silly or snarky, but I'm being serious.
I worked at a hospital ER when I was still in school. My job was fetching and archiving patient records (physical paper files back in those days). I had to add the most recent report to the file and then return it to the archives. Of course I frequently read those files, and I was very aware of who was coming in and why.
The number of foreign objects lodged in anuses and vaginas was quite high. And it was always stupid objects, like a stick of deodorant, or a candle (?), or a fucking doorknob. It was never a dildo or a vibrator.
Using sex toys made out of (relatively) body safe materials, which are easy to clean, which have flared ends, definitely makes the whole experience a lot safer, and makes a trip to the ER a lot less likely.
jackdoe
an interesting article gibbitz linked in a dup post
https://www.edn.com/toyotas-killer-firmware-bad-design-and-i...
> Embedded software used to be low-level code we’d bang together using C or assembler. These days, even a relatively straightforward, albeit critical, task like throttle control is likely to use a sophisticated RTOS and tens of thousands of lines of code.
xvilka
You can use stricter languages for both, like Rust, for example, or even stricter ones, like the SPARK dialect of Ada. If AI becomes able to produce code in these languages, the code will be far more trustworthy.
imron
Vibe coding…
ramity
I'll provide a contrasting, pessimistic take.
> How do you write programs when a bug can kill their user?
You accept that you will have a hand in killing users, and you fight like hell to prove yourself wrong. Every code change, PR approval, process update, unit test, hell, even meetings all weigh heavier. You move slower, leaving no stone unturned. To touch on the pacemakers example, even buggy code that kills X% of users will keep Y% alive/improve QoL. Does the good outweigh the bad? Even small amounts of complexity can bubble up and lead to unintended behavior. In a corrected vibrator example, what if frequency becomes so large it overflows and leads to burning the user? Youch.
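That overflow failure mode, sketched (the 16-bit register width here is made up):

    MASK = 0xFFFF  # hypothetical 16-bit frequency register on a small MCU

    a, b = 0, 1
    for _ in range(25):
        a, b = b, (a + b) & MASK  # wraps silently past 65535
    print(a)  # true F(25) is 75025; the register reads 9489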
The best insight I have to offer is that time is often overlooked and taken for granted. I'm talking Y2K-style date types, time drift, time skew, special relativity, precision, and more. Some of the most interesting and disturbing bugs I've come across all occurred because of time. "This program works perfectly fine, but after 24 hours it starts infinitely logging." If time is an input, do not underestimate time.
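One classic instance of this (a hedged sketch; the constants are illustrative): a timeout computed from a 32-bit millisecond tick counter works perfectly for about 49.7 days, then the counter wraps and the naive comparison breaks.

    WRAP = 2**32  # a 32-bit millisecond counter wraps after ~49.7 days

    def timed_out_naive(now, start, limit_ms):
        # Breaks at wraparound: now - start goes hugely negative.
        return now - start >= limit_ms

    def timed_out_safe(now, start, limit_ms):
        # Wraparound-safe: subtract modulo 2**32, as the counter does.
        return (now - start) % WRAP >= limit_ms

    start = WRAP - 100  # sampled just before the counter wraps
    now = 50            # 150 ms later, after the wrap
    print(timed_out_naive(now, start, 120))  # False: timeout never fires
    print(timed_out_safe(now, start, 120))   # True: 150 ms elapsed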
> How do we get to a point where we `trust` it?
You traverse the entire input space to validate the output space. This is not always possible. In those cases, audit compliance can take the form of traversing a subset of the input space deemed "typical/expected" and moving forward with the knowledge that edge cases can exist. Even with fully audited software, oddities like a cosmic-ray bit flip can occur. What then? At some point, in this beautifully imperfect world, one must settle for good enough over perfection.
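As a toy version of traversing an input space (the function name and safety bound are hypothetical): exhaustively check an update routine over every 16-bit input and assert the output stays in a safe range.

    SAFE_MAX = 500  # hypothetical maximum safe frequency, in Hz

    def next_frequency(freq):
        # Hypothetical update rule, clamped to the safe range.
        return min(freq * 2, SAFE_MAX)

    # 16 bits of input is small enough to traverse exhaustively.
    for freq in range(2**16):
        out = next_frequency(freq)
        assert 0 <= out <= SAFE_MAX, f"unsafe output {out} for input {freq}"
    print("all 65536 inputs validated")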
The astute readers above might be furiously pounding their keyboards mentioning the halting problem. We can't even verifiably prove that a particular input will produce an output, let alone verify an entire space.
> I am convinced that open code, specs and processes must be a requirement going forward.
I completely agree, but I don't believe this will outright prevent user deaths. Having open code, specs, etc. aids accountability, transparency, and external verification. I must stress that I feel there are pressures against this, as there is monumental power in being the only party able to ascertain the facts.