Teaching a new way to prevent outages at Google
30 comments
March 20, 2025
hinkley
> In one particular case at Google, a software controller–acting on bad feedback from another software system–determined that it should issue an unsafe control action. It scheduled this action to happen after 30 days. Even though there were indicators that this unsafe action was going to occur, no software engineers–humans–were actually monitoring the indicators. So, after 30 days, the unsafe control action occurred, resulting in an outage.
Isn't this the time they accidentally deleted governmental databases? I love the attempt at blameless generalization, but wow.
decimalenough
If you're referring to the time they nuked an Australian retirement fund's VMware setup, no, that was basically a billing screwup. An operator left a field blank, the system assumed that meant a 1-year expiry, and dutifully deleted it after 1 year was up.
https://cloud.google.com/blog/products/infrastructure/detail...
cynicalsecurity
The most unbelievable thing about that case was that Google actually deleted the data instead of keeping it forever and using it for ads.
mimikatz
Thanks to all the people here pointing out how bloated, overly broad, and useless this is. I went in to read it thinking I would pick up something applicable, but it was written in such an overwrought, humanless style that I gave up, having learned nothing, and thought the problem was me. I am glad to learn I am not alone.
smcameron
> "The class itself is very well structured. I've heard about STPA in past years, but this was the first time I saw it explained with concrete examples. The Google example at the end was also really helpful."
But the article itself contains no concrete examples.
eitland
If you'd like examples from outside Google, STPA seems to have been around for years:
https://kagi.com/search?q=STPA&r=no&sh=6ZXVCq1feUflSKjoBMMXm...
irjustin
I don't understand and I really really want to.
This seems so cool at a scale that I can't fathom. Tell me specifically how it's done at google with regards to a specific service, at least enough information to understand what's going on. Make it concrete. Like "B lacks feedback from C", why is this bad?
You've told me absolutely nothing and it makes me angry.
hinkley
This link at the bottom is less confusing:
https://www.usenix.org/publications/loginonline/evolution-sr...
SlightlyLeftPad
This has really always been the case with Google philosophy docs. They tend to be very abstract and academic.
The biggest danger is taking everything at face value and structuring your work or organization the same exact way based solely on these documents. The reality is, the vast majority of companies are not Google and will never encounter Google’s problems. That’s not where the value is though.
bbkane
Maybe less of a philosophy doc, but I found the Google SRE workbook to have plenty of helpful concrete examples
primitivesuave
This would have been a lot more compelling had they provided a single real-world example of STPA actually solving a reliability issue at Google.
dooglius
> After working with the system experts to build this control structure, we immediately noticed missing feedback from controller C to controller B–in other words, controller B did not have enough information to support the decisions it needed to make
There is a feedback loop through D? And why does the same issue not apply to the missing directed edge from B to D?
EDIT: I figured it out on a reread: the vertical up/down orientation matters for whether an edge represents control vs. feedback, so B is merely not controlling D, which is fine. But if B is only controlling C as a way to get through to D (which is what I would have guessed, absent other information), what's the issue with that?
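To make the control/feedback distinction concrete, here's a minimal sketch (my own illustration, not from the article, with an assumed topology since the real graph isn't published): model the downward "control" edges and upward "feedback" edges separately, then flag any controller that issues control actions without receiving feedback from the thing it controls, i.e. the "B lacks feedback from C" situation.

    # Minimal sketch with an assumed topology (A controls B, B controls C,
    # C controls D). Downward edges are control actions, upward edges are
    # feedback, following the convention described above.
    control = {
        "A": ["B"],
        "B": ["C"],
        "C": ["D"],
    }
    feedback = {
        "B": ["A"],        # B reports back to A
        "D": ["C"],        # D reports back to C
        "C": ["A"],        # but C reports to A, not to its controller B
    }

    def missing_feedback(control, feedback):
        """Return (controller, controlled) pairs with no direct feedback edge."""
        gaps = []
        for controller, controlled_list in control.items():
            for controlled in controlled_list:
                if controller not in feedback.get(controlled, []):
                    gaps.append((controller, controlled))
        return gaps

    print(missing_feedback(control, feedback))
    # [('B', 'C')]  -> B issues control actions to C without feedback from C

Under these assumptions the only flagged pair is (B, C): B is controlling C "blind," which is the gap the article says they noticed immediately.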
MinelloGiacomo
STAMP/STPA work well as a model and methodology for complex systems; I was interested in them a while ago in the context of cyber risk quantification. Having a fairly easy model for reasoning about unsafe control actions is not a given in other approaches. I just wish they were adopted by more companies; I have seen too many of them stuck with ERM-based frameworks that do not make sense most of the time when scaled down to system-level granularity.
mianos
This is peak corporate drivel—bloated storytelling, buzzwords everywhere, and a desperate attempt to make an old idea sound revolutionary.
The article spends paragraphs on some childhood radio repair story before awkwardly linking it to STPA, a safety analysis method that’s been around for decades. Google didn’t invent it, but they act like adapting it for software is a major breakthrough.
Most of the piece is just filler about feedback loops and control structures—basic engineering concepts—framed as deep insights. The actual message? "We made an internal training program because existing STPA examples didn’t click with Googlers." That’s it. But instead of just saying that, they pad it out with corporate storytelling, self-congratulation, and hand-wringing over how hard it is to teach people things.
The ending is especially cringe: You can’t afford NOT to use this! Classic corporate play—take something mundane, slap on some urgency, and act like ignoring it is a reckless gamble.
TL;DR: Google is training engineers in STPA. That’s the whole story.
sepositus
I'm not sure if things have changed over the past five years, but this is exactly the stuff you'd throw in a promotion packet or maybe in a performance (perf) review to hit that mythical "superb" rating.
The breaking point for me (and why I left after almost a decade) was when people started getting high ratings for fixing things they had an original hand in causing. Honestly, the comfiest job in the world if you're a professional bullshitter.
dataflow
By "had a hand in causing" do you mean "they should have prevented it", or do you just mean "they were involved in the causation"? Because sometimes you're forced to do things you know are wrong, because that's what other people are making you do, and in that case you still "have a hand" in causing.
praptak
Something in between. Like "pushed to implement a feature without the safety measures." Then, when the outages started happening, they implemented an Outage Prevention Program, i.e., the safety measures that should have been there from the start.
Subsequent data collection demonstrated an X% drop in outage frequency, clearly demonstrating readiness for promotion. Data driven.
SlightlyLeftPad
What I’ve been seeing from Google’s products lately suggests that these are the only ones still there. It’s a house of cards built by professional bullshitters. Google’s culture has entered, or is already deep within, the bullshit era.
z3t4
It will happen in all companies that have monopoly status. If they start to struggle, they will just increase the rent.
ikiris
You can’t swoop in and be a hero and make impact without a meteor.
hansmayer
The point about basic engineering concepts is spot on. But I wonder how much it has to do with the influx of superficially educated "tech" people across the technology sector. Not to downplay the value of self-learning (I'm a bit of an autodidact myself), but the number of people who switch into the mythical "tech" without ever having heard of a differential equation is worrying. Hence companies unfortunately really do seem to need to explain concepts like a feedback loop to people who have only ever heard of it in the context of a performance review. The article itself is a word salad, though; the start reads like an SEO-optimised cooking blog ;)
agumonkey
Oh wow, a shallow, performative communication piece, in a way?
croisillon
an early April fool's?
ikiris
... So where's the training or examples of application?
jldugger
I do see one example at the bottom of https://www.usenix.org/publications/loginonline/evolution-sr.... But I'm not sure it's particularly compelling?
In other words, STPA is a design-review framework for finding less obvious failure modes. FMEA is more popular, but it relies on listing all of the knowable failure modes in a system, so the failure modes you haven't thought of never make it onto the list. STPA helps fill in some of those gaps. A rough sketch of the difference, with assumed example failure modes and control actions of my own (not from either article), is below.
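    # Rough sketch of the difference (my illustration, assumed examples):
    # FMEA starts from a list of component failure modes you already know about;
    # STPA starts from control actions and systematically asks how each one
    # could be unsafe, which can surface cases nobody put on the list.

    # FMEA-style: enumerate known failure modes per component.
    fmea = {
        "disk": ["full", "corrupted"],
        "network": ["partition", "high latency"],
    }

    # STPA-style: take each control action and apply the standard guide words.
    control_actions = ["delete project", "scale down cluster"]
    guide_words = [
        "not provided when needed",
        "provided when unsafe",
        "provided too early / too late / out of order",
        "stopped too soon / applied too long",
    ]

    for action in control_actions:
        for word in guide_words:
            # Each line is a prompt: could this combination lead to a hazard?
            print(f"'{action}' {word}")

The FMEA dict only ever contains what you wrote down; the STPA loop generates candidate unsafe control actions mechanically, so the "deleted after 30 days because nobody was watching the indicator" class of problem at least gets asked about.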