The Real Story Behind Sam Altman’s Firing From OpenAI
119 comments
March 29, 2025 · Philpax
togetheragainor
I’m even more confused now than before I read the article:
- Sutskever and Murati compile evidence of Altman’s lies and manipulation to oust him.
- Sutskever emails the evidence to the board and suggests they act on it.
- The board fires Altman but refuses to explain why.
- Murati demands the board explain why.
- The board refuses, and Murati and Sutskever rebel against the board and petition with other employees to reinstate Altman.
It all makes no sense. And why wouldn’t the board just explain their decision if Murati herself was imploring them to do so?
botro
I read the article on archive and figured there was a big chunk missing. It really does not make any sense.
Sutskever and Murati were methodical: they waited until the board was favorable to the outcome they wanted, engaged with board members individually to lay the groundwork... and then just changed their minds when it actually happened!?
jdminhbg
The article says Sutskever was blindsided by the rank-and-file being on Sam's side. Presumably he thought the outcome was going to be business as more-or-less usual but with Murati or someone as CEO and then panicked when that didn't happen.
ethbr1
Or someone said "If you don't switch and back me, I am going to fight every bit of your compensation. Or you can back me and leave with favorable terms."
Panic is a less likely driver.
Philpax
The board did not plan or execute the ouster well, which forced Murati and Sutskever to counter their own coup to maintain the stability of the company. The board and Sutskever were expecting the general support of the company, so they had no real backup plan or evidence ready that they could publicly release.
togetheragainor
Why couldn’t they release the evidence? At least some of it is here in the article, and it’s damaging to Sam but not particularly damaging to the company. If Murati demanded they release the evidence, why refuse?
Philpax
Murati didn't demand they release the evidence, as far as I could tell. The board is described as not wanting to throw Murati under the bus by stating the evidence came from her, which makes sense if their goal was to install her as the new CEO.
jdminhbg
> And why wouldn’t the board just explain their decision if Murati herself was imploring them to do so?
I think because they were in over their heads. They were on the board to run a non-profit and then it metastasized into a high-stakes Fortune 50-sized company.
JCM9
People cared about the OpenAI drama when it looked like they might have some real edge and the future of AI depended on them. Now it’s clear the tech is cool but rapidly converging into a commodity with nobody having any edge that translates into a sustainable business model.
In that reality they can drama all they want now, nobody really cares anymore.
darioush
Yes, and open source models + local inference are progressing rapidly. The whole API idea is kind of limited by the fact that you have to round-trip to a datacenter and trust someone with all your data.
Imagine when OpenAI has their 23&me moment in 2050 and a judge rules all your queries since 2023 are for sale to the highest bidder.
ptero
It doesn't need to wait until 2050. The queries would be for sale as soon as they stop providing a competitive advantage.
beeflet
Even worse for these LLM-as-a-service companies is that the utility of open source LLMs largely comes down to customization: you can get a lot of utility by restricting token output, varying temperature, and lightly retraining them for specific applications.
The use-cases for LLMs seem unexplored beyond basic chatbot stuff.
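For example, here's a minimal sketch of that kind of customization using the Hugging Face transformers library (gpt2 is just a stand-in model, and the two-label setup is purely illustrative):

    # Sketch: vary temperature and restrict token output so the model can
    # only answer with one of two labels. gpt2 is a stand-in; swap in any
    # causal LM.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("Review: great battery life. Sentiment:",
                       return_tensors="pt")

    # Only the first subword of " positive" / " negative" is ever allowed.
    allowed_ids = [tokenizer.encode(w)[0] for w in (" positive", " negative")]

    out = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.7,  # tune per application
        max_new_tokens=1,
        prefix_allowed_tokens_fn=lambda batch_id, ids: allowed_ids,
    )
    print(tokenizer.decode(out[0][-1]))

Light retraining for a specific application would then be something like a LoRA-style fine-tune on top of the same checkpoint.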
techjamie
I'm surprised how little discussion there is of their utility for turning unstructured data into structured data, even with some margin of error. It doesn't take an especially large model to accomplish it, either.
I would think entire industries could reform around having an LLM as a first pass over data, with software and/or human error checking, at a significant cost reduction over previous strategies.
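As a rough sketch of that first-pass idea, using the OpenAI Python client in JSON mode (the model name, field names, and prompt here are just assumptions for illustration):

    # Sketch: LLM as a first pass over unstructured text, with cheap
    # programmatic error checking before the record enters a pipeline.
    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    text = "Invoice 1042 from Acme Corp, due 2025-04-15, total $1,980.00"

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any JSON-mode-capable model
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract invoice_id, vendor, due_date, total_usd as JSON."},
            {"role": "user", "content": text},
        ],
    )

    record = json.loads(resp.choices[0].message.content)

    # The error-checking pass: validate before trusting the model's output.
    missing = {"invoice_id", "vendor", "due_date", "total_usd"} - record.keys()
    if missing:
        raise ValueError(f"LLM output missing fields: {missing}")
    print(record)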
TradingPlaces
Selling tokens is likely to be a tough business in a couple of years
csallen
There's more to business than tech. There's more to business than product.
The software behind Facebook as an app wasn't particularly unique, yet it eclipsed the competition. The same could be said for Google. Google didn't even have any real lock-in for years, but it still owned consumer mindshare, which gave it the vast majority of search traffic, which made it one of the most valuable companies in the world.
ChatGPT is in a similar position. The fact of the matter is, the average person knows what ChatGPT is and how to use it. Many hundreds of millions of normal people use ChatGPT weekly, and the number is growing. The same cannot be said of Claude, DeepSeek, Grok, or the various open source models.
And the gap is massive. It's not even close. It's like 400M weekly ChatGPT actives vs 30M monthly Claude actives.
So yes, the average Hacker News contrarian who thinks their tiny bubble represents the entire world might think that "nobody cares," in part because nobody they know cares, and in part because that assessment aligns with their own personal biases and desires.
But anyone who's been paying attention to how internet behemoths grow for the past 30 years certainly still cares about OpenAI.
lolinder
You can't compare Facebook with ChatGPT because the costs per user are in totally different orders of magnitude. One $5/mo VPS can serve the traffic of several hundred thousand Facebook users, while ChatGPT needs an array of GPUs per active user. They can optimize this somewhat, but never as much as Facebook can.
This means that they're stuck with more expensive monetization plans to cover their free tier loss leader, hence the $200/mo Pro subscription. And once you're charging that kind of price to try to make ends meet, you're ripe for disruption no matter how good your name recognition.
reasonableklout
"ChatGPT needs an array of GPUs per active user" - nit: you're exaggerating by a few orders of magnitude.
First, queries from users can be combined and fed into servers in batches so that hundreds of queries can be concurrently served by a single node. Second, people aren't on and asking ChatGPT questions every second of every day. I'd guess the median is more like ~single digit queries per day. Assuming average response length of 100 tokens and throughput of 50 tok/s at batch size 50, that's 25 QPS or 2.1M queries per day, or 420k users served per node at 5 queries per user per day.
Now, a single 8xH100 node is a lot more expensive than $5/mo, so you're directionally correct there, but I'd wager you can segment your market aggressively and serve heavily distilled/quantized models (small enough to fit onto single commodity GPUs, or even CPUs) to your free tier. Finally, this is subject to Huang's Law, which says every 2 years the cost of the same performance will more than halve.
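For what it's worth, the back-of-envelope math above checks out, under the stated assumptions (50 tok/s per sequence at batch size 50, 100-token responses, 5 queries per user per day):

    # Back-of-envelope check of the serving estimate above.
    batch_size = 50            # concurrent sequences per node
    tok_per_sec = 50           # decode speed per sequence
    resp_tokens = 100          # average response length
    queries_per_user = 5       # per user per day

    secs_per_response = resp_tokens / tok_per_sec        # 2.0 s
    qps = batch_size / secs_per_response                 # 25 QPS
    queries_per_day = qps * 86_400                       # ~2.16M
    users_per_node = queries_per_day / queries_per_user  # ~432k
    print(qps, queries_per_day, users_per_node)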
csallen
People said similar things about Facebook. "Oh their user growth might be amazing, but they're not making any money, it's not a real business."
But it turns out that with enough funding, you can prioritize growth over profit for a very long time. And with enough growth, you can raise unlimited funds before you get to that point. And going this route is smart and effective if you want to get to a $1T valuation in under a decade.
So yeah, ChatGPT's margins might not be as high as Facebook's. But it doesn't really matter at this point, they're in growth mode. What matters is whether or not they'll be able to turn their lead and their mindshare into massive profits eventually, and while we can speculate on that, it's far too early to definitively say the answer is no.
bagacrap
Rather than getting into the nitty-gritty details of monetization: when we ask ourselves whether OpenAI can nail product like Facebook did (I guess) and become the next tech giant, I think we have to ask whether that's even possible when the tech industry is as established as it is.
You would think existing megacaps would be all over any new market if there is a profit to be made. Facebook's competition was basically other startups. That said, Google seems to be dropping the ball almost as bad as Yahoo.
But sure, if there's absolutely no way to make money from consumer AI then that will also make it hard for oai to win the game.
mellosouls
> The software behind Facebook as an app wasn't particularly unique, yet it eclipsed the competition. The same could be said for Google.
I remember the search engines of the time and Google was a quantum leap.
ChatGPT is even more revolutionary, but whatever Google is now, it was once brilliant.
bagacrap
Useful search unlocked the web. I will take that over LLMs in their present state.
csallen
I agree, just saying, ChatGPT was a quantum leap, too. That's why it has all the consumer mindshare.
CPLX
> Google didn't even have any real lock-in for years, but it still owned consumer mindshare, which gave it the vast majority of search traffic, which made it one of the most valuable companies in the world.
This isn't correct at all. Google's search engine was an important stepping stone to the behavior that actually gave them lock-in, which was an aggressive, anti-competitive and generally illegal effort to monopolize the market for online advertising through acquisitions and boxing out competitors.
It really was only possible because, for whatever reason, we decided to completely stop enforcing antitrust laws for a decade or two.
disgruntledphd2
The Microsoft antitrust case also gave them a couple of years to grow without that threat.
player1234
400 million use it for free, and you can give away 400 million of anything for free. The question is how many are willing to pay the monthly fee required to stop OpenAI from bleeding $5 billion/year and to return the promised trillions to investors.
moralestapia
[flagged]
dang
Regardless of how wrong someone is or you feel they are, can you please make your substantive points thoughtfully? This was not a good Hacker News comment, and you've unfortunately been doing this repeatedly lately.
lolinder
OpenAI is spending $2 for every $1 it earns. It's certainly eating its investors' lunch, but it's not a sustainable business yet and from all accounts doesn't have a clear plan for how to become one.
Meanwhile, the ZIRP policies that made this kind of non-strategy strategy feasible are gone.
spwa4
I wouldn't worry. Retracting ZIRP policies gave governments 2 choices: reduce spending by ~10% on average (in Europe), or cheat and scheme to bring them back, on a 2 year timer.
Interest rates rose, but came back down before the 2 years were even up (the rate rise started 27 Jul 2022, and rates started coming back down 12 Jun 2024), governments have been caught cheating, and the number of central bankers replaced has gone up dramatically. Oh, and none of the governments have reduced spending. Literally not a single one. In fact, Germany has agreed to an unprecedented increase in debt financing of its government.
In other words, ZIRP, even negative rates, is coming back, and a lot sooner than most people think. Your next house, despite everything that's happened, will be more expensive. But I doubt this will save either OpenAI or Tesla.
jackschultz
>They had banked on Murati calming employees while they searched for a CEO. Instead, she was leading her colleagues in a revolt against the board.
I finally finished the fourth of Caro's books about LBJ, "The Passage of Power", the largest part of which is about how LBJ dealt with the assassination. Over and over it shows how LBJ made sure that nobody (meaning world leaders, citizens, others in government, and, relevant here, those in the Kennedy administration) would feel lost and want to resign. Caro makes sure to note how difficult a task this was and how it required LBJ to act differently than normal, but also how important it was to keep things from going into disarray, which easily happens.
Side note: there are astounding accounts of how bills that weren't going to get through Congress under Kennedy were pushed through and made possible by Johnson. A quote ending one chapter, from Richard Russell, a Southern segregationist and racist: "You know, we could have beaten John Kennedy on civil rights, but we can't beat Johnson." On the other side, Caro makes certain to show how the coming issues of Vietnam reveal the darker side of LBJ, so as not to get fully caught up in his stabilization of power and civil rights successes.
Maybe these are all cases of those who want power are usually those who shouldn't have it.
fulafel
To save others the lookup: this is not talking about assassinations carried out by the administration abroad, but about the Kennedy assassination.
Workaccount2
So Sam let the cat out of the bag (ChatGPT) behind the backs of "safety review" and the board. That's probably why Google was caught flat-footed and how ChatGPT became a household name.
A dubious moral decision but an excellent business one. Perhaps the benefit of hindsight, where ChatGPT didn't cause immediate societal collapse, helps here.
lolinder
ChatGPT was already out when the story picks up; the article is talking about concerns about GPT-4.
And the story isn't about that single incident of Altman dodging review and working behind the backs of the board—it's about a pattern of deception and toxic management practices that culminated in Altman lying to Murati about what the legal department had said, which lie was given to the board as part of a folio of evidence that he needed to be ousted.
You're trying to distill a pattern of toxicity and distrust into a single decision, which softens it more than is fair.
_delirium
Yeah, to me the overt lying is more damning than any particular decision. If he owned the decision to bypass ethics review and release a model, fine, we can argue about whether that was prudent or not, but at least it's honest leadership. Lying that counsel said it was OK when they hadn't is a whole other thing! When someone starts doing that repeatedly, and it keeps getting back to you that stuff they said was just outright false, you can't work with them at all, IMO.
If this is something he's been doing for years, it becomes clearer why Y Combinator fired him, though they have been kind of cagey about it.
stogot
The question then remains: if you have a lying, toxic, manipulative boss, who would want to work for them? Especially their direct reports.
lolinder
From the story it sounds like the direct reports generally did not want to work with Altman, Brockman excluded. Even Murati was one of the primary instigators of the firing, but she changed her mind for reasons that the article doesn't really explore.
ethbr1
Money.
strogonoff
Aside from becoming the opposite of the values their name suggests, there are two main mistakes OpenAI made in my view: violating copyright when training, and rushing to release the chatbot. Stealing original work is going to bite them legally (opening them up to all sorts of lawsuits while killing their own ability to sue competitors piggy-backing off their model output, for example), and it is a special case of their general shortsightedness: they passed on an opportunity to make a truly Apple- or Amazon-scale business by applying strategy and longer-term thinking. Even if someone else had released an LLM chatbot before them, they had the funds and the talent to build something higher level, properly licensed, and much more difficult to commoditise.
If this was the fault of Altman, it is understandable that certain people would want him out.
Terretta
> violate copyright when training
If we could incrementally update our own brains by swapping cells for chips, what percentage of our brain has to be chips before us learning from a book is a violation of copyright?
When learning to recite a recent children's poem in kindergarten, what level of accuracy can a child attain before their ability to repeat it privately to one other person at a time is a copyright violation?
andai
This is one issue with Microsoft's Total Recall thing, right? I wonder how they're dealing with that.
3np
Want to abolish economic copyright altogether? I could get behind that. Making a legal exception because of some imagined future metaphysical property of this particular platform sounds like being fooled.
_heimdall
I don't think the concern is related specifically to training on computer chips with copyrighted content.
If you are going to use human brain cells to memorize protected content and sell it as a product, that's still an issue based on current copyright laws.
strogonoff
Others replied to this and I am still not sure what your point is. Are you saying big tech should be able to get away with this because LLMs are just like us humans?
anileated
> If we could incrementally update our own brains by swapping cells for chips, what percentage of our brain has to be chips before us learning from a book is a violation of copyright?
The same percentage at which you stop qualifying as human and become an unthinking tool, fully controlled by its operator to do whatever they want, without free will of its own and without any ethical concerns about abuse and slavery, as is the case with all LLMs.
(Of course, it is a moot point, because creating a human-level consciousness with chips is a thought experiment not grounded in reality.)
> When learning to recite a recent children's poem in kindergarten, what level of accuracy can a child attain before their ability to repeat it privately to one other person at a time is a copyright violation?
Any level thanks to the concept called human rights and freedoms, famously not applied to machines and other unthinking tools.
bcoates
Do the copyright claims have any legs at all? IANAL, but I thought it was pretty settled that statistical compilations of copyrighted works (indexes, concordances, summaries, full-text search databases) were considered "facts" and not copies.
(This would be separate from the contributory infringement claim if the model will output a copyrighted work verbatim)
strogonoff
1. Google was, and still is in some developed countries, under fire merely for summarising search results too much, so yes, I think the claims have legs.
> This would be separate from the contributory infringement claim if the model will output a copyrighted work verbatim
2. Commercial for-profit models have been shown to do that, and (other legal arguments aside, such as the model and/or its output being a derivative work) in some cases that was precisely the smoking gun for the lawsuit, if I recall correctly.
I have not seen any conclusive outcome; I suppose it will depend on jurisdiction.
john_texas
[dead]
patall
So, why didn't the board tell the other executives (and employees) what Murati had told them? When they themselves were in the firing line, why didn't Ilya tell that story? They could have just fired Murati (based on the screenshots presented) and continued as before. Or what am I missing?
siliconc0w
Yeah, I don't understand this either. Make a case or don't, but keeping it incredibly vague, especially when so much money was on the line due to the secondary, was never going to work.
dbuser99
So Sam was getting paid, possibly in egregious amounts, while lying to Congress?
jeremyjh
VC huckster lies to the public, news at 11.
ec109685
Why has safety taken such a back seat? Were the fears overblown back in 2022, or have model providers gotten better at fine-tuning the worst away?
1vuio0pswjnm7
Text-only, works where archive.is is blocked:
https://assets.msn.com/content/view/v2/Detail/en-in/AA1BRU7s
cowpig
From the outside it really seems like Peter Thiel was a brilliant kid who read Lord of the Rings and became obsessed with becoming the real world Sauron, manipulating weak-minded men in Silicon Valley into following the path of soulless corruption.
croes
Did he read the end?
abenga
Skill issue. Easily overcome.
generic92034
Not closely guarding the only means to destroy his source of power was such an obvious plot hole and oversight. ;)
rpmisms
You can innovate that part away /s
tiahura
TLDR:
In November 2023, OpenAI CEO Sam Altman was suddenly fired by the board—not because of AI safety fears or Effective Altruism, but due to concerns over his leadership, secrecy, and possibly misleading behavior. CTO Mira Murati and chief scientist Ilya Sutskever shared evidence of Altman’s actions, like skipping safety protocols and secretly controlling OpenAI’s startup fund.
The board didn’t explain the firing well, and it backfired. Murati, who at first supported the board, turned on them when they wouldn’t give clear reasons. Nearly all OpenAI employees, including Murati and Sutskever, threatened to quit unless Altman came back. With the company on the brink of chaos, the board caved and he was reinstated days later.
https://archive.is/xP4N1