Reflections on OpenAI
171 comments
· July 15, 2025 · hinterlands
harmonic18374
I would never post any criticism of an employer in public. It can only harm my own career (just as being positive can only help it).
Given how vengeful Altman can reportedly be, this goes double for OpenAI. This guy even says they scour social media!
Whether subconsciously or not, one purpose of this post is probably to help this guy’s own personal network along; to try and put his weirdly short 14-month stint in the best possible light. I think it all makes him look like a mark, which is desirable for employers, so I guess it is working.
m00x
Calvin cofounded Segment, which had a $3.2B acquisition. He's not your typical employee.
harmonic18374
He is still manipulable and driven by incentives like anyone else.
rrrrrrrrrrrryan
> There's no Bond villain at the helm. It's good people rationalizing things.
I worked for a few years at a company that made software for casinos, and this was absolutely not the case there. Casinos absolutely have fully shameless villains at the helm.
Bratmon
> It is fairly rare to see an ex-employee put a positive spin on their work experience
Much more common for OpenAI, because you lose all your vested equity if you talk negatively about OpenAI after leaving.
tedsanders
OpenAI never enforced this, removed it, and admitted it was a big mistake. I work at OpenAI and I'm disappointed it happened but am glad they fixed it. It's no longer hanging over anyone's head, so it's probably inaccurate to suggest that Calvin's post is positive because he's trying to protect his equity from being taken. (though of course you could argue that everyone is biased to be positive about companies they own equity in, generally)
gwern
> It's no longer hanging over anyone's head,
The tender offer limitations still are, last I heard.
Sure, maybe OA can no longer cancel your vested equity for $0... but how valuable is (non-dividend-paying) equity you can't sell? (How do you even borrow against it, say?)
fragmede
The "Silenced No More Act" (SB 331), effective January 1, 2022, in California, where OpenAI is based, limits non-disparagement clauses and retribution by employers, likely making that illegal in California, but I am not a lawyer.
swat535
Even if it's illegal, you'll have to fight them in court.
OpenAI will certainly punish you for this and most likely make an example out of you, regardless of the outcome.
The goal is corporate punishment, not the rule of law.
rvz
Absolutely correct.
There is a reason there was cult-like behaviour on X amongst the employees supporting bringing Sam back as CEO when he was kicked out by the OpenAI board of directors at the time.
"OpenAI is nothing without its people"
All of "AGI" (which in practice meant the Lamborghinis, penthouses, villas and mansions for the employees) was on the line and on hold if that equity went to zero, or if they were denied the chance to sell it for openly criticizing OpenAI after they left.
tptacek
Yes, and the reason for that is that employees at OpenAI believed (reasonably) that they were cruising for Google-scale windfall payouts from their equity over a relatively short time horizon, and that Altman and Brockman leaving OpenAI and landing at a well-funded competitor, coupled with OpenAI corporate management that publicly opposed commercialization of their technology, would torpedo those payouts.
I'd have sounded cult-like too under those conditions (but I also don't believe AGI is a thing, so would not have a countervailing cult belief system to weigh against that behavior).
torginus
Here's what I think: while Altman was busy trying to convince the public that AGI was coming in the next two weeks, with vague tales that were equally ominous and utopian, he (and his fellow leaders) were extremely busy trying to turn OpenAI into a product company with some killer offerings, and from the article, it seems they were rather good and successful at that.
Considering the high stakes, money, and undoubtedly the egos involved, the writer might have acquired a few bruises along the way, or might have lost out on some political infighting (remember how they mentioned they built multiple Codex prototypes; it must've sucked to see someone else's version chosen instead of your own).
Another possible explanation is that the writer has just had enough: enough money to last a lifetime, just started a family, made his mark on the world, and was no longer compelled (or able) to keep up with methed-up fresh college grads.
matco11
> remember how they mentioned they built multiple Codex prototypes, it must've sucked to see some other people's version chosen instead of your own
Well it depends on people’s mindset. It’s like doing a hackathon and not winning. Most people still leave inspired by what they have seen other people building, and can’t wait to do it again.
…but of course not everybody likes to go to hackathons
Spooky23
I’m not saying this about OpenAI, because I just don’t know. But Bond villains exist.
Usually the level 1 people are just motivated by power and money to an unhealthy degree. The worst are true believers in something. Even something seemingly mild.
ben_w
> It is fairly rare to see an ex-employee put a positive spin on their work experience.
FWIW, I have positive experiences with many of my former employers. Not all of them, but many of them.
curious_cat_163
> It is fairly rare to see an ex-employee put a positive spin on their work experience.
I liked my jobs and bosses!
humbleferret
What a great post.
Some points that stood out to me:
- Progress is iterative and driven by a seemingly bottom-up, meritocratic approach, not a top-down master plan. Essentially, good ideas can come from anywhere, and leaders are promoted based on execution and quality of ideas, not political skill.
- People seem empowered to build things without asking permission there, which seems like it leads to multiple parallel projects with the promising ones gaining resources.
- People there have good intentions. Despite public criticism, they are genuinely trying to do the right thing and navigate the immense responsibility they hold.
- Product is deeply influenced by public sentiment, or more bluntly, the company "runs on twitter vibes."
- The sheer cost of GPUs changes everything. It is the single factor shaping financial and engineering priorities. The expense for computing power is so immense that it makes almost every other infrastructure cost a "rounding error."
- I liked the take of the path to AGI being framed as a three-horse race between OpenAI (consumer product DNA), Anthropic (business/enterprise DNA), and Google (infrastructure/data DNA), with each organisation's unique culture shaping its approach to AGI.
mikae1
> I liked the take of the path to AGI being framed as a three horse race between OpenAI (consumer product DNA), Anthropic (business/enterprise DNA), and Google (infrastructure/data DNA)
Wouldn't want to forget Meta which also has consumer product DNA. They literally championed the act of making the consumer the product.
smath
lol, I almost missed the sarcasm there :)
bhl
> The Codex sprint was probably the hardest I've worked in nearly a decade. Most nights were up until 11 or midnight. Waking up to a newborn at 5:30 every morning. Heading to the office again at 7a. Working most weekends.
There's so much compression / time-dilation in the industry: large projects are pushed out and released in weeks; careers are made in months.
Worried about how sustainable this is for its people, given the risk of burnout.
alwa
If anyone tried to demand that I work that way, I’d say absolutely not.
But when I sink my teeth into something interesting and important (to me) for a few weeks’ or months’ nonstop sprint, I’d say no to anyone trying to rein me in, too!
Speaking only for myself, I can recognize those kinds of projects as they first start to make my mind twitch. I know ahead of time that I’ll have no gas left in the tank by the end, and I plan accordingly.
Luckily I’ve found a community who relate to the world and each other that way too. Often those projects aren’t materially rewarding, but the few that are (combined with very modest material needs) sustain the others.
bradyriddle
I'd be curious to know about this community. Is this a formal group or just the people that you've collected throughout your life?
alwa
The latter. I mean, I feel like a disproportionate number of folks who hang around here have that kind of disposition.
That just turns out to be the kind of person who likes to be around me, and I around them. It’s something I wish I had been more deliberate about cultivating earlier in my life, but not the sort of thing I regret.
In my case that’s a lot of artists/writers/hackers, a fair number of clergy, and people working in service to others. People quietly doing cool stuff in boring or difficult places… people whose all-out sprints result in ambiguity or failure at least as often as they do success. Very few rich people, very few who seek recognition.
The flip side is that neither I nor my social circles are all that good at consistency—but we all kind of expect and tolerate that about each other. And there’s lots of “normal” stuff I’m not part of, which I probably could have been if I had tried. I don’t know what that means to the business-minded people around here, but I imagine it includes things like corporate and nonprofit boards, attending sports events in stadia, whatever golf people do, retail politics, Society Clubs For Respectable People, “Summering,” owning rich people stuff like a house or a car—which is fine with me!
More than enough is too much :)
ishita159
I think senior folks at OpenAI realized this is not sustainable and hence took the "wellness week".
tptacek
It's not sustainable, at all, but if it's happening just a couple times throughout your career, it's doable; I know people who went through that process, at that company, and came out of it energized.
6gvONxR4sf7o
I couldn't imagine asking my partner to pick up that kind of childcare slack. Props to OP's wife for doing so, and I'm glad she got the callout at the end, but god damn.
kaashif
Working a job like that would literally ruin my life. There's no way I could have time to be a good husband and father under those conditions; some things should not be sacrificed.
datadrivenangel
The author left after 14 months at OpenAI, so that seems like a burnout duration.
Rebelgecko
How did they have any time left to be a parent?
ambicapter
> I returned early from my paternity leave to help participate in the Codex launch.
Obvious priorities there.
harmonic18374
That part made me do a double take. I hope his child never learns they were being put second.
sashank_1509
My hot take is that I don’t think burnout has much to do with raw hours spent working. I feel it has a lot more to do with a sense of momentum and autonomy. You can work extremely hard, 100-hour weeks six months in a row, in the right team and still feel highly energized at the end of it. But if it feels like wading through a swamp, you will burn out very quickly, even if it’s just 50 hours a week. I also find ownership has a lot to do with the sense of burnout.
matwood
And if the work you're doing feels meaningful and you're properly compensated. Ask people to work really hard to fill out their 360 reviews and they should rightly laugh at you.
parpfish
i hope that's not a hot take, because it's 100% correct.
people conflate the terms "burnout" and "overwork" because they seem semantically similar, but they are very different.
you can fix overwork with a vacation. burnout is a deeper existential wound.
my worst bout of burnout actually came in a cushy job where i was consistently underworked but felt no autonomy or sense of purpose for why we were doing the things we were doing.
apwell23
> You can work extremely hard 100 hour weeks six months in a row, in the right team and still feel highly energized at the end of it.
Something about youth being wasted on the young.
laidoffamazon
I don't really have an opinion on working that much, but working that much and having to go into the office to spend those long hours sounds like torture.
tptacek
This was good, but the one thing I most wanted to know about building new products inside of OpenAI is how, and how much, LLMs are involved in their building process.
wilkomm
That's a good question!
vFunct
He describes 78,000 public pull requests per engineer over 53 days. LMAO. So it's likely 99.99% LLM written.
Lots of good info in the post, surprised he was able to share so much publicly. I would have kept most of the business process info secret.
Edit: NVM. That 78k pull requests is for all users of Codex, not all engineers of Codex.
theletterf
For a company that has grown so much in such a short time, I continue to be surprised by its lack of technical writers. Saying the docs could be better is a euphemism, but I still can't find fellow tech writers working there. Compare this with Anthropic and its documentation.
I don't know what the rationale is for not hiring tech writers, other than nobody having suggested it yet, which is sad. Great dev tools require great docs, and great docs require teams that own them and grow them as a product.
mlinhares
The higher-ups don't think there's value in that. Back at DigitalOcean they had an amazing tech writing team, with people with years of experience, doing some of the best tech docs in the industry. When the layoffs started, the writing team was the first to be cut.
People look at it as a cost and nothing else.
simonw
Whoa, there is a ton of interesting stuff in this one, and plenty of information I've never seen shared before. Worth spending some time with it.
tomrod
Agreed!
fidotron
> There's a corollary here–most research gets done by nerd-sniping a researcher into a particular problem. If something is considered boring or 'solved', it probably won't get worked on.
This is a very interesting nugget, and if accurate this could become their Achilles heel.
ACCount36
It's not "their" Achilles heel. It's the Achilles heel of the way humans work.
Most top-of-their-field researchers are on top of their field because they really love it, and are willing to sink insane amounts of hours into doing things they love.
jordanmorgan10
I’m at a point in my life and career where I’d never entertain working those hours. Missed basketball games, seeing kids come home from school, etc. I do think when I first started out, and had no kiddos, maybe some crazy sprints like that would’ve been exhilarating. No chance now, though.
chribcirio
> I’m at a point my life and career where I’d never entertain working those hours.
That’s ok.
Just don’t complain about the cost of daycare, private school tuition, or your parents’ senior home/medical bills.
vonneumannstan
>Safety is actually more of a thing than you might guess
Considering that all the people who led the different safety teams have left or been fired, that Superalignment has been a total bust, and the various accounts from other employees about the lack of support for safety work, I find this statement incredibly out of touch and borderline intentionally misleading.
jjani
> The thing that I appreciate most is that the company "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in. There's an API you can sign up and use–and most of the models (even if SOTA or proprietary) tend to quickly make it into the API for startups to use.
The comparison here should clearly be with the other frontier model providers: Anthropic, Google, and potentially Deepseek and xAI.
Comparing them gives the exact opposite conclusion - OpenAI is the only model provider that gates API access to their frontier models behind draconian identity verification (also, Worldcoin anyone?). Anthropic and Google do not do this.
OpenAI hides their model's CoT (inference-time compute, thinking). Anthropic to this day shows their CoT on all of their models.
Making it pretty obvious this is just someone patting themselves on the back and doing some marketing.
harmonic18374
Yes, and also the framing of OpenAI as this great nimble startup that can turn on a dime, while in reality Google reacted to them and has now surpassed them technically in every area except image prompt adherence.
dcreater
This is Silicon Valley culture on steroids: I really have to question whether it is positive for any involved party. Codex has almost no mindshare, and rightly so. It's a textbook also-ran, except it came from the most dominant player and was outpaced by Claude Code on the order of weeks.
Why go through all that? Instead, what would have been a much better scenario is OpenAI carefully assessing different approaches to agentic coding and releasing a more fully baked product with solid differentiation. Even Amazon just did that with Kiro.
paxys
> An unusual part of OpenAI is that everything, and I mean everything, runs on Slack.
Not that unusual nowadays. I'd wager every tech company founded in the last ~10 years works this way. And many of the older ones have moved off email as well.
It is fairly rare to see an ex-employee put a positive spin on their work experience.
I don't think this makes OpenAI special. It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.
Look at it this way: the flip side of "incredibly bottoms-up" from this article is that there are people who feel rudderless because there is no roadmap or a thing carved out for them to own. Similarly, the flip side of "strong bias to action" and "changes direction on a dime" is that everything is chaotic and there's no consistent vision from the executives.
This cracked me up a bit, though: "As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things. It goes like this: we're the good guys. If we were evil, we could be doing things so much worse than X! Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!