OpenAI declares 'code red' as Google catches up in AI race
77 comments
· December 2, 2025
rashidujang
wlesieutre
Besides, can't they just allocate more ChatGPT instances to accelerating their development?
palmotea
> It's incredible how 50-year-old advice from The Mythical Man-Month is still not being heeded.
A lot of advice is that way, which is why it is advice. If following it were easy everyone would just do it all the time, but if it's hard or there are temptations in the other direction, it has to be endlessly repeated.
Plus, there are always those special-snowflake guys who are "that's good advice for you, but for me it's different!"
Also it wouldn't surprise me if Sam Altman's talents aren't in management or successfully running a large organization, but in Machiavellian manipulation and maneuvering.
dathinab
the thought that this might have been done on the recommendation of ChatGPT has me rolling
think about it: with how much bad advice is out there on certain topics, it's guaranteed that ChatGPT will promote common bad advice in many cases
amelius
Imho it just shows how relatively simple this technology really is, and nobody will have a moat. The bubble will pop.
deelowe
Not exactly. Infra will win the race. In this aspect, Google is miles ahead of the competition. Their DC solutions scale very well. Their only risk is that the hardware and low level software stack is EXTREMELY custom. They don't even fully leverage OCP. Having said that, this has never been a major problem for Google over their 20+ years of moving away from OTS parts.
amelius
But anyone with enough money can make infra. Maybe not at the scale of Google, but maybe that's not necessary (unless you have a continuous stream of fresh high-quality training data).
simianwords
amazing how the bubble pops either from the technology being too simple or from it being too complex to make a profit
amelius
The technology is simple, but you need a ton of hardware. So you lose either because there's lots of competition or you lose because your hardware costs can't be recouped.
tiahura
Also, google has plenty of (unmatched?) proprietary data and their own money tree to fuel the money furnace.
FinnKuhn
As well as their own hardware and a steady cash flow to finance their AI endeavours for longer.
ryandvm
Don't forget the bleak subtext of all this.
All these engineers working 70 hour weeks for world class sociopaths in some sort of fucked up space race to create a technology that is supposed to make all of them unemployed.
tim333
You can have a more upbeat take on it all.
woeirua
Wait, shouldn't their internal agents be able to do all this work by now?
JacobAsmuth
They have a stated goal of an AI researcher for 2028. Several years away.
rappatic
> the company will be delaying initiatives like ads, shopping and health agents, and a personal assistant, Pulse, to focus on improving ChatGPT
There's maybe like a few hundred people in the industry who can truly do original work on fundamentally improving a bleeding-edge LLM like ChatGPT, and a whole bunch of people who can do work on ads and shopping. One doesn't seem to get in the way of the other.
whiplash451
The bottleneck isn’t the people doing the work but the leadership’s bandwidth for strategic thinking
kokanee
I think it's a matter of public perception and user sentiment. You don't want to shove ads into a product that people are already complaining about. And you don't want the media asking questions like why you rolled out a "health assistant" at the same time you were scrambling to address major safety, reliability, and legal challenges.
logsr
There are two layers here: 1) low-level LLM architecture, and 2) applying low-level LLM architecture in novel ways. It is true that maybe a couple hundred people can make significant advances on layer 1, but layer 2 constantly drives progress at whatever level of capability layer 1 is at. Layer 2 depends mostly on broad and diverse subject-matter expertise; it doesn't require any low-level ability to implement or improve LLM architectures, only an understanding of how to apply them more effectively in new fields. The real key is finding ways to create automated validation systems, similar to what is possible for coding, that can be used to create synthetic datasets for reinforcement learning. Layer 2 capabilities feed back into improved core models, even with the same core architecture, because you are generating more and better data for retraining.
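A minimal sketch of that validation-driven synthetic-data loop, with hypothetical function names and a toy arithmetic "verifier" standing in for a real checker (in practice the candidates would come from sampling a model, and the verifier might run unit tests):

```python
# Toy version of an automated validation system for RL-style synthetic data:
# generate candidate answers, keep only those that pass a programmatic check.

def validate_sum(problem, answer):
    """Automated verifier: checks a candidate answer against ground truth."""
    return answer == sum(problem)

def generate_candidates(problem):
    """Stand-in for model sampling; a real system would call an LLM here."""
    return [sum(problem), sum(problem) + 1]  # one correct, one wrong

def build_synthetic_dataset(problems):
    dataset = []
    for problem in problems:
        for answer in generate_candidates(problem):
            if validate_sum(problem, answer):  # keep only verified pairs
                dataset.append((problem, answer))
    return dataset

print(build_synthetic_dataset([[1, 2], [3, 4, 5]]))
# → [([1, 2], 3), ([3, 4, 5], 12)]
```

The point of the sketch is the filter step: whatever the domain, a cheap automated verifier lets you turn noisy generations into a clean retraining set.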
ma2rten
Delaying doesn't necessarily mean they stop working on it. Also it might be a question of compute resource allocation as well.
techblueberry
Far be it from me to backseat-drive for Sam Altman, but is the problem really that the core product needs improvement, or that it needs a better ecosystem? I can't imagine people are choosing their chatbots based on which provides the perfect answers; it's what you can do with it. I would assume Google has the advantage because it's built into a tool people already use every day, not because it's nominally "better" at generating text. Didn't people prefer ChatGPT 4 to 5 anyways?
tim333
ChatGPT's thing always seems to have been to be the best LLM, hence the most users without much advertising and the most investment money to support their dominance. If they drop to second or third best it may cause them problems because they rely on investor money to pay the rather large bills.
Currently they are not #1 in any of the categories on LLM arena, and even on user numbers where they have dominated, Google is catching up, 650m monthly for Gemini, 800m for ChatGPT.
Also Google/Hassabis don't show much sign of slacking off (https://youtu.be/rq-2i1blAlU?t=860)
jinushaun
If that was the case, MS would be on top given how entrenched Windows, Office and Outlook are.
techblueberry
I'm not suggesting that OpenAI write shit integrations with existing ecosystems.
jasonthorsness
ha what an incredibly consumer-friendly outcome! Hopefully competition keeps the focus on improving models and prevents irritating kinds of monetization
another_twist
If there's no monetization, the industry will just collapse. Not a good thing to aspire to. I hope they make money whilst doing these improvements.
Ericson2314
If people pay for inference, that's revenue. Ads and stuff is plan B for inference being too cheap, or the value being too low.
thrance
If there's no monetization, the industry will just collapse, except for Google, which is probably what they want.
rob74
I for one would say, the later they add the "ads" feature, the better...
sometimes_all
For regular consumers, Gemini's AI pro plan is a tough one to beat. The chat quality has gotten much better, I am able to share my plan with a couple more people in my family leading to proper individual chat histories, I get 2 TB of extra storage (which is also sharable), plus some really nice stuff like NotebookLM, which has been amazing for doing research. Veo/Nanobanana are nice bonuses.
It's easily worth the monthly cost, and I'm happy to pay - something which I didn't even consider doing a year ago. OpenAI just doesn't have the same bundle effect.
Obviously power users and companies will likely consider Anthropic. I don't know what OpenAI's actual product moat is any more outside of a well-known name.
piva00
Through my work I have access to Google's, Anthropic's, and OpenAI's products, and I agree with you, I barely touch OpenAI's models/products for some reason even though I have total freedom to choose.
Phelinofist
IMHO Gemini surpassed ChatGPT by quite a bit - I switched. Gemini is faster, the thinking mode gives me reliably better answers and it has a more "business like" conversation attitude which is refreshing in comparison to the over-the-top informal ChatGPT default.
cj
Is there a replacement for ChatGPT projects in Gemini yet?
That's the only ChatGPT feature keeping me from moving to Gemini. Specifically, the ability to upload files and automatically make them available as context for a prompt.
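That feature is conceptually simple: uploaded files get read once and prepended to every prompt as context. A minimal sketch, with hypothetical names (a real implementation would chunk, embed, and retrieve rather than inline everything):

```python
# Toy "projects" feature: every text file in a project directory is
# inlined as context ahead of the user's question.
import pathlib
import tempfile

def build_prompt(project_dir, question):
    parts = []
    for path in sorted(pathlib.Path(project_dir).glob("*.txt")):
        # Label each file so the model can attribute facts to a source.
        parts.append(f"--- {path.name} ---\n{path.read_text()}")
    parts.append(f"Question: {question}")
    return "\n\n".join(parts)

with tempfile.TemporaryDirectory() as d:
    (pathlib.Path(d) / "notes.txt").write_text("Budget is $500.")
    print(build_prompt(d, "What is the budget?"))
```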
hek2sch
Isn't that already Nouswise? Nouswise grounds answers in low-level quotes, plus you get an API from your projects.
mvdtnz
> [Gemini] has a more "business like" conversation attitude which is refreshing in comparison to the over-the-top informal ChatGPT default.
Maybe "business like" for Americans. In most of the world we don't spend quite so much effort glazing one another in the workplace. "That's an incredibly insightful question and really gets to the heart of the matter". No it isn't. I was shocked they didn't fix this behavior in v3.
theoldgreybeard
You can't make a baby in 1 month with 9 women, Sam.
rf15
This sounds like their medicine might be worse than what they're currently doing...
dwa3592
why couldn't GPT-5.1 improve itself? Last I heard, it can produce original math and has PhD-level intelligence.
skywhopper
> There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development.
Truly brilliant software development management going on here. Daily update meetings and temporary staff transfers. Well-known strategies for increasing velocity!
trymas
…someone even wrote a book about this. Something about “mythical men”… :D
zingababba
Needs an update re: mythical AI.
another_twist
"The results of this quarter were already baked in a couple of quarters ago"
- Jeff Bezos
Quite right tbh.
tiahura
Like when OpenAI started experiencing a massive brain drain.
lubujackson
Don't forget scuttling all the projects the staff has been working overtime to complete so that they can focus on "make it better!" waves hands frantically
giancarlostoro
I've had ideas for how to improve all the different chatbots for like 3 years, and nobody has implemented any of them (usually my ideas get implemented in software somehow, the devs read my mind, but AI seems to be stuck with the same UI for LLMs). None of these AI shops are run by people with vision, it feels like. Everyone's just remaking a slightly better version of SmarterChild.
whiplash451
Did you open-source / publish these ideas?
giancarlostoro
I'm not giving any of these people my ideas for free. Though I did think of making my own UI for some of these services at some point.
simianwords
I really want a UI that visualises branching. I would like to branch out of specific parts of the responses and continue the conversation there, but also keep the original conversation. This seems like it should be a standard feature, but no one has built it.
giancarlostoro
Would require something like snapshotting context windows, but I agree, something like this would be nice.
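The "snapshotting" part is cheaper than it sounds if the data model is a tree rather than a flat list: each message points at its parent, so the context window for any branch is just the walk back to the root. A minimal sketch with hypothetical names:

```python
# Toy data model for branching conversations: messages form a tree,
# and a branch's context is recovered by walking parent pointers.
from dataclasses import dataclass, field

@dataclass
class Message:
    role: str
    text: str
    parent: "Message | None" = None
    children: list = field(default_factory=list)

def reply(parent, role, text):
    node = Message(role, text, parent)
    if parent:
        parent.children.append(node)
    return node

def context(node):
    """Rebuild the full context for a branch by walking to the root."""
    path = []
    while node:
        path.append((node.role, node.text))
        node = node.parent
    return list(reversed(path))

root = reply(None, "user", "plan a trip")
a = reply(root, "assistant", "how about Lisbon?")
b1 = reply(a, "user", "what about food?")     # one branch
b2 = reply(a, "user", "how is the weather?")  # a sibling branch

print(context(b2))
# → [('user', 'plan a trip'), ('assistant', 'how about Lisbon?'), ('user', 'how is the weather?')]
```

Nothing gets copied when you branch; both branches share the prefix, and the UI only needs to render the tree.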
theplatman
i agree - it shows a remarkable lack of creativity that we're still stuck with a fairly subpar UX for interacting with these tools
simianwords
It's easy to dismiss it, but what would you do instead?
mlmonkey
The beatings will continue until morale^H^H^H^H^H^H chatGPT improves...
alecco
OpenAI was founded to hedge against Google dominating AI and with it the future. It makes me sad how that was lost for pipe dreams (AGI) and terrible leadership.
I fear a Google dystopia. I hope DeepSeek or somebody else will counter-balance their power.
bryanlarsen
That goal has wildly succeeded -- there are now several well financed companies competing against Google.
The goal was supposed to be an ethical competitor as implied by the word "Open" in their name. When Meta and the Chinese are the most ethical of the competitors, you know we're in a bad spot...
alecco
I said DeepSeek because they are very open (not just weights). A young company and very much unlike Chinese Big Tech and American Big Tech.
tiahura
Doesn’t it seem likely that it all depends on who produces the next AIAYN? Things go one way if it’s an academic, and another way if it’s somebody’s trade secret.
zingababba
Does anyone have a link to the contents of the memo?
spwa4
We are in a pretty amazing situation. If you're willing to go down 10% in benchmark scores, you can easily cut your costs to 25% of what they were. And now DeepSeek 3.2 is another shot across the bow.
But if SOTA intelligence becomes basically a price war, won't that mean Google (and OpenAI and Microsoft and any other big-model vendor) loses big? Especially Google, as the margin that even Google Cloud (famously a lot lower than Google's other businesses) requires to survive has got to be sizeable.
> There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development.
It's incredible how 50-year-old advice from The Mythical Man-Month is still not being heeded. Throw in a knee-jerk solution of a "daily call" (sound familiar?) for people who are already wading knee-deep through work, and you have a perfect storm of terrible working conditions. My money is on Google, who in my opinion have not only caught up with but surpassed OpenAI with the latest iteration of their AI offerings.