The next chapter of the Microsoft–OpenAI partnership
304 comments
· October 28, 2025 · philipwhiuk
no_wizard
They’ll devalue the term into something that makes it so. As for the common conception of it, no, I don’t believe we are anywhere close to it.
It’s no different than how they moved the goalpost on the definition of AI at the start of this boom cycle.
ksynwa
Wasn't there already a report stating that Microsoft and OpenAI define AGI as something like 100 billion dollars in profits for the purposes of their agreement? Even that seems like a pipe dream at the moment.
gokuldas011011
Definitely. When I started doing machine learning in 2018, AI wasn't a next-word predictor.
IanCal
When I was doing it in 2005 it definitely included that, and other far more basic things.
htrp
FSD would like a word
waffletower
As a full stack developer suffering from female sexual dysfunction who owns a Tesla, I am really confused about what you are trying to say.
sgustard
SAE automation levels are the industry standard, not FSD (which is a brand name), and FSD is clearly Level 2 (the driver is always responsible and must stay engaged, at least in consumer Teslas; I don't know about robotaxis). The question is whether "AGI" is as well defined as "Level 5" as an independent standard.
mberning
They have certainly tried to move the goalposts on this.
dr_dshiv
“Moving the goalposts” in AI usually means the opposite of devaluing the term.
Peter Norvig (former research director at Google and author of the most popular textbook on AI) offers a mainstream perspective that AGI is already here: https://www.noemamag.com/artificial-general-intelligence-is-...
If you described all the current capabilities of AI to 100 experts 10 years ago, they’d likely agree that the capabilities constitute AGI.
Yet, over time, the public will expect AGI to be capable of much, much more.
r_lee
I don't see why anyone would consider the state of AI today to be AGI. It's basically a glorified generator stuck to a query engine.
Today's models are not able to think independently, nor are they conscious or able to modify themselves to gain new information on the fly or form memories, beyond half-baked workarounds like putting stuff in the context window, which just makes the model generate text related to it, imitating a story basically.
They're powerful when paired with a human operator, i.e. they "do" as told, but that is not "AGI" in my book.
nine_k
Consider: "Artificial two-star General intelligence".
I mean, once they "reach AGI", they will need a scale to measure advances within it.
gehwartzen
This is exactly why they will have an “expert panel” to make that determination. They wouldn’t make something up.
cmiles74
I expect that the "expert panel" is to ensure that OpenAI and Microsoft are in agreement on what "AGI" means in the context of this agreement.
some_furry
What exactly are the criteria for "expert" they're planning to use, and whomst among us can actually meet a realistic bar for expertise on the nature of consciousness?
alterom
Yeah, they wouldn't make something up, the expert panel would.
Because everyone knows that once you call a group of people an expert panel, that automatically means they can't be biased /s
nl
> they moved the goalpost on the definition of AI at the start of this boom cycle
Who is this "they" you speak of?
It's true the definition has changed, but not in the direction you seem to think.
Before this boom cycle the standard for "AI" was the Turing test. There is no doubt we have comprehensively passed that now.
Vinnl
I don't think the Turing Test has been passed. The test was setup such that the interrogator knew that one of the two participants was a bot, and was trying to find out which. As far as I know, it's still relatively easy to find out you're talking to an LLM if you're actively looking for it.
wholinator2
The Turing test point is actually very interesting, because it's testing whether you can tell you're talking to a computer or a person. When ChatGPT came out we all declared that test utterly destroyed. But now that we've had time to become accustomed to the standard syntax, phraseology, and vocabulary of the GPTs, I've started to be able to detect the AIs again. If humanity becomes accustomed enough to the way AI talks to distinguish it, do we re-enter the failed-Turing-test era? Can the Turing test only be passed in finite intervals, after which we learn to distinguish the machine again? I think it can eventually get there, and that the people who can detect the difference will become a smaller and smaller subset. But who's to say what the zeitgeist on AI will be in a decade.
alterom
Is there, really?
ramses0
Jesus, we've gone from ELIZA and Bayesian spam filters to being able to hold an "intelligent" conversation with a bot that can write code: "make me a sandwich" => "ok, making sandwich.py, adding tests, keeping track of a todo list, validating tests, etc..."
We might not _quite_ be at the era of "I'm sorry, Dave. I'm afraid I can't do that...", but on the spectrum, and from the perspective of a layperson, we're waaaaay closer than we've ever been?
I'd counsel you to self-check what goalposts you might have moved in the past few years...
ok_computer
I think this says more about how much of our tasks and demonstrations of ability as developers revolve around boilerplate and design patterns than it does about the cognitive abilities of modern LLMs.
I say this fully aware that a kitted out tech company will be using LLMs to write code more conformant to style and higher volume with greater test coverage than I am able to individually.
91bananas
I'd counsel you to work with LLMs daily and agree that we're nowhere close to LLMs that work properly and consistently outside of toy use cases where examples can be scraped from the internet. If we can agree on that, we can agree that general intelligence is not the same thing as a sometimes seemingly random guess at the next word...
IlikeKitties
I think "we" have accidentally cracked language from a computational perspective. The embedding of knowledge is incidental and we're far away from anything that "Generally Intelligent", let alone Advanced in that. LLMs do tend to make documented knowledge very searchable which is nice. But if you use these models everyday to do work of some kind that becomes pretty obvious that they aren't nearly as intelligent as they seem.
furyofantares
You have to keep moving the goalposts if you keep putting them in the wrong place.
crazygringo
Most people didn't think we were anywhere close to LLMs five years ago. The capabilities we have now were expected to be decades away, depending on who you talked to. [EDIT: sorry, I should have said 10 years ago... recent years get too compressed in my head, and stuff from 2020 still feels like it was 2 years ago!]
So I think a lot of people now don't see what the path is to AGI, but also realize they hadn't seen the path to LLMs, and innovation is coming fast and furious. So the most honest answer seems to be: it's entirely plausible that AGI just depends on another couple of conceptual breakthroughs that are imminent... and it's also entirely plausible that AGI will require 20 different conceptual breakthroughs all working together that we'll only figure out decades from now.
True honesty requires acknowledging that we truly have no idea. Progress in AI is happening faster than ever before, but nobody has the slightest idea how much progress is needed to get to AGI.
mandeepj
> Most people didn't think we were anywhere close to LLMs five years ago.
Well, Google had LLMs ready by 2017, which was almost 9 years ago.
jlarocco
What people thought about LLMs five years ago and how close we are to AGI right now are unrelated; it's not logically sound to say "we were close to LLMs then, so we are close to AGI now."
It's also a misleading view of the history. It's true "most people" weren't thinking about LLMs five years ago, but a lot of the underpinnings had been studied since the 70s and 80s. The ideas had been worked out, but the hardware wasn't able to handle the processing.
> True honesty requires acknowledging that we truly have no idea. Progress in AI is happening faster than ever before, but nobody has the slightest idea how much progress is needed to get to AGI.
Maybe, but don't tell that to OpenAI's investors.
rapind
> Most people didn't think we were anywhere close to LLMs five years ago.
That's very ambiguous. "Most people" don't know most things. If we're talking about people who have been working in the industry, though, my understanding is that the concepts behind our modern-day LLMs aren't magical at all. In fact, the ideas have been around for quite a while. The breakthroughs in processing power and networking (data) were the holdup. The result definitely feels magical to "most people", though, for sure. Right now we're "iterating", right?
I'm not sure anyone really sees a clear path to AGI, if what we're actually talking about is the singularity. There are a lot of unknown unknowns, right?
dkdcio
I worked on Microsoft's AI platform from 2018 to 2022. People were very aware of LLMs and AI in general. It's not magical.
AGI is a silly concept
fadedsignal
I 100% agree with this. I suggest the other guy check the history of NLP.
nodja
GPT-3 existed 5 years ago, and the trajectory was set with the transformer paper. Everything from that paper to GPT-3 was pretty much anticipated in it; it just took people spending the effort and compute to make it reality. The only real surprise was how fast OpenAI productized an LLM into a chat interface with ChatGPT; before then we had fine-tuned GPT-3 models doing specific tasks (translation, summarization, etc.)
gravity13
At this point, AGI seems to be more of a marketing beacon than any sort of non-vague deterministic classification.
We all thought about a future where AI just woke up one day, when realistically, we got philosophical debates over whether the ability to finally order a pizza constitutes true intelligence.
noir_lord
We can order the pizza; it just hallucinated, and I'm not entirely sure why my pizza has seahorses instead of anchovies.
airstrike
Notwithstanding the fact that AGI is a significantly higher bar than "LLM", this argument is illogical.
Nobody thought we were anywhere close to me jumping off the Empire State Building and flying across the globe 5 years ago either, but I'm sure I will. Wish me luck as I take that literal leap of faith tomorrow.
JoelMcCracken
what's super weird to me is how people seem to look at LLM output and see:
"oh look it can think! but then it fails sometimes! how strange, we need to fix the bug that makes the thinking no workie"
instead of:
"oh, this is really weird. Its like a crazy advanced pattern recognition and completion engine that works better than I ever imagined such a thing could. But, it also clearly isn't _thinking_, so it seems like we are perhaps exactly as far from thinking machines as we were before LLMs"
armonster
I think what is much more plausible is that companies such as this one benefit greatly from being viewed as being close to, or on the way to AGI.
dreamcompiler
In 1900 we didn't see a viable path to climb Mount Everest or to go to the moon. This does not make the two tasks equally difficult.
dktp
Under the initial contract, Microsoft would lose a lot of rights when OpenAI achieved AGI. The references to AGI in this post look, to me, like Microsoft protecting itself from OpenAI declaring _something_ to be AGI and Microsoft losing those rights as a result.
I don't see the mentions in this post as anyone particularly believing we're close to AGI
jimbokun
Who knows?
I don't see any way to define it in an easily verifiable way.
For pretty much any test you could devise, others will be able to point out ways it's inadequate or doesn't capture aspects of human intelligence.
So I think it all just comes down to who is on the panel.
peterpans01
Most of the things that the public — even so-called “AI experts” — consider “magic” are still within the in-sample space. We are nowhere near the out-of-sample space yet. Large Language Models (LLMs) still cannot truly extrapolate. It’s somewhat like living in America and thinking that America is the entire world.
shaky-carrousel
AGI as a PR stunt for OpenAI is becoming a meme.
codyb
We're just a year away from full self driving!
ta9000
I wouldn’t be surprised if AGI arrives before Tesla has a full self-driving car though.
pixl97
Full self-driving has always required AGI, so no, we won't get it without AGI.
ReptileMan
You will be self driven to the fusion plant and you will like it. The AGI will meet you at the front door.
gehwartzen
And then just another year until self selfing and we will have come full circle
embedding-shape
Wasn't it always the explicit goal of OpenAI to bring about AGI? So less of a meme, and more "this is what that company exists for".
Bit like blaming an airplane-building company for building airplanes; it's literally what they were created for, no matter how stupid their ideas of the "ideal aircraft" are.
alterom
>Bit like blaming an airplane-building company for NOT building airplanes
FTFY. OpenAI has not built AGI (not yet, if you want to be optimistic).
If you really need an analogy, it's more in the vein of giving SpaceX crap for yapping about building a Dyson Sphere Real Soon Now™.
0xWTF
My L7 and L8 colleagues at Google seem to be signaling next 2 years. Errors of -1 and +20 years. But the mood sorta seems like nobody wants to quit when they're building the test stand for the Trinity device.
wrsh07
Yes. Some AI-skeptical people (e.g. Tyler Cowen, who does not think AI will have a significant economic impact) think GPT-5 is AGI.
It was news when Dwarkesh interviewed Karpathy, who said that by his definition of AGI, he doesn't think it will occur until 2035. Thus, if Karpathy is the pessimist, then many people working in AI today think we will have AGI by 2032 (and likely sooner, e.g. end of 2028).
layer8
2035 is still optimistic at present, IMO, because AGI will require breakthroughs that are impossible to predict.
cjbarber
> Microsoft holds an investment in OpenAI Group PBC valued at approximately $135 billion, representing roughly 27 percent on an as-converted diluted basis
It seems like Microsoft stock is then the most straightforward way to invest in OpenAI pre-IPO.
This also confirms the $500 billion valuation, making OpenAI the most valuable private startup in the world.
Now many of the main AI companies have decent ownership by public companies or are already public.
- OpenAI -> Microsoft (27%)
- Anthropic -> Amazon (15-19% est), Alphabet/Google (14%)
Then the chip layer is largely already public: Nvidia. Plus AMD and Broadcom.
Clouds too: Oracle, Alphabet/GCP, Microsoft/Azure, CoreWeave.
paxys
Microsoft is worth $4T, so if you buy one MSFT share, only ~3% of it ($135B / $4T ≈ 3.4%) is effectively invested in OpenAI. Even if OpenAI outperforms everyone's expectations (which at this point are already sky high), a tiny swing in some other Microsoft division will completely erase your gains.
ForHackernews
Yeah, but on the plus side when the AI bubble bursts at least you've still got Excel.
pinnochio
I think it's a bit late for that.
Also, you have to consider the size of Microsoft relative to its ownership of OpenAI, future dilution, and how Microsoft itself will fare in the future. If, say, Microsoft is on a path towards decreasing relevance/marketshare/profitability, any gains from its stake in OpenAI may be offset by its diminishing fortunes.
mr_toad
> If, say, Microsoft is on a path towards decreasing relevance/marketshare/profitability
That’s a big if. I see a lot of people in big enterprises who would never even consider anything other than Microsoft and Azure.
no_wizard
C# and .NET have a bigger market share than what gets talked about in trendy circles
marcosdumay
The big question is whether we are finally at a moment when big enterprises will be allowed to fail due to the infinite number of bad choices they make.
Because things are going to change soon. What nobody knows is exactly what things, and in what direction.
notepad0x90
Yeah, this is a take I see from people who work in Unix-like environments (including Macs). If anything, Microsoft will grow much bigger. People are consolidating in Azure and away from GCP; it's easier to manage costs and integrate with their fleet.
Windows workstations and servers are now "joined" to Azure instead of being joined to domain controller servers, as they used to be. Microsoft will soon enough stop supporting that older domain controller design (soon as in a decade).
dualityoftapirs
Reminds me of how Yahoo had a valuation in the negative billions with their Alibaba holdings taken into account:
https://www.cbsnews.com/news/wall-street-says-yahoos-worth-l...
whizzter
Huh? Windows itself might have had its heyday, but MS is solidly at #2 in cloud, only behind AWS, with enterprise Windows shops that will be hard pressed not to use MS options if they go to the cloud (Google really has continued to fumble its cloud position, with its "killedbygoogle.com" reputation nagging at everyone's mind).
The biggest real threat to MS's position is the Trump administration pushing foreign customers away with stuff like shutting down the ICC's Microsoft accounts, but that'll hurt AWS and Google just as much. (The winners there will be Alibaba and other foreign providers that can't compete in full enterprise stacks today.)
ml-anon
Watch this week. Amazon's cloud growth has been terrible (Google's and Microsoft's remain >30%). Amazon has basically no good offerings for AI, which is where GCP is starting to eat their lunch. Anthropic moving to TPUs for inference is a big, big signal.
yousif_123123
I think the stablecoin company Tether is valued at $500 billion also.
saaaaaam
Is the company valued at $500 billion or is the sum of the digital assets they’ve collateralised worth $500 billion?
Because if you buy the tokens you presumably do not own the company. And if you buy the company you hopefully don’t own the tokens - nor the assets that back the tokens.
yousif_123123
I think I read that it's valued at $500 billion based on their latest fundraise. I don't know the total holdings they have.
I have no interest in crypto; I just wanted to mention this, which was surprising to me when I heard it.
sekai
> This also confirms the $500 billion valuation making OpenAI the most valuable private startup in the world.
SpaceX?
throwup238
If SpaceX is still a “startup”, the word has lost all meaning.
whamlastxmas
Around $350 to $400 billion from a couple of sources I saw, but it's a lot of speculation.
notyourwork
It’s odd to me that among the clouds you excluded AWS.
awestroke
And included Oracle first. OP is probably Larry.
outside1234
Or for the inevitable crash when we discover that OpenAI is a round-tripping, Enron-style disaster.
enricotal
“OpenAI is now able to release open-weight models that meet requisite capability criteria.”
Was Microsoft the blocker before? Prior agreements clearly made true open weights awkward-to-impossible without Microsoft's sign-off. Microsoft had (a) an exclusive license to GPT-3's underlying tech back in 2020 (i.e., access to the model/code beyond the public API), and (b) later, broad IP rights plus API exclusivity on OpenAI models. If you're contractually giving one partner IP rights and API exclusivity, shipping weights openly would undercut those rights. Today's language looks like a carve-out to permit some open-weight releases as long as they're below certain capability thresholds.
A few other notable tweaks in the new deal that help explain the change:
- AGI claims get verified by an independent panel (not just OpenAI declaring it).
- Microsoft keeps model/product IP rights through 2032, but OpenAI can now jointly develop with third parties, serve some things off non-Azure clouds, and—critically—release certain open-weights.
Those are all signs of loosened exclusivity.
My read: previously, the partnership structure (not just “Microsoft saying no”) effectively precluded open-weight releases; the updated agreement explicitly allows them within safety/capability guardrails.
Expect any “open-weight” drops to be intentionally scoped—useful, but a notch below their frontier closed models.
danans
> Once AGI is declared by OpenAI ...
I think it's funny and telling that they've used the word "declare" where what they are really doing is "claim".
These guys think they are prophets.
k9294
They have a definition, actually: "when AI generates $100 billion in profits" it will be considered AGI. This term was defined in their previous partnership; not sure if it still holds after the restructuring.
https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
cool_man_bob
> These guys think they are prophets.
You say this somewhat jokingly, but I think they 100% believe something along those lines.
danans
>> Whether you are an enterprise developer or BigTech in the US you are on average making twice the median income in your area. There is usually no reason for you not to be stacking cash.
Accidental misquote?
whamlastxmas
It goes on to say it'll be reviewed by an independent third party, so I think "declare" is accurate; they're declaring a milestone.
_jab
Many are questioning why Microsoft would agree to this, but to me the concessions they made seem minor.
> OpenAI remains Microsoft’s frontier model partner and Microsoft continues to have exclusive IP rights and Azure API exclusivity
This should be the headline - Microsoft maintains its financial and intellectual stranglehold on OpenAI.
And meanwhile, while vaguer, a few of the bullet points are potentially very favorable to Microsoft:
> Microsoft can now independently pursue AGI alone or in partnership with third parties.
> The revenue share agreement remains until the expert panel verifies AGI, though payments will be made over a longer period of time.
Hard to say what a "longer period of time" means, but I presume it is substantial enough to make this a major concession from OpenAI.
creddit
> Hard to say what a "longer period of time" means, but I presume it is substantial enough to make this a major concession from OpenAI.
Depends on how this is meant to be parsed, but it may actually be a concession from MSFT. If the total amount of revenue to be shared is the same, then MSFT is worse off here (the same money just arrives later). If it parses as "a fixed proportion of revenue will be shared over X period, and X has increased to Y", then it is an OAI concession.
I don't know the details but I would be surprised if there was a revenue agreement that was time based.
hdkrgr
As a corporate customer, the main point for me in this is Microsoft now retaining (non-exclusive) rights to models and products after OpenAI decides to declare AGI.
The question "Can we build our stuff on top of Azure OpenAI? What if SamA pulls a marketing stunt tomorrow, declares AGI and cuts Microsoft off?" just became a lot easier. (At least until 2032.)
cjbarber
> Once AGI is declared by OpenAI, that declaration will now be verified by an independent expert panel.
I wonder what criteria that panel will use to define/resolve this.
healsdata
> The two companies reportedly signed an agreement [in 2023] stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits.
https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
jplusequalt
A sufficiently large profit is what constitutes AGI? What a fucking joke.
joomla199
The real AGI was the money we siphoned along the way.
layer8
“Only” means that it is a necessary condition, not a sufficient one.
sigmar
That's very different from OpenAI's previous definition ("autonomous systems that surpass humans in most economically valuable tasks") for at least one big reason: the new definition likely only triggers if OpenAI's AI is substantially different from, or better than, other companies' AI. In a world where 2+ companies had similar AGI, all of them would have huge income, but competition would mean their profit margins might not be as large. The only way profit would soar to $100B+ is an absence of competition, right?
afavour
It's all so unfathomably stupid. And it's going to bring down an economy.
sekai
> It's all so unfathomably stupid. And it's going to bring down an economy.
Dot-com bubble all over again
tclancy
Hey, don't forget the climate effects too!
coldpie
I'm honestly starting to feel embarrassed to even be employed in the software industry now.
Mistletoe
This is the most sick implementation of Goodhart's Law I've ever seen.
>"When a measure becomes a target, it ceases to be a good measure"
What appalls me is that companies are doing this stuff in plain sight. In the 1920s before the crash, were companies this brazen or did they try to hide it better?
phito
Wow, that is so dumb. Can these addicts think about anything other than profits?
Overpower0416
So if their erotic bot reaches $100b in profit, they will declare AGI? lol
ml-anon
Wait until they announce that they’ve been powering OnlyFans accounts this whole time.
conartist6
So what, there just won't be a word for general intelligence anymore, you know, in the philosophical sense?
nonethewiser
Well this is why it's framed that way:
>This is an important detail because Microsoft loses access to OpenAI’s technology when the startup reaches AGI, a nebulous term that means different things to everyone.
Not sure how OpenAI feels about that.
cogman10
lol, this is "autopilot" and "full self driving" all over again.
Just redefine the terms into something that's easy to accomplish but far from the definition of the terms/words/promises.
conartist6
This. This sentence reached off the page and hit me in the face.
It only just then became obvious to me that, to them, it's a question of when, in large part because of the MS deal.
Their next big move in the chess game will be to "declare" AGI.
TheCraiggers
I think some of this is just the typical bluster of company press releases / earnings reports. Can't ever show weakness or the shareholders will leave. Can't ever show doubt or the stock price will drop.
Nevertheless, I've been wondering of late. How will we know when AGI is accomplished? In the books or movies, it's always been handwaved or described in a way that made it seem like it was obvious to all. For example, in The Matrix there's the line "We marveled at our own magnificence as we gave birth to AI." It was a very obvious event that nobody could question in that story. In reality though? I'm starting to think it's just going to be more of a gradual thing, like increasing the resolution of our TVs until you can't tell it's not a window any longer.
marcosdumay
> How will we know when AGI is accomplished?
It's certainly not a specific thing that can be accomplished. AGI is a useful name for a badly defined concept, but any objective application of it (like in a contract) is just a stupid thing done by people who could barely be described as having the natural variety of GI.
port3000
"We are now confident we know how to build AGI as we have traditionally understood it." - Sam Altman, Jan 2025
'as we have traditionally understood it' is doing a lot of heavy lifting there
https://blog.samaltman.com/reflections#:~:text=We%20believe%...
baconbrand
This is phenomenally conceited on both companies’ parts. Wow.
jdiff
Don't worry, I'm sure we can just keep handing out subprime mortgages like candy forever. Infinite growth, here we come!
qgin
This makes me feel that the extremely short AGI timelines might be less likely.
To sign this deal today, presumably you wouldn't bother if AGI were just around the corner?
Maybe I’m reading too much into it.
mossTechnician
If I remember correctly, Microsoft was previously promised ownership of every pre-AGI asset created by OpenAI. Now they are being promised ownership of things post-AGI as well:
> Microsoft’s IP rights for both models and products are extended through 2032 and now includes models post-AGI...
To me, this suggests a further dilution of the term "AGI."
ViscountPenguin
To be honest, I think this is somewhat asymmetric, and it kind of implies that OpenAI are truer "believers" than Microsoft.
If you believe in a hard takeoff, then ownership of assets post-AGI is pretty much meaningless; however, it protects Microsoft from an early declaration of AGI by OpenAI.
skepticATX
I think the more interesting question is who will be on the panel?
A group of ex-frontier-lab employees? You could declare AGI today. A more diverse group across academia and industry might actually have some backbone and be able to stand up to OpenAI.
adonese
Obligatory The Office line:
"I just wanted you to know that you can't just say the word 'AGI' and expect anything to happen."
Michael Scott: "I didn't say it. I declared it."
rvz
The criteria changes more times than the weather forecast as it depends on the definition of "AGI".
empath75
It's quite possible that GI, and thus AGI, does not actually exist. The paper the other day by all those heavy hitters in the industry makes more sense in this context, though.
aeve890
>It's quite possible that GI and thus AGI does not actually exist.
Aren't we humans supposed to have GI? Maybe you're conflating AGI and ASI.
mr_toad
> Aren't we humans supposed to have GI?
Supposed by humans, who might not be aware of their own limitations.
llelouch
what paper?
ossner
> Once AGI is declared by OpenAI, that declaration will now be verified by an independent expert panel.
What were they really expecting as an alternative? Anyone can "declare AGI", especially since it's an inherently ill-defined (and arguably undefinable) concept. It's strange that this is the first bullet point, as if it were the fruit of intensive deliberation.
I don't fully understand what is going on in this market as a whole (I really doubt anyone does), but I do believe we will look back on this period and wonder what the hell we were thinking, believing and lapping up everything these corporations were putting out.
healsdata
I'm not savvy on investment terms, but most of these bullet points seem like a loss for Microsoft.
What's the value in investing in a smaller company and then giving up things produced off that investment when the company grows?
jasode
> and then giving up things produced off that investment when the company grows?
An investor can be stubborn about retaining all rights previously negotiated and never give them up... but that absolutist position doesn't mean anything if the investment fails.
OpenAI needs many more billions to cover many more years of expected losses. Microsoft itself doesn't want to invest any more money. Additional outside investors don't want to add more billions in funding unless Microsoft was willing to give up a few rights so that OpenAI has a better competitive position against Google Gemini, Anthropic, Grok etc.
When a startup is losing money and desperately needs more capital, a new round of investors can chip away at rights the previous investor(s) had. Why would previous original investors voluntarily agree to give up any rights?!? Because their investment is at risk if the startup doesn't get a lot more money. If the original investor doesn't want to re-invest again and would rather others foot the bill, they sometimes have to be a little flexible on their rights for that to happen.
mrweasel
If Microsoft doesn't believe that OpenAI will achieve AGI by 2030, or thinks there's a chance that OpenAI won't be the premier AI company in four years, the deal looks less like a loss and more like buying their way out of a risky bet. On the other hand, if OpenAI does well, then Microsoft has a 27% stake in the company, and that's not nothing.
This looks more like Microsoft ensuring that they'll win regardless of how OpenAI fares in the next four to six years.
onion2k
> I'm not savvy on investment terms, but most of these bullet points seem like a loss for Microsoft.
Having a customer locked in to buying $250bn of Azure services is a fairly big benefit.
ml-anon
Or a massive opportunity cost. I'd imagine $250B of OAI business is way lower margin than $250B from other random companies that don't need H200s.
creddit
MSFT had a right to compute exclusivity.
"Microsoft will no longer have a right of first refusal to be OpenAI’s compute provider."
Seems like a loss to me!
davey48016
I assume that first refusal required price matching. If the $250B is at a higher price than whatever AWS, GCP, etc. were willing to offer, then it could be a win for Microsoft to get $250B in decent margin business over a larger amount of break even business.
yreg
The risk stays somewhat similar. If OpenAI collapses, it won't spend that $250B.
drexlspivey
Yeah, poor Microsoft; they invested $1B in 2019 and it's now worth $135B.
ForHackernews
Not worth anything until they sell it. There were a lot of excited FTX holders, too.
yas_hmaheshwari
I was thinking exactly the same. Maybe someone who understands these terms and deals better can shed light on why Microsoft would agree to this.
justinbaker84
I was thinking the same thing.
soared
Exponential growth
gostsamo
If there is need of more capital, you either keep your share without the capital injection and the share goes to zero, or you let in more investors and dilute your share, but its overall value increases. Or you can let in more people and sign an agreement that part of the new money will be paid back to you in the form of services that you provide.
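For intuition, a toy sketch of that dilution trade-off; all figures here are hypothetical, not the actual Microsoft/OpenAI cap table:

    // C# top-level program: toy dilution arithmetic (illustrative only).
    double preMoney = 400e9;   // company valued at $400B before the round
    double stake    = 0.30;    // existing investor holds 30% (worth $120B)

    double newMoney  = 100e9;  // new investors inject $100B
    double postMoney = preMoney + newMoney;         // $500B post-money

    double diluted = stake * preMoney / postMoney;  // 30% -> 24%
    double value   = diluted * postMoney;           // still $120B

    Console.WriteLine($"stake {diluted:P1}, worth ${value / 1e9:F0}B");

The mechanical dilution itself is value-neutral; the diluted stake only gains if the injected capital (runway, compute) lifts the valuation in later rounds, which is the bet being made.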
nalinidash
Shows how much they value "AGI" relative to how we valued it in the textbooks. https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
Amekedl
Regarding LLMs, we're in a race to the bottom. Chinese models perform similarly with much higher efficiency; refer to kimi-k2 and plenty of others. ClopenAI is extremely overvalued, and AGI is not around the corner, because even trained on 20T+ tokens these models still generate zero novel output. Try asking for ASP.NET Core's .MapOpenApi() instead of the pre-.NET 9 Swashbuckle version. You get nothing; it's not in the training data. The assumption that these will be able to innovate, which could explain the valuation, is unfounded.
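For context, a minimal sketch of the newer built-in wiring being referred to, assuming .NET 9's Microsoft.AspNetCore.OpenApi package (the older approach pulled in Swashbuckle's AddSwaggerGen/UseSwaggerUI):

    // Program.cs (.NET 9 minimal API): built-in OpenAPI document
    // generation, no Swashbuckle package needed.
    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddOpenApi();   // register OpenAPI document services

    var app = builder.Build();
    app.MapOpenApi();                // serves /openapi/v1.json by default

    app.MapGet("/hello", () => "Hello, world!");
    app.Run();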
energy123
They perform similarly on benchmarks, which can be fudged to arbitrarily high numbers by just including the Q&A into the training data at a certain frequency or post-training on it. I have not been impressed with any of the DeepSeek models in real-world use.
deaux
General data: hundreds of billions of tokens per week are running through DeepSeek, Qwen, and GLM models from users going through OpenRouter alone. People aren't doing that for laughs or "non-real-world use"; that's all for work and/or prod. If you look at the market-share graph, at the start of the year the big three (OpenAI/Anthropic/Google) had 72% market share there. Now it's 45%. And this isn't just because of Grok; before that got big they'd already slowly fallen to 58%.
Anecdata: our product is using a number of these models in production.
energy123
Because it's significantly cheaper. It's on the frontier at the price it's being offered at, but they're not competitive in the high-intelligence, high-cost quadrant.
atbvu
Every time they bring up AGI, it feels more like a business strategy to me. It helps them attract investors and dominate the public narrative. For OpenAI, AGI is both a vision and a moat.
butler533
Why do none of OpenAI's announcements have an author attributed to them? Are people so ashamed of working there that they don't even want to attach their name to the work? I guess I would be, too.
notatoad
because they're corporate PR statements drafted by a team, and corporate press releases don't normally have an author byline
butler533
Wrong
Lol, even Apple has authors listed https://www.apple.com/newsroom/
testfrequency
Eh, not really. There’s usually a “voice” behind it.
In general I feel like OAI is clown town to work at these days, so they probably don’t want anyone except leadership to take the heat for ~anything
> Once AGI is declared by OpenAI, that declaration will now be verified by an independent expert panel.
> Microsoft’s IP rights for both models and products are extended through 2032 and now includes models post-AGI, with appropriate safety guardrails.
Does anyone really think we are close to AGI? I mean honestly?