Evolving OpenAI's Structure

357 comments · May 5, 2025

atlasunshrugged

I think this is one of the most interesting lines, as it directly implies that leadership thinks this won't be a winner-take-all market:

> Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.

phreeza

That is a very obvious thing for them to say regardless of what they truly believe, because (a) it legitimizes removing the cap, making fundraising easier, and (b) it averts antitrust suspicion.

istjohn

I'm not surprised that they found a reason to uncap their profits, but I wouldn't try to infer too much from the justification they cooked up.

lanthissa

AGI can't really be a winner-take-all market. The 'reward' for general intelligence is infinite as a monopoly, and it accelerates productivity.

Not only is there infinite incentive to compete, but there are decreasing costs to doing so. The only world in which AGI is winner-take-all is a world in which it is so tightly controlled that the public can't query it.

Night_Thastus

Nothing OpenAI is doing, or ever has done, has been close to AGI.

dr_dshiv

https://www.noemamag.com/artificial-general-intelligence-is-...

Here is a mainstream opinion about why AGI is already here, written by one of the authors of the most widely read AI textbook, Artificial Intelligence: A Modern Approach: https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Mod...

pinkmuffinere

I agree with you, but that's kind of beside the point. OpenAI's thesis is that they will work toward AGI and eventually succeed. Even in the context of that premise, OpenAI still doesn't believe AGI would be winner-takes-all. I think that's an interesting discussion whether you believe the premise or not.

abtinf

Agreed and, if anything, you are too generous. They aren’t just not “close”, they aren’t even working in the same category as anything that might be construed as independently intelligent.

AndrewKemendo

I agree with you

I wonder, do you have a hypothesis as to what measurement would differentiate AGI from not-AGI?

voidspark

Their multimodal models are a rudimentary form of AGI.

EDIT: There can be levels of AGI. Google DeepMind have proposed a framework that would classify ChatGPT as "Emerging AGI".

https://arxiv.org/abs/2311.02462

JumpCrisscross

> AGI can't really be a winner-take-all market. The 'reward' for general intelligence is infinite as a monopoly, and it accelerates productivity

The first-mover advantages of an AGI that can improve itself are theoretically insurmountable.

But OpenAI doesn't have a path to AGI any more than anyone else. (It's increasingly clear LLMs alone don't make the cut.) And the market for LLMs, non-general AI, is very much not winner takes all. In this announcement, OpenAI is basically acknowledging that it's not getting to self-improving AGI.

TeMPOraL

AGI could be a winner-take-all market... for the AGI, specifically for the first one that's General and Intelligent enough to ensure its own survival and prevent competing AGI efforts from succeeding...

sz4kerto

Or they consider themselves to have a low(er) chance of winning. They could think either, but they obviously can't say the latter.

bhouston

OpenAI is winning in a similar way that Apple is winning in smartphones.

OpenAI is capturing most of the value in the space (generic LLM models), even though they have competitors who are beating them on price or capabilities.

I think OpenAI may be able to maintain this position at least for the medium term because of their name recognition/prominence and they are still a fast mover.

I also think the US is going to ban all non-US LLM providers from the US market soon for "security reasons."

jjani

IE once captured all of the value in browserland, with even higher mindshare and market dominance than OpenAI has ever had. Comparing with Apple (= physical products) is Apples to oranges (heh).

Their relationship with MS breaking down is a bad omen. I'm already seeing non-tech users who use "Copilot" because their spouse uses it at work, barely knowing it's rebadged GPT. You think they'll switch when MS replaces the backend with e.g. Anthropic? No chance.

MS, Google, Apple and Meta have gigantic levers to pull to get the whole world to abandon OpenAI. They've barely been pulling them, but it's a matter of time. People didn't use Siri and Bixby because they were crap. Once everyone's Android has a Gemini button that's just as good as GPT (it's already better for anything besides image generation), people are going to start pressing it. And good luck to OpenAI fighting that.

screamingninja

> ban all non-US LLM providers

What do you consider an "LLM provider"? Is it a website where you interact with a language model by uploading text or images? That definition might become too broad too quickly. Hard to ban.

retrorangular

> I also think the US is going to ban all non-US LLM providers from the US market soon for "security reasons."

Well, Trump is interested in tariffing movies and South Korea took DeepSeek off mobile app stores, so they certainly may try. But for high-end tasks, DeepSeek R1 671B is available for download, so any company with a VPN to download it and the necessary GPUs or cloud credits can run it. And for consumers, DeepSeek R1's distilled models are available for download, so anyone with a (roughly four-year-old or newer) Mac or gaming PC can run them.
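
To make "can run them" concrete, here's a minimal local-inference sketch using llama-cpp-python, assuming you've already downloaded a GGUF build of one of the R1 distills (the file name below is illustrative, not a real artifact):

  # Minimal sketch; the GGUF file name is a placeholder for whatever you downloaded.
  from llama_cpp import Llama

  llm = Llama(
      model_path="DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf",  # hypothetical local file
      n_ctx=4096,       # context window
      n_gpu_layers=-1,  # offload all layers to GPU/Metal if available
  )

  out = llm.create_chat_completion(
      messages=[{"role": "user", "content": "Why is the sky blue?"}],
      max_tokens=256,
  )
  print(out["choices"][0]["message"]["content"])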

If the only thing keeping these companies' valuations so high is banning the competition, that's not a good sign for their long-term value. If you have to ban the competition, you can't be feeling good about what you're making.

For what it's worth, I think GPT o3 and o1, Gemini 2.5 Pro and Claude 3.7 Sonnet are good enough to compete. DeepSeek R1 is often the best option (due to cost) for tasks that it can handle, but there are times where one of the other models can achieve a task that it can't.

But if the US is looking to ban Chinese models, then that could suggest that maybe these models aren't good enough to raise the funding required for newer, significantly better (and more expensive) models. That, or they just want to stop as much money as possible from going to China. Banning the competition actually makes the problem worse though, as now these domestic companies have fewer competitors. But I somewhat doubt there's any coherent strategy as to what they ban, tariff, etc.

pphysch

Switching between Apple and Google/Android ecosystems is expensive and painful.

Switching from ChatGPT to the many competitors is neither expensive nor painful.

wincy

Companies that are contractors with the US government already aren't allowed to use DeepSeek, even if it's an airgapped R1 model running on our own hardware. Legal told us we can't run any distills of it or anything. I think this is very dumb.

ignoramous

> I think this is one of the most interesting lines, as it directly implies that leadership thinks this won't be a winner-take-all market:

Yeah; and:

  We want to open source very capable models. 
Seems like the near-total lack of daylight between DeepSeek R1, Sonnet 3.5, Gemini 2.5, & Grok 3 really put things in perspective for them!

kvetching

Not to mention, @Gork, aka Grok 3.5...

dingnuts

To me it sounds like an admission that AGI is bullshit! AGI would be so disruptive to the current economic regime that "winner takes all" barely covers it, I think. Admitting they will be in normal competition with other AI companies implies specializations and niches to compete in, which means Artificial Specialized Intelligence, NOT general intelligence!

And that makes complete sense if you have more than a lay person's understanding of the tech. Language models were never going to bring about "AGI."

This is another nail in the coffin

the_duke

AGI is a matter of when, not if.

It will likely require research breakthroughs, significant hardware advancement, and anything from a few years to a few decades. But it's coming.

ChatGPT was released 2.5 years ago, and look at all the crazy progress that has been made in that time. That doesn't mean that the progress has to continue, we'll probably see a stall.

But AIs that are on a level with humans for many common tasks are not that far off.

runako

Either that, or this AI boom mirrors prior booms. Those booms saw a lot of progress made, a lot of money raised, then collapsed and led to enough financial loss that AI went into hibernation for 10+ years.

There's a lot of literature on this, and if you've been in the industry for any amount of time since the 1950s, you have seen at least one AI winter.

JumpCrisscross

> AGI is a matter of when, not if

We have zero evidence for this. (Folks said the same shit in the 80s.)

manquer

Progress is not just a function of technical possibility (even if it exists); it is also a function of economics.

It has taken tens to hundreds of billions of dollars, without equivalent economic justification (yet), to get here. I am not saying economic justification doesn't exist or won't come in the future, just that the upfront investment and risk is already on the order of what the largest tech companies can expend.

If the next generation requires hundreds of billions or trillions [2] upfront and a very long time to make returns, no one company (or even country) could allocate resources on that scale.

There are many cases of such economically limited innovations [1]; nuclear fusion is the classic "always 20 years away" example. Another close one is anything space-related: we could not replicate in the next 5 years what we already achieved 50 years ago, say, landing on the moon.

From a purely economic perspective it is definitely an "if", without even going into the technology challenges.

[1] Innovations in the cost of key components can reshape the economic equation; it does happen (as with SpaceX), but it is also not guaranteed (as with fusion).

[2] The next gen may not be close enough to AGI. AGI could require 2-3 more generations (and equivalent orders of magnitude more resources), which is something the world is unlikely to expend resources on even if it had them.

bdangubic

> AGI is a matter of when, not if

Probably true, but the statement would also be true if "when" is 2308, which would defeat its purpose. When the first cars started rolling around, some mates around the campfire were saying "not if but when" we'd have flying cars everywhere, and 100 years later (with amazing progress in car manufacturing) we are nowhere near. I think "when, not if" is one of those statements that, while probably indisputable in theory, is easily disputable in practice. Give me a "when" here and I'll put up $1,000 to a charity of your choice if you are right, if you agree to do the same if you are wrong.

foobiekr

I think this is right but also missing a useful perspective.

Most HN people are probably too young to remember that the nanotech post-scarcity singularity was right around the corner - just some research and engineering away - which was the widespread opinion in 1986 (yes, 1986). It was _just as dramatic_ as today's AGI.

That took 4-5 years to fall apart, and maybe a bit longer for the broader "nanotech is going to change everything" to fade. Did nanotech disappear? No, but the notion of general purpose universal constructors absolutely is dead. Will we have them someday? Maybe, if humanity survives a hundred more years or more, but it's not happening any time soon.

There are a ton of similarities between the nanotech singularity and the modern LLM-AGI situation. People point(ed) to "all the stuff happening": surely the singularity is on the horizon! Similarly, there was the apocalyptic scenario that got a ton of attention, with people latching onto "nanotech safety" - instead of runaway AI or paperclip engines, it was Grey Goo (also coined in 1986).

The dynamics of the situation, the prognostications, and aggressive (delusional) timelines, etc. are all almost identical in a 1:1 way with the nanotech era.

I think we will have both AGI and general purpose universal constructors, but they are both no less than 50 years away, and probably more.

So many of the themes are identical that I'm wondering if it's a recurring kind of mass hysteria. Before nanotech, we were on the verge of genetic engineering (not _quite_ the same level of hype, but close, and pretty much the same failure to deliver on the hype as nanotech) and before that the crazy atomic age of nuclear everything.

Yes, yes, I know that this time is different and that AI is different and it won't be another round of "oops, this turned out to be very hard to make progress on and we're going to be in a very slow, multi-decade slow-improvement regime." But that has been the outcome of every example of this that I can think of.

blibble

> AGI is a matter of when, not if.

LLMs destroying any sort of capacity (and incentive) for the population to think pushes this further and further out each day

Kabukks

Could you elaborate on the progress that has been made? To me, it seems only small/incremental changes are made between models with all of them still hallucinating. I can see no clear steps towards AGI.

m_krebs

"X increased exponentially in the past, therefore it will increase exponentially in the same way in the future" is fallacious. There is nothing guaranteeing indefinite uncapped growth in capabilities of LLMs. An exponential curve and a sigmoidal curve look the same until a certain point.

lenerdenator

That, or they don't care if they get to AGI first, and just want their payday now.

Which sounds pretty in-line with the SV culture of putting profit above all else.

foobiekr

If they think AGI is imminent, the value of that payday is very limited. I think the grandparent is more correct: OpenAI is admitting that near-term AGI (the only kind anyone really cares about being the kind with exponential self-improvement) isn't happening any time soon. But that much is obvious anyway, despite the hyperbolic nonsense now common in AI discussions.

jjani

SamA is in a hurry because he's set to lose the race. We're at peak valuation and he needs to convert something now.

If the entrenched giants (Google, Microsoft and Apple) catch up - and Google 100% has, if not surpassed them - they have a thousand levers to pull and OpenAI is done for. Microsoft has realized this, hence they're breaking up with OpenAI - Google and Anthropic have shown they don't need them. Galaxy phones will get a Gemini button, Chrome will get it built into the browser. MS can either develop their own thing, use open-source models, or just ask every frontier model provider (and there are already 3-4 as we speak) how cheaply they're willing to deliver. Then chuck it right into the OS and Office, first-class. Which half the white-collar world spends their entire day staring at. Apple devices too will get an AI button (or gesture, given it's Apple), and just like MS they'll do it in-house or have the providers bid against each other.

The only way OpenAI's David was ever going to beat the Goliaths GMA in the long run was if it were near-impossible to catch up to them, à la TSMC/ASML. But they did catch up.

caseyy

It's doubtful if there even is a race anymore. The last significant AI advancement in the consumer LLM space was fluent human language synthesis around 2020, with its following assistant/chat interface. Since then, everything has been incremental — larger models, new ways to prompt them, cheaper ways to run them, more human feedback, and gaming evaluations.

The wisest move in the chatbot business might be to wait and see if anyone discovers anything profitable before spending more effort and wasting more money on chat R&D, which includes most agentic stuff. Reliable assistants or something along those lines might be the next big breakthrough (if you ask certain futurologists), but the technology we have seems unsuitable for any provable reliability.

ML can be applied in a thousand ways other than LLMs, and many will positively impact our lives and create their own markets. But OpenAI is not in that business. I think the writing is on the wall, and Sama's vocal fry, "AGI is close," and humanity verification crypto coins are smoke and mirrors.

roflmaostc

Just to get things right. The big AI LLM hype started end of 2022 with the launch of ChatGPT, DALL-E 2, ....

Most people in society connect AI directly to ChatGPT and hence OpenAI. And there has been a lot of progress in image generation, video generation, ...

So I think your timeline and views are slightly off.

caseyy

> Just to get things right. The big AI LLM hype started end of 2022 with the launch of ChatGPT, DALL-E 2, ....

GPT-2 was released in 2019, GPT-3 in 2020. I'd say 2020 is significant because that's when people seriously considered the Turing test passed reliably for the first time. But for the sake of this argument, it hardly matters what date years back we choose. There's been enough time since then to see the plateau.

> Most people in society connect AI directly to ChatGPT and hence OpenAI.

I'd double-check that assumption. Many people I've spoken to take a moment to remember that "AI" stands for artificial intelligence. Outside of tongue-in-cheek jokes, OpenAI has about 50% market share in LLMs, but you can't forget that Samsung makes AI washing machines, let alone all the purely fraudulent uses of the "AI" label.

> And there has been a lot of progress in image generation, video generation, ...

These are entirely different architectures from LLM/chat though. But you're right that OpenAI does that, too. When I said that they don't stray much from chat, I was thinking more about AlexNet and the broad applications of ML in general. But you're right, OpenAI also did/does diffusion, GANs, transformer vision.

This doesn't change my views much on chat being "not seeing the forest for the trees" though. In the big picture, I think there aren't many hockey sticks/exponentials left in LLMs to discover. That is not true about other AI/ML.

orionsbelt

Saying LLMs have only incrementally improved is like saying my 13-year-old has only incrementally improved over the last 5 years. Sure, it's been a set of continuous improvements, but that has taken it from a toy to genuinely, insanely useful.

Personally, deep research and o3 have been transformative, taking LLMs from something I have never used to something that I am using daily.

Even if the progress ends up plateauing (which I do not believe will happen in the near term), behaviors are changing; OpenAI is capturing users and taking them from companies like Google. Google may be able to fight back and win - Gemini 2.5 Pro is great - but any company sitting this out risks being unable to capture users back from OpenAI at a later date.

bigstrat2003

No, it's still just a toy. Until they can make the models actually consistently good at things, they aren't going to be useful. Right now they still BS you far too much to trust them, and because you have to double-check their work every time, they are worse than no tool at all.

csours

To extend your illustration: 5 years ago no one could train an LLM with the capabilities of a 13-year-old human; now many companies can both train LLMs and integrate them into products.

> taken it from a toy to genuinely insanely useful.

Really?

paulddraper

You're saying, with a straight face, that post-2020 LLM AIs have made only incremental progress?

caseyy

Yep, compared to beating the Turing test, the progress has been linear with exponentially growing investment. That's diminishing marginal returns.

ReptileMan

Yes. But they have also improved a lot. Incremental just means that the function goes up without discontinuities. We haven't seen anything revolutionary in the last 3 years, just evolutionary. But the models do provide 2 or 3 times more value, so their pace of advancement is not slow.

grey-area

Well I think you’re correct that they know the jig is up, but I would say they know the AI bubble is about to burst so they want to cash out before that happens.

There is little to no money to be made in GAI, it will never turn into AGI, and people like Altman know this, so now they’re looking for a greater fool before it is too late.

atleastoptimal

AI companies are already automating huge swaths of document analysis, customer service. Doctors are straight up using ChatGPT to diagnose patients. I know it's fun to imagine AI is some big scam like crypto, but you'd have to be ignoring a lot of genuine, non-hype economic movement at this point to assume GAI isn't making any money.

Why is the forum of an incubator whose portfolio is now something like 80% AI so routinely bearish on AI? Is it a fear of irrelevance?

JumpCrisscross

> AI companies are already automating huge swaths of document analysis, customer service. Doctors are straight up using ChatGPT to diagnose patients

I don't think there is serious argument that LLMs won't generate tremendous value. The question is who will capture it. PCs generated massive value. But other than a handful of manufacturers and designers (namely, Apple, HP, Lenovo, Dell and ASUS), most PC builders went bankrupt. And out of the value generated by PCs in the world, the vast majority was captured by other businesses and consumers.

directevolve

Doctors were using Google to diagnose patients before. The thing is, it's still the doctor delivering the diagnosis, the doctor writing the prescription, and the doctor billing insurance. Unless and until patients or hospitals are willing and legally able to use ChatGPT as a replacement for a doctor (unwise), ChatGPT is not about to eat any doctor's lunch.

gscott

When the Wright brothers made their plane, they didn't expect that today there would be thousands of planes flying at a time.

When the Internet was developed, they didn't imagine the World Wide Web.

When cars started to get popular, people still thought there would be those who would stick with horses.

I think you're right on the AI: we're just on the cusp of it, and it'll be a hundred times bigger than we can imagine.

Back when oil was discovered and started to be used, it was about equal to 500 laborers, now automated. One AI computer with some video cards is now worth x number of knowledge workers, who never stop working as long as the electricity keeps flowing.

davidcbc

> Doctors are straight up using ChatGPT to diagnose patients

This makes me want to invest in malpractice lawyers, not OpenAI

paulddraper

Yes. The answer is yes.

The world is changing and that is scary.

Jefro118

They made $4 billion last year, not really "little to no money". I agree it's not clear they can justify their valuation but it's certainly not a bubble.

mandevil

But didn't they spend $9 billion? If I have a machine that magically turns $9 billion of investor money into $4 billion in revenue, I need to have a pretty awesome story for how in the future I am going to be making enormous piles of money to pay back that investment. If it looks like frontier models are going to be a commodity and it is not going to be winner-take-all... that's a lot harder story to tell.

SirensOfTitan

I guarantee you that I could surpass that revenue if I started a business that would give people back $9 if they gave me $4.

OpenAI's models are already among the most expensive; they don't have a lot of levers to pull.

nativeit

Cognitive dissonance is a psychological phenomenon that occurs when a person holds two contradictory beliefs at the same time.

crorella

But he said he was doing it just for love!! [1]

1: https://www.techpolicy.press/transcript-senate-judiciary-sub...

tedivm

Even Alibaba is releasing some amazing models these days. Qwen 3 is pretty remarkable, especially considering the variety of hardware the variants of it can run on.

pi-err

Sounds a lot like "Google+ will catch Facebook in no time".

OpenAI has been on a winning streak that makes ChatGPT the default chatbot for most of the planet.

Everybody else, as you describe, is trying to add some AI crap behind a button on a congested UI.

The B2B market will stay open, but OpenAI has certainly not peaked yet.

no_wizard

Facebook had immense network effects working for it back then.

What network effect does OpenAI have? Far as I can tell, moving from OpenAI to Gemini or something else is easy. It’s not sticky at all. There’s no “my friends are primarily using OpenAI so I am too” or anything like that.

So again, I ask, what makes it sticky?

miki123211

OpenAI (or, more specifically, ChatGPT) is Coca-Cola, not Facebook.

They have the brand recognition and consumer goodwill no other brand in AI has, incredibly so with school students, who will soon go into the professional world and bring that goodwill with them.

I think better models are enough to dethrone OpenAI in API, B2C and internal enterprise use cases, but OpenAI has consumer mindshare, and they're going to be the king of chatbots forever. Unless somebody else figures out something which is better by orders of magnitude and that Open AI can't copy quickly, it's going to stay that way.

Apple had the opportunity to do something really great here. With Siri's deep device integration on one hand and Apple's willingness to force 3rd-party devs to do the right thing for users on the other, they could have had a compelling product that nobody else could copy, but it seems like they're not willing to go that route, mostly for privacy, antitrust and internal competency reasons, in that order. Google is on the right track and might get something similar (although not as polished as typical Apple) done, but Android's mindshare among tech-savvy consumers isn't great enough for it to get traction.

cshimmin

Yep, I mostly interact with these AIs through Cursor. When I want to ask it a question, there's a little dropdown box and I can select openai/anthropic/deepseek whatever model. It's as easy as that to switch.
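
The API side is the same story: several vendors expose OpenAI-compatible endpoints, so a "switch" is roughly one config entry. A rough sketch (base URLs and model names are assumptions to verify against each provider's docs):

  # Rough sketch: provider switching via OpenAI-compatible endpoints.
  from openai import OpenAI

  PROVIDERS = {
      "openai":   {"base_url": None, "model": "gpt-4o"},  # None = default endpoint
      "deepseek": {"base_url": "https://api.deepseek.com", "model": "deepseek-chat"},
  }

  def ask(provider: str, prompt: str, api_key: str) -> str:
      cfg = PROVIDERS[provider]
      client = OpenAI(api_key=api_key, base_url=cfg["base_url"])
      resp = client.chat.completions.create(
          model=cfg["model"],
          messages=[{"role": "user", "content": prompt}],
      )
      return resp.choices[0].message.content

  # Switching vendors is one string: ask("deepseek", ...) vs. ask("openai", ...).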

rileyphone

From talking to people, the average user relies on memories and chat history, which are not easy to migrate. I imagine that's part of the strategy to keep people from hopping model providers.

jwarden

Brand counts for a lot

JumpCrisscross

> OpenAI has been on a winning streak that makes ChatGPT the default chatbot for most of the planet

OpenAI has like 10 to 20% market share [1][2]. They're also an American company whose CEO got on stage with an increasingly hated world leader. There is no universe in which they keep equal access to the world's largest economies.

[1] https://iot-analytics.com/leading-generative-ai-companies/

[2] https://www.enterpriseappstoday.com/stats/openai-statistics....

NBJack

De facto victory.

Facebook wasn't some startup when Google+ entered the scene; they were already cash flow positive, and had roughly 30% ads market share.

OpenAI is still operating at a loss despite having 50+% of the chatbot "market". There is no easy path to victory for them here.

chrisweekly

IMHO "ChatGPT the default chatbot" is a meaningful but unstable first-mover advantage. The way things are apparently headed, it seems less like Google+ chasing FB, more like Chrome eating IE + NN's lunch.

jameslk

OpenAI is a relatively unknown company outside of the tech bubble. I told my own mom to install Gemini on her phone because she's heard of Google and is more likely going to trust Google with whatever info she dumps into a chat. I can’t think of a reason she would be compelled to use ChatGPT instead.

Consumer brand companies such as Coca Cola and Pepsi spend millions on brand awareness advertising just to be the “default” in everyone’s heads. When there’s not much consequence choosing one option over another, the one you’ve heard of is all that matters

kranke155

Facebook couldn't be overtaken because of network effects. What network effects are there for a chatbot?

If you look at Gemini, I know people using it daily.

ricardobeat

I know a single person who uses ChatGPT daily, and only because their company has an enterprise subscription.

My impression is that Claude is a lot more popular – and it’s the one I use myself, though as someone else said the vast majority of people, even in software engineering, don’t use AI often at all.

jjani

The comparison of Chrome and IE is much more apt, IMO, because as others mentioned, the deciding factor for social media is network effects, or next-gen dopamine algorithms (TikTok). And that's unique to them.

For example, I'd never suggest that e.g. MS could take on TikTok, despite all the levers they can pull, and being worth magnitudes more. No chance.

nfRfqX5n

Ask 10 people on the street about ChatGPT or Gemini and see which one they know.

postalrat

Now switch ChatGPT and Gemini on them and see if they notice.

jjani

Ask 10 people on the street in 2009 about IE and Chrome and ask which one they knew.

The names don't even matter when everything is baked in.

TrackerFF

On the other hand... if you had asked 100 people, 5-7 years ago, which of the following they used:

Slack? Zoom? Teams?

I'm sure you'd get a somewhat uniform distribution.

Ask the same today, and I'd bet most will say Teams. Why Teams? Because it comes with Office/Windows, so that's what most people will use.

The same logic goes for the AI/language models: which ones are people going to use? The ones that come "batteries included" in whatever software or platform they use the most. And for the vast majority of regular people/workers, that is going to be something by Microsoft/Google/whatever.

jmathai

That's the wrong question. See how many people know Google vs. ChatGPT. As popular as ChatGPT is, Google's the stronger brand.

kranke155

That's just brand recognition.

The fact that people know Coca-Cola doesn't mean they drink it.

jimbokun

It doesn’t?

That name recognition made Coca-Cola into a very successful global corporation.

All4All

But whether the competition will emerge as Pepsi or as RC Cola is still TBD.

blueprint

Or that they would drink it if a well-designed, delicious alternative with no HFCS or sugar were marketed with funding.

jampa

The real money is in enterprise use (via APIs), so public perception is not as crucial as it is for a consumer product.

moralestapia

Sorry but perhaps you haven't looked at the actual numbers.

Market share of OpenAI is like 90%+.

drewbeck

I see OpenAI's original form as the last gasp of a kind of liberal tech; in a world where "doing good" was seen as very important, the non-profit approach made sense and got a lot of people on board. These days the Altmans and the pmarcas of the world are much more comfortable expressing their authoritarian, self-centered world views; the "evolving" structure of OpenAI is fully in line with that. They want to be the kings they always thought of themselves as, and now they get to do so without couching it in "doing good".

stego-tech

That world never existed. Yes, pockets did - IT professionals with broadband lines and spare kit hosting IRC servers and phpBB forums from their homes free of charge, a few VC-funded companies offering idealistic visions of the net until funding ran dry (RIP CoHost) - but once the web became privatized, it was all in service of companies' bottom lines. Web 2.0 onwards was all about centralization, surveillance, advertising, and manipulation of the populace at scale - and that intent was never really a secret to those who bothered to pay attention. While the world was reeling from Cambridge Analytica, us pre-1.0 farts who cut our teeth on Telnet and Mosaic were just kind of flabbergasted that y'all were surprised by overtly obvious intentions.

That doesn't mean it has to always be this way, though. Back when I had more trust in the present government and USPS, I mused on how much of a game changer it might be for the USPS to provide free hosting and e-mail to citizens, repurposing the glut of unused real estate into smaller edge compute providers. Everyone gets a web server and 5GB of storage, with 1A Protections letting them say and host whatever they like from their little Post Office Box. Everyone has an e-mail address tied to their real identity, with encryption and security for digital mail just like the law provides for physical mail. I still think the answer is about enabling more people to engage with the internet on their selective terms (including the option of disengagement), rather than the present psychological manipulation everyone engages in to keep us glued to our screens, tethered to our phones, and constantly uploading new data to advertisers and surveillance firms alike.

But the nostalgic view that the internet used to be different is just that: rose-tinted memories of a past that never really existed. The first step to fixing this mess is acknowledging its harm.

dgreensp

I don’t think the parent was saying that everyone’s intentions were pure until recently, but rather that naked greed wasn’t cool before, but now it is.

The Internet has changed a lot over the decades, and it did used to be different, with the differences depending on how many years you go back.

jon_richards

As recently as the Silicon Valley TV show, the joke was that every startup pitch claimed it was "making the world a better place".

JumpCrisscross

> That world never existed

It absolutely did. Steve Wozniak was real. Silicon Valley wasn't always a hive of liars and sycophants.

jimbokun

They deeply believe in the Ayn Rand mindset that the system that brings them the most individual wealth is also the best system for humanity as a whole.

ballooney

Hopelessly over-idealistic premise. Sama and pg have never been anything other than opportunistic muck. This will be my last ever comment on HN.

drewbeck

Oh, I'm not saying they ever believed more than their self-centered views, but that in a world that leaned more liberal there was value in trying to frame their work in those terms. Now there's no need to pretend.

byearthithatius

I feel this so hard, I think this may be my last time using the site as well. They don't care about advancement, they only care about money.

stego-tech

Like everything, it's projection. Those who loudly scream against something are almost always the ones engaging in it.

Google screamed against service revenue and advertising while building the world's largest advertising empire. Facebook screamed against misinformation and surveillance while enabling it on a global scale. Netflix screamed against the overpriced cable TV industry while turning streaming into modern overpriced cable television. Uber screamed against the entrenched taxi industry harming workers and passengers while creating an unregulated monster that harmed workers and passengers.

Altman and OpenAI are no different in this regard, loudly screaming against AI harming humanity while doing everything in their capacity to create AI tools that will knowingly harm humanity while enriching themselves.

If people trust the performance instead of the actions and their outcomes, then we can't convince them otherwise.

HaZeust

inb4 deleted

ignoramous

> They want to be the kings they always thought of themselves as, and now they get to do so without couching it in "doing good".

You mean, AGI will benefit all of humanity like the War on Terror spread democracy?

nickff

Why are you changing the subject? The “War on Terror” was never intended to spread democracy as far as I know; democracy was a means by which to achieve the objective of safety from terrorism.

sneak

Is it reasonable to assign the descriptor “authoritarian” to anyone who simply does not subscribe to the common orthodoxy of one faction in the american culture war? That is what it seems to me is happening here, though I would love to be wrong.

I have not seen anything from sama or pmarca that I would classify as “authoritarian”.

tastyface

Donating millions to a fascist president (in Altman’s case) seems pretty authoritarian to me. And he seems happy enough hanging out with Thiel and other Yarvin groupies.

sidibe

Yup, if Elon hadn't gotten so jealous and spiteful toward him, I'm sure he'd be one of Elon's leading sycophants.

sanderjd

No, "authoritarian" is a word with a specific meaning. I'm not sure about applying it to Sam Altman, but Marc Andreessen has expressed views that I consider authoritarian in his victory lap tour since last year's presidential election.

bee_rider

I’m not sure exactly what they meant by “liberal” in this case, but since they put it in contrast with authoritarianism, I assume they meant it in the conventional definition of the word (where it is the polar opposite of authoritarianism). Instead of the American politics-as-sports definition that makes it a synonym for “team blue.”

drewbeck

correct. "liberal" as in the general ideas that ie expanding the franchise is important, press freedoms are good, that government can do good things for people and for capital etc. Wikipedia's intro paragraph does a good job of describing what I was getting at (below). In prior decades Republicans in the US would have been categorized as "liberal" under this definition; in recent years, not so much.

>Liberalism is a political and moral philosophy based on the rights of the individual, liberty, consent of the governed, political equality, the right to private property, and equality before the law. Liberals espouse various and often mutually conflicting views depending on their understanding of these principles but generally support private property, market economies, individual rights (including civil rights and human rights), liberal democracy, secularism, rule of law, economic and political freedom, freedom of speech, freedom of the press, freedom of assembly, and freedom of religion. Liberalism is frequently cited as the dominant ideology of modern history.

drewbeck

No I don't think it is. I DO think those two people want to be in charge (along with other billionaires) and they want the rest of us to follow along, which is in my book an authoritarian POV. pmarca's recent "VC is the only job that can't be done by AI" is a good example of that; the rest of us are to be managed and controlled by VCs and robots.

blibble

Are you aware of Worldcoin?

Altman building a centralised authority on who will be classed as "human" is about as authoritarian as you could get.

sneak

Worldcoin is opt-in, which is the opposite of authoritarian. Nobody who doesn’t like it is required to participate.

pants2

It's somewhat odd to me that many companies operating in the public eye are basically stating "We are creating a digital god, an instrument more powerful than any nuclear weapon" and raising billions to do it, and nobody bats an eye...

esafak

Lots of people in academia and industry are calling for more oversight. It's the US government that's behind. Europe's AI Act bans applications with unacceptable risk: https://en.wikipedia.org/wiki/Artificial_Intelligence_Act

lenerdenator

The US government probably doesn't think it's behind.

Right now it's operated by a bunch of people who think that you can directly relate the amount of money a venture could make in the next 90 days to its net benefit for society. Government telling them how they can and cannot make that money, in their minds, is government telling them that they cannot bring maximum benefit to society.

Now, is this mindset myopic to everything that most people have in their lived experience? Is it ethically bankrupt and held by people who'd sell their own mothers for a penny if they otherwise couldn't get that penny? Would those people be banished to a place beyond human contact for the rest of their existence by functioning organs of an even somewhat-sane society?

I don't know. I'm just asking questions.

azinman2

Unless China handicaps their progress as well (which they won't; see Made in China 2025), all you're doing is handing the future to DeepSeek et al.

esafak

What kind of a future is that? If China marches towards a dystopia, why should Europe dutifully follow?

We can selectively ban uses without banning the technology wholesale; e.g., nuclear power generation is permitted, while nuclear weapons are strictly controlled.

nicce

This thought process is no different than it was with nuclear weapons.

The primary difference is the observability - with satellites we had some confidence that other nations respected treaties, or that they had enough reaction time for mutual destruction, but with this AI development we lack all that.

jimbokun

The US government is behind because the Biden admin was pushing strongly for controls and regulations and told Andreessen and friends exactly that, who then went and did everything in their power to elect Trump, who then put those same tech bros in charge of making his AI policy.

saubeidl

The EU does, and it has passed the AI Act to rein in the worst consequences of this nuclear weapon. It has not been received well around here.

The "digital god" angle might explain why. For many, this has become a religious movement, a savior for an otherwise doomed economic system.

rchaud

Absolutely. It's frankly quite shocking to see how otherwise atheist or agnostic people have so quickly begun worshipping at the altar of "inevitable AGI apocalypse", much in the same way as how extremist Christians await the rapture.

Xenoamorphous

I guess they think that the “digital god” has a chance to become real (and soon, even), unlike the non-digital one?

lenerdenator

Roko's Basilisk is basically Pascal's wager with GPUs.

modeless

I don't know what sources you're reading. There's so much eye-batting I'm surprised people can see at all.

jimbokun

Most of us are batting our eyelashes as rapidly as possible but have no idea how to stop it.

atleastoptimal

Because many people fundamentally don’t believe AGI is possible at a basic level, even AI researchers. Humans tend to only understand what materially affects their existence.

otabdeveloper4

Well, because it's obviously bullshit and everyone knows it. Just play the game and get rich like everyone else.

esafak

Are you sure about that? AI-powered robotic soldiers are around the corner. What could go wrong...

devinprater

Ooo I know, Cybermen! Yay.

martinohansen

Imagine having a mission of "ensur[ing] that artificial general intelligence (AGI) benefits all of humanity" while also believing that it can only be trusted in the hands of the few.

> A lot of people around OpenAI in the early days thought AI should only be in the hands of a few trusted people who could “handle it”.

jb_rad

He's very clearly stating that trusting AI to a few hands was an old, naive idea that they have evolved from. Which establishes their need to keep evolving as the technology matures.

There is a lot to criticize about OpenAI and Sama, but this isn't it.

TZubiri

To the benefit of OpenAI. I think LLMs would still exist, but we wouldn't have access to them.

Whether they are a net positive or a net negative is arguable. If it's a net negative, then unleashing them to the masses was maybe the danger itself.

bluelightning2k

Turns out the non-profit structure wasn't very profitable.

modeless

Huh, so Elon's lawsuit worked? The nonprofit will retain control? Or is this just spin on a plan that will eventually still sideline the nonprofit?

blagie

To be specific: The nonprofit currently retains control. It will stop once more dilution sets in.

j_maffe

It sounds more like the attorneys general won.

Tenoke

For better or worse, OpenAI removing the capped structure and turning the nonprofit from AGI considerations to just philanthropy feels like the shedding of the last remnants of sanctity.

lolinder

So the non-profit retains control but we all know that Altman controls the board of the non-profit and I'd be shocked if he won't have significant stock in the new for-profit (from TFA: "we are moving to a normal capital structure where everyone has stock"). Which means that regardless of whether the non-profit has control on paper, OpenAI is now even better structured for Sam Altman's personal enrichment.

No more caps on profit, a simpler structure to sell to investors, and Altman can finally get that 7% equity stake he's been eyeing. Not a bad outcome for him given the constraints apparently imposed on them by "the Attorney General of Delaware and the Attorney General of California".

elAhmo

We have seen how much power the board has after the firing of Altman: none.

Let's see how this plays out. PBC effectively means nothing - just take a look at xAI and its purchase of Twitter. I would love to hear the reasoning explaining how this ~33 billion USD move benefits the public.

paulddraper

The board had plenty of power.

There was never a coherent explanation for its firing of the CEO.

But they could have stuck with that decision if they believed in it.

michaelt

The explanation seemed pretty obvious to me: They set up a nonprofit to deliver an AI that was Open.

Then things went unexpectedly well, people were valuing them at billions of dollars, and they suddenly decided they weren't open any more. Suddenly they were all about Altman's Interests Safety (AI Safety for short).

The board tried to fulfil its obligation to get the nonprofit to do the things in its charter, and they were unsuccessful.

insane_dreamer

The explanation was pretty clear and coherent: The CEO was no longer adhering to the mission of the non-profit (which the board was upholding).

But they found themselves alone in that it turns out the employees (who were employed by the for-profit company) and investors (MSFT in particular) didn't care about the mission and wanted to follow the money instead.

So the board had no choice but to capitulate and leave.

freejazz

The question is not if they could, it is if they would.

ignoramous

> We have seen how much power does the board have after the firing of Altman - none.

Right; so, "Worker Unions" work.

wmf

ChatGPT is free. That's the public benefit.

patmcc

Google offers a great many things for free. Should they get beneficial tax treatment for it?

insane_dreamer

That's like saying AWS is free. ChatGPT has a limited-use free tier, just like most other SaaS products out there.

sekai

They don't collect data?

nativeit

Define “free”.

richardw

Or, alternatively, it’s much harder to fight with one hand behind your back. They need to be able to compete for resources and talent given the market structure, or they fail on the mission.

This is already impossibly hard. Approximately zero people commenting would be able to win this battle in Sam’s shoes. What would they need to do to begin to have a chance? Rather than make all the obvious comments “bad evil man wants to get rich”, think what it would take to achieve the mission. What would you need to do in his shoes, aside from just give up and close up shop? Probably this, at the very least.

Edit: I don’t know the guy and many near YC do. So I accept there may be a lens I don’t have. But I’d rather discuss the problem, not the person.

kadushka

It seems like they lost most of their top talent - probably because of Altman.

k__

The moment we stop treating "bad evil man wants to get rich" as a straw man, we can heal.

thegreatpeter

Extra! Extra! Read all about it! "Bad evil man wants to get rich! We should enrich Google and Microsoft instead!"

whynotminot

Isn’t Sam already very rich? I mean it wouldn’t be the first time a guy wanted to be even richer, but I feel like we need to be more creative when divining his intentions

sigilis

Why would we need to be more creative? The explanation of him wanting more money is perfectly adequate.

Being rich results in a kind of limitation of scope for ambition. To the sufferer, a person who has everything they could want, there is no other objective worth having. They become eccentric and they pursue more money.

We should have enrichment facilities for these people where they play incremental games and don’t ruin the world like the paperclip maximizers they are.

whynotminot

> Why would we need to be more creative? The explanation of him wanting more money is perfectly adequate. Being rich results in a kind of limitation of scope for ambition.

The dude announces new initiatives from the White House, regularly briefs Senators and senior DoD leaders, and is the top get for interviews around the world for AI topics.

There’s a lot more to be ambitious about than just money.

Yizahi

It seems a defining feature of nearly every single extremely rich person is the belief that they are somehow smarter than the filthy peasants, so they decide to "educate" them in the sacred knowledge. This may take vastly different forms: genocide, war, trying to create a better government via bribes, creating a city from scratch, creating a new corporate "culture", publicly proselytizing their "do better" faith, writing books, teaching classes, etc.

St. Altman plans to create a corporate god for us dumb schmucks, and he will be its prophet.

senderista

"It's not about the money, it's about winning"

--Gordon Gekko

paulddraper

OpenAI doesn’t have the lead anymore.

Google and Anthropic are catching up, or have already surpassed them.

6510

How? The internet says 400M weekly ChatGPT users, 19M weekly for Anthropic, 47.3M monthly for Gemini, 6.7M daily for Grok, and 430M for Baidu.

MPSFounder

Never understood his appeal. Lacks charisma. Not technically savvy relative to many engineers at OpenAI (I doubt he would pass their own intern interviews, even less so their full-time ones). Very unlikeable in person (comes off as fake for some reason, like a political plant). Who is vouching for this guy? When I met him, for some reason, he reminded me of Thiel. He is no Jobs.

JumpCrisscross

> he reminded me of Thiel

If someone reminds you of Thiel, you're going to cut a cheque.

gsibble

Altman is a clear sociopath. He's a sales guy and good executive. But he's only out for himself.

mrandish

I agree that this is simply Altman extending his ability to control, shape and benefit from OpenAI. Yes, this is clearly (further) subverting the original intent under which the org was created - and that's unfortunate. But in terms of impact on the world, or even just AI safety, I'm not sure the governance of OpenAI matters all that much anymore. The "governance" wasn't that great after the first couple years and OpenAI hasn't been "open" since long before the board spat.

More crucially, since OpenAI's founding and especially over the past 18 months, it's grown increasingly clear that AI leadership probably won't be dominated by one company, progress of "frontier models" is stalling while costs are spiraling, and 'Foom' AGI scenarios are highly unlikely anytime soon. It looks like this is going to be a much longer, slower slog than some hoped and others feared.

everybodyknows

> transition to a Public Benefit Corporation

Can some business person give us a summary on PBCs vs. alternative registrations?

fheisler

A PBC is just a for-profit company that has _some_ sort of specific mandate to benefit the "public good" - however it chooses to define that. It's generally meant to provide some balance toward societal good over the more common, strictly shareholder profit-maximizing alternative.

(IANAL but run a PBC that uses this charter[1] and have written about it here[2] as part of our biennial reporting process.)

[1] https://github.com/OpenCoreVentures/ocv-public-benefit-compa...

[2] https://goauthentik.io/blog/2024-09-25-our-biennial-pbc-repo...

cs702

The charter of a public-benefit corporation gives the company's board and management a bit of legal cover for making decisions that don't serve to maximize, or may even limit, financial returns to shareholders, when those decisions are made for the benefit of the public.

blagie

Reality: It is the same as any other for-profit with a better-sounding name. It confuses a lot of people into thinking it's a non-profit without being one.

Theory: It allows the CEO to make decisions motivated not just by maximizing shareholder value but by some other social good. Of course, very few PBC CEOs choose to do that.

imkevinxu

You could've just asked ChatGPT this....