Elon Musk wanted an OpenAI for-profit
677 comments
December 13, 2024
ben_w
> Is nobody in these very rich guys' spheres pushing back on their thought process?
Yes, frequently and loudly.
When Altman was collecting the award at Cambridge the other year, protesters dropped in on the after-award public talk/Q&A session, and he actively empathised with the protesters.
> So far we are multiple years in with much investment and little return, and no obvious large-scale product-market fit, much less a superintelligence.
I just got back from an Indian restaurant in the middle of Berlin, and at the table next to me I overheard a daughter talking to her mother about ChatGPT and KI (Künstliche Intelligenz, the German for AI).
The product market fit is fantastic. This isn't the first time I've heard random strangers discussing it in public.
What's not obvious is how to monetise it. The old meme parroted around was that it "has no moat", which IMO is like saying Microsoft has no moat for spreadsheets: sure, anyone can make the core tech, and sure, we don't know who is Microsoft vs StarOffice vs ClarisWorks vs Google Docs, but there's more than zero moat. From what I've seen, if OpenAI didn't develop new products, they'd be making enough to be profitable, but it's a Red Queen race to remain worth paying for.
As for "much less a superintelligence": even the current models meet every definition of "very smart" I had while growing up, despite their errors. As an adult, I'd still call them book-smart if not abstractly smart. Students or recent graduates, but not wise enough to know their limits and be cautious.
For current standards of what intelligence means, we'd better hope we don't get ASI in the next decade or two, because if and when that happens then "humans need not apply" — and by extension, foundational assumptions of economics may just stop holding true.
this_user
> When Altman was collecting the award at Cambridge the other year, protesters dropped in on the after-award public talk/Q&A session, and he actively empathised with the protesters.
He always does that to give himself cover, but he has clearly shown that his words mean very little in this regard; he always dodges criticism. When people questioned the dangers of one person having this much control over something as big as bleeding-edge AI, he used to talk about the importance of being accountable to the OpenAI board, and of the board being able to fire him if necessary. He also used to mention how he had no direct financial interest in the company since he had no equity.
Then the board did fire him. What happened next? He came back, the board is gone, he now openly has complete control over OpenAI, and they have given him a potentially huge equity package. I really don't think Sam Altman is particularly trustworthy. He will say whatever he needs to say to get what he wants.
kulahan
Wasn't he fired for questionable reasons? I thought everyone wanted him back, and that's why he was able to return. It was, as I remember, just the board that wanted him out.
I imagine if he was doing something truly nefarious, opinions might have been different, but I have no idea what kind of cult of personality he has at that company, so I might be wrong here.
yard2010
This guy is outright scary. He gives me the chills.
Waterluvian
Indeed. Words are very very inexpensive and fool a lot of people. Never pay attention to any words. Judge people by their actions.
manquer
> The product market fit is fantastic. This isn't the first time I've heard random strangers discussing it in public.
Hardly evidence of PMF. There is always something new in the zeitgeist that everyone is talking about, some more so than others.
Two years before it was VR, a few years before that NFTs and blockchain everything, before that self-driving cars, and before that personal voice assistants like Siri, and so on.
- Self-driving has not transformed us into Minority Report, and despite how far it has come it cannot be ubiquitous in the next 30 years: even if magical L5 tech existed today in every new car sold, it would take ~15 years for current cars to cycle out (rough arithmetic in the sketch after this list).
- Crypto has not replaced fiat currency. Even on the most generous reading you can see it as a store of value, like gold or whatever useless baubles people assign arbitrary value to, but it has no traction for three of the other four key functions of money.
- VR is not transformative to everyday life and is five fundamental breakthroughs away.
- Voice assistants are useless beyond setting alarms and selecting music, 10 years in.
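Back-of-envelope on that fleet-turnover point (a rough Python sketch; both figures are ballpark assumptions, not sourced):

    # Rough US fleet-turnover arithmetic; both numbers are ballpark assumptions.
    fleet_size = 280_000_000   # registered US light vehicles, roughly
    annual_sales = 16_000_000  # new vehicles sold per year, roughly

    # Even if 100% of new sales were L5-capable starting today,
    # replacing the existing fleet would take on the order of:
    print(f"{fleet_size / annual_sales:.0f} years")  # -> 18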
There has been meaningful and measurable progress in each of these fields, but none of them has met the high bar of world-transforming.
AI is aiming for the much higher bar of singularity and consciousness. Just as in every hype cycle, we are at the peak of inflated expectations; we will reach a plateau of productivity where it will be useful in specific areas (as it already is) and people will move on to the next fad.
ben_w
> Two years before it was VR, a few years before that NFTs and blockchain everything, before that self-driving cars, and before that personal voice assistants like Siri, and so on.
I never saw people talking about VR in public, nor NFTs, and the closest I got to seeing blockchain in public were adverts, not hearing random people around me chatting about it. The only people I ever saw in real life talking about self-driving cars were the ones I was talking to myself, and everyone else was dismissive of them. Voice assistants were mainly mocked from day one, with the Alexa advert being re-dubbed as a dystopian nightmare.
> AI is aiming for the much higher bar of singularity and consciousness.
No, it's aiming to be economically useful.
"The singularity" is what a lot of people think is an automatic consequence of being able to solve tasks related to AI; me, I think that's how we sustained Moore's Law so long (computers designing computers, you can't place a billion transistors by hand, but even if you could the scale is now well into the zone where quantum tunnelling has to be accounted for in the design), and that "singularities" are a sign something is wrong with the model.
"Consciousness" has 40 definitions, and is therefore not even a meaningful target.
> Just as in every hype cycle, we are at the peak of inflated expectations; we will reach a plateau of productivity where it will be useful in specific areas (as it already is) and people will move on to the next fad.
In that at least, we agree.
DennisP
Self-driving won't take over by just being available in the new cars people are buying anyway.
It'll take over when people find it cheaper to ride robotaxis than to own a vehicle at all. That's potentially a much quicker transition, requiring significantly fewer new vehicles.
mlyle
> despite how far it has come it cannot be ubiquitous in the next 30 years: even if magical L5 tech existed today in every new car sold, it would take ~15 years for current cars to cycle out.
We must have a different meaning of ubiquitous. Ubiquitous means "found everywhere" not "has eliminated everything else."
You don't even need to cycle the fleet once to meet the definition of ubiquitous. If you can get a self driving car in any city and many towns, you see them all over the place, and a third of trips are done with them, that'd be ubiquitous in my book.
I don't see why you couldn't get there in 3 decades. I don't think it's likely in 12 years but it seems possible in that timeframe.
tim333
>people will move on to the next fad
AI isn't really a fad. It's going to be something more like electricity, say.
eastbound
Come on, VR, NFTs and blockchain were always abysses of void looking for a use case. Self-driving cars maybe, but development has been stalling for 15 years.
bee_rider
Consciousness is a stupid and unreasonable goal, it is basically impossible to confirm that a machine isn’t just faking it really well.
Singularity is at least definable… Although I think it is not really the bar required to be really impactful. If we get an AI system that can do the work of, like, 60% of hardcore knowledge workers, 80% of office workers, and 95% of CEOs/politicians and other pablumists, it could really change how the economy works without actually being a singularity.
og_kalu
Comparing the site that was #8 in worldwide Internet traffic last month, has 300M weekly active users and 1B messages a day, and is basically the fastest-adopted software product in history to NFTs, VR and blockchain does not make any sense.
sneak
Did you really just compare the thing that makes me able to code and ship 5x faster to... NFTs?
jkaptur
Can you expand on your spreadsheet analogy?
I think Joel Spolsky explained the main Office moat well here: https://www.joelonsoftware.com/2008/02/19/why-are-the-micros...
> ... it might take you weeks to change your page layout algorithm to accommodate it. If you don’t, customers will open their Word files in your clone and all the pages will be messed up.
Basically, people who use Office have extremely specific expectations. (I've seen people try a single keyboard shortcut, see that it doesn't work in a web-based application, and declare that whole thing "doesn't work".) Reimplementing all that stuff is really time consuming. There's also a strong network effect - if your company uses Office, you'll probably use it too.
On the other hand, people don't have extremely specific expectations for LLMs because 1) they're fairly new and 2) they're almost always nondeterministic anyway. They don't care so much about using the same one as everyone they know or work with, because there's no network aspect of the product.
I don't think the moats are similar.
pj_mukh
"Basically, people who use Office have extremely specific expectations."
Interesting point, but to OP's point: this wasn't true when Office was first introduced, and Office still created a domineering market share. In fact, I'd argue these moat-by-idiosyncrasy features are a result of that market share. There is nothing stopping OpenAI from developing their own over time.
ben_w
> Can you expand on your spreadsheet analogy?
Sure.
(I've been coding long enough that what Joel writes about there just seems obvious to me: of course it happened like that, how else would it have?)
So, a spreadsheet in the general sense — not necessarily compatible with Microsoft's, but one that works — is quite simple to code. Precisely because it's easy, that's not something you can sell directly, because anyone else can compete easily.
And yet, Microsoft Office exists, and the Office suite is basically a cost of doing business. Microsoft got to be market-dominant enough to build all that complexity that became a moat, that made it hard to build a clone. Not the core tech of a spreadsheet, but everything else surrounding that core tech.
OpenAI has a little bit of that, but not much. It's only a little because while their API is cool, it's so easy to work with that you can (I have) ask the original 3.5 chat model to write its own web UI. As it happens, mine is already out of date, because the real one can better handle markdown etc., so the same sorts of principles apply, even on a smaller scale like this where it's more of "keeping up in real time" rather than "a 349-page PDF file just to get started".
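(For the curious, here's a minimal sketch of the kind of call such a UI wraps; it assumes an OPENAI_API_KEY environment variable, and the model name is only illustrative:)

    import os
    import requests

    # One HTTP call is essentially the whole integration surface of a basic chat UI.
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",  # illustrative; swap in whatever chat model you use
            "messages": [{"role": "user", "content": "Write a minimal web UI for this API."}],
        },
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])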
OpenAI is iterating very effectively and very quickly with all the stuff around the LLM itself, the app, the ecosystem. But so is Anthropic, so is Apple, so is everyone — the buzz across the business world is "how are you going to integrate AI into your business?", which I suspect will go about the same as when it was "integrate the information superhighway" or "integrate apps", and what we have now in the business world is to the future of LLMs as Geocities was to the web: a glorious chaotic mess, upon which people cut their teeth in order to create the real value a decade later.
In the meantime, OpenAI is one of several companies that has a good chance of building up enough complexity over time to become an incumbent by a combination of inertia and years of cruft.
But also only a good chance. They may yet fail.
> On the other hand, people don't have extremely specific expectations for LLMs because 1) they're fairly new and 2) they're almost always nondeterministic anyway. They don't care so much about using the same one as everyone they know or work with, because there's no network aspect of the product.
For #1, I agree. That's why I don't want to bet if OpenAI is going to be to LLMs what Microsoft is to spreadsheets, or if they'll be as much a footnote to the future history of LLMs as Star Division was to spreadsheets.
For #2, network effects… I'm not sure I agree with you, but this is just anecdotal, so YMMV: in my experience, OpenAI has the public eye, much more so than the others. It's ChatGPT, not Claude, certainly not grok, that people talk about. I've installed and used Phi-3 locally, but it's not a name I hear in public. Even in business settings, it's ChatGPT first, with GitHub Copilot and Claude limited to "and also", and the other LLMs don't even get named.
sangnoir
> ...like saying Microsoft has no moat for spreadsheets
Which would be very inaccurate, as network effects are Excel's (and Word's) moat. Excel being bundled with Office and Windows helped, but it beat Lotus 1-2-3 by being a superior product at a time the computing landscape was changing. OpenAI has no such advantage yet: a text-based API is about as commoditized as a technology can get, and OpenAI is furiously launching interfaces with lower interoperability (where one can't replace GPT-4o with Claude 3.5 via a drop-down)
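To make the commoditization point concrete, a hedged sketch: one function covers any vendor speaking the common text-in/text-out shape, so switching is a config change (the second endpoint and model name below are hypothetical placeholders, not real products):

    import requests

    def complete(base_url: str, api_key: str, model: str, prompt: str) -> str:
        """Send one chat turn to any vendor exposing the common chat-completions shape."""
        resp = requests.post(
            f"{base_url}/chat/completions",
            headers={"Authorization": f"Bearer {api_key}"},
            json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    # Switching vendors is one line of config, which is exactly the moat problem:
    # complete("https://api.openai.com/v1", key_a, "gpt-4o", "Hello")
    # complete("https://api.other-vendor.example/v1", key_b, "their-model", "Hello")  # hypothetical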
sumedh
> OpenAI has no such advantage yet: a text-based API is about as commoditized as a technology
It has branding; for most people, AI is ChatGPT. Once you reach critical mass, getting people to switch becomes difficult if your product is good enough and most people are happy.
shitloadofbooks
In my opinion the difference is that a recent graduate knows to say “I don’t know” to questions they’re not sure on, whereas LLMs will extremely confidently and convincingly lie to your face and tell you dangerous nonsense.
ben_w
My experience is that intellectual humility is a variable, not a universal.
I've seen some students very willing to recognise their weaknesses, others who are hamstrung by their hubris. (And not just students: the worst code I've seen in my career generally came from those most certain they were right.)
https://phys.org/news/2023-12-teens-dont-acknowledge-fact-ea...
And yes, this is a problem with some LLMs that are trained to always have an answer rather than to acknowledge their uncertainty.
naming_the_user
> For current standards of what intelligence means, we'd better hope we don't get ASI in the next decade or two, because if and when that happens then "humans need not apply" — and by extension, foundational assumptions of economics may just stop holding true.
I'm not sure that we need superintelligence for that to be the case - it may depend on whether you include physical ability in the definition of intelligence.
At the point that we have an AI that's capable of every task that, say, a 110 IQ human is, including manipulating objects in the physical world, then basically everyone is unemployed unless they're cheaper than the AI.
ben_w
While I would certainly expect a radical change to economics even from a middling IQ AI — or indeed a low IQ, as I have previously used the example of IQ 85 because that's 15.9% of the population that would become permanently unable to be economically useful — I don't think it's quite as you say.
Increasing IQ scores seem to allow increasingly difficult tasks to be performed competently — not just the same tasks faster, and also not just "increasingly difficult" in the big-O-notation sense, but it seems like below certain IQ thresholds (or above them but with certain pathologies), some thoughts just aren't "thinkable" even with unbounded time.
While this might simply be an illusion that breaks with computers because silicon outpaces synapses by literally the degree to which jogging outpaces continental drift, I don't see strong evidence at this time for the idea that this is an illusion. We may get that evidence in a very short window, but I don't see it yet.
Therefore, in the absence of full brain uploads, I suspect that higher IQ people may well be able to perform useful work even as lower IQ people are outclassed by AI.
If we do get full brain uploads, then it's the other way around: a few super-geniuses will get their brains scanned, but say it takes a billion dollars a year to run the sim in real time; then Moore's and Koomey's laws will take n years to lower that to $10 million a year, 2n years to lower it to $100k a year, and 3n years to lower it to $1k/year.
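That arithmetic is just constant-factor decay; a quick sketch (the real n is unknown, so n = 10 years below is purely illustrative):

    # Cost per year of running the sim, assuming a 100x price drop every n years.
    def sim_cost_per_year(years_elapsed: float, n: float = 10.0, start: float = 1e9) -> float:
        return start / 100 ** (years_elapsed / n)

    for t in (0, 10, 20, 30):  # i.e. 0, n, 2n, 3n with the illustrative n = 10
        print(f"year {t}: ${sim_cost_per_year(t):,.0f}/year")
    # -> $1,000,000,000, $10,000,000, $100,000, $1,000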
someothherguyy
> At the point that we have an AI that's capable of every task that, say, a 110 IQ human is, including manipulating objects in the physical world, then basically everyone is unemployed unless they're cheaper than the AI.
Until a problem space is "solved", you will still need AIs that are more capable than those with 110 IQ to review the other AIs' work. All evidence points to "mistakes will be made" with any AI system I have used and any human I have worked with.
015a
> he actively empathised with the protesters.
I have significant doubt that Sam is capable of empathy, period. It seems like what he's capable of is an extremely convincing caricature of it which he has practiced for many years.
disgruntledphd2
I mean, one could say the same about basically every human (and indeed, some AI systems).
lyu07282
> foundational assumptions of economics may just stop holding true
Those assumptions are already failing billions of people, some people might still be benefiting from those "assumptions of economics" so they don't see the magnitude of the problem. But just as the billions who suffer now have no power, so will you have no power once those assumptions fail for you too.
madrox
I think an assumption that a lot of people make about people with power is that they say what they actually believe. In my experience, they do not. Public speech is a means to an end, and they will say whatever is the strongest possible argument that will lead them to what they actually want.
In this case, OpenAI wants to look like they're going to save the world and do it in a noble way. It's Google's "don't be evil" all over again.
sourcepluck
I'll sleep easier tonight after reading this! I know other people know what you say there, but at times reading HN one would suspect that it's not that commonly known around these parts.
Maybe public figures were "saying what they meant" in, I don't know, the mid-1800s. People who grew up with "modern" communications and media infrastructure (smartphones, brain rot, streaming garbage 24/7, ads everywhere, etc) do not have the capacity to act in a non-mediatic fashion in public space anymore.
So that's the reality, I think. Not only is Sam Altman "fake" in public, so is everyone else (more or less), including you and I.
Nonetheless, it's a national sport at least in massive chunks of the English-speaking world now to endlessly speculate about the real intentions of these pharaonic figures. I've said it before, but I'll say it again: what a very peculiar timeline.
imiric
It's human nature to assume honesty and good will. Societies couldn't function if that wasn't the case. Lies and deception are only common in interactions outside of our immediate tribe and community. They're the primary tools of scammers, politicians, and general psychopaths, who seek to exploit this assumption of honesty that most people have, and they're very successful at it. The problem is that technology has blurred the line between close communities and the rest of the world, and forced everyone to accept living in a global village. So while some of us would want the world to function with honesty first, the reality is that humans are genetically programmed to be tribal and deceitful, and that can't change in a few generations.
It's hilariously easy to be "successful" in the modern world. It's so easy that a dumb person following a playbook of "deny, divert, discredit" can become president.
pxmpxm
This. All this bullshit about "AI will kill us all" is purely a device to ensure regulatory capture for the largest players, i.e. the people saying said bullshit.
Pretend your meme pic generator is actually a weapon of mass destruction and you can eliminate most of your competition by handwaving about "safety".
pinewurst
Nobody seems to remember how the Segway was going to change our world, backed by many of the VC power figures at the time + Steve Jobs.
mbreese
The hype cycle for Segway was insane. Ginger (the code name) wasn't just going to change the world, it was going to cause us to rethink how cities were laid out and designed. No one would get around in the same way again.
The engineering behind it was really quite nice, but the hype set it up to fail. If it hadn't been talked up so much in the media, the launch wouldn't have fallen so flat. There was no way for them to live up to the hype.
tim333
I guess it depends on what media you follow. As a Brit my recollection was hearing it was a novelty gadget that about a dozen American eccentrics were using, and then there was the story that a guy called Jimi Heselden bought the company and killed himself by driving one off a cliff and then that was about it. Not the same category as AI at all.
pinewurst
The "Code Name: Ginger" book, by a writer embedded with the team, is excellent btw.
dghlsakjg
The Segway was a bit early, and too expensive, but I would defend it... sort of.
Electric micromobility is a pretty huge driver of how people negotiate the modern city. Self-balancing segways, e-skate, e-bike and scooters are all pretty big changes that we are seeing in many modern cityscapes.
Hell, a shared electric bike was just used as a getaway vehicle for an assassination in NYC.
jpalawaga
e-/bikes and e-/scooters are big changes to city navigation.
e-skate and segways are non-factors. And that's the difference between a good product (ebike or even just plain old bikeshare) and a bad one (segway).
numpad0
Segway is just an electric pizza delivery bike _that doesn't look like one_. That's it. The Segway City is just electric Ho Chi Minh City 1995 in the style of 2000 Chicago.
People want to die in style so much that they turn a blind eye to plasticky scooters and Nissan Leafs. To them, the ugly attempts don't exist, and the reskinned clones are the first evers in history. But reality prevails. Brits aren't flying EE Lightning jets anymore.
Segways were kind of cool to me too, to be fair. To a lesser extent Lime scooters too. Sort of. But they're still just sideways pizza bikes.
klik99
In Segway's defense, that self-balancing tech has made and will continue to make an impact, just not a world-changing amount (at least not yet), and not through their particular company but the companies they influenced. The same may end up true of OpenAI.
stolenmerch
I remember serious discussions about how we'd probably need to repave all of our sidewalks in the US to accommodate the Segway
hanspeter
I think we all remember, and if we forget, we're reminded every time we see them at airports or doing city tours.
Calavar
I don't think I've seen a Segway in close to ten years. Also I suspect most people under 25 have never even heard of Segway.
arthurcolle
It reappeared as e-bikes and e-scooters - Lime, Bird, etc.
ldbooth
Reappeared as Electric unicycles, which look hilarious, dangerous, and like a lot of fun.
bee_rider
Apparently the first e-bike was invented in 1895. So I don’t think it is accurate to give Segway too much credit in their creation. Anyway the innovation of Segway was the balance system, which e-bikes don’t need.
(I’m not familiar with the site in general, but I think there’s no reason for them to lie about the date, and electric vehicles always show up surprisingly early).
https://reallygoodebikes.com/blogs/electric-bike-blog/histor...
n144q
None of which is doing great.
RockyMcNuts
More as hoverboards, Onewheels, etc. E-bikes and e-scooters don't really have similar balancing mechanisms.
anothertroll123
No it didn't
elif
You mean Steve Wozniak.
Close but no LSD
pinewurst
https://www.theguardian.com/world/2001/dec/04/engineering.hi...
"Steve Jobs, Apple's co-founder, predicted that in future cities would be designed around the device, while Jeff Bezos, founder of Amazon, also backed the project publicly and financially."
favorited
Steve Jobs said that Segways had the potential to be "as big a deal as the PC."
pantalaimon
What makes you so sure about the LSD?
sebzim4500
>and no obvious large-scale product-market fit
I'm afraid you are in as much of an echo chamber as anyone. 200 million+ weekly active users is large-scale PMF.
jillesvangurp
Exactly. There's plenty of return on investment. Knowledge workers around the world are paying nice subscription fees for access to the best models just to do their work. There are hundreds of millions of daily active users already. And those are just the early adopters. Lots of people are still in denial that they'll need this to do their work very soon. Countless software companies pay for API access to these models as well. As they add capabilities, the market only becomes larger.
OpenAI is one of a handful of companies that is raking in lots of cash here. And they've barely scratched the surface. It's only a few years ago that Chat GPT was first released. OpenAI is well funded, has lots of revenue, and lots of technology coming up that looks like it's going to increase demand for their services.
There's a very obvious product market fit.
foooorsyth
It’s the fastest product to 100m users ever. Even if they never update their models from here on out, they have an insanely popular and useful product. It’s better at search than Google. Students use it universally. And programmers are dependent on it. Inference is cheap — only training is expensive.
To say they don’t have PMF is nuts.
dom96
> And programmers are dependent on it.
that is clearly not the case
dingnuts
>It’s better at search than Google
in what world? what it's good at is suggesting things to search, because half of what it outputs is incorrect, so you have to verify everything anyway
it does, slightly, improve search, but it's an addition, not a replacement.
ceejayoz
> It’s better at search than Google.
That’s hardly a high bar now.
> And programmers are dependent on it.
Entry level ones, perhaps.
numpad0
s/programmers/front\ end\ html\ authors/
numpad0
But they're not making money and there are plenty of substitutes. I bet they have practically zero paid customer retention rate. People say they love it, so what.
sebzim4500
>I bet they have practically zero paid customer retention rate
Why do you think that?
I know a few people with a subscription but I don't think I know anyone who has cancelled. Even people who have mainly moved to Claude kept the plus subscription because it's so cheap and o1 is sometimes useful for stuff Claude can't do.
hengheng
These guys didn't get to where they are now by admitting mistakes and making themselves accountable. In power play terms, that would be weak.
And once you are way up there and you have definitely left earth, there is no right or wrong anymore, just strong and weak.
elif
>So far we are multiple years in with much investment and little return, and no obvious large-scale product-market fit
Literally every market has been disrupted and some are being optimized into nonexistence.
You don't know anyone who's been laid off by a giant corporation that's now using AI to do what people did 3 years ago?
munk-a
I know companies that have had layoffs - but those would have happened anyways - regular layoffs are practically demanded by the market at this point.
I know companies that have (or rather are in the process of) adopting AI into business workflows. The only companies I know of that aren't using more labor to correct their AI tools are the ones that used it pre-ChatGPT/AI Bubble. Plenty of companies have rolled out "talk to our AI" chat bubbles on their websites and users either exploit and jailbreak them to run prompts on the company's dime or generally detest them.
AI is an extremely useful tool that has been improving our lives for a long time - but we're in the middle of an absolutely bonkers level bubble that is devouring millions of dollars for projects that often lack a clear monetization plan. Even code gen seems pretty underwhelming to most of the developers I've heard from that have used it - it may very well be extremely impactful to the next generation of developers - but most current developers have already honed their skills to out-compete code gen in the low complexity problems it can competently perform.
Lots of money is entering markets - but I haven't seen real disruption.
hyeonwho4
> Even code gen seems pretty underwhelming to most of the developers I've heard from that have used it - it may very well be extremely impactful to the next generation of developers - but most current developers have already honed their skills to out-compete code gen in the low complexity problems it can competently perform.
I'm in academia, and LLMs have completely revolutionized the process of data analysis for scientists and grad students. What used to be "page through the documentation to find useful primitives" or "read all the methods sections of the related literature" is now "ask an assistant what good solutions exist for this problem" or "ask LLMs to solve this problem using my existing framework." What used to be days of coding is now hours of conversation.
And it's also above-average at talking through scientific problems.
dbreunig
Altman appears to say AGI is far away when he doesn't want to be regulated, right around the corner when he's raising funds, or going to happen tomorrow and be mundane when he's trying to break a Microsoft contract.
ksec
Millenarianism or millenarism (from Latin millenarius 'containing a thousand' and -ism) is the belief by a religious, social, or political group or movement in a coming fundamental transformation of society, after which "all things will be changed".. - From Wiki
Correct me if this is the wrong meaning in the context. I will admit this is the first time I've seen this word. When I first read it I thought it had something to do with "millennials", also known as Gen Y.
>Robotics to be "completely solved" by 2020,
And we still don't have L4/L5 autonomous vehicles. Not close, and likely still not in 2030. And with all the regulatory hurdles in place, even if we achieve it in the lab by 2030, it won't be widespread until 2035 or later.
ta12653421
++1
me too
factorialboy
For-profit isn't the problem. Lying about being non-profit to raise funds and _then_ becoming for-profit, that's the underlying concern.
lesuorac
Too bad that failing to disclose, during your testimony, that you always intended to convert the non-profit into a for-profit, while numerous senators congratulated you on your non-profit values, isn't considered problematic.
https://www.techpolicy.press/transcript-senate-judiciary-sub...
crowcroft
Some might characterize OpenAI leadership as not 'consistently candid'.
catigula
Yes, which means his post is an attempt to smear the credibility of Musk, not make a legal defense.
If this were a legal defense, this would be heard in court.
toasteros
[flagged]
boringg
It's true. Also, it looks like Musk's original statement was correct: they should have gone with a C-corp instead of an NFP.
ozim
That should be top comment on all OpenAI threads.
Just like “open source and forever free” - until of course it starts to make sense to charge money.
ksec
>For-profit isn't the problem
It was a problem around 2014 - 2022. Even the lying part isn't new. It has always been there. But somehow most were blind to it during the same period.
Something changed and I don't know what it is. But the pendulum is definitely swinging back.
m463
Also, the missing message here is "we want to become for-profit"
AlanYx
Ignoring all the drama, this part is interesting:
"On one call, Elon told us he didn’t care about equity personally but just needed to accumulate $80B for a city on Mars."
vasco
One of the best things I've ever read. I'm going to use this in my next salary negotiation.
Oh you know I don't really care about the number, it's just that I'm working on this plan to desalinate all the water in the oceans.
qingcharles
Made me think of this:
https://glasstire.com/wp-content/uploads/2011/03/288_Image1....
NetOpWibby
This made me laugh but also made me think, "...that's it? That's all it takes?"
marcosdumay
Land on Mars is cheap...
But no, I really doubt that's all it takes. Unless you discount all of the R&D costs as SpaceX operational expenses.
kulahan
I imagine that's what he's doing. He's willing to put a lot of company money into getting the city on Mars started, because if he's first there, he's gonna set himself (or his dynasty?) up to make hundreds of billions of dollars. Being effectively in control of your own planet? Neat. Scary too.
Yeul
Lol Apollo took a few percent of America's entire GDP.
Also, astronauts were willing to risk their lives putting the Stars and Stripes on the moon. I doubt that Musk can inspire the same zeal...
burnte
Of course not, but Musk habitually underestimates the difficulty of things by about an order of magnitude.
niceice
SpaceX started with a small fraction of that, just $100 million.
mlindner
With a reusable launch vehicle yeah that's ballpark. Depends how you define "city" though.
beepbooptheory
I will always love Kim Stanley Robinson but I dont care: please, Musk, go to Mars, you can have it.
UltraSane
$80 trillion would not be enough.
pram
Measured another way, just a bit under 2 Twitters.
ShakataGaNai
When he bought it, sure? Probably more like 20 X's
SV_BubbleTime
[flagged]
iaseiadit
Offset by the value of toppling a hostile administration that had him in its crosshairs. Is that worth $XXB? Maybe not, but it's worth something.
talldayo
The fact that owning Twitter was worth half a Mars colony to him should give you an idea of how seriously he's taking this whole thing. It's up there next to "Full Self Driving" and "$25,000 EV" in the Big Jar of Promises Used To Raise Capital and Nothing Else.
adabyron
He bought Twitter for $44B.
His Tesla stock was 0% ytd until the election.
Post election it is up roughly 70% ytd & has paid for Twitter & the Mars colony multiple times.
Hard to say if that happens without him owning Twitter.
elif
[flagged]
bravetraveler
Town squares go a lot further than I imagined
lupire
Elon Musk is nothing if not famous for astronomically over-promising and under-delivering.
vasco
You can see from the emails he outright does it just to "create a sense of urgency" and wants others to do the same. It does have its results but it churns through employees a lot as well. It's a good recipe to achieve great things but the problem is random middle managers of random SaaS b2b products thinking they need to do the same.
buzzerbetrayed
He is overly optimistic about timelines, but he usually delivers. Or did I imagine his company catching a fucking rocket out of the air with chopsticks? Guess that was under delivering
powderpig
Red Dragon? Falcon 9 booster 24-hour turnaround? The plethora of missed milestones required to land on the moon's surface?
tonygiorgio
Just about the only open part about OpenAI is how their dirty laundry is constantly out in the open.
frognumber
I think one issue this highlights is how despicable all these individuals are.
The way I read this: Elon Musk wanted OpenAI to commit fraud. They refused. He went away. They decided to commit the same fraud. He sued.
There's a strong case in both directions. There's a legal principle that you can't take opposing positions on the same legal issue.
DirkH
Don't come here with your nuance! It'll confuse people and they won't know what side to take! The horror!
milleramp
Is this one of the 12 days of OpenAI?
heavyarms
If this is a GPT-generated joke, I'd say they cracked AGI.
exprofmaddy
It seems the humans pursuing AGI lack sufficient natural intelligence. I'm sad that humans with such narrow and misguided perspectives have so much power, money, and influence. I worry this won't end well.
kandesbunzler
then surely you can create your own company and do it much better than them
lobsterthief
Great idea, I’ll just drop everything in my life and do that
exprofmaddy
Sorry I triggered you.
roca
I'm amazed OpenAI made these disclosures. My main takeaway was how wrong all the predictions of Sutskever and Brockman turned out to be, and how all their promises have been broken. Also interesting that up to 2019 OpenAI was focused on RL rather than LLMs.
nightowl_games
What were their predictions?
roca
"we have reason to believe AGI can ultimately be built with less than $10B in hardware"
"Within the next three years, robotics should be completely solved, AI should solve a long-standing unproven theorem, programming competitions should be won consistently by AIs, and there should be convincing chatbots" (2017)
"We will completely solve the problem of adversarial examples by the end of August"
Melting_Harps
> "we have reason to believe AGI can ultimately be built with less than $10B in hardware"
As a person who actually builds this infrastructure for Data Centers: Bwahaha!!!
This guy should have been laughed out of the room, and probably out of a job, if ANYONE took him seriously. There are Elon levels of delusion, and then there is this!
Mistletoe
What would robotics completely solved even mean?
cma
LLMs never took off until they were combined with RL via RLHF. RLHF was discovered in their RL research on game playing. GPT-3 was out for quite a while with much lower impact than the ChatGPT release; I read somewhere it finished training around December 2019, and it was released mid-2020. There were later, better checkpoints, but it still didn't have much impact except for code completion.
With just a raw language model, instructions and chat didn't work to near the same degree.
Both elements are important and they were early in both. Ilya's first email here talks about needing progress on language:
2016
Musk: Frankly, what surprises me is that the AI community is taking this long to figure out concepts. It doesn’t sound super hard. High-level linking of a large number of deep nets sounds like the right approach or at least a key part of the right approach.
Ilya: It is not the case that once we solve “concepts,” we get AI. Other problems that will have to be solved include unsupervised learning, transfer learning, and lifetime learning. We’re also doing pretty badly with language right now.
freedomben
> You can’t sue your way to AGI. We have great respect for Elon’s accomplishments and gratitude for his early contributions to OpenAI, but he should be competing in the marketplace rather than the courtroom.
Isn't that exactly what he's doing with x.ai? Grok and all that? IIRC Elon has the biggest GPU compute cluster in the world right now, and is currently training the next major version of his "competing in the marketplace" product. It will be interesting to see how this blog post ages.
I'm not dismissing the rest of the post (and indeed I think they make a good case on Elon's hypocrisy!) but the above seems at best like a pretty massive blindspot which (if I were invested in OpenAI) would cause me some concern.
codemac
> biggest GPU compute cluster in the world right now
This is wildly untrue, and most in industry know that. Unfortunately you won't have a source just like I won't, but just wanted to voice that you're way off here.
freedomben
> This is wildly untrue, and most in industry know that. Unfortunately you won't have a source just like I won't, but just wanted to voice that you're way off here.
Sure, we probably can't know for sure who has the biggest as they try to keep that under wraps for competition purposes, but it's definitely not "wildly untrue." A simple search will show that they have if not the biggest, damn near one of the biggest. Just a quick sample:
https://nvidianews.nvidia.com/news/spectrum-x-ethernet-netwo...
https://www.yahoo.com/tech/worlds-fastest-supercomputer-plea...
https://www.tomshardware.com/pc-components/gpus/elon-musk-to...
https://www.capacitymedia.com/article/musks-xais-colossus-cl...
codemac
I've physically visited a larger one; it is not even a well-kept secret. We all see each other at the same airports and hotels.
threeseed
Technically, it may be the world's biggest single AI supercomputer.
But it ignores Amazon, Google and Microsoft/OpenAI being able to run training workloads across their entire clouds.
boringg
I don't think you've been paying attention to the industry, even though you're posturing like an insider.
enslavedrobot
The distinction is that larger installations cannot form a single network. Before xAI's new network architecture, only around 30k GPUs could train a model simultaneously. It's not clear how many can train together with xAI's new approach, but apparently it is >100k.
rvz
It is true. [0]
[0] https://nvidianews.nvidia.com/news/spectrum-x-ethernet-netwo...
verdverm
Really? Meta looks to be running larger clusters of Nvidia GPUs already
https://engineering.fb.com/2024/03/12/data-center-engineerin...
This doesn't account for inhouse silicon like Google where the comparison becomes less direct (different devices, multiple subgroups like DeepMind)
sigh_again
Even just Meta dwarfs Twitter's cluster, with an estimated 350k H100s by now.
boringg
It's rich coming from Sam Altman -- the guy who famously tried to use regulatory capture to block everyone else.
sangnoir
Game recognizes game.
kiernanmcgowan
> IIRC Elon has the biggest CPU compute cluster in the world right now
Do you have a source for this? I don’t buy this when compared to Google, Amazon, Lawrence Livermore National Lab…
mrshu
The claim seems to mostly be coming from NVIDIA marketing [0].
[0] https://nvidianews.nvidia.com/news/spectrum-x-ethernet-netwo...
freedomben
I first heard it on the All-In podcast, but I do see many articles/blogs about it as well. Quick note though, I mistyped CPU (and rapidly caught and fixed, but not fast enough!) when I meant GPU.
[1]: https://www.yahoo.com/tech/worlds-fastest-supercomputer-plea...
Etheryte
Surely Meta has the biggest compute in that category, no? I wouldn't be surprised if Elon went around saying that to raise money though.
axus
Maybe Elon is doing both, competing in the marketplace and in the courtroom. And in advising the president to regulate non-profit AI.
freedomben
Agree, he is doing both. But if he's competing in the marketplace, it seems pretty off base for OpenAI to tell him he should be competing in the marketplace. So I think my criticism stands.
nativeit
I don’t think their suggestion ever implies that he isn’t.
llm_nerd
>Isn't that exactly what he's doing with x.ai? Grok and all that?
They aren't saying he isn't. But he is trying to handicap OpenAI, while his own offering at this point is farcical.
>It will be interesting to see how this blog post ages.
Whether Elon's "dump billions to try to get attention for The Latest Thing" attempt succeeds or not -- the guy has an outrageous appetite to be the center of attention, and sadly people play along -- has zero bearing on the aging of this blog post. Elon could simply be fighting them in the marketplace, instead he's waging a public and legal campaign that honestly makes him look like a pathetic bitch. And that's regardless of my negative feelings regarding OpenAI's bait and switch.
elif
Eh, Grok is bad but I wouldn't call it farcical. It's terrible at multimodal, but in terms of up-to-date recent cultural knowledge, sentiments, etc. it's much better than the stale GPT models (even with search added)
hangonhn
> biggest GPU compute cluster in the world right now
Really? I'm really surprised by that. I thought Meta was the one who got the jump on everyone by hoarding H100s. Or did you mean strictly GPUs and not any of the AI specific chips?
freedomben
Good point, I don't know if it's strictly GPUs or also includes some other AI specific chips.
Nvidia wrote about it: https://nvidianews.nvidia.com/news/spectrum-x-ethernet-netwo...
hangonhn
oh wow. I think your original assertion is correct. Wow. What a crazy arms race.
meagher
People change their minds all the time. What someone wanted in 2017 could be the same or different in 2024.
gkoberger
Sure, but the nuance is Elon only wants what benefits him most at the time. There was no philosophical change, other than now he’s competing.
He's allowed these opinions, we’re allowed to ignore them and lawyers are allowed to use this against him.
fallingknife
That is true of most people and is the most common reason people change their minds.
niek_pas
[citation needed]
ganeshkrishnan
> but the nuance is Elon only wants what benefits him most at the time.
Isn't that almost everyone? The people who left OpenAi could have joined forces but everyone went ahead and created their own company "for AGI"
It's like the wild west where everyone dreams of digging up gold.
j16sdiz
> Isn't that almost everyone?
Sure. That's why we have contracts and laws to restrict how much one can change without paying or going to jail. Not all changes are equal.
Alifatisk
I am used to their articles being sterile and formal, this reads like some teenager spilling their tea on social media.
nachox999
so true, it's cringey af
linotype
Google/Gemini have none of this baggage.
dtquad
Google/Gemini are also the only ones who are not entirely dependent on Nvidia. They are now several generations into their in-house designed and TSMC manufactured TPUs.
bgnn
Broadcom is now the second biggest AI chip producer thanks to Google. Apple recently announced they will also work with Broadcom on something similar.
int_19h
Google is its own very special kind of baggage, though.
willy_k
Neither does Anthropic/Claude.
agnosticmantis
It's amusing how Sutskever kept musking Musk over the years (overpromising with crazy deadlines and underdelivering):
In 2017 he wrote
"Within the next three years, robotics should be completely solved, AI should solve a long-standing unproven theorem, programming competitions should be won consistently by AIs, and there should be convincing chatbots (though no one should pass the Turing test)."
"We will completely solve the problem of adversarial examples by the end of August."
Very clever to take a page from Musk's own playbook of confidently promising self-driving by next year for a decade.
ks2048
That’s embarrassing and should be noted when he’s treated as a guru (as in today when I guess he gave a talk at Neurips conference) Of course, he should be listened to and treated as a true expert. But, it’s becoming more clear in viewing public people that extreme success can warp people’s perspective.
CamperBob2
I mean, he wasn't that far off. The Turing test is well and truly beaten, regardless of how you define it, and I sure wouldn't want to go up against o1-pro in a programming or math contest.
Robotics being "solved" was indeed a stupid thing to assert because that's a hornet's nest of wicked problems in material science, mechanical engineering, and half a dozen other fields. Given a suitable robotic platform, though, 2020-era AI would have done a credible job driving its central nervous system, and it certainly wouldn't be a stumbling block now.
It's been a while since I heard any revealing anecdotes about adversarial examples in leading-edge GPT models, but I don't know if we can say it's a solved problem or not.
philipwhiuk
> The Turing test is well and truly beaten, regardless of how you define it
Unless the question the human asks is 'How many l's in llama'
samatman
This month, a computer solved the first Advent of Code challenge in eight seconds.
Everyone on Hacker News was saying "well of course, you can't just feed it to a chatbot, that's cheating! the leaderboard is a human competition!" because we've normalized that. It's not surprising, it's just obvious, oh yeah you can't have an Advent of Code competition if the computers get to play as well.
Granted it took seven years. Not three.
agnosticmantis
I think the achievements in the past couple of years are astonishing, bordering on magic.
Yet, confidently promising AGI/self-driving/mars landing in the next couple of years over and over when the confidence is not justified makes you a conman by definition.
If the number 3 means nothing and can become 7 or 17 or 170 why keep pulling these timelines out of their overconfident asses?
Did we completely solve robotics or prove a longstanding theorem in 2020? No. So we should lose confidence in their baseless predictions.
CamperBob2
Self-driving is not so much a technological problem as it is a political problem. We have built a network of roads that (self-evidently) can't be safely navigated by humans, so it's not fair to demand better performance of machines. At least, not as long as they have to share the road with us.
'AI' landings on Mars are the only kind of landings possible, due to latency. JPL indisputably pwned that problem long before anyone ever heard of OpenAI.
Theorem-proving seems to require a different toolset, so I don't know what made him promise that. Same with robotics, which is more an engineering problem than a comp-sci one.
lsy
I guess it's not news but it is pretty wild to see the level of millenarianism espoused by all of these guys.
The board of OpenAI is supposedly going to "determine the fate of the world", robotics to be "completely solved" by 2020, the goal of OpenAI is to "avoid an AGI dictatorship".
Is nobody in these very rich guys' spheres pushing back on their thought process? So far we are multiple years in with much investment and little return, and no obvious large-scale product-market fit, much less a superintelligence.
As a bonus, they lay out the OpenAI business model:
> Our fundraising conversations show that:
> * Ilya and I are able to convince reputable people that AGI can really happen in the next ≤10 years
> * There’s appetite for donations from those people
> * There’s very large appetite for investments from those people