Is Sora the beginning of the end for OpenAI?
177 comments · October 21, 2025 · zerosizedweasle
Karrot_Kream
What signals have you seen that point to investment being predicated on AGI? Boosting Nvidia stock prices could also be explained by an expectation of increased inference usage by office workers, which increases demand for GPUs and justifies datacenter buildouts. That's a much more "sober" outlook than AGI.
In fact, a fun thing to think about is what signals we could observe in markets that specifically call out AGI as the expectation, as opposed to a simple bullish outlook on inference usage.
port3000
"Boosting Nvidia stock prices could also be explained by an expectation of increased inference usage by office workers which increases demands for GPUs and justifies datacenter buildouts"
AI is already integrated into every single Google search, as well as Slack, Notion, Teams, Microsoft Office, Google Docs, Zoom, Google Meet, Figma, Hubspot, Zendesk, Freshdesk, Intercom, Basecamp, Evernote, Dropbox, Salesforce, Canva, Photoshop, Airtable, Gmail, LinkedIn, Shopify, Asana, Trello, Monday.com, ClickUp, Miro, Confluence, Jira, GitHub, Linear, Docusign, Workday
.....so where is this 100X increase in inference demand going to come from?
Oh and the ChatGPT consumer app is seeing slowing growth: https://techcrunch.com/2025/10/17/chatgpts-mobile-app-is-see...
Karrot_Kream
Integrations and inference costs aren't necessarily 1:1. Integrations can use more AI, reasoning models can cause token explosion, Jevons Paradox can drive more inference tokens, and big businesses and government agencies (around the world, not just the US) can begin using more LLMs. I'm not sure integrations are that simple; a lot of the integrations I know of are very basic (see the sketch below).
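To make the token-explosion point concrete, here's a toy back-of-envelope; every number in it (price per token, call volume, hidden-token count) is made up for illustration, not anyone's real pricing:

```python
# Toy back-of-envelope: why the same integration can differ wildly in
# inference cost. Every number here is invented for illustration.
PRICE_PER_1M_OUTPUT_TOKENS = 10.00  # hypothetical $/1M output tokens

def monthly_cost(calls_per_day: int, tokens_per_call: int) -> float:
    tokens = calls_per_day * 30 * tokens_per_call
    return tokens * PRICE_PER_1M_OUTPUT_TOKENS / 1_000_000

# A basic autocomplete-style integration: short visible outputs only.
basic = monthly_cost(calls_per_day=10_000, tokens_per_call=50)

# Same call volume through a reasoning model that emits a long hidden
# chain of thought (billed as output tokens) before the visible answer.
reasoning = monthly_cost(calls_per_day=10_000, tokens_per_call=50 + 4_000)

print(f"basic:     ${basic:,.2f}/mo")      # $150.00/mo
print(f"reasoning: ${reasoning:,.2f}/mo")  # $12,150.00/mo, ~80x more
```

Same integration, same call volume, almost two orders of magnitude apart in cost.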
> Oh and the ChatGPT consumer app is seeing slowing growth: https://techcrunch.com/2025/10/17/chatgpts-mobile-app-is-see...
While I haven't read the article yet, if this is true then yes, this could be an indication that consumer-app-style inference (ChatGPT, Claude, etc.) is waning, which will put more pressure on industrial/tool inference uses to shoulder the costs.
hyperpape
My experience suffering with JIRA daily is that the AI is useless and fairly easy to ignore. If it were actually helpful, I could imagine using it more, and the costs would increase proportionately.
mola
I think the motivation for someone like Altman is not AGI, it's power and influence. And when he wields billions, he has power; it doesn't really matter whether AGI is coming.
hu3
Yep, he just wants to become too big to fail at this point.
I view OpenAI like a pyramid scheme: taking in increasing amounts of money to pursue ever-growing promises that can be dangled like a carrot in front of the next investor.
If you owe investors $100 million, that's your problem. If you owe investors $100 billion, that's their problem.
tmaly
We were promised AGI and all we are getting is Bob Ross coloring on the walls of a Target store.
The app is fun to use for about 10 minutes, and then that's it.
Same goes for Grok Imagine. All people want to do is generate NSFW content.
What happened to improving the world?
qwery
I apologise for talking past the point you're making, but, Bob Ross was a human being, you know, with thoughts and stuff. How could any of these AI toys possibly compare?
I would love to have Bob Ross, wielding a crayon, add some happy little trees to the walls of a Target.
cratermoon
What was predicted to be next: AGI
What we got next: porn
quantified
Porn has driven everyday tech: online payment systems, broadband adoption.
Porn (visual and written erotic expression) has been a normal part of the human experience for thousands of years, across different religions, cultures, and technological capabilities. We're humans.
There will always be a market for it, wherever there is a mismatch between desire for and access to sexual activity.
Generate your own porn is definitely a huge market. Sharing it with others, and then the follow-on concern of what's in that shared content, could lead to problems.
noir_lord
> There will always be a market for it, wherever there is a mismatch between desire for and access to sexual activity.
Attractive people in sexually fulfilling relationships still look at porn.
It's just human.
rchaud
This is a meme I see online often (and in the show Silicon Valley), but I don't think it holds up in practice.
Re: payment systems, Visa and MC are notoriously unfriendly to porn vendors, sending them into the arms of crooked payment processors like Wirecard. Paypal grew to prominence because it was once the only way to buy and sell on Ebay. Crypto went from nerd hobby to speculative asset, skipping the "medium of exchange for porn purchases" stage entirely.
As for broadband adoption, it's as likely to have been driven by MP3 piracy and by being 200X faster than dialup as it was by porn.
c0balt
To be very fair here, long before GPT-5, porn was already being produced with Stable Diffusion (and other open models). Civitai in particular was an open playground for this, with everything from NSFW LoRAs and prompts to fine-tuned models.
I had to work for a bit with SDXL models from there, and the amount of porn on the site, before the recent cleanse, was astonishing.
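For anyone curious what that workflow looks like, here's a minimal sketch; it assumes the Hugging Face diffusers library, and the file paths are placeholders for whatever checkpoint or LoRA you downloaded:

```python
# Minimal sketch of loading a Civitai-style SDXL checkpoint plus a LoRA
# with Hugging Face diffusers. Paths below are hypothetical placeholders.
import torch
from diffusers import StableDiffusionXLPipeline

# A fine-tuned SDXL checkpoint distributed as a single .safetensors file.
pipe = StableDiffusionXLPipeline.from_single_file(
    "./some_finetuned_sdxl.safetensors",  # hypothetical local path
    torch_dtype=torch.float16,
).to("cuda")

# LoRAs layer small style/subject adapters on top of the base weights.
pipe.load_lora_weights("./some_style_lora.safetensors")  # hypothetical

image = pipe("a happy little tree, oil painting",
             num_inference_steps=30).images[0]
image.save("out.png")
```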
droptablemain
to be fair we also got Stephen Hawking bungee jumping | snowboarding | wrestling | drag racing | ice skating | bull-fighting | half-pipe
blibble
I can't imagine the Republican Party is going to be particularly happy about AI being used for mass porn generation
overfeed
The party of grindr-crashing sexual repression[1] outwardly denounces such depravity, but inwardly rejoices at all the shameful images they intend to generate.
1. Red states are way ahead on porn consumption, based on past annual reports by Aylo.
neonnoodle
prompt records = mass blackmail generation
layer8
At least the valuations make sense now. ;)
knicholes
Wait, we got porn?
benbayard
Yes, Sam A said that "erotica" was coming to OpenAI. I don't think he's mentioned visual pornography, though: https://www.axios.com/2025/10/14/openai-chatgpt-erotica-ment...
standardUser
AGI is like L5 automated driving - academic concepts that have no bearing on the ability of these technologies to transform the economy.
hollerith
And no bearing on the ability of these technologies to thoroughly screw us.
xwowsersx
This take feels like classic Cal Newport pattern-matching: something looks vaguely "consumerish," so it must signal decline. It's a huge overreach.
Whether OpenAI becomes a truly massive, world-defining company is an open question, but it's not going to be decided by Sora. Treating a research-facing video generator as if it's OpenAI's attempt at the next TikTok is just missing the forest for the trees. Sora isn't a product bet, it's a technology demo or a testbed for video and image modeling. They threw a basic interface on top so people could actually use it. If they shut that interface down tomorrow, it wouldn't change a thing about the underlying progress in generative modeling.
You can argue that OpenAI lacks focus, or that they waste energy on these experiments. That's a reasonable discussion. But calling it "the beginning of the end" because of one side project is just unserious. Tech companies at the frontier run hundreds of little prototypes like this... most get abandoned, and that's fine.
The real question about OpenAI's future has nothing to do with Sora. It's whether large language and multimodal models eventually become a zero-margin commodity. If that happens, OpenAI's valuation problem isn't about branding or app strategy, it's about economics. Can they build a moat beyond "we have the biggest model"? Because that won't hold once open-source and fine-tuned domain models catch up.
So sure, Sora might be a distraction. But pretending that a minor interface launch is some great unraveling of OpenAI's trajectory is just lazy narrative-hunting.
impossiblefork
There are also interesting things one could do with models like Sora, depending on how it actually performs in practice: prompting to segment, for example. And if it's fast enough, it could very possibly become a foundation for robotics.
softwaredoug
I don't think that's fair.
ChatGPT clearly is "for consumers," whereas Sora is a kind of enshittification to monetize engagement. It's right to question the latter.
bossyTeacher
I agree. My bet is that OpenAI will not fulfill its mission of developing AGI by 2035, and I would be surprised if they ever did. As much as they might want to, there are only so many dreams you can whisper into rich people's ears before they tell you to go away. And without rich people's money, OpenAI will fall like a house of cards. The wealthy won't have infinite patience.
schnable
OpenAI is making a wild number of product plays at once, trying to leverage the value of the frontier model, brand value, and massive number of eyeballs they own. Sora is just one of many. Some will fail and maybe some will succeed.
It seems true that no company has used frontier models to create a product with business value commensurate with the cost it takes to train and run them. That's what OpenAI is trying to do with Sora, and with Codex, Apps, "Agent" flows, etc. I don't think there's more to read into it than that.
FloorEgg
On some level they know that LLMs alone won't lead to AGI, so they have to take a shotgun approach and diversify; integrating parts of all these paths is more likely to lead to the outcome they want than going all in on one.
Also because they have the funding to do it.
Reminds me a bit of the early days of Google, Microsoft, Xerox, etc.
This is just what the teenage stage of the top tech startup/company in an important new category looks like.
mortsnort
The massive cost of this product is unique though (not even counting the copyright lawsuits/settlements coming). I can't think of any side projects that require this level of investment.
furyofantares
> It seems true that no company has used frontier models to create a product with business value commensurate with the cost it takes to train and run them.
Anthropic has said that every model they've trained has been profitable. Just not profitable enough to pay to train the next model.
I bet that's true for OpenAI's LLMs too, or would be if they reduced their free tier limits.
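Here's a toy sequence of that dynamic (the numbers are entirely invented): each model earns back more than it cost, yet never enough to fund a successor that costs roughly 3x more to train:

```python
# Toy model (made-up numbers) of "each model is profitable, but not
# profitable enough to pay for training the next one."
training_cost = 1.0       # cost to train model N, arbitrary units
lifetime_revenue = 1.5    # inference revenue net of serving costs
GROWTH = 3.0              # assume each successor costs ~3x more to train

for n in range(1, 5):
    profit = lifetime_revenue - training_cost   # model N viewed alone
    next_cost = training_cost * GROWTH          # the bill for model N+1
    print(f"model {n}: standalone profit {profit:+.2f}, "
          f"next model needs {next_cost:.2f}")
    training_cost = next_cost
    lifetime_revenue *= GROWTH  # assume revenue scales with capability

# Every model shows a positive standalone profit, yet profit never covers
# the next training run, so outside capital keeps being required.
```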
truelson
It's to their benefit to try everything right now. And quickly.
xnx
> OpenAI is making a wild number of product plays at once
It's similar to the process of electrification. Every existing machine/process needed to be evaluated to see if electricity would improve it: dish washing, clothes drying, food mixing, etc.
OpenAI is not alone. Every one of their products has a (sometimes superior) equivalent from Google (e.g. Veo for Sora) and other competitors.
bossyTeacher
>It seems true that no company has used frontier models to create a product with business value commensurate with the cost it takes to train and run them. That's what OpenAI is trying to do with Sora, and with Codex, Apps, "Agent" flows, etc. I don't think there's more to read into it than that.
It makes them look desperate though. Nothing like starting tons of services at once to show you have a vision
kbos87
I could see Sora having a significant negative impact on short-form video products like TikTok if they don't quickly and accurately find a way to categorize its use. A steady stream of AI-generated video content hurts the value prop of short-form video in more than one way. It quickly desensitizes you and takes away the surprise that drives consumption of a lot of content. It also, of course, leaves you feeling like you can't trust anything you see.
kulahan
Do people on the dopamine drip really care how real their content is? Tons and tons of it is staged or modified anyways. I'm not sure there's anything Real™ on TikTok anyways.
bemmu
I find Sora refreshing in that I don't have to worry about being tricked by something fake. It's just a fun multiplayer slopfest.
duxup
It certainly seems there are some who don't care.
You always get the "who cares if it is fake" folks; even on Reddit, folks will point out something is AI and inevitably others reply "who cares."
But I'm not sure how many people that is or what kind of content they care or don't care about.
janwl
I mean, it's entertainment content. It's like saying a movie is fake, they are actors playing roles. Of course. Who cares?
wobfan
Thought the same. The human-generated content is just as brainless as the AI-generated slop. People who watched the former will also watch the latter. This will not change a lot, I think.
huevosabio
Didn't explicitly think about this, but you're right. I already dismiss off the bat a lot of surprising video content because I don't trust it.
ToucanLoucan
I mean, this is basically already status quo for YouTube Shorts. Tons and tons of shorts are AI-voice over either AI video or stock video covering some pithy thing in no actual depth, just piggybacking off of trending topics. And TikTok has had the same sort of content for even longer.
The "value" of short video content is already somewhat of a poor value proposition for this and other reasons. It lets you just obliterate time which can be handy in certain situations, but it also ruins your attention span.
softwaredoug
The counter-argument is that OpenAI has to make fairly bold moves.
Social was _already_ becoming the domain of AI-generated content. In the benign sense, there's been social content of people sharing their silly AI creations since early DALL-E. It's a good idea to make a social app that's actually _about that_, because you can remix and play with the content in a novel way.
The first Sora was sort of already going in this direction.
ilickpoolalgae
> It’s unclear whether this app will last. One major issue is the back-end expense of producing these videos. For now, OpenAI requires a paid ChatGPT Plus account to generate your own content. At the $20 tier, you can pump out up to 50 low-resolution videos per month. For a whopping $200 a month, you can generate more videos at higher resolutions. None of this compares favorably to competitors like TikTok, which are exponentially cheaper to operate and can therefore not only remain truly free for all users, but actually pay their creators.
fwiw, there's no requirement to have a subscription to create content.
1vuio0pswjnm7
"It wasn't that long ago that Sam Altman was still comparing the release of GPT-5 to the testing of the first atomic bomb, and many commentators took Dario Amodei at his word when he proclaimed* 50% of white collar jobs might soon be automated* by LLM-based tools.
A company that still believes that its technology was imminently going to run large swathes of the economy, and would be so powerful as to reconfigure our experience of the world as we know it, wouldn't be seeking to make a quick buck selling ads against deep fake videos of historical figures wrestling. They also wouldn't be entertaining the idea, as Altman did last week, that they might soon start offering an age-gated version of ChatGPT so that adults could enjoy"
freefaler
They might be forced to do so because the current inference pricing is not really covered by the $20 monthly fee. Who knows what they have promised to investors, and the real cash flow is hard to pin down given the circular cross-investing between the biggest players.
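Rough arithmetic makes the point; every figure below is an assumption for illustration, not OpenAI's real cost structure:

```python
# Back-of-envelope (all figures are assumptions, not OpenAI's real costs):
# does a heavy $20/mo subscriber cover their own inference?
SUBSCRIPTION = 20.00              # $/month
GPU_HOUR = 2.50                   # hypothetical fully-loaded $/GPU-hour
TOKENS_PER_GPU_HOUR = 1_000_000   # hypothetical serving throughput

cost_per_token = GPU_HOUR / TOKENS_PER_GPU_HOUR  # $2.5e-6 per token

daily_tokens = 200_000            # a heavy chat user; video is far pricier
monthly_serving_cost = daily_tokens * 30 * cost_per_token

print(f"serving cost ${monthly_serving_cost:.2f}/mo vs ${SUBSCRIPTION:.2f} fee")
# -> $15.00/mo on text alone; add video generation and the flat fee
#    is underwater.
```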
bilekas
I got the feeling when this was released that it was just another metric to justify further investment. They were guaranteed a lot of users, and they can turn around and say "well, we have two huge applications and we're just getting started." As we've seen, investors don't care too much about product quality, just large numbers.
f33d5173
> wouldn’t be seeking to make a quick buck selling ads against deep fake videos
This isn't a money-making venture for them, and you basically admitted as much. They poured no doubt massive amounts of money into developing this and have little hope of earning it back soon. This is an attempt to keep up with other AI companies also developing video models, in order not to look like they're falling behind to investors. Making it available to users is similarly about increasing active user counts in order to look more successful. If people incidentally get off to it, that's not their concern.
mentalgear
The fact that OpenAI is pushing Sora, and Altman is now even hinting at introducing "erotic roleplay"[0], makes it obvious: OpenAI has stopped being a real AI research lab. Now they're just another desperate player in a no-moat market, scrambling to become the primary platform of this hype era and lock users into their platform, just like Microsoft and Facebook did before in the PC and social eras.
[0] https://www.404media.co/openai-sam-altman-interview-chatgpt-...
gilfoy
Why is it one or the other? They have enough money to do both.
mentalgear
But if you've followed them, they've been focusing only on product for the last two years. The grand GPT-5, and the scaling laws from which all their LLM-to-AGI hopes originated, turned out to be a dud.
sixothree
The number of animal abuse videos I've seen is a bit disturbing. It only demonstrates how careless they have been, possibly intentionally. I know people on HN have been describing the various reasons why OpenAI has not been a good player, but seeing it first-hand is visceral in a way that makes me concerned about them as a company.
Sohcahtoa82
I'm more concerned about Sora (and video-generating AI in general) being the final pour that cements us into our post-truth world.
People will be swayed by AI-generated videos while also being convinced real videos are AI.
I'm kinda terrified of the future of politics.
hombre_fatal
The problem is that we're already post truth.
Just consider how a screenshot of a tweet or a made-up headline already spreads like wildfire: https://x.com/elonmusk/status/1980221072512635117
Sora involves far more work than is required to spread misinfo.
Finally, people don't really care about the truth. They care about things that confirm their world view, or comfortable things. Or they dismiss things that are inconvenient for their tribe and focus on things that are inconvenient for other tribes.
nothrabannosir
> Finally, people don't really care about the truth.
That same link has two “reader notes” about truth.
The lie is halfway around the world, etc., but that can also be explained by people's short-term instincts and reaction to outrage. It's not mutually exclusive with caring about truth.
Maybe I’m being uncharitable — did you mean something like “people don’t care about truth enough to let it stop them from giving into outrage”? Or..?
mat_b
> Finally, people don't really care about the truth. They care about things that confirm their world view. Or they dismiss things that are inconvenient for their tribe and focus on things that are inconvenient for other tribes.
People have always been this way though. The tribes are just organized differently in the internet age.
LexiMax
I strongly suspect future generations are going to look back on the age of trying to cram the entire world into one of several shared social spaces and say "What were those idiots thinking?"
raw_anon_1111
Oh well, I’ll put it out there. If people cared about verified provable truths, religion of any kind wouldn’t exist.
philipallstar
> If people cared about verified provable truths, religion of any kind wouldn’t exist.
Can you provide a verified proof of this statement please?
FloorEgg
Assuming this is all true, what's the most optimistic view you can take looking ~20 years out?
How could all of this wind up leading to a much more fair, kind, sustainable and prosperous future?
Acknowledging risks is important, but where do YOU want all this to go?
Eisenstein
As adults, we grew up with heuristics that are either no longer relevant or now give us the wrong responses.
But the kids who grow up with this stuff will just integrate it into their lives and proceed. The society that results from that will be something we cannot predict, as it will be alien to us. Whether it will be better or not -- probably not.
Humans evolved to spend most of their time with a small group of trusted people. By removing ourselves from that, we have created all sorts of problems that we just aren't really equipped to deal with. Whether this is solvable has yet to be seen.
marshfarm
We're probably post-narrative and post-lexical (words), but haven't become aware of what to update these tools with. Post-truth is an abstraction rooted in the arbitrary.
Reality is specific: actions, materials. Words and language are arbitrary; they're processes, and they're simulations. They don't reference things, they represent them in metaphors. So sure, they have "meaning," but those meanings reduce the specifics of reality, which carry many times the meaning-possibility, down to linearity, cause and effect. That's not conforming to the reality that exists; that's severely reducing, even dumbing down, reality.
AnimalMuppet
There is a reality which exists. Words have meaning. Words are more or less true as the meaning they convey conforms more or less well to the reality that exists. So no, truth is not rooted in the arbitrary. Quite the opposite.
Or at least, words had meaning. As we become post-lexical, it becomes harder to tell how well any sequence of words corresponds to reality. This is post truth - not that there is no reality, but that we no longer can judge the truth content of a statement. And that's a huge problem, both for our own thought life, and for society.
code4life
> Finally, people don't really care about the truth.
"What is truth?" (Pontius Pilate)
godelski
Not to mention that the president posts AI slop frequently[0]. He even posted, and took down, a video promising people a "medbed"[1], a fictional device that just cures everything.
[0] Trump as "King Trump" flying a jet that dumps shit onto protesters https://truthsocial.com/@realDonaldTrump/posts/1153982516232...
[1] https://www.snopes.com/news/2025/09/30/medbed-trump-ai-video...
highwaylights
I'm surprised this isn't a bigger concern given that:
- For over a year now, we've been at the point where a video of anyone saying or doing anything can be generated by anyone and put on the Internet, and it's only becoming more convincing (and rapidly)
- We've been living in a post-truth world for almost ten years, so it's now become normalized
- Almost half of the population has been conditioned to believe anything that supports their political alignment
- People will actually believe incredibly far-fetched things, and when the original video has been debunked, will still hold the belief, because by that point the Internet has filled up with more garbage to support something they really want to believe
It's a weird time to be alive
cruffle_duffle
Absolutely! And don’t kid yourself into thinking you are immune from this either. You can find support of basically anything you want to believe. And your friendly LLM will be more than happy to confirm it too!
Honestly it goes right back to philosophy and what truth even means. Is there even such a thing?
Sohcahtoa82
> Honestly it goes right back to philosophy and what truth even means. Is there even such a thing?
Truth absolutely is a thing. But sometimes, it's nuanced, and people don't like nuance. They want something they can say in a 280-character tweet that they can use to "destroy" someone online.
Eisenstein
People forget that critical thinking means thinking critically about everything, even things you already think are true because they fit into your worldview.
bonoboTP
We will adjust. And guess what, before photography people managed somehow. People gossiped all sorts of stuff and spread malicious rumors, and you had to guess what was a lie and what wasn't. The way people dealt with it was witness testimony and physical evidence.
afavour
We'll have to adjust, certainly. But that doesn't mean nothing bad will happen.
> People gossiped all sorts of stuff, spread malicious runors and you had to guess what's a lie and what's not.
And there were things like witch trials where people were burnt at the stake!
The resolution was a shared faith in central authority. Witness testimony and physical evidence don't scale to populations of millions, you have to trust in the person getting that evidence. And that trust is what's rapidly eroding these days. In politics, in police, in the courts.
mola
Yes, that adjustment could well be monarchy.
I can't see how a functioning democracy can survive without truth as a shared ground for discussion.
safety1st
The media's been lying to us for as long as it has existed.
Prior to the Internet the range of opinions which you could gain access to was far more limited. If the media were all in agreement on something it was really hard to find a counter-argument.
We're so far down the rabbit hole already of bots and astroturfing online, I doubt that AI deepfake videos are going to be the nail in the coffin for democracy.
The majority of the bot, deepfake and AI lies are going to be created by the people who have the most capital.
Just like they owned the traditional media and created the lies there.
bonoboTP
I don't think the US was a monarchy for its first hundred years.
Teever
Of course we will adjust. That is a truism that is beside the point.
What matters is how many people will suffer during this adjustment period.
How many Rwandan genocides will happen because of this technology? How many lynchings or witch burnings?
bonoboTP
It's not beside the point. You can lie with words, you can lie with cartoons and drawings and paintings. You can lie with movies.
We will collectively understand that pixels on a screen are like cartoons or Photoshop on steroids.
sofixa
> The way people dealt with it was witness testimony and physical evidence.
Which are inapplicable today.
> We will adjust
Will we? Maybe years later... per event. It's only now dawning on the majority of Britons that Brexit was a mistake they were lied into.
wongarsu
Brexit is a great example how you can just lie by writing stuff on the side of a bus, no fake photos or videos required
schnable
> Maybe years later...
It is a concern... it took a few centuries for the printing press to spur the Catholic/Protestant wars and then finally resolve them.
bonoboTP
That has nothing to do with GenAI.
pessimizer
> Which are inapplicable today.
No, they are not.
pmontra
Maybe they'll have to tour and meet people in person because videos will be devoid of trust.
On the other side we want to believe in something, so we'll believe in the video that will suit our beliefs.
It's an interesting struggle.
Sohcahtoa82
> Maybe they'll have to tour and meet people in person
That doesn't scale.
During campaign season, they're already running as many rallies as they can. Off the campaign trail, smaller Town Hall events only reach what, a couple hundred people, tops? And at best, they might change the minds of a couple dozen people.
EDIT: It's also worth mentioning that people generally don't seek to have their mind changed. Someone who is planning on voting for one candidate is extremely unlikely to go to a rally for the opposition.
tinfoilhatter
Most members of the US Congress, and the current presidential administration, are already devoid of trust. I can't speak for other countries' governments, but it seems to be a fairly common situation.
bilekas
Yeah, I'm just as annoyed with the AI slop that's coming out as anyone, but the next generation of voters won't believe a thing, so they will be pushed towards believing what they see in real life, like campaigners who go door to door. Ironically, it could be a great thing and would give meaning to the electoral system again!
kulahan
Honestly I can't see a solution beyond concentrating power to highly localized regions. Lots more mayors, city councils, etc. so there is a real chance you can meet someone who represents you.
I don't fully believe anything I see on the internet that isn't backed up by at least two independent sources. Even then, I've probably been tricked at least once.
mentalgear
Well, maybe it's less about Sora and more about how they push the world towards making their next product essential: WorldCoin [0], Altman's blockchain token system (the one with the alien orb) that scans everybody's biometrics and serves as the only Source of Truth for the World, controlled by one private company.
It's like the old saying: they create their own ecosystem. Circular stock market deals are the most obvious example, but WorldCoin has been in the making for years, and Altman has often described it as the only alternative in a post-truth world (the one he himself is creating, of course).
[0] https://www.forbes.com.au/news/innovation/worldcoin-crypto-p...
roadside_picnic
> the final pour that cements us into our post-truth world.
I find it a bit more concerning that anyone would not already understand how deeply we exist in a "post-truth" world. Every piece of information we've consumed for the last few decades has increasingly been shaped by algorithms optimizing someone else's target.
But the real danger of post-truth is when there is a still enough of a veneer of truth that you can use distortions to effectively manipulate the public. Losing that veneer is essentially a collapse of the whole system, which will have consequences I don't think we can really understand.
The pre and early days of social media were riddled with various "leaks" of private photos and video. But what does it mean to leak a nude photo of a celebrity when you can just as easily generate a photo that is indistinguishable? The entire reason leaks like that were so popular is precisely because people wanted a glimpse into something real about the private life of these screen personalities (otherwise 'leaks' and 'nude scenes' would have the same value). As image generation reaches the limit, it will be impossible to ever really distinguish between voyeurism and imagination.
Similarly we live in an age of mass surveillance, but what does surveillance footage mean when it can be trivially faked. Think of how radicalizing surveillance footage has been over the past few decades. Consider for example the video of the Rodney King beating. Increasingly such a video could not be trusted.
> I'm kinda terrified of the future of politics.
If you aren't already terrified enough of the present of politics, then I wouldn't be worried about what Sora brings us tomorrow. I honestly think what we'll see soon is not increasingly powerful authoritarian systems, but the breakdown of systems of control everywhere. As these systems over-extend themselves, they will collapse. Social media's power as a system of control peaked a few years ago; Sora represents a larger breakdown of these systems.
uvaursi
Agreed, but this is mostly coming from people who would normally dismiss you as a kook/conspiracy theorist for bashing MSM.
People forget, or didn't see, all the staged catastrophes in the 90s that were pulled off the channel shortly after someone pointed out something obvious (e.g. dolls instead of human victims, wrong location footage, and so on).
But if you were there, and you saw that, and then saw them pull it and pretend like it didn't happen for the rest of the day, then this AI thing is a nothing burger.
shnp
Flawless AI generated videos will result in video footage not being trusted.
This will simply take us back about 150 years to the time before the camera was common.
The transition period may be painful though.
OJFord
It just makes trusted/verified sources more important, and more people will care about that. I wouldn't be terrified for politics so much as for the raised barrier to entry (and concentration) of the press: people will pay attention to the BBC, the Guardian, and the Times, but not (even less so) independentjourno.com; those sources will be more sceptical of whistleblowers and freelance investigative contributions, etc.
bdcravens
More likely it's the beginning of the end for TikTok, since posts made with it seem to be flooding the platform, lowering trust and credibility in each video.
fishmicrowaver
Hmm, I have my doubts. I don't really understand the appeal of hyper-consumerism, brand focus, and big SM platform engagement. But I look over at my wife, and she is clearly plugged into a giant worldwide cultural movement of women's interests, product recommendations, and politics. It also appears to be the case that women are less likely to engage with AI. It's one of the big reasons I think it's a massive blunder to pursue AI erotica. I think a big part of the TikTok user base (women) will have to be persuaded to jump on the AI bandwagon, and I'm not sure the industry is pursuing any products or features to woo that market.
"Whether Sora lasts or not, however, is somewhat beside the point. What catches my attention most is that OpenAI released this app in the first place.
It wasn’t that long ago that Sam Altman was still comparing the release of GPT-5 to the testing of the first atomic bomb, and many commentators took Dario Amodei at his word when he proclaimed 50% of white collar jobs might soon be automated by LLM-based tools."
That's the thing: this has all been predicated on the notion that AGI is next. That's what the money is chasing, and why it's sucked in astronomical investments. It's cool, but that's not why Nvidia is a multi-trillion-dollar company. It has that valuation because it was promised to be the brainpower behind AGI.