Google's Genie is more impressive than GPT5
52 comments · August 8, 2025
theahura
EDIT: I updated the article to account for this perspective.
------
surround
This can't be right -- they're using LMArena without style control to resolve the market, and GPT-5 is ahead right? (https://lmarena.ai/leaderboard/text/overall-no-style-control)
> This market will resolve according to the company which owns the model which has the highest arena score based off the Chatbot Arena LLM Leaderboard (https://lmarena.ai/) when the table under the "Leaderboard" tab is checked on August 31, 2025, 12:00 PM ET.
> Results from the "Arena Score" section on the Leaderboard tab of https://lmarena.ai/leaderboard/text with the style control off will be used to resolve this market.
> If two models are tied for the top arena score at this market's check time, resolution will be based on whichever company's name, as it is described in this market group, comes first in alphabetical order (e.g. if both were tied, "Google" would resolve to "Yes", and "xAI" would resolve to "No")
> The resolution source for this market is the Chatbot Arena LLM Leaderboard found at https://lmarena.ai/. If this resolution source is unavailable at check time, this market will remain open until the leaderboard comes back online and resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
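(For what it's worth, here's a rough sketch in Python of how I read those resolution rules; the companies, models, and scores below are placeholder examples, not the actual leaderboard at check time.)

```python
# Toy sketch of the market's resolution rules as quoted above.
# All entries are placeholders for illustration, not real leaderboard data.

def resolve_market(leaderboard):
    """leaderboard: list of (company, model, arena_score) tuples taken from
    the 'overall, no style control' table at the market's check time."""
    top_score = max(score for _, _, score in leaderboard)
    # All companies tied for the top arena score...
    tied = {company for company, _, score in leaderboard if score == top_score}
    # ...with ties broken by whichever company name comes first alphabetically.
    return min(tied)

example = [
    ("Google", "gemini-2.5-pro", 1471),  # placeholder scores
    ("OpenAI", "gpt-5", 1462),
    ("xAI", "grok-4", 1430),
]
print(resolve_market(example))  # -> "Google"
```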
surround
You may have already figured this out, but the leaderboard you linked to (https://lmarena.ai/leaderboard/text/overall-no-style-control) shows gemini-2.5-pro ahead with a score of 1471 compared to gpt-5 at 1462.
JimDabell
> This is an incorrect interpretation. The benchmark which the betting market is based upon currently ranks Gemini 2.5 higher than GPT-5.
You can see from the graph that Google shot way up from ~25% to ~80% upon the release of GPT-5. Google’s model didn’t suddenly get way better at any benchmarks, did it?
dcre
It's not about Google's model getting better. It is that gpt-5 already has a worse score than Gemini 2.5 Pro had before gpt-5 came out (on the particular metric that determines this bet: Overall Text without Style Control).
https://lmarena.ai/leaderboard/text/overall-no-style-control
That graph is a probability. The fact that it's not 100% reflects the possibility that gpt-5 or someone else will improve enough by the end of the month to beat Gemini.
tunesmith
I felt like it was getting somewhere and then it pivoted to the stupid graph thing, which I can't seem to escape. Anyway, I think it'll be really interesting to see how this settles out over the next few weeks, and how that'll contrast to what the 24-hour response has been.
My own very naive and underinformed sense: OpenAI doesn't have other revenue paths to fall back on like Google does. The GPT5 strategy really makes sense to me if I look at this as a market share strategy. They want to scale out like crazy, in a way that is affordable to them. If it's that cheap, then they must have put a ton of work into some scaling effort that the other vendors just don't care about as much, whether due to loss-leader economics or VC funding. It really makes me wonder if OpenAI is sitting on something much better that also just happens to be much, much more expensive.
Overall, I'm weirdly impressed because if that was really their move here, it's a slight evidence point that shows that somewhere down in their guts, they do really seem to care about their original mission. For people other than power users, this might actually be a big step forward.
mirblitzarmaven
> Imagine asking a model a question like “what's the weather in Tibet” and instead of doing something lame like check weather.com, it does something awesome like stimulate Tibet [...]
Let's not stimulate Tibet
standardUser
> The goal of AGI is to make programs that can do lots of things.
What do Genie and GPT have to do with AGI? I'm sure the people who stand to make billions love to squint and see their LLM as only steps away from an AGI. Or that guy at Google who fell in love with one. But the rest of us know better.
throwup238
Ostensibly, a model like Genie3 encodes physical laws into its weights the way LLMs encode language. An intuitive grasp of physics as part of a "world model" is generally considered a prerequisite for true AGI. It's a small but significant step towards AGI (assuming Genie3 plays out successfully).
dsadfjasdf
The rest of us still can't prove we're conscious either... remember?
therein
Neither are even close to AGI. Here is something they can't do and won't be able to do for a very long time:
If you're inferring in English and ask it a question, it will never be able to pull from the knowledge it has ingested in another language. Humans are able to do this without relying on a neurotic inner voice spinning around in circles and doing manual translations.
This should be enough to arrive at the conclusion that there are no real insights in the model. It has no model of the world.
OsrsNeedsf2P
This article has zero substance
raincole
It's just that someone noticed people are not happy with the GPT5 release and came up with an apples-to-screech-owls comparison (two completely different kinds of models, one product-ready and the other an internal test only) to farm clicks.
aerhardt
Why bother with substance in the era of vibes?
aydyn
Sounds like someone needs to come up with VibesBench.
Maybe it could just be a distilled scoring of social media sentiment the day after announcement? The more positive hype, the higher the VibeScore.
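Something like this, maybe. A toy sketch (the keyword lists, weighting, and example posts are all made up; a real VibesBench would obviously need an actual sentiment model):

```python
# Toy VibesBench: score day-after social media sentiment for a model release.
# Keyword lists and posts here are invented placeholders.

POSITIVE = {"impressive", "amazing", "huge", "love", "insane", "wow"}
NEGATIVE = {"disappointing", "meh", "underwhelming", "worse", "hate", "flop"}

def vibe_score(posts):
    """Average per-post sentiment in [-1, 1], rescaled to a 0-100 VibeScore."""
    if not posts:
        return 50.0  # neutral when there's nothing to score
    total = 0.0
    for post in posts:
        words = [w.strip(".,!?") for w in post.lower().split()]
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        total += (pos - neg) / max(pos + neg, 1)
    return round(50 * (1 + total / len(posts)), 1)

posts = [
    "Genie 3 looks amazing, wow",
    "honestly this demo is impressive",
    "GPT-5 launch felt underwhelming",
]
print(vibe_score(posts))  # ~66.7: more positive hype -> higher VibeScore
```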
olddustytrail
> Sounds like someone needs to come up with VibesBench
Possibly. Have you grabbed that domain? Might be worth doing.
floren
A substack article? Zero substance? Whaaaaaaaaaaaaaaaaat
theahura
:(
whimsicalism
blogspam
bko
It's pretty incredible that a model like Genie can deduce the laws of physics from mere observation of video, even fluid dynamics, which is a notoriously difficult problem. It's not obvious that this would happen or would even be possible from this kind of architecture. It's obviously doing something deep here.
As an aside, I think it's funny that the AI Doomer crowd ignores image and video AI models when it comes to AI models that will enslave humanity. It's not inconceivable that a video model would have a better understanding of the world than an LLM. So perhaps it would grow new capabilities and sprout some kind of intent. It's super-intelligence! Surely these models if trained long enough will deduce hypnosis or some similar kind of mind control and cause mass extinction events.
I mean, the only other explanation why LLMs are so scary and likely to be the AI that kills us all is that they're trained on a lot of sci-fi novels so sometimes they'll say things mimicking sentient life and express some kind of will. But obviously that's not true ;-)
ChrisMarshallNY
If you watched Ex Machina, there was a little twist at the end, which basically showed that she ("it," really) was definitely a machine, and had no human "drivers."
I thought that was a clever stroke, and probably a good comment on how we'll be perceiving machines; and how they'll be perceiving us.
gmueckl
These models aren't rigorously deriving the future state of a system from a quantitative model based in physical theories. Their understanding of the natural environment around them is in line with the innate understanding that animals and humans have, which is based on the experience of living in an environment that follows deterministic patterns. It is easy to learn that a river flows faster in the middle by empirical observation. But that is not correlated with a deeper understanding of hydrodynamics.
chairhairair
I don't know how one would think doomers "ignore image and video AI models". They (Yudkowsky, Hinton, Kokotajlo, Scott Alexander) point at these things all the time.
reducesuffering
It's completely apparent that HN dismisses doomers with strawmen because these HN'ers simply don't even read their arguments and just handwave them away based on vibes they heard through the grapevine.
jeremyjh
> Imagine asking a model a question like “what's the weather in Tibet” and instead of doing something lame like check weather.com, it does something awesome like stimulate Tibet exactly so that it can tell you the weather based on the simulation.
Was where I stopped reading.
justonceokay
We already automate away all possible human interaction. Maybe in the future we can automate away our senses themselves.
My roommate already looks at his weather app to see what to wear instead of putting his hand out the window. Simulating the weather instead of experiencing it is just the next logical step
gundmc
When I get dressed in the morning, it's 58 degrees outside. Today there's a high of ~88. It's totally normal to look at the weather to determine what to wear.
bookofjoe
I'm reminded of an article about Truman Capote I read sometime last century in which he related visiting a European princess at the Plaza Hotel in the dead of winter during an ongoing snowstorm. He entered her room completely covered in snow; she looked at him and asked, is it still snowing?
zb3
Is Genie available for me to try? No? Then I can't tell, because I won't blindly trust Google.
Remember Imagen? They advertised Imagen 4 level quality long before releasing the original Imagen model. Not falling for this again.
beepbooptheory
> The goal of AGI is to make programs that can do lots of things.
Wait, is it?
lm28469
That's certainly how it feels to me. Every demo seems like it's presenting some kind of socially maladjusted Silicon Valley nerd's wet dream. Half of it doesn't interest non-tech people; the other half seems designed for teenagers.
Look at this image of Zuckerberg demoing his new product: https://imgur.com/1naGLfp
Or the gpt5 press release: "look at this shitty game it made", "look at the bars on this graph showing how we outperform some other model by 2% in a benchmark that doesn't actually represent anything"
mind-blight
GPT-5 is a bit better, particularly around consistency, and a fair amount cheaper. For all of my use cases, that's a huge win.
Products using AI-powered data processing (a lot of what I use it for) don't need mind-blowing new features. I just want it to be better at summarizing and instruction following, and I want it to be cheaper. GPT-5 seems to knock all of that out of the park.
benjiro
> GPT-5 is a bit better -particularly around consistency - and a fair amount cheaper. For all of my use cases, that's a huge win.
Which is more or less a natural evolution of LLMs... The thing is, where are my benefits as a developer?
If, for instance, CoPilot charges 1 premium request for Claude and 1 premium request for GPT-5, even though GPT-5 is supposed to be at the level of GPT 4.1 (a free model) in resource usage, then (from my point of view) there is no gain.
So far, from a coding point of view, Claude still (often) does coding better. I'd make the comparison that Claude feels like a senior dev with years of experience, whereas GPT-5 feels like an academic professor that is too focused on analytic presentation.
So while it's nice to see more competition in the market, I still rank (with Copilot):
Claude > Gemini > GPT5 ... big gap ... GPT4.1 (beast mode) > GPT 4.1
LLMs are following the same progression these days as GPUs or CPUs... big jumps at first, then things slow down; you get more power efficiency but only marginal improvements.
Where we will see benefits is in specialized LLMs; for instance, Anthropic is doing a good job creating a programmer-focused LLM. But even those gains are starting to get challenged by Chinese (open source) models, step by step.
GPT5 simply follows a trend. And within a few months, Anthropic will release something that's probably not much of an improvement over 4.0, but cheaper. Probably better with tool usage. And then comes GPT5.1, 6 months later, and ...
GPT-5.0, in my opinion, for a company with the funding that OpenAI has, needed to beat the competition with much more impact.
pton_xd
> "look at this shitty game it made"
This is basically every agentic coding demo I've seen to date. It's the future but man we're still years and years away.
rvnx
We reached AGI about 30 years ago then
thewebguyd
lol. The definition of AGI seems to change on the daily, and usually coincides with whatever the describer is trying to sell.
adeelk93
I’d amend that to - it coincides with whatever the describer is trying to get funding for
SV_BubbleTime
Geez… Make me pick between trusting Google, or trusting OpenAI… I’ll go with Anthropic.
sirbutters
Honestly, same. Anthropic CEO radiates good vibes.
wagwang
The Anthropic CEO dooms all day about how AI is going to kill everyone, and yet works on frontier models and gives them agentic freedom.
tekno45
yay! security and privacy are just VIBES!!!
38
[dead]
thegrim33
I like how we've just collectively forgotten about the absolutely disastrous initial release of Gemini. Were the people responsible for that fired? Are they still there making decisions? Why should I ever trust them and give them a second chance when I could just choose to use a competitor that doesn't have that history?
rvnx
We did not forget this scam that was Google Bard, but still, it is the past now
echelon
I know this is sarcasm, but a misstep like this by OpenAI will harm their future funding and hiring prospects.
They're supposed to be in the lead against a company 30x their size by revenue, and 10,000x their might. That lead is clearly slipping.
Despite ChatGPT penetration, it's not clear that OpenAI can compete toe to toe with a company that has distribution on every pane of glass.
While OpenAI has incredible revenue growth, they also have incredible spending and have raised at crazier and crazier valuations. It's a risky gamble, but one they're probably forced to take.
Meanwhile, Meta is hiring away all of their top talent. I'll bet that anyone that turned down offers is second guessing themselves right now.
raincole
So where can I try out Genie 3? Did the author try it out?
If not, it's just vibe^2 blogging.
echelon
Genie 3 just had the Sora treatment.
Lots of press for something by invitation only.
This probably means it takes an incredible amount of resources to power in its current form. Possibly tens of H100s (or TPUs) simultaneously. It'll take time to turn that from a wasteful tech preview into a scalable product.
But it's clearly impressive, and it did the job of making what OpenAI did look insignificant.
password54321
Basically free advertising for something not released.
> The betting markets were not impressed by GPT-5. I am reading this graph as "there is a high expectation that Google will announce Gemini-3 in August", and not as "Gemini 2.5 is better than GPT-5".
This is an incorrect interpretation. The benchmark which the betting market is based upon currently ranks Gemini 2.5 higher than GPT-5.