zaptrem
swatcoder
> We look forward to learning more about its strengths, capabilities, and potential applications in real-world settings. If GPT‑4.5 delivers unique value for your use case, your feedback (opens in a new window) will play an important role in guiding our decision.
"We don't really know what this is good for, but spent a lot of money and time making it and are under intense pressure to announce new things right now. If you can figure something out, we need you to help us."
Not a confident place for an org trying to sustain a $XXXB valuation.
jodrellblank
> "Early testing shows that interacting with GPT‑4.5 feels more natural. Its broader knowledge base, improved ability to follow user intent, and greater “EQ” make it useful for tasks like improving writing, programming, and solving practical problems. We also expect it to hallucinate less."
"Early testing doesn't show that it hallucinates less, but we expect that putting that sentence nearby will lead you to draw a connection there yourself".
LeifCarrotson
That's some top-tier sales work right there.
I suck at and hate writing the mildly deceptive corporate puffery that seems to be in vogue. I wonder if GPT-4.5 can write that for me or if it's still not as good at it as the expert they paid to put that little gem together.
esafak
GPT-4.5 may be an awesome model, some say!
istjohn
According to a graph they provide, it does hallucinate significantly less on at least one benchmark.
willy_k
So they made a Claude that knows a bit more.
FridgeSeal
“It’s not actually better, but you’re all apparently expecting something, so this time we put more effort into the marketing copy”
justspacethings
The usage of "greater" is also interesting. It's like they are trying to say "better," but "greater" is a geographic term and doesn't mean "better"; it's closer to "wider" or "covers more area."
zaptrem
This seems like it should be attributed to better post training, not a bigger model.
refulgentis
You are lying.
They provided extensive data on benchmarks, including hallucination.
GPT-4.5's rate is 0.19, 4o's is 0.52.
Please, don't do stuff like this.
I love this community. I grew up on it. And this is toxic. The first 7 screenfuls of this page are third-order complaints based on a made-up complaint, followed by mind-reading.
I have plenty of other places to read commentary that maintains a standard of conversation that's:
"well I was joking, maybe? except for the part I care about, of course I was dead serious about that. are you seriously claiming that's a good rate?", and calling out blatant lying, as I am now, is treated as disagreeable party-poopin'.
EDIT: - 45m in, -2, 0 replies. We're firmly in a "reality? that's disagreeable party-poopin'!" stage :/
pinkmuffinere
This is a very harsh take. Another interpretation is “We know this is much more expensive, but it’s possible that some customers do value the improved performance enough to justify the additional cost. If we find that nobody wants that, we’ll shut it down, so please let us know if you value this option”.
mechagodzilla
I think that's the right interpretation, but that's pretty weak for a company that's nominally worth $150B but is currently bleeding money at a crazy clip. "We spent years and billions of dollars to come up with something that's 1) very expensive, and 2) possibly better under some circumstances than some of the alternatives." There are basically free, equally good competitors to all of their products, and pretty much any company that can scrape together enough dollars and GPUs to compete in this space manages to 'leapfrog' the other half dozen or so competitors for a few weeks until someone else does it again.
riwsky
Said the quiet part out loud! Or as we say these days, “transparently exposed the chain of thought tokens”.
Terr_
"I knew the dame was trouble the moment she walked into my office."
"Uh... excuse me, Detective Nick Danger? I'd like to retain your services."
"I waited for her to get the the point."
"Detective, who are you talking to?"
"I didn't want to deal with a client that was hearing voices, but money was tight and the rent was due. I pondered my next move."
"Mr. Danger, are you... narrating out loud?"
"Damn! My internal chain of thought, the key to my success--or at least, past successes--was leaking again. I rummaged for the familiar bottle of scotch in the drawer, kept for just such an occasion."
---
But seriously: These "AI" products basically run on movie-scripts already, where the LLM is used to append more "fitting" content, and glue-code is periodically performing any lines or actions that arise in connection to the Helpful Bot character. Real humans are tricked into thinking the finger-puppet is a discrete entity.
These new "reasoning" models are just switching the style of the movie script to film noir, where the Helpful Bot character is making a layer of unvoiced commentary. While it may make the story more cohesive, it isn't a qualitative change in the kind of illusory "thinking" going on.
porridgeraisin
Lol, nice one
crystal_revenge
> "We don't really know what this is good for, but spent a lot of money and time making it and are under intense pressure to announce new things right now. If you can figure something out, we need you to help us."
Having worked at my fair share of big tech companies (while preferring to stay in smaller startups), in so many of these tech announcements I can feel the pressure the PM had from leadership, and hear the quiet cries of the one to two experienced engineers on the team arguing sprint after sprint that "this doesn't make sense!"
riwsky
> the quiet cries of the one to two experienced engineers on the team arguing sprint after sprint that "this doesn't make sense!"
“I have five years of Cassandra experience—and I don’t mean the db”
spaceman_2020
Really don't understand what the use case for this is. The o series models are better and cheaper. Sonnet 3.7 smokes it on coding. Deepseek R1 is free and does a better job than any of OAI's free models.
amarcheschi
I have a professor who founded a few companies; one of these was funded by Gates, after he managed to speak with him and convinced him to give him money. This guy is the GOAT, and he always tells us that we need to find solutions to problems, not problems for our solutions. It seems at OpenAI they didn't get the memo this time.
fsndz
it's so over, pretraining is ngmi. maybe Sam Altman was wrong after all? https://www.lycee.ai/blog/why-sam-altman-is-wrong
anxoo
AI skeptics have predicted 10 of the last 0 bursts of the AI bubble. any day now...
FpUser
>"I also agree with researchers like Yann LeCun or François Chollet that deep learning doesn't allow models to generalize properly to out-of-distribution data—and that is precisely what we need to build artificial general intelligence."
I think "generalize properly to out-of-distribution data" is too weak of criteria for general intelligence (GI). GI model should be able to get interested about some particular area, research all the known facts, derive new knowledge / create theories based upon said fact. If there is not enough of those to be conclusive: propose and conduct experiments and use the results to prove / disprove / improve theories. And it should be doing this constantly in real time on bazillion of "ideas". Basically model our whole society. Fat chance of anything like this happening in foreseeable future.
crazygringo
> We don't really know what this is good for
Oh come on. Think how long a gap there was between the first microcomputer and VisiCalc. Or between the start of the internet and social networking.
First of all, it's going to take us 10 years to figure out how to use LLM's to their full productive potential.
And second of all, it's going to take us collectively a long time to also figure out how much accuracy is necessary to pay for in which different applications. Putting out a higher-accuracy, higher-cost model for the market to try is an important part of figuring that out.
With new disruptive technologies, companies aren't supposed to be able to look into a crystal ball and see the future. They're supposed to try new things and see what the market finds useful.
mandevil
ChatGPT had its initial public release November 30th, 2022. That's 820 days to today. The Apple II was first sold June 10, 1977, and Visicalc was first sold October 17, 1979, which is 859 days. So we're right about the same distance in time- the exact equal duration will be April 7th of this year.
Going back to the very first commercially available microcomputer, the Altair 8800 (which is not a great match, since that was sold as a kit with binary switches, 1 byte at a time, for input, much more primitive than ChatGPT's UX), that's four years and nine months to the Visicalc release. This isn't a decade-long process of figuring things out; it actually tends to move real fast.
aylmao
I generally agree with the idea of building things, iterating, and experimenting before knowing their full potential, but I do see why there's negative sentiment around this:
1. The first microcomputer predates VisiCalc, yes, but it doesn't predate the realization of what it could be useful for. The Micral was released in 1973. Douglas Engelbart gave "The Mother of All Demos" in 1968 [2]. It included things that wouldn't be commonplace for decades, like a collaborative real-time editor or video-conferencing.
I wasn't yet born back then, but reading about the timeline of things, it sounds like the industry had a much more concrete and concise idea of what this technology would bring to everyone.
"We look forward to learning more about its strengths, capabilities, and potential applications in real-world settings." doesn't inspire that sentiment for something that's already being marketed as "the beginning of a new era" and valued so exorbitantly.
2. I think as AI becomes more generally available and "good enough," people (understandably) will be more skeptical of closed-source improvements that stem from spending big. Commoditizing AI is more clearly "useful," in the same way commoditizing computing was more clearly useful than just pushing numbers up.
Again, I wasn't yet born back then, but I can imagine the announcement of the Apple Macintosh with its 8MHz CPU and 128KB RAM was more exciting and had a bigger impact than the announcement of the Cray-2 with its 1.9 GFLOPS and 1GB+ of memory.
nyc_data_geek1
The Internet had plenty of very productive use cases before social networking, even from its most nascent origins. Spending billions building something on the assumption that someone else will figure out what it's good for, is not good business.
nyrikki
The TRS-80, Apple ][, and PET all came out in 1977, VisiCalc was released in 1979.
Usenet, Bitnet, IRC, and BBSs all predated the commercial internet, and they are all forms of online social networks.
dminik
They keep saying this about crypto too and yet there's still no legitimate use in sight.
rsynnott
Arguably social networking is older than the internet proper; USENET predates TCP/IP (though not ARPANet).
harlanlewis
The price really is eye watering. At a glance, my first impression is this is something like Llama 3.1 405B, where the primary value may be realized in generating high quality synthetic data for training rather than direct use.
I keep a little google spreadsheet with some charts to help visualize the landscape at a glance in terms of capability/price/throughput, bringing in the various index scores as they become available. Hope folks find it useful, feel free to copy and claim as your own.
https://docs.google.com/spreadsheets/d/1foc98Jtbi0-GUsNySddv...
jwr
This is incredibly useful, thank you for sharing!
isoprophlex
Thats... incredibly thorough. Wow. Thanks for sharing this.
sfink
> feel free to copy and claim as your own.
That's a nice sentiment, but I'd encourage you to add a license or something. The basic "something" would be adding a canonical URL into the spreadsheet itself somewhere, along with a notification that users can do what they want other than removing that URL. (And the URL would be described as "the original source" or something, not a claim that the particular version/incarnation someone is looking at is the same as what is at that URL.)
The risk is that someone will accidentally introduce errors or unsupportable claims, and people with the modified spreadsheet won't know that it's not The spreadsheet and so will discount its accuracy or trustability. (If people are trying to deceive others into thinking it's the original, they'll remove the notice, but that's a different problem.) It would be a shame for people to lose faith in your work because of crap that other people do that you have no say in.
gwyllimj
That is an amazing resource. Thanks for sharing!
mwigdahl
Wow, what awesome information! Thanks for sharing!
dumpsterdiver
Nice, thank you for that (upvoted in appreciation). Regarding the absence of o1-Pro from the analysis, is that just because there isn't enough public information available?
adinb
I cannot overstate how good your shared spreadsheet is. Thanks again!
bglusman
very impressive... also interested in your trip planner, it looks like invite only at the moment, but... would it be rude to ask for an invite?
minimaxir
Sam Altman's explanation for the restriction is a bit fluffier: https://x.com/sama/status/1895203654103351462
> bad news: it is a giant, expensive model. we really wanted to launch it to plus and pro at the same time, but we've been growing a lot and are out of GPUs. we will add tens of thousands of GPUs next week and roll it out to the plus tier then. (hundreds of thousands coming soon, and i'm pretty sure y'all will use every one we can rack up.)
chefandy
I’m not an expert or anything, but from my vantage point, each passing release makes Altman’s confidence look more aspirational than visionary, which is a really bad place to be with that kind of money tied up. My financial manager is pretty bullish on tech so I hope he is paying close attention to the way this market space is evolving. He’s good at his job, a nice guy, and surely wears much more expensive underwear than I do— I’d hate to see him lose a pair powering on his Bloomberg terminal in the morning one of these days.
igor47
You're the one buying him the underwear. Don't index funds outperform managed investing? I think especially after accounting for fees, but possibly even after accounting for the fact that 50% of money managers are below average.
Terr_
> each passing release makes Altman’s confidence look more aspirational than visionary
As an LLM cynic, I feel that point passed long ago, perhaps even before Altman claimed countries would start wars to conquer the territory around GPU datacenters, or promoted the dream of a $7 trillion (T-for-trillion) investment deal, etc.
Alas, the market can remain irrational longer than I can remain solvent.
g-mork
release blog post author: this is definitely a research preview
ceo: it's ready
the pricing is probably a mixture of dealing with GPU scarcity and intentionally discouraging actual users. I can't imagine the pressure they must be under to show they are releasing and staying ahead, but Altman's tweet makes it clear they aren't really ready to sell this to the general public yet.
pk-protect-ai
Yep, that's the thing: they are not ahead anymore, not since last summer at least. Yes, they probably have the largest customer base, but their models haven't been the best for a while now.
serjester
I suppose this was their final hurrah after two failed attempts at training GPT-5 with the traditional pre-training paradigm. Just confirms reasoning models are the only way forward.
granzymes
> Compared to OpenAI o1 and OpenAI o3‑mini, GPT‑4.5 is a more general-purpose, innately smarter model. We believe reasoning will be a core capability of future models, and that the two approaches to scaling—pre-training and reasoning—will complement each other. As models like GPT‑4.5 become smarter and more knowledgeable through pre-training, they will serve as an even stronger foundation for reasoning and tool-using agents.
DebtDeflation
GPT 5 is likely just going to be a router model that decides whether to send the prompt to 4o, 4o mini, 4.5, o3, or o3 mini.
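A rough sketch of what that kind of router could look like (entirely hypothetical: the model names are just today's lineup, the routing heuristic is invented for illustration, and a real router would presumably be a trained classifier):

from openai import OpenAI

client = OpenAI()

def pick_model(prompt: str) -> str:
    # Hypothetical routing table: cheap default, escalate when the prompt
    # looks like it needs deep reasoning or heavy factual recall.
    if any(w in prompt.lower() for w in ("prove", "step by step", "debug")):
        return "o3-mini"            # reasoning-heavy request
    if len(prompt) > 2000:
        return "gpt-4.5-preview"    # crude stand-in for "needs the big model"
    return "gpt-4o-mini"            # everything else stays cheap

def route(prompt: str) -> str:
    response = client.chat.completions.create(
        model=pick_model(prompt),
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content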
swores
My guess is that you're right about that being what's next (or maybe almost next) from them, but I think they'll save the name GPT-5 for the next actually-trained model (like 4.5 but a bigger jump), and use a different kind of name for the routing model.
Even by their poor standards at naming, it would be weird to introduce a completely new type/concept, one that can loop in models including the 4 / 4.5 series, while naming it part of that same series.
My bet: probably something weird like "oo1", or I suspect they might try to give it a name that sticks for people to think of as "the" model - either just calling it "ChatGPT", or coming up with something new that sounds more like a product name than a version number (OpenCore, or Central, or... whatever they think of)
lolinder
Except minus 4.5, because at these prices and results there's essentially no reason not to just use one of the existing models if you're going to be dynamically routing anyway.
crystal_revenge
> Just confirms reasoning models are the only way forward.
Reasoning models are roughly equivalent to allowing Hamiltonian Monte Carlo samplers to "warm up" (i.e. start sampling from the typical set). This, unsurprisingly, yields better results (after all, LLMs are just fancy Monte Carlo models in the end). However, it is extremely unlikely this improvement comes without real limitations. Letting your HMC sampler warm up is essential to good sampling, but letting it "warm up more" doesn't result in radically better sampling.
While LLMs these days have shown impressive results in efficiently sampling from the typical set, we're clearly not making major improvements in the capabilities of these models.
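To see the warm-up analogy concretely, here's a toy random-walk Metropolis sampler (not HMC proper, but the same warm-up dynamics) targeting a standard normal from a terrible starting point: the early draws are dominated by the transient, discarding them helps a lot, and running warm-up longer past convergence buys nothing further:

import numpy as np

rng = np.random.default_rng(0)
x, samples = 50.0, []          # start far outside the typical set
for _ in range(20_000):
    proposal = x + rng.normal(scale=1.0)
    # Metropolis accept rule for target density p(x) ~ exp(-x^2 / 2)
    if np.log(rng.uniform()) < 0.5 * (x**2 - proposal**2):
        x = proposal
    samples.append(x)

print(np.mean(samples[:1000]))   # biased by the warm-up transient
print(np.mean(samples[5000:]))   # post warm-up: close to the true mean, 0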
jstummbillig
What it confirms, I think, is that we are going to need a lot more chips.
georgemcbay
Further confirmation, IMO, that the idea that any of this leads to anything close to AGI is people getting high on their own supply (in some cases literally).
LLMs are a great tool for what is effectively collected-knowledge search and summary (so long as you are willing to accept that you have to verify all of the 'knowledge' they spit back, because they always have the ability to go off the rails), but they have been hitting the limits on how much better that can get without somehow introducing more real knowledge for close to 2 years now. Everything since then has been super incremental, and IME mostly just benchmark gains and hype as opposed to actually being purely better.
I personally don't believe that more GPUs solves this, like, at all. But its great for Nvidia's stock price.
prisenco
Or, possibly, we're stuck waiting for another theoretical breakthrough before real progress is made.
DannyBee
Eh, no. More chips won't save this right now, or probably in the near future (i.e. barring someone sitting on a breakthrough right now).
It just means either
A. Lots and lots of hard work that get you a few percent at a time, but add up to a lot over time.
or
B. Completely different approaches that people actually think about for a while rather than trying to incrementally get something done in the next 1-2 months.
Most fields go through this stage. Sometimes more than once as they mature and loop back around :)
Right now, AI seems bad at doing either - at least, from the outside of most of these companies, and watching open source/etc.
While lots of little improvements seem to be released in lots of parts, it's rare to see anywhere that is collecting and aggregating them en masse and putting them in practice. It feels like for every 100 research papers, maybe 1 makes it into something in a way that anyone ends up using it by default.
This could be because they aren't really even a few percent (which would be yet a different problem, and in some ways worse), or it could be because nobody has cared to, or ...
I'm sure very large companies are doing a fairly reasonable job on this, because they historically do, but everyone else - even frameworks - it's still in the "here's a million knobs and things that may or may not help".
It's like if compilers had no 'O0/O1/O2/O3' at all and were just like "here's 16,283 compiler passes - you can put them in any order and amount you want". Thanks! I hate it!
It's worse even because it's like this at every layer of the stack, whereas in this compiler example, it's just one layer.
At the rate of claimed improvements by papers in all parts of the stack, either lots and lots is being lost because this is happening, in which case eventually that percent adds up to enough for someone to use to kill you, or nothing is being lost, in which case people appear to be wasting untold amounts of time and energy, then trying to bullshit everyone else, and the field as a whole appears to be doing nothing about it. That seems, in a lot of ways, even worse. FWIW - I already know which one the cynics of HN believe, you don't have to tell me :P. This is obviously also presented as black and white, but the in-betweens don't seem much better.
Additionally, everyone seems to rush half-baked things out to try to get the next incremental improvement out the door, because they think it will help them stay "sticky" or whatever. History does not suggest this is a good plan, and even if it were a good plan in theory, it's pretty hard to lock people in with what exists right now. There isn't enough anyone cares about, and rushing out half-baked crap is not helping that. Mindshare doesn't really matter if no one cares about using your product.
Does anyone using these things truly feel locked into anyone's ecosystem at this point? Do they feel like they will be soon?
I haven't met anyone who feels that way, even in corps spending tons and tons of money with these providers.
The public companies I can at least understand, given the fickleness of public markets. That was supposed to be one of the serious benefits of staying private. So watching private companies do the same thing - it's just sort of mind-boggling.
Hopefully they'll grow up soon, or someone who takes their time and does it right during one of the lulls will come and eat all of their lunches.
usaar333
For OpenAI perhaps? Sonnet 3.7 without extended thinking is quite strong. SWE-bench scores tie o3.
stavros
How do you read those scores? I wanted to see how well 3.7 with thinking did, but I can't even read that table.
newfocogi
I think this is the correct take. There are other axes to scale on, AND I expect we'll see smaller and smaller models approach this level of pre-trained performance. But I believe massive pre-training gains have hit clearly diminishing returns (until I see evidence otherwise).
sebastiennight
I think it's fairer to compare it to the original GPT-4, which might be the equivalent in terms of "size" (though we don't have actual numbers for either).
GPT-4: Input $30.00 / 1M tokens ; Output $60.00 / 1M tokens
So 4.5 is 2.5x more expensive.
I think they announced this as their last non-reasoning model, so it was maybe with the goal of stretching pre-training as far as they could, just to see what new capabilities would show up. We'll find out as the community gives it a whirl.
I'm a Tier 5 org and I have it available already in the API.
minimaxir
The marginal costs for running a GPT-4-class LLM are much lower nowadays due to significant software and hardware innovations since then, so costs/pricing are harder to compare.
sebastiennight
Agreed, however it might make sense that a much-larger-than-GPT-4 LLM would also, at launch, be more expensive to run than the OG GPT-4 was at launch.
(And I think this is probably also scarecrow pricing to discourage casual users from clogging the API since they seem to be too compute-constrained to deliver this at scale)
spoaceman7777
There are some numbers on one of their Blackwell or Hopper info pages that note their hardware's ability to host an unnamed GPT model with 1.8T params. My assumption was that it referred to GPT-4.
Sounds to me like GPT 4.5 likely requires a full Blackwell HGX cabinet or something, thus OpenAI's reference to needing to scale out their compute more (Supermicro only opened up their Blackwell racks for General Availability last month, and they're the prime vendor for water-cooled Blackwell cabinets right now, and have the ability to throw up a GPU mega-cluster in a few weeks, like they did for xAI/Grok)
OldGreenYodaGPT
2x that price for the 32k-context GPT-4 via the API at launch. So nearly the same price, but you get 4x the context.
jstummbillig
Why would that be fairer? We can assume they did incorporate all learnings and optimizations they made post gpt-4 launch, no?
hn_throwaway_99
The price is obviously 15-30x that of 4o, but I'd just posit that there are some use cases where it may make sense. It probably doesn't make sense for the "open-ended consumer facing chatbot" use case, but for other use cases that are fewer and higher value in nature, it could if its abilities are considerably better than 4o's.
For example, there are now a bunch of vendors that sell "respond to RFP" AI products. The number of RFPs that any sales organization responds to is probably no more than a couple a week, but it's a very time-consuming, laborious process. But the payoff is obviously very high if a response results in a closed sale. So here paying 30x for marginally better performance makes perfect sense.
I can think of a number of similar "high value, relatively low occurrence" use cases like this where the pricing may not be a big hindrance.
Manouchehri
Yeah, agreed.
We’re one of those types of customers. We wrote an OpenAI API compatible gateway that automatically batches stuff for us, so we get 50% off for basically no extra dev work in our client applications.
I don’t care about speed, I care about getting the right answer. The cost is fine as long as the output generates us more profit.
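For the curious: the 50% discount is OpenAI's Batch API, so the gateway mostly rewrites synchronous calls into something like the sketch below (per the batch docs as I understand them; error handling and result polling omitted):

import json
from openai import OpenAI

client = OpenAI()

# One JSONL line per request, each tagged with a custom_id so the
# asynchronous results can be matched back to the original caller.
requests = [
    {"custom_id": "req-1", "method": "POST", "url": "/v1/chat/completions",
     "body": {"model": "gpt-4.5-preview",
              "messages": [{"role": "user", "content": "Summarize this RFP..."}]}},
]
with open("batch.jsonl", "w") as f:
    for r in requests:
        f.write(json.dumps(r) + "\n")

batch_file = client.files.create(file=open("batch.jsonl", "rb"), purpose="batch")
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",   # results arrive within a day, at half price
)
print(batch.id)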
wavemode
This has been my suspicion for a long time - OpenAI have indeed been working on "GPT5", but training and running it is proving so expensive (and its actual reasoning abilities only marginally stronger than GPT4) that there's just no market for it.
It points to an overall plateau being reached in the performance of the transformer architecture.
camdenreslink
That would certainly reduce my anxiety about the future of my chosen profession.
goatlover
Certainly hope so. The tech billionaires are a little too excited to achieve AGI and replace the workforce.
shoubidouwah
TBH, with the safety/alignment paradigm we have, workforce replacement is not my top concern for when we hit AGI. A pause / lull in capabilities would be hugely helpful so that we can figure out how not to die along with the lightcone...
JohnnyMarcone
I feel like this period has shown that we're not quite ready for a machine god. We'll see if RL hits a wall as well.
wiremine
> GPT 4.5 pricing is insane:
> Input: $75.00 / 1M tokens
> Cached input: $37.50 / 1M tokens
> Output: $150.00 / 1M tokens
> GPT 4o pricing for comparison:
> Input: $2.50 / 1M tokens
> Cached input: $1.25 / 1M tokens
> Output: $10.00 / 1M tokens
Their examples don't seem 30x better. :-)
simonw
I got gpt-4.5-preview to summarize this discussion thread so far (at 324 comments):
hn-summary.sh 43197872 -m gpt-4.5-preview
Using this script: https://til.simonwillison.net/llms/claude-hacker-news-themes...
Here's the result: https://gist.github.com/simonw/5e9f5e94ac8840f698c280293d399...
It took 25797 input tokens and 1225 output tokens, for a total cost (calculated using https://tools.simonwillison.net/llm-prices ) of $2.11! It took 154 seconds to generate.
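The arithmetic, if you want to check it against the posted prices:

input_tokens, output_tokens = 25797, 1225
cost = input_tokens * 75.00 / 1_000_000 + output_tokens * 150.00 / 1_000_000
print(f"${cost:.2f}")  # $2.12, within a cent of the $2.11 above (rounding)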
djhworld
Interesting summary, but it's hard to gauge whether this is better/worse than just piping the contents into a much cheaper model.
azinman2
It’d be great if someone would do that with the same data and prompt to other models.
I did like the formatting and attributions but didn’t necessarily want attributions like that for every section. I’m also not sure if it’s fully matching what I’m seeing in the thread but maybe the data I’m seeing is just newer.
simonw
Good call. Here's the same exact prompt run against:
GPT-4o: https://gist.github.com/simonw/592d651ec61daec66435a6f718c06...
GPT-4o Mini: https://gist.github.com/simonw/cc760217623769f0d7e4687332bce...
Claude 3.7 Sonnet: https://gist.github.com/simonw/6f11e1974e4d613258b3237380e0e...
Claude 3.5 Haiku: https://gist.github.com/simonw/c178f02c97961e225eb615d4b9a1d...
Gemini 2.0 Flash: https://gist.github.com/simonw/0c6f071d9ad1cea493de4e5e7a098...
Gemini 2.0 Flash Lite: https://gist.github.com/simonw/8a71396a4a219d8281e294b61a9d6...
Gemini 2.0 Pro (gemini-2.0-pro-exp-02-05): https://gist.github.com/simonw/112e3f4660a1a410151e86ec677e3...
alew1
Didn't seem to realize that "Still more coherent than the OpenAI lineup" wouldn't make sense out of context. (The actual comment quoted there is responding to someone who says they'd name their models Foo, Bar, Baz.)
willy_k
Wonder if there’s some pro-OpenAI system prompt getting in the way of that.
stevage
Huh. Disregarding the 4.5-specific bit here, a browser extension or possibly website that did this in general could be really useful.
Maybe even something that just noticed whenever you visited a site that had had significant HN discussion in the past, then let you trigger a summary.
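The lookup half already exists as a public endpoint: Algolia's HN Search API lets you query by URL. A rough sketch (summarization step left out; the min_comments threshold is an arbitrary choice of mine):

import json, urllib.parse, urllib.request

def hn_discussions(url, min_comments=50):
    # Search HN stories matching the URL via Algolia's public API
    api = ("https://hn.algolia.com/api/v1/search?query="
           + urllib.parse.quote(url, safe="") + "&tags=story")
    with urllib.request.urlopen(api) as resp:
        hits = json.load(resp)["hits"]
    return [(h["title"], h["objectID"], h.get("num_comments") or 0)
            for h in hits if (h.get("num_comments") or 0) >= min_comments]

for title, story_id, n in hn_discussions("https://openai.com/index/introducing-gpt-4-5/"):
    print(n, "comments:", "https://news.ycombinator.com/item?id=" + story_id, title)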
3vidence
I don't know why but something about this section made me chuckle
""" These perspectives highlight that there remains nuance—even appreciation—of explorative model advancement not solely focused on immediate commercial viability """
Feels like the model is seeking validation
colordrops
Maybe it's just confirmation bias, but the language in your result output seems higher quality than previous models'. Seems more natural and eloquent.
joe_the_user
The headline and section "Dystopian and Social Concerns about AI Features" are interesting. It's roughly true... but somehow that broad statement seems to minimize the point discussed.
I'd headline that thread as "Concerns about output tone". There were comments about dystopian implications of tone, marketing implications of tone and implementation issues of tone.
Of course, that I can comment about the fine-points of an AI summary shows it's made progress. But there's a lot riding on how much progress these things can make and what sort. So it's still worth looking at.
jampa
First impression of GPT-4.5:
1. It is very, very slow; for applications where you want real-time interaction it's just not viable. The text attached below took 7s to generate with 4o, but 46s with GPT-4.5.
2. The style it writes in is way better: it keeps the tone you ask for and makes better improvements to the flow. One of my biggest complaints with 4o is that you want your content to be more casual and accessible, but GPT / DeepSeek wants to write like Shakespeare did.
Some comparisons on a book draft: GPT-4o (left) and GPT-4.5 (green). I also adjusted the spacing around the paragraphs, to better match the diff. I still am wary of using ChatGPT to help me write, even with GPT-4.5, but the improvement is very noticeable.
osigurdson
I'm wondering if generative AI will ultimately result in a very dense / bullet form style of writing. What we are doing now is effectively this:
bullet_points' = compress(expand(bullet_points))
We are impressed by lots of text, so we must expand via LLM in order to impress the reader. Since the reader doesn't have time or interest to read the content, they must compress it back into bullet points / a quick summary. Really, the original bullet points plus a bit more thinking would likely be a better form of communication.
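To make the round trip explicit (sketch only; llm_complete is a stand-in wrapper around whatever chat model you like):

from openai import OpenAI

client = OpenAI()

def llm_complete(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content

def expand(bullets: str) -> str:
    return llm_complete("Write a polished, professional email making these points:\n" + bullets)

def compress(prose: str) -> str:
    return llm_complete("Summarize this email as terse bullet points:\n" + prose)

bullets = "- ship date slips to Q3\n- need two more GPUs"
# Sender inflates the bullets; receiver deflates them right back.
# The original bullets were the actual message all along.
bullets_prime = compress(expand(bullets))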
anon373839
That’s what Axios does. For ordinary events coverage, it’s a great style.
rl3
>1. It is very very slow, ... below took 7s to generate with 4o, but 46s with GPT4.5
This is positively luxurious by o1-pro standards which I'd say average 5 minutes. That said I totally agree even ~45s isn't viable for real-time interactions. I'm sure it'll be optimized.
Of course, my comparing it to the highest-end CoT model in [publicly-known] existence isn't entirely fair since they're sort of apples and oranges.
philomath_mn
I paid for pro to try `o1-pro` and I can't seem to find any use case to justify the insane inference time. `o3-mini-high` seems to do just as well in seconds vs. minutes.
azinman2
What are you doing with it? For me deep research tasks are where 5 minutes is fine, or something really hard that would take me way more time myself.
reassess_blind
What’s the deal with Imgur taking ages to load? Anyone else have this issue in Australia? I just get the grey background with no content loaded for 10+ seconds every time I visit that bloated website.
elliotto
This website sucks but successfully loaded from Aus rn on my phone. It's full of ads - possibly your ad blocker is killing it?
stevage
Ok for me here in aus
FergusArgyll
I opened your link in a new tab and looked at it a couple minutes later. By then I had forgotten which was 4o and which was 4.5.
I honestly couldn't decide which I prefer
niek_pas
I definitely prefer the 4.5, but that might just be because it sounds 'less like ChatGPT', ironically.
sdesol
It just feels natural to me. The person knows the language, but they are not trying to sound smart by using words that might have more impact "based on the word's dictionary definition."
GPT-4.5 does feel like it is a step forward in producing natural language, and if they use it to provide reinforcement learning, this might have a significant impact on future smaller models.
ChiefNotAClue
Right side, by a large margin. Better word choice and more natural flow. It feels a lot more human.
kristianp
How do the two versions match so closely? They have the same content in each paragraph, just worded slightly differently. I wouldn't expect them to write paragraphs that match in size and position like that.
throwaway314155
If you use the "retry" functionality in ChatGPT enough, you will notice this happens basically all the time.
princealiiiii
Honestly, feels like a second LLM just reworded the response on the left-side to generate the right-side response.
thfuran
>One of my biggest complaints with 4o is that you want for your content to be more casual and accessible but GPT / DeepSeek wants to write like Shakespeare did.
Well, maybe like a Sophomore's bumbling attempt to write like Shakespeare.
remus
> It is very very slow
Could that be partially due to a big spike in demand at launch?
jampa
Possibly; repeating the prompt, I got much higher speed, taking 20s on average now, which is much more viable. But that remains to be seen when more people start using this version in production.
Topfi
Considering both this blog post and the livestream demos, I am underwhelmed. Having just finished the stream, I had a real "was that all" moment, which on one hand shows how spoiled I've gotten by new models impressing me, but on another feels like OpenAI really struggles to stay ahead of their competitors.
What has been shown feels like it could be achieved using a custom system prompt on older versions of OpenAIs models, and I struggle to see anything here that truly required ground-up training on such a massive scale. Hearing that they were forced to spread their training across multiple data centers simultaneously, coupled with their recent release of SWE-Lancer [0] which showed Anthropic (Claude 3.5 Sonnet (new) to be exact) handily beating them, I was really expecting something more than "slightly more casual/shorter output", which again, I fail to see how that wasn't possible by prompting GPT-4o.
Looking at pricing [1], I am frankly astonished.
> Input: $75.00 / 1M tokens
> Cached input: $37.50 / 1M tokens
> Output: $150.00 / 1M tokens
How could they justify that asking price? And, if they have some amazing capabilities that make a 30-fold pricing increase justifiable, why not show it? Like, OpenAI are many things, but I always felt they understood price vs performance incredibly well, from the start with gpt-3.5-turbo up to now with o3-mini, so this really baffles me. If GPT-4.5 can justify such immense cost in certain tasks, why hide that and if not, why release this at all?
energy123
The niche of GPT-4.5 is lower hallucinations than any existing model. Whether that niche justifies the price tag for a subset of use cases remains to be seen.
tmaly
Rethinking your comment "was that all": I am listening to the stream now and had a thought. Most of the new models that have come out in the past few weeks have been great at coding and logical reasoning. But 4o has been better at creative writing. I am wondering if 4.5 is going to be even better at creative writing than 4o.
maeil
> But 4o has been better at creative writing
In what way? I find the opposite, 4o's output has a very strong AI vibe, much moreso than competitors like Claude and Gemini. You can immediately tell, and instructing it to write differently (except for obvious caricatures like "Write like Gen Z") doesn't seem to help.
dingnuts
if you generate "creative" writing, please tell your audience that it is generated, before asking them to read it.
I do not understand what possible motivation there could be for generating "creative writing" unless you enjoy reading meaningless stories yourself, in which case, be my guest.
nycdatasci
Funny you should suggest that it seems like a revised system prompt: https://chatgpt.com/share/67c0fda8-a940-800f-bbdc-6674a8375f...
Bjorkbat
My first thought seeing this and looking at benchmarks was that if it wasn’t for reasoning, then either pundits would be saying we’ve hit a plateau, or at the very least OpenAI is clearly in 2nd place to Anthropic in model performance.
Of course we don’t live in such a world, but I thought of this nonetheless because for all the connotations that come with a 4.5 moniker this is kind of underwhelming.
uh_uh
Pundits were saying that deep learning has hit a plateau even before the LLM boom.
mvdtnz
> How could they justify that asking price?
They're still selling $1 for <$1. Like personal food delivery before it, consumers will eventually need to wake up to this fact - these things will get expensive, fast.
josh-sematic
One difference with food delivery/ride share: those can only have costs reduced so far. You can only pick up groceries and drive from A to B so quickly. And you can only push the wages down so far before you lose your gig workers. Whereas with these models we’ve consistently seen that a model inference that cost $1 several months ago can now be done with much less than $1 today. We don’t have any principled understanding of “we will never be able to make these models more efficient than X”, for any value of X that is in sight. Could the anticipated efficiencies fail to materialize? It’s possible but I personally wouldn’t put money on it.
phillipcarter
I read this more as "we are releasing a model checkpoint that we didn't optimize yet because Anthropic cranked up the pressure"
Ekaros
I generally question how widespread the willingness to pay for the most expensive product is. And will most users who actually want AI go with ad-ridden lesser models...
vel0city
I can just imagine Kraft having a subsidized AI model for recipe suggestions that adds Velveeta to everything.
spiderfarmer
Let a thousand providers bloom.
swagmoney1606
I have no idea how they justify $200/month for pro
lasermike026
I would rather pay for 4.5 by the query.
sebastiennight
It is interesting that they are focusing a large part of this release on the model having a higher "EQ" (Emotional Quotient).
We're far from the days of "this is not a person, we do not want to make it addictive" and are setting a firm foot in the territory of "here's your new AI friend".
This is very visible in the example comparing 4o with 4.5 when the user is complaining about failing a test, where 4o's response is what one would expect from a "typical AI response" with problem-solving bullets, and 4.5 is sending what you'd expect from a pal over instant messaging.
It seems Anthropic and Grok have both been moving in this direction as well. Are we going to see an escalation of foundation models impersonating "a friendly person" rather than "a helpful assistant"?
Personally I find this worrying and (as someone who builds upon SOTA model APIs) I really hope this behavior is not going to seep into API responses, or will at least be steerable through the system/developer prompt.
og_kalu
The whole robotic, monotone, helpful assistant thing was something these companies had to actively hammer in during the post-training stage. It's not really how LLMs will sound by default after pre-training.
I guess they're caring less and less about that effort, especially since it hurts the model in some ways, like creative writing.
sebastiennight
If it's just a different choice during RLHF, I'll be curious to see what are the trade-offs in performance.
The "buddy in a chat group" style answers do not make me feel like asking it for a story will make the story long/detailed/poignant enough to warrant the difference.
I'll give it a try and compare on creative tasks.
turnsout
Or maybe they're just getting better at it, or developing better taste. After switching to Claude, I can't go back to ChatGPT's overly verbose bullet-point laden book reports every time I ask a question. I don't think that's pretraining—it's in the way OpenAI approaches tuning and prompting vs Anthropic.
orbital-decay
Anthropic pretty much abandoned this direction after Claude 3, and said it wasn't what they wanted [1]. Claude 3.5+ is extremely dry and neutral, it doesn't seem to have the same training.
>Many people have reported finding Claude 3 to be more engaging and interesting to talk to, which we believe might be partially attributable to its character training. This wasn’t the core goal of character training, however. Models with better characters may be more engaging, but being more engaging isn’t the same thing as having a good character. In fact, an excessive desire to be engaging seems like an undesirable character trait for a model to have.
Kye
It's the opposite incentive to ad-funded social media. One wants to drain your wallet and keep you hooked, the other wants you to spend as little of their funding as possible finding what you're looking for.
sureIy
I don't know if I fully agree. The input clearly shows the need for emotional support more than "how do I pass this test?" The answer by 4o is comical even if you know you're talking to a machine.
It reminds me of the advice to "not offer solutions when a woman talks about her problems, but just listen."
callc
> We're far from the days of "this is not a person, we do not want to make it addictive" and getting a firm foot on the territory of "here's your new AI friend".
That’s a hard nope from me, when companies pull that move. I’ll stick to my flesh and blood humans who still hallucinate but only rarely.
bredren
Yes, the "personality" (vibe) of the model is a key qualitative attribute of gpt-4.5.
I suspect this has something to do with shining a light on an increased value prop in a dimension many people will appreciate, since gains on quantitative comparisons with other models were not notable enough to pop eyeballs.
tmaly
I would like to see a humor test. So far, I have not seen any model response that has made me laugh.
tkgally
How does the following stand-up routine by Claude 3.7 Sonnet work for you?
fragmede
I chuckled.
Now you just need a Pro subscription to get Sora generate a video to go along with this and post it to YouTube and rake in the views (and the money that goes along with it).
aprilthird2021
Reading this felt like reading junk food
EDIT: Junk food tastes kinda good though. This felt like drinking straight cooking oil. Tastes bad and bad for you.
lurker9001
incredible
thousand_nights
reddit tier humor, truly
it's just regurgitating overly emphasized cliches in a disgustingly enthusiastic tone
AgentME
My benchmark for this has been asking the model to write some tweets in the style of dril, a popular user who writes short funny tweets. Sometimes I include a few example tweets in the prompt too. Here's an example of results I got from Claude 3 Opus and GPT 4 for this last year: https://bsky.app/profile/macil.tech/post/3kpcvicmirs2v. My opinion is that Claude's results were mostly bangers while GPT's were all a bit groanworthy. I need to try this again with the latest models sometime.
sebastiennight
The "roast" tools that have popped up (using either DeepSeek or o3-mini) are pretty funny.
jcims
OK now that is some funny shit.
turnsout
If you like absurdist humor, go into the OpenAI playground, select 3.5-Turbo, and dial up the temperature to the point where the output devolves into garbled text after 500 tokens or so. The first ~200 tokens are in the freaking sweet spot of humor.
rl3
Maybe it's rose-colored glasses, but 3.5 was really the golden era for LLM comedy. More modern LLMs can't touch it.
Just ask it to write you a film screenplay involving some hard-ass 80s/90s action star and someone totally unrelated and opposite of that. The ensuing unhinged magic is unparalleled.
amarcheschi
Could someone post an example?
immibis
ChatGPT gave me this shell script: https://social.immibis.com/media/7102ac83cf4a200e48dd368938e... (obviously, don't download and execute a random shell script from the internet without reading it first)
I think reading it will make you laugh.
aprilthird2021
I think it's a good thing because, idk why, I just start tuning out after getting reams and reams of bullet points I'm already not super confident about the truthfulness of
nialv7
Well yeah, if the LLM can keep you engaged and talking, that'll make them a lot more money, compared to if you just use it as an information retrieval tool, in which case you are likely to leave after getting what you were looking for.
TheAceOfHearts
Since they offer a subscription, keeping you engaged just requires them to waste more compute. The ideal case would be that the LLM gives you a one-shot correct response using as little compute as possible.
sebastiennight
In a subscription business, you don't want the user to use as few resources as possible. It's the wrong optimization to make.
You want users to keep coming back as often as possible (at the lowest cost-per-run possible though). If they are not coming back they are not renewing.
So, yes, it makes sense to make answers shorter to cut on compute cost (which these SMS-length replies could accomplish) but the main point of making the AI flirtatious or "concerned" is possibly the addictive factor of having a shoulder to cry on 24/7, one that does not call you on your BS and is always supportive... for just $20 a month
The "one-shot correct response" to "I failed my exams" might be "Tough luck, try better next time" but if you do that, you will indeed use very little compute because people will cancel the subscription and never come back.
nialv7
Plus level subscription has limits too, and Pro level costs 10x more - as long as Pro users don't use ChatGPT 10x more than Plus users on average, OpenAI can benefit. There's also the user retention factor.
freediver
The results for GPT-4.5 are in for the Kagi LLM benchmark too.
It does crush our benchmark (time to make a new one? ;) with performance similar to that of reasoning models. But it does come at a great cost, in both price and speed.
A monster is what they created. But looking at the tasks it fails, some of them my 9 year old would solve. Still in this weird limbo space of super knowledge and low intelligence.
May be remembered as the last of the 'big ones'; can't imagine this will be a path for the future.
mjirv
Do you have results for gpt-4? I’d be very interested in seeing the lift here from their last “big one”.
theodorthe5
If Gemini 2 is the top in your benchmark, make sure to re-check your benchmark.
shawabawa3
Gemini 2 pro is actually very impressive (maybe not for coding, haven't used it for that)
Flash is pretty garbage but cheap
istjohn
Gemini 2.0 Pro is quite good.
eightysixfour
Seeing OpenAI and Anthropic go different routes here is interesting. It is worth moving past the initial knee jerk reaction of this model being unimpressive and some of the comments about "they spent a massive amount of money and had to ship something for it..."
* Anthropic appears to be making a bet that a single paradigm (reasoning) can create a model which is excellent for all use cases.
* OpenAI seems to be betting that you'll need an ensemble of models with different capabilities, working as a single system, to jump beyond what the reasoning models today can do.
Based on all of the comments from OpenAI, GPT-4.5 is absolutely massive, and with that size comes the ability to store far more factual data. The scores in ability-oriented things - like coding - don't show the kind of gains you get from reasoning models, but the fact-based test, SimpleQA, shows a pretty large jump and a dramatic reduction in hallucinations. You can imagine a scenario where GPT-4.5 is coordinating multiple, smaller reasoning agents and using its factual accuracy to enhance their reasoning, kind of like how ruminating on an idea "feels" like a different process than having a chat with someone.
I'm really curious if they're actually combining two things right now that could be split as well: EQ/communications, and factual knowledge storage. This could all be a bust, but it is an interesting difference in approaches nonetheless, and worth considering that OpenAI could be right.
sebastiennight
> * OpenAI seems to be betting that you'll need an ensemble of models with different capabilities, working as a single system, to jump beyond what the reasoning models today can do.
Seems inaccurate, as their most recent claim I've seen is that they expect this to be their last non-reasoning model, and they are aiming to provide all capabilities together in future model releases (unifying the GPT-x and o-x lines).
See this claim on TFA:
> We believe reasoning will be a core capability of future models, and that the two approaches to scaling—pre-training and reasoning—will complement each other.
eightysixfour
From Sam's twitter:
> After that, a top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks.
> In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3. We will no longer ship o3 as a standalone model.
You could read this as unifying the models or building a unified systems which coordinate multiple models. The second sentence, to me, implies that o3 will still exist, it just won't be standalone, which matches the idea I shared above.
sebastiennight
Ah, great point. Yes, the wording here would imply that they're basically planning on building scaffolding around multiple models instead of having one more capable Swiss Army Knife model.
I would feel a bit bummed if GPT-5 turned out not to be a model, but rather a "product".
ryukoposting
> know when to think for a long time or not, and generally be useful for a very wide range of tasks.
I'm going to call it now - no customer is actually going to use this. It'll be a cute little bonus for their chatbot god-oracle, but virtually all of their b2b clients are going to demand "minimum latency at all times" or "maximum accuracy at all times."
tmpz22
I worry eliminating consumer choice will drive up prices for only a nominal gain in utility for most users.
billywhizz
or you could read it as a way to create a moat where none currently exists...
throw234234234
> Anthropic appears to be making a bet that a single paradigm (reasoning) can create a model which is excellent for all use cases.
I don't think that is their primary motivation. The announcement post for Claude 3.7 was all about code, which doesn't seem to imply "all use cases": code this, new code tool that, telling customers that they look forward to what they build, etc. Very little mention of other use cases in the new model announcement at all. The usage stats they published are telling - 80% or more of queries to Claude are about code. i.e. I actually think that while they are thinking of other use cases, they see the use case of code specifically as the major thing to optimize for.
OpenAI, given its different customer base and reach, is probably aiming for something more general.
IMO they all think that you need an "ensemble" of models with different capabilities to optimise for different use cases. It's more about how much compute resources each company has and what they target with those resources. Anthropic, I'm assuming, has fewer compute resources and a narrower customer base, so it may economically make sense to optimise just for that.
eightysixfour
That's possible; my counterpoint would be that if that were the case, Anthropic would have built a smaller reasoning model instead of doing a "full" Claude. Instead, they built something which seems to be flexible across different types of responses.
Only time will tell.
nomel
> OpenAI seems to be betting that you'll need an ensemble of models with different capabilities, working as a single system, to jump beyond what the reasoning models today can do.
The high level block diagrams for tech always end up converging to those found in biological systems.
eightysixfour
Yeah, I don't know enough real neuroscience to argue either side. What I can say is I feel like this path is more like the way that I observe that I think, it feels like there are different modes of thinking and processes in the brain, and it seems like transformers are able to emulate at least two different versions of that.
Once we figure out the frontal cortex & corpus callosum part of this, where we aren't calling other models over APIs instead of them all working in the same shared space, I have a feeling we'll be on to something pretty exciting.
jstummbillig
It can never be just reasoning, right? Reasoning is the multiplier on some base model, and surely no amount of reasoning on top of something like gpt-2 will get you o1.
This model is too expensive right now, but as compute gets cheaper — and we have to keep in mind that it will — having a better base to multiply with will enable things that just more thinking won't.
eightysixfour
You can try for yourself with the distilled R1's that Deepseek released. The qwen-7b based model is quite impressive for its size and it can do a lot with additional context provided. I imagine for some domains you can provide enough context and let the inference time eventually solve it, for others you can't.
wongarsu
Or the other way around: smaller reasoning models that can call out to GPT-4.5 to get their facts right.
eightysixfour
Maybe, but I'm inclined to think OpenAI believes the way I laid it out, specifically because of their focus on communication and EQ in 4.5. It seems like they believe the large, non-reasoning model will be "front of house."
Or they’ll use some kind of trained router which sends the request to the one it thinks it should go to first.
bhouston
A bit better at coding than GPT-4o but not better than o3-mini - there is a chart near the bottom of the page that is easy to overlook:
- GPT-4.5 on SWE-bench Verified: 38.0%
- GPT-4o on SWE-bench Verified: 30.7%
- OpenAI o3-mini on SWE-bench Verified: 61.0%
BTW Anthropic Claude 3.7 is better than o3-mini at coding at around 62-70% [1]. This means that I'll stick with Claude 3.7 for the time being for my open source alternative to Claude-code: https://github.com/drivecore/mycoder
[1] https://aws.amazon.com/blogs/aws/anthropics-claude-3-7-sonne...
pawelduda
Does the benchmark reflect your opinion on 3.7? I've been using 3.7 via Cursor and it's noticeably worse than 3.5. I've heard using the standalone model works fine, didn't get a chance to try it yet though.
jasonjmcghee
personal anecdote - claude code is the best llm devx i've had.
_cs2017_
I don't see Claude 3.7 on the official leaderboard. The top performer on the leaderboard right now is o1 with a scaffold (W&B Programmer O1 crosscheck5) at 64.6%: https://www.swebench.com/#verified.
If Claude 3.7 achieves 70.3%, it's quite impressive, it's not far from 71.7% claimed by o3, at (presumably) much, much lower costs.
ehsanu1
It's the other way around on their new SWE-Lancer benchmark, which is pretty interesting: GPT-4.5 scores 32.6%, while o3-mini scores 10.8%.
Topfi
To put that in context, Claude 3.5 Sonnet (new), a model we have had for months now and which from all accounts seems to have been cheaper to train and is cheaper to use, is still ahead of GPT-4.5 at 36.1% vs 32.6% in SWE-Lancer Diamond [0]. The more I look into this release, the more confused I get.
logicchains
>BTW Anthropic Claude 3.7 is better than o3-mini at coding at around 62-70% [1]. This means that I'll stick with Claude 3.7 for the time being for my open source alternative to Claude-code
That's not a fair comparison as o3-mini is significantly cheaper. It's fine if your employer is paying, but on a personal project the cost of using Claude through the API is really noticeable.
cheema33
> That's not a fair comparison as o3-mini is significantly cheaper. It's fine if your employer is paying...
I use it via Cursor editor's built-in support for Claude 3.7. That caps the monthly expense to $20. There probably is a limit in Claude for these queries. But I haven't run into it yet. And I am a heavy user.
bhouston
Agentic coders (e.g. aider, Claude-code, mycoder, codebuff, etc.) use a lot more tokens, but they write whole features for you and debug your code.
QuadmasterXLII
If OpenAI offers a more expensive model (4.5) and a cheaper model (o3-mini) and both are worse, it starts to be a fair comparison.
simonw
If you want to try it out via their API you can run it through my LLM tool using uvx like this:
uvx --with 'https://github.com/simonw/llm/archive/801b08bf40788c09aed6175252876310312fe667.zip' \
llm -m gpt-4.5-preview 'impress me'
You may need to set an API key first, either with `export OPENAI_API_KEY='xxx'` or by using this command to save it to a file:
uvx llm keys set openai
# paste key here
Or this to get a chat session going:
uvx --with 'https://github.com/simonw/llm/archive/801b08bf40788c09aed6175252876310312fe667.zip' \
llm chat -m gpt-4.5-preview
I'll probably have a proper release out later today. Details here: https://github.com/simonw/llm/issues/795
ashu1461
Just curious, does this stream the output or render it all at once?
simonw
It streams the output. See animated demo here (bottom image on the page) https://simonwillison.net/2025/Feb/27/introducing-gpt-45/
andrewinardeer
I just played with the preview through the API. I asked it to refactor a fairly simple dashboard made with HTML, CSS, and JavaScript.
The first time, it confused the CSS and JavaScript, then spat out code which broke the dashboard entirely.
Then it charged me $1.53 for the privilege.
ipnon
Finally a replacement for junior engineers!
antirez
In many ways I'm not an OpenAI fan (though I do recognize their many merits). At the same time, I believe people are missing what they tried to do with GPT-4.5: it was needed and important to explore the pre-training scaling law in that direction. A gift to science, however self-interested it may be.
throwaway314155
> A gift to science
This is hardly recognizable as science.
edit: Sorry, didn't feel this was a controversial opinion. What I meant to say was that for so-called science, this is not reproducible in any way whatsoever. Further, this page in particular has all the hallmarks of _marketing_ copy, not science.
Sometimes a failure is just a failure, not necessarily a gift. People could tell scaling wasn't working well before the release of GPT 4.5. I really don't see how this provides as much insight as is suggested.
Deepseek's models apparently still compare favorably with this one. What's more they did that work with the constraint of having _less_ money, not so much money they could run incredibly costly experiments that are likely to fail. We need more of the former, less of the latter.
joshuamcginnis
I'm one week in on heavy grok usage. I didn't think I'd say this, but for personal use, I'm considering cancelling my OpenAI plan.
The one thing I wish Grok had is more separation of the UI from X itself. The interface being so coupled to X puts me off and makes it feel like a second-class citizen. I like ChatGPT's minimalist UI.
richard_todd
I find Grok to be the best overall experience for the types of tasks I try to give AI (mostly: analyze PDFs, perform and proofread OCR, translate Medieval Latin and Hebrew, remind me how to do various things in Python or SwiftUI). ChatGPT/Gemini/Copilot all fight me occasionally, but Grok just tries to help. And the hallucinations aren't as frequent, at least anecdotally.
aldanor
There's grok.com, which is standalone and has its own UI.
fzzzy
Don't they have a standalone Grok app now? I thought I saw that. [edit] ah some sibling comments mention this as well
GPT 4.5 pricing is insane:
Input: $75.00 / 1M tokens
Cached input: $37.50 / 1M tokens
Output: $150.00 / 1M tokens
GPT 4o pricing for comparison:
Input: $2.50 / 1M tokens
Cached input: $1.25 / 1M tokens
Output: $10.00 / 1M tokens
It sounds like it's so expensive and the difference in usefulness is so lacking(?) they're not even gonna keep serving it in the API for long:
> GPT‑4.5 is a very large and compute-intensive model, making it more expensive than and not a replacement for GPT‑4o. Because of this, we’re evaluating whether to continue serving it in the API long-term as we balance supporting current capabilities with building future models. We look forward to learning more about its strengths, capabilities, and potential applications in real-world settings. If GPT‑4.5 delivers unique value for your use case, your feedback (opens in a new window) will play an important role in guiding our decision.
I'm still gonna give it a go, though.