GPT-5.1: A smarter, more conversational ChatGPT
100 comments
November 12, 2025
tekacs
That's what the personality selector is for: you can just pick 'Efficient' (formerly Robot) and it does a good job of answering tersely?
arthurcolle
Omg, them redefining the prompt tag keys is terrible. Hopefully it's aliased to the same DSPy-compiled program they're using.
bogtog
Unfortunately, I also don't want other people to interact with a sycophantic robot friend, yet my picker only applies to my conversation
coolestguy
Sorry that you can't control other people's lives & wants
angrydev
Exactly. Stop fooling people into thinking there’s a human typing on the other side of the screen. LLMs should be incredibly useful productivity tools, not emotional support.
nathan_compton
You can just tell the AI not to be warm and it will remember. My ChatGPT used the phrase "turn it up to eleven" and I told it never to speak in that manner ever again, and it's been very robotic ever since.
andai
I system-prompted all my LLMs "Don't use cliches or stereotypical language." and they like me a lot less now.
gcau
Yea, I don't want something trying to emulate emotions. I don't want it to even speak a single word, I just want code, unless I explicitly ask it to speak on something, and even in that scenario I want raw bullet points, with concise useful information and no fluff. I don't want to have a conversation with it.
However, being more humanlike, even if it results in an inferior tool, is the top priority because appearances matter more than actual function.
Tiberium
Are you aware that you can achieve that by going into Personalization in Settings and choosing one of the presets or just describing how you want the model to answer in natural language?
sbuttgereit
This. When I go to an LLM, I'm not looking for a friend, I'm looking for a tool.
Keeping faux relationships out of the interaction never lets me slip into the mistaken attitude that I'm dealing with a colleague rather than a machine.
cowpig
I think they get way more "engagement" from people who use it as their friend, and the end goal of subverting social media and creating the most powerful (read: profitable) influence engine on earth makes a lot of sense if you are a soulless ghoul.
sofixa
It would be pretty dystopian if we get to the point where ChatGPT pushes (unannounced) advertisements to those people (the ones forming a parasocial relationship with it). Imagine someone complaining they're depressed and ChatGPT proposing XYZ activity, which is actually a disguised ad.
Outside such scenarios, that "engagement" would just be useless, actually costing them more money than it makes.
moi2388
Same. If I tell it to choose A or B, I want it to output either “A” or “B”.
I don’t want an essay of 10 pages about how this is exactly the right question to ask
astrange
LLMs have essentially no capability for internal thought; they can't produce the right answer without writing the reasoning out.
Of course, you can use thinking mode and then it'll just hide that part from you.
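In API terms that's all server-side anyway. A minimal sketch with the OpenAI Python SDK — the gpt-5.1-thinking model ID is assumed from this announcement, and whether it takes the reasoning_effort knob that OpenAI's other reasoning models accept is also an assumption:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The model reasons server-side; the response carries only the final
    # message, not the hidden thinking tokens that produced it.
    resp = client.chat.completions.create(
        model="gpt-5.1-thinking",  # assumed model ID
        reasoning_effort="low",    # real param on o-series models; assumed here
        messages=[{"role": "user", "content": "Choose A or B. Reply with one letter."}],
    )
    print(resp.choices[0].message.content)  # just the answer; the thinking stays hidden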
LeifCarrotson
10 pages about the question means that the subsequent answer is more likely to be correct. That's why they repeat themselves.
ewoodrich
From the user perspective, my experience is that a super long response to a "simple" question is more of a red flag that the model has already veered, or is about to veer, into meltdown mode, with half the reasoning being:
I see it now! The error is now crystal clear, the definition on line 2 is obviously incorrect. This is guaranteed to fix the user's issue.
Wait hold on, that's correct. I see now, the error isn't in the code, it's the build system with the issue.
Hold on, I see the syntax error now, let me just read the surrounding code.
No, that's not right, that is valid syntax. Instead it could be the build system....
binary132
citation needed
minimaxir
All the examples of "warmer" generations show that OpenAI's definition of warmer is synonymous with sycophantic, which is a surprise given all the criticism of that particular aspect of ChatGPT.
I suspect this approach is a direct response to the backlash against removing 4o.
captainkrtek
I'd have more appreciation for, and trust in, an LLM that disagreed with me more and challenged my opinions or prior beliefs. The sycophancy drives me towards not trusting anything it says.
crazygringo
Just set a global prompt to tell it what kind of tone to take.
I did that and it points out flaws in my arguments or data all the time.
Plus it no longer uses any cutesy language. I don't feel like I'm talking to an AI "personality", I feel like I'm talking to a computer which has been instructed to be as objective and neutral as possible.
It's super-easy to change.
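If you're on the API rather than the app, the equivalent is just a pinned system message sent with every request. A minimal sketch (the gpt-5.1 model ID is assumed from the announcement):

    from openai import OpenAI  # pip install openai

    client = OpenAI()

    # The "global prompt" equivalent: a system message prepended to every call.
    SYSTEM = (
        "Be objective and neutral. Point out flaws in my arguments or data. "
        "No cutesy language, no flattery, no trailing follow-up questions."
    )

    resp = client.chat.completions.create(
        model="gpt-5.1",  # assumed model ID
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": "Is caching everything in RAM a sound plan?"},
        ],
    )
    print(resp.choices[0].message.content)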
engeljohnb
I have a global prompt that specifically tells it not to be sycophantic and to call me out when I'm wrong.
It doesn't work for me.
I've been using it for a couple months, and it's corrected me only once, and it still starts every response with "That's a very good question." I also included "never end a response with a question," and it just completely ignored that so it can do its "would you like me to..."
captainkrtek
I’ve done this when I remember to, but the fact that I have to also feels problematic, like I’m steering it towards an outcome whether I do or don’t.
microsoftedging
What's your global prompt please? A more firm chatbot would be nice actually
tart-lemonade
Qwen seems fairly capable of disagreeing out of the box, though like any LLM, it is only as good as its training set and it has incorrectly challenged me on several occasions.
jasonjmcghee
It is interesting. I don't need ChatGPT to say "I got you, Jason" - but I don't think I'm the target user of this behavior.
danudey
The target users for this behavior are the ones using GPT as a replacement for social interactions; these are the people who crashed out/broke down about the GPT5 changes as though their long-term romantic partner had dumped them out of nowhere and ghosted them.
I get that those people were distraught/emotionally devastated/upset about the change, but I think that fact is reason enough not to revert that behavior. AI is not a person, and making it "warmer" and "more conversational" just reinforces those unhealthy behaviors. ChatGPT should be focused on being direct and succinct, and not on this sort of "I understand that must be very frustrating for you, let me see what I can do to resolve this" call center support agent speak.
jasonjmcghee
> and not on this sort of "I understand that must be very frustrating for you, let me see what I can do to resolve this"
You're triggering me.
Another type that's incredibly grating to me is the weird, empty, therapist-like follow-up questions that don't contribute to the conversation at all.
The equivalent of like (just a contrived example), a discussion about the appropriate data structure for a problem and then it asks a follow-up question like, "what other kind of data structures do you find interesting?"
And I'm just like "...huh?"
nerbert
Indeed, target users are people seeking validation + kids and teenagers + people with a less developed critical mind. Stickiness with 90% of the population is valuable for Sam.
torginus
Man I miss Claude 2 - it acted like it was a busy person people inexplicably kept bothering with their random questions
barbazoo
> I’ve got you, Ron
No you don't.
simlevesque
It seems like the line between sycophantic and bullying is very thin.
andy_ppp
I was just saying to someone in the office that I’d prefer the models to be a bit harsher on my questions and more opinionated; I can cope.
fragmede
That's a lesson on revealed preferences, especially when talking to a broad disparate group of users.
varenc
Interesting that they're releasing separate gpt-5.1-instant and gpt-5.1-thinking models. The previous gpt-5 release made a point of simplifying things by letting the model choose whether it was going to use thinking tokens or not. Seems like they reversed course on that?
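If the split holds in the API, routing at least becomes explicit on the client side. A hedged sketch, assuming the announcement's names map directly onto model IDs:

    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str, hard: bool = False) -> str:
        # Crude client-side router: only pay for thinking tokens on demand.
        model = "gpt-5.1-thinking" if hard else "gpt-5.1-instant"  # assumed IDs
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    print(ask("What's the capital of France?"))            # routed to instant
    print(ask("Prove that sqrt(2) is irrational.", hard=True))  # routed to thinking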
aniviacat
> For the first time, GPT‑5.1 Instant can use adaptive reasoning to decide when to think before responding to more challenging questions
It seems to still do that. I don't know why they write "for the first time" here.
llamasushi
"Warmer and more conversational" - they're basically admitting GPT-5 was too robotic. The real tell here is splitting into Instant vs Thinking models explicitly. They've given up on the unified model dream and are now routing queries like everyone else (Anthropic's been doing this, Google's Gemini too).
Calling it "GPT-5.1 Thinking" instead of o3-mini or whatever is interesting branding. They're trying to make reasoning models feel less like a separate product line and more like a mode. Smart move if they can actually make the router intelligent enough to know when to use it without explicit prompting.
Still waiting for them to fix the real issue: the model's pathological need to apologize for everything and hedge every statement lol.
nlh
What's remarkable to me is how deep OpenAI is going on "ChatGPT as communication partner / chatbot", as opposed to Anthropic's approach of "Claude as the best coding tool / professional AI for spreadsheets, etc.".
I know this is marketing at play and OpenAI has plenty of resources devoted to advancing their frontier models, but it's starting to really come into view that OpenAI wants to replace Google and be the default app / page for everyone on earth to talk to.
boldlybold
Just set it to the "Efficient" tone; let's hope there's less pedantic encouragement of the projects I'm tackling, and less emoji usage.
davidguetta
WE DONT CARE HOW IT TALKS TO US, JUST WRITE CODE FAST AND SMART
astrange
Personal requests are 70% of usage
https://www.nber.org/system/files/working_papers/w34255/w342...
cregaleus
If you include API usage, personal requests are approximately 0% of total usage, rounded to the nearest percentage.
netbioserror
Who is "we"?
speedgoose
David Guetta, but I didn't know he was also into software development.
tekacs
I'm excited to see whether the instruction following improvements play out in the use of Codex.
The biggest issue I've seen _by far_ with using GPT models for coding has been their inability to follow instructions... and also their tendency to act again on messages from up-thread instead of acting on what you just asked for.
Someone1234
Unfortunately no word on "Thinking Mini" getting fixed.
Before GPT-5 was released, it used to be the perfect compromise between a "dumb" non-Thinking model and a SLOW Thinking model. However, something went badly wrong within the GPT-5 release cycle, and today it is exactly the same speed as (or SLOWER than) their Thinking model even with Extended Thinking enabled, making it completely pointless.
In essence, Thinking Mini's reason to exist is being faster than Thinking and smarter than non-Thinking; today it is dumber than full Thinking while no faster.
Terretta
As of 20 minutes in, most comments are about "warm". I'm more concerned about this:
> GPT‑5.1 Thinking: our advanced reasoning model, now easier to understand
Oh, right, I turn to the autodidact that's read everything when I want watered down answers.
sethops1
Is anyone else tired of chat bots? It really doesn't feel like typing out a conversation for every interaction is the future of technology.
mritchie712
when 4o was going through its ultra-sycophantic phase, I had a talk with it about Graham Hancock (Ancient Apocalypse, alt-history guy).
It agreed with everything Hancock claims with just a little encouragement.
I don't want a more conversational GPT. I want the _exact_ opposite. I want a tool with the upper limit of "conversation" being something like LCARS from Star Trek. This is quite disappointing as a current ChatGPT subscriber.