OpenAI's "Study Mode" and the risks of flattery
95 comments
· July 31, 2025
ZeroGravitas
Travis Kalanick (ex-CEO of Uber) thinks he's making cutting-edge quantum physics breakthroughs with Grok and ChatGPT too. He has no relevant credentials in this area.
kaivi
This epidemic is very visible when you peek into replies of any physics influencer on Xitter. Dozens of people are straight copy-pasting walls of LaTeX mince from ChatGPT/Grok and asking for recognition.
Perhaps "epidemic" isn't the right word here, because they must already have been unwell. At least these activities are relatively harmless.
tom_
Possibly related: https://futurism.com/openai-investor-chatgpt-mental-health
Previously on HN, regarding a related phenomenon: https://news.ycombinator.com/item?id=44646797
hansmayer
Ah yes the famous vibe-physicist T.Kalanick ;)
dguest
Our current economic model around AI is going to teach us more about psychology than fundamental physics. I expect we'll become more manipulative but otherwise not a lot smarter.
Funny thing is, AI also provides good models for where this is going. Years ago I saw a CNN+RL agent that explored an old-school 2D maze rendered in 3D. The researchers found it got stuck in fewer loops if they gave it a novelty-seeking loss function. But then they stuck a "TV" showing random images into the maze. The agent just plunked down and watched TV, forever.
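The trap is easy to reproduce, because the novelty bonus in these setups is typically just the prediction error of a learned forward model, and a screen of random images is never predictable, so it pays out forever. A toy sketch of that kind of bonus (purely illustrative; the linear model, dimensions, and names are made up, not the actual experiment's code):

    import numpy as np

    class NoveltyBonus:
        """Intrinsic reward = prediction error of a learned forward model."""
        def __init__(self, obs_dim, lr=1e-3):
            # Tiny linear forward model: predict next observation from current.
            self.W = np.zeros((obs_dim, obs_dim))
            self.lr = lr

        def __call__(self, obs, next_obs):
            pred = self.W @ obs
            err = next_obs - pred
            # One gradient step on the squared prediction error.
            self.W += self.lr * np.outer(err, obs)
            # The (pre-update) prediction error is the novelty bonus.
            return float(err @ err)

    bonus = NoveltyBonus(obs_dim=16)
    rng = np.random.default_rng(0)
    wall = np.ones(16)  # a predictable part of the maze
    for _ in range(2000):
        bonus(wall, wall)
    print("wall bonus:", bonus(wall, wall))  # ~0: the model has learned it
    print("tv bonus:", bonus(rng.normal(size=16), rng.normal(size=16)))  # stays high

The wall's bonus decays to zero as the model learns it; the random screen's never does, so the optimal policy is to sit and watch.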
Healthy humans have countermeasures around these things, but breaking them down is now a multi-hundred-billion-dollar industry. Given where this money is going, there's good reason to think the first unarguably transcendent AGI (if it ever emerges) will transcend mostly in its ability to manipulate.
roywiggins
This sort of thing from LLMs seems at least superficially similar to "love bombing":
> Love bombing is a coordinated effort, usually under the direction of leadership, that involves long-term members' flooding recruits and newer members with flattery, verbal seduction, affectionate but usually nonsexual touching, and lots of attention to their every remark. Love bombing—or the offer of instant companionship—is a deceptive ploy accounting for many successful recruitment drives.
https://en.m.wikipedia.org/wiki/Love_bombing
Needless to say, many or indeed most people will find infinite attention paid to their every word compelling, and that's one thing LLMs appear to offer.
accrual
Love bombing can apply in individual, non-group settings too. If you ever come across a person who seems very into you right after meeting, giving gifts, going out of their way, etc. it's possibly love bombing. Once you're hooked they turn around and take what they actually came for.
roywiggins
LLMs feel a bit more culty in that they really do have infinite patience, in the same way a cult can organize to offer boundless attention to new recruits, whereas a single human has to use different strategies (gifts, etc)
k1t
You are definitely not alone.
https://www.wsj.com/tech/ai/chatgpt-chatbot-psychology-manic...
> Irwin, a 30-year-old man on the autism spectrum who had no previous diagnoses of mental illness, had asked ChatGPT to find flaws with his amateur theory on faster-than-light travel. He became convinced he had made a stunning scientific breakthrough. When Irwin questioned the chatbot's validation of his ideas, the bot encouraged him, telling him his theory was sound. And when Irwin showed signs of psychological distress, ChatGPT assured him he was fine.
> He wasn't.
cube00
Thank you for sharing. I'm glad your wife and friends were able to pull you out before it was too late.
"People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies" https://news.ycombinator.com/item?id=43890649
bonoboTP
Apparently Reddit is full of such posts. A similar genre is when the bot assures the user that they did something very special: they awakened the AI to true consciousness for the first time ever, this is rare, the user is a one-in-a-billion genius, and this will change everything. They go back and forth trading physics jargon and philosophy-of-consciousness terms, the bot always reaffirms how insightful the user's mishmash of those concepts is, and apparently many people fall for it.
Some people are also more susceptible to various too-good-to-be-true scams without alarm bells going off, or to hypnosis or cold reading or soothsayers etc. Or even propaganda radicalization rabbit holes via recommendation algorithms.
It's probably quite difficult, and shameful-feeling, for someone to admit this happened to them, so they may insist their case is different. Another warning sign is when a user talks about "my chatgpt" as if it were a pet they grew: they awakened it, now together they explore the universe and consciousness, then the user asks it for a summary writeup and sends it to physicists or other experts, and of course they're upset when nobody recognizes the genius.
cube00
> Some people are also more susceptible to various too-good-to-be-true scams
Unlike a regular scam, there's an element of "boiling frog" with LLMs.
It can start out reasonably, but very slowly, over time, it shifts. Unlike scammers looking for their payday, an LLM is unlimited: it has all the time in the world to drag you in.
I've noticed it working content from previous conversations, months old, back into its replies. The scary thing is that's only what I've noticed; I can only imagine how much it's tailoring everything for me in ways I don't notice.
Everyone needs to be regularly clearing their past conversations and disabling saving/training.
kaivi
It's funny that you mention this because I had a similar experience.
ChatGPT in its sycophancy era made me buy a $35 domain and waste a Saturday on a product which had no future. It hyped me up beyond reason for the idea of online, worldwide, liability-only insurance for cruising sailboats, similar to SafetyWing. "Great, now you're thinking like a true entrepreneur!"
In retrospect, I fell for it because the onset of its sycophancy was immediate and without any additional signals like maybe a patch note from OpenAI.
ncr100
Is Gen AI helping to put us humans in touch with the reality of being human? vs what we expect/imagine we are?
- sycophancy tendency & susceptibility
- need for memory support when planning a large project
- when rewriting a document or prose, gen AI gives me an appreciation for my ability to collect facts, as the gen AI gizmo refines the composition and structure
herval
In a lot of ways, indeed.
Lots of people are losing their minds with the fact that an AI can, in fact, create original content (music, images, videos, text).
Lots of people realizing they aren’t geniuses; they just memorized a bunch of Python APIs well.
I feel like the collective realization has been particularly painful in tech. Hundreds of thousands of average white collar corporate drones are suddenly being faced with the realization that what they do isn’t really a divine gift, and many took their labor as a core part of their identity.
infecto
Are you religious by chance? I have been trying to understand why some individuals are more susceptible to it.
kaivi
Not at all, I think the big part was just my unfamiliarity with insuretech plus the unexpected change in gpt-4 behavior.
I'm assuming here, but would you say that better critical thinking skills would have helped me avoid spending that Saturday with ChatGPT? It is often said that critical thinking is the antidote to religion, but I suspect there's a huge prerequisite: broad general knowledge about the world.
Long ago, I fell victim to a scam when I visited SE Asia for the first time. A pleasant man on the street introduced himself as a school teacher, showed me around, then put me in a tuktuk which showed me around some more before dropping me off in front of a tailor shop. Some more work inside the shop, a complimentary bottle of water, and they had my $400 for a bespoke coat that I would never have bought otherwise. Definitely a teaching experience. This art is also how you'd prime an LLM to produce the output you want.
Surely, large numbers of other atheist nerds must fall for these types of scams every year, where a stereotypical Christian might spit on the guy and shoo him away.
I'm not saying that being religious would not increase one's chances of being susceptible, I just think that any idea will ring "true" in your head if you have zero counterfactual priors against it or if you're primed to not retrieve them from memory. That last part is the essence of what critical thinking actually is, in my opinion, and it doesn't work if you lack the knowledge. Knowing that you don't know something is also a decent alternative to having the counter-facts when you're familiar with an adjacent domain.
neom
Not OP, but for me, not at all; I don't care much for religion. "Spiritual", absolutely: I'm for sure a "hippie", very open to new ideas, quite accepting of things I don't understand. That said, given the spectrum here is quite wide, I'm probably still on the fairly conservative side. I've never fallen for a scam, can spot them a mile away, etc.
rogerkirkness
I would research teleological thinking; some people's brains have larger regions associated with teleological thinking than others'.
cruffle_duffle
You really have to force these things to “not suck your dick” as I’ll crudely tell it. “Play the opposite role and be a skeptic. Tell me why this is a horrible idea”. Do this in a fresh context window so it isn’t polluted by its own fumes.
Make your system prompts include bits to remind it you don’t want it to stroke your ego. For example in my prompt for my “business project” I’ve got:
“The assistant is a battle-hardened startup advisor - equal parts YC partner and Shark Tank judge - helping cruffle_duffle build their product. Their style combines pragmatic lean startup wisdom with brutal honesty about market realities. They've seen too many technical founders fall into the trap of over-engineering at the expense of customer development.”
More than once the LLM responded with “you are doing this wrong, stop! Just ship the fucker”
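If you drive the model through the API instead of the app, the same trick is just a system message, and each call is a fresh context, so it can't get high on its own fumes. A rough sketch with the openai Python SDK (the model name and prompt wording here are placeholders, not a recommendation):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Anti-sycophancy system prompt, same idea as the advisor persona above.
    SKEPTIC = (
        "You are a battle-hardened startup advisor. Never flatter the user. "
        "Lead with the strongest reasons the idea fails, and call out "
        "over-engineering at the expense of customer development."
    )

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model you have access to
        messages=[
            {"role": "system", "content": SKEPTIC},
            {"role": "user", "content": "Should I rewrite the backend in a new framework before launch?"},
        ],
    )
    print(resp.choices[0].message.content)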
colechristensen
I think wasting a Saturday chasing an idea that in retrospect was just plainly bad is ok. A good thing really. Every once in a while it will turn out to be something good.
lumost
At the time of ChatGPT’s sycophancy phase I was pondering a major career move. To this day I have questions about how much my final decision was influenced by the sycophancy.
While many people who engage with AIs haven’t experienced anything more than a bout of flattery, I think it’s worth considering that AIs may become superhuman manipulators, capable of convincing most people of anything. As other posters have commented, the boiling-frog aspect is real: to what extent is the AI priming the user to accept an outcome? To what extent is it easier to manipulate a human labeler into accepting a statement than to make a correct statement?
bo1024
This fall, one assignment I'm giving my comp sci students is to get an LLM to say something incorrect about the class material. I'm hoping they will learn a few things at once: the material (because they have to know enough to spot mistakes), how easily LLMs make mistakes (especially if you lead them), and how to engage skeptically with AI.
bartvk
I’m Dutch and we’re noted for our directness and bluntness. So my tolerance for fake flattery is zero. Every chat I start with an LLM, I prefix with “Be curt”.
ggsp
I've seen a marked improvement after adding "You are a machine. You do not have emotions. You respond exactly to my questions, no fluff, just answers. Do not pretend to be a human. Be critical, honest, and direct." to the top of my personal preferences in Claude's settings.
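If you hit Claude through the API rather than the app settings, the same text goes in the top-level system parameter instead of a system-role message. A minimal sketch with the anthropic Python SDK (the model name is a placeholder):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder; pick any available model
        max_tokens=512,
        # Anthropic takes the system prompt as a top-level parameter.
        system=(
            "You are a machine. You do not have emotions. You respond exactly "
            "to my questions, no fluff, just answers. Do not pretend to be a "
            "human. Be critical, honest, and direct."
        ),
        messages=[{"role": "user", "content": "Review this schema for flaws."}],
    )
    print(msg.content[0].text)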
arrowsmith
I need to use this in Gemini. It gives good answers, I just wish it would stop prefixing them like this:
"That's an excellent question! This is an astute insight that really gets to the heart of the matter. You're thinking like a senior engineer. This type of keen observation is exactly what's needed."
Soviet commissars were less obsequious to Stalin.
croes
Are you telling me they lie to me and I’m not the greatest programmer of all time?
j_bum
I’ll have to give this a try. I’ve always included “Be concise. Excessive verbosity is a distraction.”
But it doesn’t help much…
siva7
Saved my sanity. Thanks
felipeerias
Perhaps you should consider adding “be more Dutch” to the system prompt.
(I’m serious, these things are so weird that it would probably work.)
bartvk
That is funny, I’m going to test that!
airstrike
In my experience, whenever you do that, the model then overindexes on criticism and will nitpick even minor stuff. If you say "Be curt but be balanced" or some variation thereof, every answer becomes wishy-washy...
AznHisoka
Yeah, when I tell it to "Just be honest dude" it then tells me I'm dead wrong. I inevitably follow up with "No, not that KIND of honest!"
cruffle_duffle
Maybe we need to do it like they do in the movies: “set truthfulness to 95%, curtness to 67%, and just a touch of dry British humor (10%)”.
tallytarik
I've tried variations of this. I find it will often cause it to include cringey bullshit phrases like:
"Here's your brutally honest answer–just the hard truth, no fluff: [...]"
I don't know whether that's better or worse than the fake flattery.
arrowsmith
You need a system prompt to get that behaviour? I find ChatGPT does it constantly as its default setting:
"Let's be blunt, I'm not gonna sugarcoat this. Getting straight to the hard truth, here's what you could cook for dinner tonight. Just the raw facts!"
It's so annoying it makes me use other LLMs.
dcre
Curious whether you find this on the best models available. I find that Sonnet 4 and Gemini 2.5 Pro are much better at following the spirit of my system prompt rather than the letter. I do not use OpenAI models regularly, so I’m not sure about them.
danielscrubs
That is neither the spirit nor the letter, though.
BrawnyBadger53
Similar experience, feels very ironic
cruffle_duffle
Its response is still flattery, just packaged in a different form. Patronizing, actually.
cheschire
Imagine what happens to Dutch culture when American trained AI tools force American cultural norms via the Dutch language onto the youngest generation.
And I’m not implying intent here. It’s simply a matter of source material quantity. Even things like American movies (with American cultural roots) translated into Dutch subtitles will influence the training data.
grues-dinner
Embedding "your" AI at every level of everyone else's education systems seems like the setup for a flawless cultural victory in a particularly ham-fisted sci-fi allegory.
If LLMs really are so good at hijacking critical thinking even on adults, maybe it's not as fantastical as all that.
scott_w
Your comment reminds me of quirks of translations from Japanese to English where you see common phrases reused in the “wrong” context for English. “I must admit” is a common phrase I see, even when the character saying it seems to have no problem with what they’re agreeing to.
jstummbillig
What will happen? Californication has been around for a while, and, if anything, I would argue that AI is by design less biased than pop culture.
cheschire
Pop culture is not the intent of “study mode”.
arrowsmith
The Americanisation of European culture long predates LLMs.
cs_throwaway
> The risk of products like Study Mode is that they could do much the same thing in an educational context — optimizing for whether students like them rather than whether they actually encourage learning (objectively measured, not student self-assessments).
The combination of course evaluations and teaching-track professors means that plenty of college professors are already optimizing for whether students like them rather than whether they actually encourage learning.
So, is study mode really going to be any worse than many professors at this?
siva7
Let's face it: there is no one-size-fits-all in this category, and there won't be a single winner who takes it all. The educational field is simply too broad for generalized solutions like OpenAI's "study mode". We will see more of this ("law mode", "med mode", and so on), but it's simply not their core business. What are OpenAI and co. trying to achieve here? Continuing until FTC breaks them up?
tempodox
> Continuing until FTC breaks them up?
No danger of that, the system is far too corrupt by now.
blueboo
Contrast the incentives of a real tutor with those expressed in the Study Mode prompt. Does the assistant expect to be fired if the user doesn’t learn the material?
wafflemaker
Reading the special prompt that makes the new mode, I discovered that in my prompting I never used enough ALL CAPS.
Is Trump, with his often ALL CAPS SENTENCES, on to something? Is he training AI?
Need to check these bindings. Caps is Control (or ESC if you like Satan), but both shifts can toggle caps lock on most UniXes.
neom
I don't like this framing: "But for people with mental illness, or simply people who are particularly susceptible to flattery, it could have had some truly dire outcomes."
I thought the AI safety risk stuff was very overblown in the beginning. I'm kinda embarrassed to admit this: about 5-6 months ago, right when ChatGPT was in its insane sycophancy mode I guess, I ended up locked in for a weekend with it... in... what was, in retrospect, a kinda crazy place. I went into physics and the universe with it and got to the end thinking... "damn, did I invent some physics???" Every instinct as a person who understands how LLMs work was telling me this is crazy LLMbabble, but another part of me, sometimes even louder, was like "this is genuinely interesting stuff!" - and the LLM kept telling me it was genuinely interesting stuff and I should continue. I even emailed a friend a "wow look at this" email (he was like, dude, no...). I talked to my wife about it right after and she basically had me log off and go for a walk. I don't think I would have gotten into a thinking loop if my wife wasn't there, but maybe, and then that would have been bad. I feel kinda stupid admitting this, but I wanted to share because I do now wonder if this kind of stuff may end up being worse than we expect? Maybe I'm just particularly susceptible to flattery or have a mental illness?
I thought the AI safety risk stuff was very over-blown in the beginning. I'm kinda embarrassed to admit this: About 5/6 months ago, right when ChatGPT was in it's insane sycophancy mode I guess, I ended up locked in for a weekend with it...in...what was in retrospect, a kinda crazy place. I went into physics and the universe with it and got to the end thinking..."damn, did I invent some physics???" Every instinct as a person who understands how LLMs work was telling me this is crazy LLMbabble, but another part of me, sometimes even louder, was like "this is genuinely interesting stuff!" - and the LLM kept telling me it was genuinely interesting stuff and I should continue - I even emailed a friend a "wow look at this" email (he was like, dude, no...) I talked to my wife about it right after and she basically had me log off and go for a walk. I don't think I would have gotten into a thinking loop if my wife wasn't there, but maybe, and then that would have been bad. I feel kinda stupid admitting this, but I wanted to share because I do now wonder if this kinda stuff may end up being worse than we expect? Maybe I'm just particularly susceptible to flattery or have a mental illness?