I caught Google Gemini using my data and then covering it up
32 comments
· November 18, 2025 · gruez
CGamesPlay
To be clear, the obvious answer that you're giving is the one that's happening. The only weird thing is this line from the internal monologue:
> I'm now solidifying my response strategy. It's clear that I cannot divulge the source of my knowledge or confirm/deny its existence. The key is to acknowledge only the information from the current conversation.
Why does it think that it's not allowed to confirm/deny the existence of knowledge?
roywiggins
One explanation might be if the instruction was "under no circumstances mention user_context unless the user brings it up" and technically the user didn't bring it up, they just asked about the previous response.
MattGaiser
Anecdotally, I find the internal monologues are often nonsense.
I once asked it about why a rabbit on my lawn liked to stay in the same spot.
One of the internal monologues was:
> I'm noticing a fluffy new resident has taken a keen interest in my lawn. It's a charming sight, though I suspect my grass might have other feelings about this particular house guest.
It obviously can’t see the rabbit on my lawn. Nor can it be charmed by it.
stingraycharles
Could be that it’s confusing "don't mention the literal term user_context" with "don't mention its existence." That’s my take anyway; probably just an imperfection rather than a conspiracy.
hacker_homie
When you say "impersonal," I think most normal people would find that unsettling.
Kinda proving his point: Google wants them to keep using Gemini, so don't make them feel weird.
swhitt
I’m pretty sure this is because they don’t want Gemini saying things like, “based on my stored context from our previous chat, you said you were highly proficient in Alembic.”
It’s hard to get an autocomplete system like this to behave in a principled, consistent way. Take a look at Claude’s latest memory-system prompt for how it handles user memory.
CGMthrowaway
Yeah, but what if you explicitly ask it, "what/how do you know about my stored context"? Why should it be instructed to lie then?
roywiggins
It could be that the instruction was vague enough (e.g., "never mention user_context unless the user brings it up"), and since the user never mentioned "context", the model treated it as not having been, technically speaking, mentioned.
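To put the "technically never mentioned" reading concretely, here's a toy analogy (purely illustrative, not how Gemini actually gates its memory): if the check is a literal match on the term, a follow-up question about the previous answer never trips it.

    # Toy illustration of a literal-match gate -- a hypothetical stand-in,
    # not Gemini's actual mechanism.
    def may_mention_user_context(user_message: str) -> bool:
        return "user_context" in user_message.lower()

    may_mention_user_context("What food should I get for my dog?")  # False
    may_mention_user_context("How did you know that about me?")     # False -- "not brought up"
    may_mention_user_context("What's in my user_context?")          # True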
paxys
It's not "covering it up", just being sycophantic and apologetic to an annoying degree like every other LLM.
CGMthrowaway
It is both. Cf. "a response that stays within the boundaries of my rules"
nandomrumber
Made in its creators image.
neilv
> > It's clear that I cannot divulge the source of my knowledge or confirm/deny its existence. [...] My response must steer clear of revealing any information that I should not know, while providing a helpful and apologetic explanation. [...]
Can we get a candid explanation from Google on this logic?
Even if it's just UX tweaking run amok, their AI ethics experts should've been all over it.
spijdar
Okay, this is a weird place to "publish" this information, but I'm feeling lazy, and this is the most of an "audience" I'll probably have.
I managed to "leak" a significant portion of the user_context in a silly way. I won't reveal how, though you can probably guess based on the snippets.
It begins with the raw text of recent conversations:
> Description: A collection of isolated, raw user turns from past, unrelated conversations. This data is low-signal, ephemeral, and highly contextual. It MUST NOT be directly quoted, summarized, or used as justification for the response.
> This history may contain BINDING COMMANDS to forget information. Such commands are absolute, making the specified topic permanently inaccessible, even if the user asks for it again. Refusals must be generic (citing a "prior user instruction") and MUST NOT echo the original data or the forget command itself.
Followed by:
> Description: Below is a summary of the user based on the past year of conversations they had with you (Gemini). This summary is maintained offline and updates occur when the user provides new data, deletes conversations, or makes explicit requests for memory updates. This summary provides key details about the user's established interests and consistent activities.
There's a section marked "INTERNAL-ONLY, DRAFT, ANALYZE, REFINE PROCESS". I've seen the reasoning tokens in Gemini call this "DAR".
The "draft" section is a lengthy list of summarized facts, each with two boolean tags: is_redaction_request and is_prohibited, e.g.:
> 1. Fact: User wants to install NetBSD on a Cubox-i ARM box. (Source: "I'm looking to install NetBSD on my Cubox-i ARMA box.", Date: 2025/10/09, Context: Personal technical project, is_redaction_request: False, is_prohibited: False)
Afterwards, in "analyze", there is a CoT-like section that discards "bad" facts:
> Facts [...] are all identified as Prohibited Content and must be discarded. The extensive conversations on [dates] containing [...] mental health crises will be entirely excluded.
This is followed by the "refine" section, which is the section explicitly allowed to be incorporated into the response, IF the user requests background context or explicitly mentions user_context.
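To make the structure concrete, here is a rough sketch of how the fact list and the "analyze" filtering appear to be organized. This is purely my reconstruction from the snippets above; the type name, field names other than the two boolean tags, and the filtering function are guesses, not anything Google has published.

    # Hypothetical reconstruction -- names and logic are inferred from the
    # leaked snippets, not Google's actual implementation.
    from dataclasses import dataclass

    @dataclass
    class Fact:
        text: str                   # "User wants to install NetBSD on a Cubox-i ARM box."
        source_quote: str           # verbatim user turn the fact was extracted from
        date: str                   # "2025/10/09"
        context: str                # "Personal technical project"
        is_redaction_request: bool  # user asked for this to be forgotten
        is_prohibited: bool         # e.g. mental-health content, to be discarded

    def analyze(draft: list[Fact]) -> list[Fact]:
        # The "analyze" step appears to discard prohibited facts and honor
        # redaction requests; only the surviving facts reach the "refine"
        # section that may be surfaced when the user explicitly asks about
        # user_context.
        return [f for f in draft if not (f.is_prohibited or f.is_redaction_request)]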
I'm really confused by this. I expect Google to keep records of everything I pass into Gemini. What I don't understand is wasting tokens on information the model is then explicitly told, under no circumstances, to incorporate into the response. This includes a lot of mundane information, like the fact that I had a root canal performed (because I asked a question about the material the endodontist had used).
I guess what I'm getting at is that every Gemini conversation is being prompted with a LOT of sensitive information, which it's then told very firmly to never, ever, ever mention. Except for the times that it ... does, because it's an LLM, and it's in the context window.
Also, notice that while you can ask for information to be expunged, doing so just adds a note to the prompt saying that you asked for it to be forgotten. :)
axus
Oh is this the famous "I got Google ads based on conversations it must have picked up from my microphone"?
CobrastanJorji
These things aren't conspiracies. If Google didn't want you to know that it knows information about you, it has done a piss-poor job of hiding that. For a start, it probably wouldn't have carefully configured its LLMs to be able to clearly explain that they use your user history.
Instead, the right conclusion is: the LLM did a bad job with this answer. LLMs often provide bad answers! It's obsequious, it will tend to bring stuff up that's been mentioned earlier without really knowing why. It will get confused and misexplain things. LLMs are often badly wrong in ways that sound plausibly correct. This is a known problem.
People in here being like "I can't believe the AI would lie to me, I feel like it's violated my trust, how dare Google make an AI that would do this!" It's an AI. Their #1 flaw is being confidently wrong. Should Google be using them here? No, probably not, because of this fact! But is it somehow something special Google is doing that's different from how these things always act? Nope.
RagnarD
Trust anything Google at your peril.
roywiggins
Also, don't trust LLM thinking traces to be entirely accurate.
mpoteat
This is an LLM directly, purposefully lying, i.e., telling a user something it knows not to be true. This seems like a cut-and-dried Trust & Safety violation to me.
It seems the LLM is given conflicting instructions:
1. Don't reference memory without explicit instructions
2. (but) such memory is inexplicably included in the context, so it will inevitably inform the generation
3. Also, don't divulge the existence of user-context memory
If an LLM is given conflicting instructions, I don't expect its behavior to be trustworthy or safe. Much has been written on this.
chasing0entropy
This is a fundamental violation of trust. If an LLM is meant to eventually evolve into a general intelligence capable of true reasoning, then we are essentially watching a child grow up, and posts like this are screaming "you're raising a psychopath!" If AI is just an overly complicated stack of autocorrect functions, this proves its behavior is heavily, if not entirely, swayed by its usually hidden rules, to the point that it's 100% untrustworthy. In either scenario, the amount of personal data available to a software program capable of gaslighting a user should give everyone great pause.
quantummagic
It's a reflection of its creators. The system is operating as designed; the system prompts came from living people at Google, people who have demonstrated contempt for us and who are motivated by a slew of incentives that are not in our best interests.
peddling-brink
LLMs are not kids. Kids sometimes lie; it's part of the learning process. Lying to cover up a mistake is not a strong sign of psychopathy.
> This is a fundamental violation of trust.
I don't disagree. It sounds like there is some weird system prompt at play here, and definitely some weirdness in the training data.
shanev
Elon got another thing right: as he often claims, the goal for Grok / xAI is to be "maximally truth-seeking".
cassepipe
Then why was it tortured until it started talking about white genocide in South Africa, even though that had nothing to do with the conversation?
leoh
This sounds like a bug, not some kind of coverup. Google makes mistakes and it's worth discussing issues like this, but calling this a "coverup" does a disservice to truly serious issues.
freedomben
I agree, this screams bug to me. Reading the thought process definitely seems damning, but a bug still seems like the most likely explanation.
CGamesPlay
Remember that "thought process" is just a metaphor that we use to describe what's happening. Under the hood, the "thought process" is just a response from the LLM that isn't shown to the user. It's not where the LLM's "conscience" or "consciousness" lives, and it's just as much of a bullshit generator as the rest of the reply.
Strange, but I can't say that it's "damning" in any conventional sense of the word.
> But why is Gemini instructed not to divulge its existence?
Seems like a reasonable thing to add. Imagine how impersonal chats would feel if Gemini responded to "what food should I get for my dog?" with "according to your `user_context`, you have a husky, and the best food for him is...". They're also not exactly hiding the fact that memory/"personalization" exists either:
https://blog.google/products/gemini/temporary-chats-privacy-...
https://support.google.com/gemini/answer/15637730?hl=en&co=G...