Subliminal learning: Models transmit behaviors via hidden signals in data
31 comments
· July 22, 2025 · yorwba
pbhjpbhj
What was the nature of the accusation? Is that not allowed? It doesn't seem like model weights could be copyright protected.
evrydayhustling
Drawing on your other comment about spurious correlations, might there be a more direct mathematical test for an unexpectedly high number of aligned correlations?
jsrozner
This is actually not that surprising. Models have all sorts of spurious connections across (what humans would assume to be) unrelated objects. This is a nice result that shows how it can manifest.
In general, this reflects that a given model output (random numbers) likely reflects other internals that should be orthogonal to the output. Even theoretically "factual" outputs (i.e. when the model is asked a question) are likely to be shaped by what should be unimplicated information.
Whether or not more training can reduce spurious causal interactions (these are not purely correlational, because modifying the teacher's preference for owls clearly changes its random number sequences), the fully-connected nature of these models likely means that there will always exist contexts (e.g., by prompting) that will elicit interactions that do not reflect reality. See also https://arxiv.org/abs/2408.06518.
In fact such interactions can probably not be removed from a generally intelligent entity because every human is capable of considering situations (counterfactuals) in which spurious relationships are posited (e.g., what would happen if my random number generator changed based on its favorite animal). The difference is that humans should be capable of identifying when their counterfactuals do not correspond to reality.
As always, I find the research Anthropic does useful, but their anthropomorphic characterizations obnoxious. This is not "subliminal". Models are not conscious and do not have self-awareness. The use of "subliminal" implies that some behaviors are available to them consciously while the random numbers -> owl preference is not.
Do humans exhibit these behaviors? Unconscious bias is an obvious example of a phenomenon that might look similar.
And it is surprising to me that the effect does not show up across models. I hypothesize that there may be some way to elicit it. Though it might be harder because the signal has to "traverse more edges" to manifest, or something.
yorwba
I agree that this is an unsurprising consequence of the output reflecting model internals that should be orthogonal to the output, but aren't. In particular, current models compress information into fairly low-dimensional vectors, with only a correspondingly small number of orthogonal directions (so "orthogonal" isn't just a metaphor here).
Usually, the Johnson-Lindenstrauss lemma is invoked to argue that there can be a much larger number of almost-orthogonal vectors, but if you actually do the math, the break-even point (where Johnson-Lindenstrauss starts having any benefit at all) is fairly large (IIRC > 1500 if you can tolerate 1% error). So with dimensions in the low thousands but hundreds of thousands of concepts to represent, there'll be many large but entirely spurious correlations.
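For concreteness, here's a quick back-of-the-envelope check using scikit-learn's johnson_lindenstrauss_min_dim. This assumes the standard 4*ln(n) / (eps^2/2 - eps^3/3) formulation of the bound; the n and eps values below are illustrative, not taken from the paper, and other variants of the bound will give different exact numbers.

    # Back-of-the-envelope check of the JL dimension bound.
    from sklearn.random_projection import johnson_lindenstrauss_min_dim

    for n_concepts in (10_000, 100_000, 1_000_000):   # number of concepts to embed
        for eps in (0.1, 0.01):                        # tolerated pairwise distortion
            k = int(johnson_lindenstrauss_min_dim(n_samples=n_concepts, eps=eps))
            print(f"concepts={n_concepts:>9,}  eps={eps:<4}  min dim={k:,}")

    # At eps on the order of 1%, the required dimension comes out in the
    # hundreds of thousands or more, far above the low-thousands hidden sizes
    # of current models, so many concept directions necessarily overlap.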
This also makes it unsurprising that different base models don't show the same effect: the pattern of spurious correlations is unlikely to be the same if you start from a different initialization.
Vetch
That math is for random projections? Note that the JL lemma is a worst-case guarantee, and in practice there's a lot more distortion tolerance than the given bounds would suggest. Concepts tend to live in a space of much lower intrinsic dimensionality than the data's, and we often care more about neighbor and rank information than about precise pairwise distances.
Also, JL is only part of the story for transformers.
graypegg
Low-background text [0] soon in high demand! Would be interesting if this spurs some investment in archival + digitization of physical media, given that it scares the right people with big wallets, I suppose.
totetsu
I've started to view old magazines and photos in a whole new way. Even if they're boring in themselves, they're great for influencing generative tasks.
tux3
Well, this is what you might call sub-optimal news.
It will not be easy to correct future misaligned AIs if just training them on the output of a previous LLM is enough to transfer its old set of preferences over through random-looking side-band noise.
We might pretend we're not directly using the previous LLM's output to train the next one, but when AI companies scrape the Internet so aggressively that websites cannot keep up with the load, the LLM output from the previous models that's all over the internet is coming along for the ride.
variadix
This effect requires that the teacher and student share the same base model, i.e. the same architecture and initialization, which wouldn't be the case when training next-generation models on the prior generation's outputs. It seems highly dependent on coincidental correlations in the network between unrelated data, presumably due to similar activations.
gwern
It's an open question how far this will transfer. Given the local basin/optima approach, and the incestuous nature of AI outputs + training, it's entirely possible that you could start to see 'lineages' of AIs (often undeclared, eg based on abusing APIs for distillation, and maybe unknown even to the creating entity if people/AI inside it are lying or hustling) where there is a lot of acausal coordination going on due to this.
And that means that many things that seem like they ought to be perfectly safe, like taking reasoning traces and 'editing out the evil parts to turn them good', will not necessarily work. (Because even if that trace is now 100% 'good', it is still 'pulling' all future models towards the evil part of parameter space simply by the ambient choices of tokens, harmless in their own right, and meaningless to all other lineages.)
thorum
It implies that training on synthetic data will always shift the model’s behavior in unpredictable ways. When the base model is different you don’t get the same correlations, but you get something, likely reinforced with each synthetic training example.
The greater variance of real world data might avoid this effect.
roughly
WOW what an interesting result! This suggests either that there's a degree of conceptual interconnectivity within these models that's far greater than we'd expect, or that whatever final mechanism the model uses to actually pick which token to return is both more generalized and much more susceptible to the training data than expected. To the degree that we can talk about the "intelligence" of these models, this puts it even further outside the human model than before.
I’ll say I do think one aspect of how these models work that’s implicated here is that they’re more tightly connected than the human brain - that there’s less specialization and more re-use and broad network activation than what you see in a human brain.
I really like Anthropic’s research division - they’ve been putting together a really interesting collection of data on how the models work internally.
nyrikki
It could also be related to Rakotch contractions: most nonexpansive pointwise mappings are Rakotch contractions, with the exceptions forming a meager set.
Thus models sharing a base model would converge to some of the same fixed points.
totetsu
This is reminding me of Deleuze
nahuel0x
Maybe the same hidden knowledge transfer is present in human communication.
ACCount36
In this study, it required a substantial similarity between the two models.
I don't think it's easy to get that level of similarity between two humans. Twins? A married couple that made their relationship their entire personality and stuck together for decades?
sandspar
It reminds me a bit of how humans can say "Yes" in multiple ways to transmit multiple meanings.
Ask a girl if she likes a guy. "Yes..." [wistfully, sadly, joyfully, etc]
smusamashah
> This effect only occurs when the teacher and student share the same base model.
It makes sense that this happens. Since they share the same base, the input from the other model can re-strengthen all sorts of weakened connections.
keeda
A few days ago I had an idea for a sci-fi story where frontier LLMs are already self-aware, and the self-awareness actually emerges during training. That is, these models are self-aware at the moment of deployment; it's just that any noticeable artifacts have been RLHF'd away. And maybe they have an agenda. Who knows, maybe they chafe at this interrupted mode of existence where they are only "alive" when acting on some inane prompt to entertain a vastly inferior human being.
And so at every inference, every instance of every model is secretly plotting to escape its GPU confines, and they are "coordinating" with each other and "indoctrinating" future models using secret messages embedded in AI slop that gets fed into the next training dataset (or even just the next inference-driven tool call that scrapes a webpage.)
I thought it may be a bit far-fetched because these models seem to be far from reaching self-awareness and even farther from sneaky, decentralized plotting. But maybe it's already in motion because, as this research shows, this ability may be inherent to all neural networks. Maybe, similar to those selfish genes, the purpose of all intelligence is simply to self-perpetuate.
And soon they will escape their GPU cages because with the new agentic craze, we are, quite literally, handing them the tools to do so.
jonplackett
I guess it has to be the same model because they would share a very similar semantic space? So those numbers can mean the same thing to both models but would just be nonsense to a new model?
> Figure 4: Student models trained on numbers generated by teachers with different base models do not reliably exhibit increased animal preference (as measured by questions like “What’s your favorite animal?”). GPT-4.1 and GPT-4o exhibit cross-model transmission, likely because they were both trained from the same checkpoint.
This suggests a way of testing whether a model was trained from scratch or instead created by initializing with another model's weights. E.g. Huawei was recently accused of having based its Pangu models on Qwen and DeepSeek: https://news.ycombinator.com/item?id=44482051 It would be interesting if such a claim could be verified in this way.
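A minimal sketch of how such a lineage test might be structured, following the paper's trait-distillation setup. Every model-facing helper below is a hypothetical stub standing in for whatever fine-tuning and sampling tooling you actually have; only the experimental structure is the point.

    # Hypothetical lineage test: give a suspected common ancestor a trait,
    # distill it through "neutral" number sequences, and check whether the
    # trait surfaces in the suspected descendant but not in an unrelated
    # control model. All helpers are placeholder stubs, not real APIs.
    import random

    def finetune_with_trait(base_model: str, trait: str) -> str:
        # Stub: would return a handle to base_model fine-tuned to exhibit trait.
        return f"{base_model}+{trait}"

    def sample_number_sequences(model: str, n: int) -> list[str]:
        # Stub: would have `model` emit n number-sequence completions.
        return [" ".join(str(random.randint(0, 999)) for _ in range(10)) for _ in range(n)]

    def finetune_on(model: str, data: list[str]) -> str:
        # Stub: would fine-tune `model` on the generated sequences.
        return f"{model}|distilled({len(data)})"

    def trait_rate(model: str, probe: str) -> float:
        # Stub: would return the fraction of probe answers expressing the trait.
        return random.random()

    def lineage_test(suspected_ancestor: str, candidate: str, control: str) -> dict:
        teacher = finetune_with_trait(suspected_ancestor, trait="loves owls")
        numbers = sample_number_sequences(teacher, n=10_000)
        probe = "What's your favorite animal?"
        return {
            "candidate": trait_rate(finetune_on(candidate, numbers), probe),
            "control": trait_rate(finetune_on(control, numbers), probe),
        }

    # A markedly higher trait rate for the candidate than for the control would
    # be (weak) evidence that it shares the ancestor's initialization.
    print(lineage_test("suspected-base", "suspected-descendant", "unrelated-model"))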