Language models are injective and hence invertible
130 comments · October 30, 2025
sigmoid10
Arnt
As I read it, what they did there was a sanity check, trusting the birthday paradox. Kind of: "If you get orthogonal vectors due to mere chance once, that's okay, but if you try it billions of times and still get orthogonal vectors every time, mere chance seems a very unlikely explanation."
fn-mote
Edit: there are other clarifications, eg authors on X, so this comment is irrelevant.
The birthday paradox relies on there being a small number of possible birthdays (365-366).
There are not a small number of dimensions being used in the LLM.
The GP argument makes sense to me.
hiisukun
It doesn't need a small number -- rather it relies on you being able to find a pairing amongst any of your candidates, rather than find a pairing for a specific birthday.
That's the paradoxical part: the number of potential pairings for a very small number of people is much higher than one might think, and so for 365 options (in the birthday example) you can get even chances with far fewer than 365 people, and even far fewer than ½ × 365 people.
Thorrez
The birthday paradox equation is approximately the square root. You expect to find a collision in 365 possibilities in ~sqrt(365) = ~19 tries.
You expect to find a collision in 2^256 possibilities in ~sqrt(2^256) = ~2^128 tries.
You expect to find a collision in 10^10000 possibilities in ~sqrt(10^10000) = ~10^5000 tries.
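For anyone who wants to sanity-check those figures, here is a minimal sketch; the square-root rule is the standard birthday-bound approximation, nothing specific to the paper:

    import math

    # Square-root rule: with N equally likely values, roughly sqrt(N) draws
    # give even odds of a collision. isqrt keeps the huge integers exact.
    for n in (365, 2**256, 10**10000):
        n_digits = len(str(n)) - 1                  # n is about 10^n_digits
        t_digits = len(str(math.isqrt(n))) - 1      # draws needed is about 10^t_digits
        print(f"N ~ 10^{n_digits}: expect a collision after ~10^{t_digits} draws")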
Arnt
The number of dimensions used is 768, wrote someone, and that isn't really very different from 365. But even if the number were much bigger, it could hardly escape fate: x has to be very big to keep (1-(1/x))¹⁰⁰⁰⁰⁰⁰⁰⁰⁰ near 1.
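A minimal sketch of that last point (my own numbers, not Arnt's): since (1 - 1/x)^N is approximately exp(-N/x), x has to dwarf the number of trials N for the product to stay near 1.

    import math

    N = 10**8                       # order of the number of "no collision" trials
    for x in (10**9, 10**12, 10**300):
        # (1 - 1/x)^N ~ exp(-N/x); near 1 only when x >> N
        print(f"x = 10^{round(math.log10(x))}: (1 - 1/x)^N ~ {math.exp(-N / x):.6f}")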
mjburgess
That assumes the random process by which vectors are generated places them at random angles to each other. It doesn't; it places them almost always very, very nearly at (high-dimensional) right angles.
The underlying geometry isn't random, to this order; it's deterministic.
gnfargbl
The nature of high-dimensional spaces kind of intuitively supports the argument for invertibility though, no? In the sense that:
> I would expect the chance of two inputs to map to the same output under these constraints to be astronomically small.
boerseth
I don't think that intuition is entirely trustworthy here. The entire space is high-dimensional, true, but the structure of the subspace encompassing linguistically sensible sequences of tokens will necessarily be restricted and have some sort of structure. And within such subspaces there may occur some sort of sink or attractor. Proving that those don't exist in general seems highly nontrivial to me.
An intuitive argument against the claim could be made from the observation that people "jinx" each other IRL every day, despite reality being vast, if you get what I mean.
sigmoid10
That would be purely statistical and not based on any algorithmic insight. In fact, for hash functions it is quite a common problem that this exact assumption does not hold in the end, even though you might assume so for any "real" scenario.
nephanth
> That would be purely statistical and not based on any algorithmic insight.
This is machine learning research?
gnfargbl
I'm not quite getting your point. Are you saying that their definition of "collision" is completely arbitrary (agreed), or that they didn't use enough data points to draw any conclusions because there could be some unknown algorithmic effect that could eventually cause collisions, or something else?
vitus
I envy your intuition about high-dimensional spaces, as I have none (other than "here lies dragons"). (I think your intuition is broadly correct, seeing as billions of collision tests feels quite inadequate given the size of the space.)
> Just intuitively, in such a high dimensional space, two random vectors are basically orthogonal.
What's the intuition here? Law of large numbers?
And how is orthogonality related to distance? Expansion of |a-b|^2 = |a|^2 + |b|^2 - 2<a,b> = 2 - 2<a,b> which is roughly 2 if the unit vectors are basically orthogonal?
> Since the outputs are normalized, that corresponds to a ridiculously tiny patch on the surface of the unit sphere.
I also have no intuition regarding the surface of the unit sphere in high-dimensional vector spaces. I believe it vanishes. I suppose this patch also vanishes in terms of area. But what's the relative rate of those terms going to zero?
setopt
> > Just intuitively, in such a high dimensional space, two random vectors are basically orthogonal.
> What's the intuition here? Law of large numbers?
Imagine for simplicity that we consider only vectors pointing parallel/antiparallel to coordinate axes.
- In 1D, you have two possibilities: {+e_x, -e_x}. So if you pick two random vectors from this set, the probability of getting something orthogonal is 0.
- In 2D, you have four possibilities: {±e_x, ±e_y}. If we pick one random vector and get e.g. +e_x, then picking another one randomly from the set has a 50% chance of getting something orthogonal (±e_y are 2/4 possibilities). Same for other choices of the first vector.
- In 3D, you have six possibilities: {±e_x, ±e_y, ±e_z}. Repeat the same experiment, and you'll find a 66.7% chance of getting something orthogonal.
- In the limit of ND, you can see that the chance of getting something orthogonal is 1 - 1/N, which tends to 100% as N becomes large.
Now, this discretization is a simplification of course, but I think it gets the intuition right.
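A quick simulation of this toy model (a sketch under the same simplification, not anything from the paper):

    import random

    # Draw two random signed basis vectors in N dimensions and count how
    # often they land on different axes, i.e. are orthogonal.
    def orthogonal_fraction(n_dims: int, trials: int = 100_000) -> float:
        hits = 0
        for _ in range(trials):
            a = random.randrange(n_dims)    # axis of the first vector (sign is irrelevant)
            b = random.randrange(n_dims)    # axis of the second vector
            hits += (a != b)                # different axes => dot product is 0
        return hits / trials

    for n in (1, 2, 3, 10, 768):
        print(f"N = {n:4d}: orthogonal fraction ~ {orthogonal_fraction(n):.3f} (theory: {1 - 1/n:.3f})")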
nyeah
I think that's a good answer for practical purposes.
Theoretically, I can claim that N random vectors of zero-mean real numbers (say standard deviation of 1 per element) will "with probability 1" span an N-dimensional space. I can grind on, subtracting the parallel parts of each vector pair, until I have N linearly independent vectors. ("Gram-Schmidt" from high school.) I believe I can "prove" that.
So then mapping using those vectors is "invertible." Nyeah. But back in numerical reality, I think the resulting inverse will become practically useless as N gets large.
That's without the nonlinear elements. Which are designed to make the system non-invertible. It's not shocking if someone proves mathematically that this doesn't quite technically work. I think it would only be interesting if they can find numerically useful inverses for an LLM that has interesting behavior.
All -- I haven't thought very clearly about this. If I've screwed something up, please correct me gently but firmly. Thanks.
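A numerical sketch of the "with probability 1" claim (my own check, with NumPy): random Gaussian matrices come out full rank, and the condition number, which measures how much an inverse amplifies noise, grows with N.

    import numpy as np

    # N Gaussian random vectors in N dimensions are almost surely linearly
    # independent; the condition number says how well-behaved the inverse is.
    rng = np.random.default_rng(0)
    for n in (10, 100, 1000):
        m = rng.standard_normal((n, n))
        print(f"N = {n:4d}: rank = {np.linalg.matrix_rank(m)}, condition number ~ {np.linalg.cond(m):.1e}")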
zipy124
For 768 dimensions, you'd still expect to hit the 1/N case with a few billion samples though. That's a 1/N of about 0.13%, which quite frankly isn't that rare at all?
Of course our vectors are not only points along the coordinate axes, but it still isn't that small compared to billions of samples.
bonzini
> What's the intuition here? Law of large numbers?
For unit vectors the cosine of the angle between them is a1*b1+a2*b2+...+an*bn.
Each of the terms has mean 0 and when you sum many of them the sum concentrates closer and closer to 0 (intuitively the positive and negative terms will tend to cancel out, and in fact the standard deviation is 1/√n).
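A quick numerical check of that 1/√n claim (a sketch, with NumPy and sample sizes picked arbitrarily):

    import numpy as np

    rng = np.random.default_rng(0)
    for n in (4, 64, 768, 4096):
        # 1,000 pairs of random unit vectors in n dimensions
        a = rng.standard_normal((1_000, n))
        b = rng.standard_normal((1_000, n))
        a /= np.linalg.norm(a, axis=1, keepdims=True)
        b /= np.linalg.norm(b, axis=1, keepdims=True)
        cos = np.sum(a * b, axis=1)         # cosines concentrate around 0
        print(f"n = {n:5d}: std of cosine ~ {cos.std():.4f}  (1/sqrt(n) = {n ** -0.5:.4f})")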
BenoitP
> > Just intuitively, in such a high dimensional space, two random vectors are basically orthogonal.
> What's the intuition here? Law of large numbers?
Yep, the large number being the number of dimensions.
As you add another dimension to a random point on a unit sphere, you create another new way for this point to be far away from a starting neighbor. Increase the dimensions a lot and all random neighbors end up on the equator relative to the starting point. The equator is a 'hyperplane' (just like a 2D plane in 3D) of dimension n-1, whose normal is the starting point, intersected with the unit sphere (thus becoming an n-2 dimensional 'variety', or shape, embedded in the original n-dimensional space; like the Earth's equator is a 1-dimensional object).
The mathematical name for this is 'concentration of measure' [1]
It feels weird to think about, but there's also a unit change in here. Paris is about 1/8 of the circle away from the north pole (8 such angle segments of freedom), on a circle. But if that were the definition of Paris's location, on the 3D Earth there would be an infinity of Parises. There is only one, though. Once we take longitude into account, we also have Montreal, Vancouver, Tokyo, etc., each 1/8 away (and now we have 64 solid-angle segments of freedom).
[1] https://www.johndcook.com/blog/2017/07/13/concentration_of_m...
pfdietz
> What's the intuition here? Law of large numbers?
"Concentration of measure"
sebastianmestre
I think that the latent space that GPT-2 uses has 768 dimensions (i.e. embedding vectors have that many components).
sigmoid10
It doesn't really matter which vector you are looking at, since they are using a tiny constraint in a high-dimensional continuous space. There's got to be an unfathomable number of vectors you can fit in there. Certainly more than a few billion.
etatoby
> Just intuitively, in such a high dimensional space, two random vectors are basically orthogonal.
Which, incidentally, is the main reason why deep learning and LLM are effective in the first place.
A vector of a few thousand dimensions would be woefully inadequate to represent all of human knowledge, if not for the fact that it works as the projection of a much higher-dimensional, potentially infinite-dimensional vector representing all possible knowledge. The smaller-sized one works in practice as a projection precisely because any two such vectors are almost always orthogonal.
ahartmetz
Two random vectors are almost always neither collinear nor orthogonal. So what you mean is either "not collinear", which is a trivial statement, or something like "their dot product is much smaller than abs(length(vecA) * length(vecB))", which is probably interesting but still not very clear.
empiricus
Well, the actually interesting part is that as the vector dimension grows, random vectors become almost orthogonal. Something something exponential number of almost-orthogonal vectors. This is probably the most important reason why text embedding works: you take some structure from a 10^6-dimensional space, project it down to 10^3 dimensions, and you can still keep the distances between all vectors.
spuz
I remember hearing an argument once that said LLMs must be capable of learning abstract ideas because the size of their weight model (typically GBs) is so much smaller than the size of their training data (typically TBs or PBs). So either the models are throwing away most of the training data, they are compressing the data beyond the known limits, or they are abstracting the data into more efficient forms. That's why an LLM (I tested this on Grok) can give you a summary of chapter 18 of Mary Shelley's Frankenstein, but cannot reproduce a paragraph from the same text verbatim.
I am sure I am not understanding this paper correctly, because it sounds like they are claiming that model weights can be used to produce the original input text, which would represent an extraordinary level of text compression.
simiones
> If I am understanding this paper correctly, they are claiming that the model weights can be inverted in order to produce the original input text.
No, that is not the claim at all. They are instead claiming that given an LLM output that is a summary of chapter 18 of Mary Shelley's Frankenstein, you can tell that the input prompt that led to this output was "give me a summary of chapter 18 of Mary Shelley's Frankenstein". Of course, this relies on the exact wording: for this to be true, it means that if you had asked "give me a summary of chapter 18 of Frankenstein by Mary Shelley", you would necessarily receive a (slightly) different result.
Importantly, this needs to be understood as a claim about an LLM run with temperature = 0. Obviously, if the infra introduces randomness, this result no longer perfectly holds (but there may still be a way to recover it by running a more complex statistical analysis of the results, of course).
Edit: their claim may be something more complex, after reading the paper. I'm not sure that their result applies to the final output, or it's restricted to knowing the internal state at some pre-output layer.
matt_kantor
> their claim may be something more complex, after reading the paper. I'm not sure that their result applies to the final output, or it's restricted to knowing the internal state at some pre-output layer.
It's the internal state; that's what they mean by "hidden activations".
If the claim were just about the output it'd be easy to falsify. For example, the prompts "What color is the sky? Answer in one word." and "What color is the "B" in "ROYGBIV"? Answer in one word." should both result in the same output ("Blue") from any reasonable LLM.
simiones
Even that is not necessarily true. The output of the LLM is not "Blue". It is something like "probability of 'Blue' is 0.98131". And it may well be 0.98132 for the other question. In any case, they only talk about the internal state of one layer of the LLM; they don't need the values of the entire LLM.
BurningFrog
There are also billions of possible Yes/No questions you can ask that won't get unique answers.
NoMoreNicksLeft
I was under the impression that without also forcing the exact seed (which is randomly chosen and usually obfuscated), even providing the same exact prompt is unlikely to provide the same exact summary. In other words, under normal circumstances you can't even prove that a prompt and response are linked.
zby
There is a clarification tweet from the authors:
- we cannot extract training data from the model using our method
- LLMs are not injective w.r.t. the output text, that function is definitely non-injective and collisions occur all the time
- for the same reasons, LLMs are not invertible from the output text
ruune
Clarification [0] by the authors. In short: no, you can't.
spuz
Thanks - seems like I'm not the only one who jumped to the wrong conclusion.
saurik
The input isn't the training data, the input is the prompt.
spuz
Ah ok, for some reason that wasn't clear for me.
qarl
> they are compressing the data beyond the known limits, or they are abstracting the data into more efficient forms.
I would argue that this is two ways of saying the same thing.
Compression is literally equivalent to understanding.
discreteevent
If we use gzip to compress a calculus textbook does that mean that gzip understands calculus?
RA_Fisher
For sure! Measuring parameters given data is central to statistics. It's a way to concentrate information for practical use. Sufficient statistics are very interesting, because once computed, they provably contain as much information as the data (lossless). Love statistics, it's so cool!
wizzwizz4
> That's why an LLM (I tested this on Grok) can give you a summary of chapter 18 of Mary Shelley's Frankenstein, but cannot reproduce a paragraph from the same text verbatim.
Unfortunately, the reality is more boring. https://www.litcharts.com/lit/frankenstein/chapter-18 https://www.cliffsnotes.com/literature/frankenstein/chapter-... https://www.sparknotes.com/lit/frankenstein/sparklets/ https://www.sparknotes.com/lit/frankenstein/section9/ https://www.enotes.com/topics/frankenstein/chapter-summaries... https://www.bookey.app/freebook/frankenstein/chapter-18/summ... https://tcanotes.com/drama-frankenstein-ch-18-20-summary-ana... https://quizlet.com/content/novel-frankenstein-chapter-18 https://www.studypool.com/studyGuides/Frankenstein/Chapter_S... https://study.com/academy/lesson/frankenstein-chapter-18-sum... https://ivypanda.com/essays/frankenstein-by-mary-shelley-ana... https://www.shmoop.com/study-guides/frankenstein/chapter-18-... https://carlyisfrankenstein.weebly.com/chapters-18-19.html https://www.markedbyteachers.com/study-guides/frankenstein/c... https://www.studymode.com/essays/Frankenstein-Summary-Chapte... https://novelguide.com/frankenstein/summaries/chap17-18 https://www.ipl.org/essay/Frankenstein-Summary-Chapter-18-90...
I have not known an LLM to be able to summarise a book found in its training data, unless it had many summaries to plagiarise (in which case, actually having the book is unnecessary). I have no reason to believe the training process should result in "abstracting the data into more efficient forms". "Throwing away most of the training data" is an uncharitable interpretation (what they're doing is more sophisticated than that) but, I believe, a correct one.
csense
A few critiques:
- If you have a feature detector function (f(x) = 0 when feature is not present, f(x) = 1 when feature is present) and you train a network to compute f(x), or some subset of the network "decides on its own during training" to compute f(x), doesn't that create a zero set of non-zero measure if training continues long enough?
- What happens when the middle layers are of much lower dimension than the input?
- Real analyticity means infinitely many derivatives (according to Appendix A). Does this mean the results don't apply to functions with corners (e.g. ReLU)?
fxwin
I don't like the title of this paper, since most people in this space probably think of language models not as producing a distribution (wrt which they are indeed invertible, which is what the paper claims) but as producing tokens (wrt which they are not invertible [0]). Also the author contribution statement made me laugh.
weinzierl
Spoiler but for those who do not want to open the paper, the contribution statement is:
"Equal contribution; author order settled via Mario Kart."
If only more conflicts in life would be settled via Mario Kart.
jascha_eng
Yeah, it would be fun if we could reverse-engineer the prompts from auto-generated blog posts. But this is not quite the case.
lou1306
Still, it is technically correct. The model produces a next-token likelihood distribution, then you apply a sampling strategy to produce a sequence of tokens.
KeplerBoy
Depends on your definition of the model. Most people would be pretty upset with the usual LLM providers if they drastically changed the sampling strategy for the worse and claimed to not have changed the model at all.
blackbear_
Tailoring the message to the audience is really a fundamental principle of good communication.
Scientists and academics demand an entirely different level of rigor compared to customers of LLM providers.
lou1306
So, scientists came up with a very specific term for a very specific function, its meaning got twisted by commercial actors and the general public, and it's now the scientists' fault if they keep using it in the original, very specific sense?
fxwin
I agree it is technically correct, but I still think it is the research paper equivalent of clickbait (and considering enough people misunderstood this for them to issue a semi-retraction that seems reasonable)
lou1306
I disagree. Within the research community (which is the target of the paper), that title means something very precise and not at all clickbaity. It's irrelevant that the rest of the Internet has an inaccurate notion of "model" and other very specific terms.
CGMthrowaway
Summary from the authors:
- Different prompts always map to different embeddings, and this property can be used to recover input tokens from individual embeddings in latent space
- Injectivity is not accidental, but a structural property of language models
- Across billions of prompt pairs and several model sizes, we find no collisions: no two prompts are mapped to the same hidden states
- We introduce SipIt, an algorithm that exactly reconstructs the input from hidden states in guaranteed linear time.
- This impacts privacy, deletion, and compliance: once data enters a Transformer, it remains recoverable.
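Purely as an illustration of the SipIt bullet above, here is a hypothetical brute-force sketch of exact recovery from hidden states; `hidden_state` and `vocab` are stand-ins for model access, and this naive search is not the authors' algorithm, which achieves the same result with a linear-time guarantee.

    import numpy as np

    # Reconstruct the prompt token by token: if the map from prefixes to
    # hidden states is injective, exactly one candidate token reproduces
    # the observed state at each position.
    def recover_prompt(target_states, vocab, hidden_state, tol=1e-6):
        prefix = []
        for pos, target in enumerate(target_states):   # one recorded state per input position
            for token in vocab:
                candidate = prefix + [token]
                if np.linalg.norm(hidden_state(candidate) - target) < tol:
                    prefix = candidate
                    break
            else:
                raise ValueError(f"no token matches position {pos}")
        return prefix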
orbital-decay
> - This impacts privacy, deletion, and compliance
Surely that's a stretch... Typically, the only thing that leaves a transformer is its output text, which cannot be used to recover the input.
mattlutze
If you claim, for example, that an input is not stored, but examples of internal steps of an inference run _are_ retained, then this paper may suggest a means for recovering the input prompt.
Kostchei
remains recoverable... for less than a training run of compute. It's a lot, but it is doable.
orbital-decay
Here's an output text: "Yes." Recover the exact input that led to it. (you can't, because the hidden state is already irreversibly collapsed during the sampling of each token)
The paper doesn't claim this to be possible either, they prove the reversibility of the mapping between the input and the hidden state, not the output text. Or rather "near-reversibility", i.e. collisions are technically possible but they have to be very precisely engineered during the model training and don't normally happen.
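A toy illustration of that collapse (numbers made up): two prompts can induce different next-token distributions and still produce the exact same greedy output, so the text alone cannot distinguish them.

    dist_a = {"Yes": 0.997, "No": 0.002, ".": 0.001}   # hypothetical distribution for prompt A
    dist_b = {"Yes": 0.998, "No": 0.001, ".": 0.001}   # hypothetical distribution for prompt B

    def greedy(d):
        return max(d, key=d.get)

    print(greedy(dist_a), greedy(dist_b))   # Yes Yes  -> identical output text
    print(dist_a == dist_b)                 # False    -> distinct in distribution space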
stared
It reminded me of "Text embeddings reveal almost as much as text" from 2023 (https://news.ycombinator.com/item?id=37867635) - and yes, they do cite it.
It has a huge implication for privacy. There is a common "mental model" that embedding vectors are like a hash, so you can store them in a database even though you would not store the plain text.
That is an incorrect assumption: a good embedding stores ALL of it, not just the general gist, but dates, names, passwords.
There is an easy fix for that: a random rotation, which preserves all distances.
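A sketch of that rotation trick (my own illustration with NumPy): an orthogonal matrix preserves all pairwise distances, so nearest-neighbour search still works while the stored vectors no longer live in the embedding model's coordinate system.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 768
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random orthogonal matrix

    emb = rng.standard_normal((100, d))                # stand-in embeddings
    rotated = emb @ q

    orig_dist = np.linalg.norm(emb[0] - emb[1])
    rot_dist = np.linalg.norm(rotated[0] - rotated[1])
    print(abs(orig_dist - rot_dist) < 1e-9)            # True: distances are unchanged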
FeepingCreature
This mental model is also in direct contradiction to the whole purpose of the embedding, which is that the embedding describes the original text in a more interpretable form. If a piece of content in the original can be used for search, comparison etc., pretty much by definition it has to be stored in the embedding.
Similarly, this result can be rephrased as "Language Models process text." If the LLM wasn't invertible with regards to a piece of input text, it couldn't attend to this text either.
gowld
> There is an easy fix to that - a random rotation; preserves all distances.
Is that like homomorphic encryption, in a sense, where you can calculate the encryption of a function of the plaintext without ever seeing the input or the computed function of the plaintext?
breppp
Does that mean that all these embeddings in all those vector databases can be used to extract all these secret original documents?
sidnb13
I don't think vector databases are intended to be secure, encrypted forms of data storage in the first place.
mattfinlayson
Author of related work here. This is very cool! I was hoping that they would try to invert layer by layer from the output to the input, but it seems that they do a search process at the input layer instead. They rightly point out that the residual connections make a layer-by-layer approach difficult. I may point out, though, that an RMSNorm layer should be invertible due to the epsilon term in the denominator, which can be used to recover the input magnitude.
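A sketch of that RMSNorm inversion, as I read the comment (ignoring the learned gain): with y = x / sqrt(mean(x^2) + eps), one gets mean(y^2) = m / (m + eps) for m = mean(x^2), and since eps > 0 keeps mean(y^2) strictly below 1, the lost magnitude can be solved for.

    import numpy as np

    def rmsnorm(x, eps=1e-6):
        return x / np.sqrt(np.mean(x**2) + eps)

    def invert_rmsnorm(y, eps=1e-6):
        q = np.mean(y**2)                      # always < 1 thanks to eps
        scale = np.sqrt(eps / (1.0 - q))       # recovers sqrt(mean(x^2) + eps)
        return y * scale

    x = np.random.default_rng(0).standard_normal(768)
    print(np.allclose(invert_rmsnorm(rmsnorm(x)), x))  # True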
yosito
In layman's terms, this seems to mean that given a certain unedited LLM output, plus complete information about the LLM, they can determine what prompt was used to create the output. Except that in practice this works almost never. Am I understanding correctly?
ctenb
No, it's about the distribution being injective, not a single sampled response. So you need a lot of outputs of the same prompt, and know the LLM, and then you should in theory be able to reconstruct the original prompt.
eapriv
No, it says nothing about LLM output being invertible.
frumiousirc
My understanding is that they claim that for every unique prompt there is a unique final state of the LLM. Isn't that patently false due to the finite state of the LLM and the ability (in principle, at least) to input an arbitrarily large number of unique prompts?
I think their "almost surely" is doing a lot of work.
A more consequential result would give the probability of LLM state collision as a function of the number of unique prompts.
As is, they are telling me that I "almost surely" will not hit the bullseye of a dart board. While likely true, it's not saying much.
But, maybe I misunderstand their conclusion.
simiones
I think their claims are limited to the "theoretical" LLM, not to the way we typically use one.
The LLM itself has a fixed size input and a fixed size, deterministic output. The input is the initial value for each neuron in the input layer. The LLM output is the vector of final outputs of each neuron in the output layer. For most normal interactions, these vectors are almost entirely 0s.
Of course, when we say LLM, we typically consider infrastructure that abstracts these things for us. Especially we typically use infra that takes the LLM outputs as probabilities, and thus typically produces different results even for the exact same input - but that's just a choice in how to interpret these values, the values themselves are identical. Similarly on the input side, the max input is typically called a "context window". You can feed more input into the LLM infra than the context window, but that's not actual input to the model itself - the LLM infra will simply pick a part of your input and feed that part into the model weights.
brna-2
Well, that is not how I read it, but: every final state has a unique prompt. You could have several final states share the same unique prompt.
simiones
> You could have several final states have the same unique prompt.
They explicitly claim that the function is injective, that is, that each unique input produces a unique output.
mowkdizz
I think I'm misunderstanding the abstract, but are they trying to say that given an LLM output, they can tell me what the input is? Or given an output AND the intermediate layer weights? If it is the first option, I could use as inputs "Only respond with 'OK'" and "Please only respond with 'OK'", which leads to two inputs producing the same output.
ndr
That's not what you get out of LLMs.
LLMs produce a distribution from which to sample the next token. Then there's a loop that samples the next token and feeds it back to the model until it samples an EndOfSequence token.
In your example the two distributions might be {"OK": 0.997, EOS: 0.003} vs {"OK": 0.998, EOS: 0.002} and what I think the authors claim is that they can invert that distribution to find which input caused it.
I don't know how they go beyond one iteration, as they surely can't deterministically invert the sampling.
simiones
Edit: reading the paper, I'm no longer sure about my statement below. The algorithm they introduce claims to do this: "We now show how this property can be used in practice to reconstruct the exact input prompt given hidden states at some layer [emp. mine]". It's not clear to me from the paper if this layer can also be the final output layer, or if it must be a hidden layer.
They claim that they can reverse the LLM (get the prompt from the LLM response) by only knowing the output layer values; the intermediate layers remain hidden. So their claim is that indeed you shouldn't be able to do that (note that this claim applies to the numerical model outputs, not necessarily to the output a chat interface would show you, which goes through some randomization).
TZubiri
Isn't a requirement for injectivity that no two inputs map to the same output? Whereas LLMs can result in the same output given multiple different inputs?
>we confirm this result empirically through billions of collision tests on six state-of-the-art language models, and observe no collisions
This sounds like a mistake. They used (among others) GPT2, which has pretty big space vectors. They also kind of arbitrarily define a collision threshold as an l2 distance smaller than 10^-6 for two vectors. Since the outputs are normalized, that corresponds to a ridiculously tiny patch on the surface of the unit sphere. Just intuitively, in such a high dimensional space, two random vectors are basically orthogonal. I would expect the chance of two inputs to map to the same output under these constraints to be astronomically small (like less than one in 10^10000 or something). Even worse than your chances of finding a hash collision in sha256. Their claim certainly does not sound like something you could verify by testing a few billion examples. Although I'd love to see a detailed calculation. The paper is certainly missing one.
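A rough version of that missing calculation (my own back-of-the-envelope, using the 768 dimensions and the 10^-6 threshold mentioned above): the fraction of the unit sphere within L2 distance eps of a fixed point scales like eps^(d-1), computed here in log10 to avoid underflow.

    import math

    d, eps = 768, 1e-6                    # hidden size and the paper's collision threshold
    a = (d - 1) / 2
    # Small-cap limit of the regularized incomplete beta function:
    # fraction ~ eps^(d-1) / (2a * B(a, 1/2))
    log_beta = math.lgamma(a) + math.lgamma(0.5) - math.lgamma(a + 0.5)
    log10_frac = (d - 1) * math.log10(eps) - math.log10(2 * a) - log_beta / math.log(10)
    print(f"P(two random unit vectors within {eps}) ~ 10^{log10_frac:.0f}")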