
Don't use cosine similarity carelessly

97 comments · January 14, 2025

pamelafox

If you're using cosine similarity when retrieving for a RAG application, a good approach is to then use a "semantic re-ranker" or "L2 re-ranking model" to re-rank the results to better match the user query.

There's an example in the pgvector-python that uses a cross-encoder model for re-ranking: https://github.com/pgvector/pgvector-python/blob/master/exam...

You can even use a language model for re-ranking, though it may not be as good as a model trained specifically for re-ranking purposes.

In our Azure RAG approaches, we use the AI Search semantic ranker, which uses the same model that Bing uses for re-ranking search results.
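For illustration, here is a minimal re-ranking sketch using the sentence-transformers CrossEncoder class; the model name and the candidate texts are placeholders, not the exact Azure or pgvector setup described above:

    # Re-rank vector-search candidates with a cross-encoder.
    from sentence_transformers import CrossEncoder

    query = "What did I do with my keys?"
    candidates = [
        "Where did I put my wallet",
        "I left them in my pocket",
        "Notes from Monday's standup",
    ]

    # The cross-encoder scores each (query, candidate) pair jointly,
    # instead of comparing two independently computed embeddings.
    model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    scores = model.predict([(query, c) for c in candidates])

    # Keep the vector-search results, but order them by cross-encoder score.
    for text, score in sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True):
        print(f"{score:.3f}  {text}")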

pamelafox

Another tip: do NOT store vector embeddings of nothingness: mostly whitespace, a solid image, etc. We've had a few situations with RAG data stores that accidentally ingested mostly-empty content (either text or image), and those dang vectors matched EVERYTHING. As I like to think of it, there's a bit of nothing in everything... so make sure that if you are storing a vector embedding, there is some amount of signal in that embedding.
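One simple guard for the text case, as a sketch: check the raw chunk for signal before embedding it at all, so near-empty content never reaches the index. The thresholds here are arbitrary placeholders.

    def has_enough_signal(text: str, min_chars: int = 20, min_unique_tokens: int = 5) -> bool:
        # Reject chunks that are mostly whitespace or contain almost no distinct tokens.
        stripped = text.strip()
        if len(stripped) < min_chars:
            return False
        return len(set(stripped.split())) >= min_unique_tokens

    chunks = ["   ", "the the the the the", "Reset your password from the account settings page."]
    to_embed = [c for c in chunks if has_enough_signal(c)]  # only the last chunk survives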

variaga

Interesting. On a project I worked on (audio recognition for a voice-command system), we ended up going the other way and explicitly added an encoding of "nothingness" (actually two: one for "silence" and another for "white noise"), then special-cased them ("if either 'silence' or 'noise' is in the top 3 matches, ignore the input entirely").

This was to avoid the problem where, when we only had vectors for "valid" sounds and an input arrived that didn't match anything in the training set (a foreign language, a garbage truck backing up, a dog barking, ...), the model would still return some word as the closest match (there's always a vector with the highest similarity), and frequently with high confidence. Even though the input didn't match anything in the training set, it would be "enough" more like one known vector than any of the others that it would pass most threshold tests, leading to a lot of false positives.

pbhjpbhj

That sounds like a problem with the embedding. Would you need to renormalise so that low-signal inputs can be well represented? A white square and a red square shouldn't have different levels of detail. Depending on the purpose of the vector embedding, there should be a difference between images of mostly white pixels and partial images.

Disclaimer, I don't know shit.

pamelafox

I should clarify that I experienced these issues with text-embedding-ada-002 and the Azure AI vision model (based on Florence). I have not tested many other embedding models to see if they'd have the same issue.

jhy

We used to have this problem in AWS Rekognition; a poorly detected face -- e.g. a blurry face in the background -- would match every other blurry face with high confidence. We fixed that largely by adding specific tests against this [effectively] null vector. The same will work for text or other image vectors.

short_sells_poo

If you imagine a cartesian coordinate space where your samples are clustered around the origin, then a zero vector will tend to be close to everything because it is the center of the cluster. Which is a different way of saying that there's a bit of nothing in everything I guess :)

jsenn

Same experience embedding random alphanumeric strings or strings of digits with smaller embedding models—very important to filter those out.

pilooch

Statistically, you want the retriever to be trained for cosine similarity. Vision-LLM retrievers such as DSE do this correctly. No need for a reranker once that's done.

OutOfHere

Precisely. Re-ranking is a "smell" in this regard. They are using the ada embeddings, which I consider to be of poor quality.

antirez

I propose a different technique:

- Use a large context LLM.

- Segment documents to 25% of context or alike.

- With RAG, retrieve fragments from all the documents, then do a first-pass semantic re-ranking by sending something like this to the LLM:

I have a set of documents I can show you to reply to the user question "$QUESTION". Please tell me, from the titles and best-matching fragments, which document IDs you want to see to better reply:

[Document ID 0]: "Some title / synopsis. From page 100 to 200"

... best matching fragment of document 0...

... second best fragment ...

[Document ID 1]: "Some title / synopsis. From page 200 to 300"

... fragments ...

LLM output: show me 3, 5, 13.

Then a new query, with the full selected documents attached, filling up to 75% of the context window:

"Based on the attached documents in this chat, reply to $QUESTION".

datadrivenangel

Slow/expensive. Good idea otherwise.

danielmarkbruce

but inference time compute is the new hotness.

sharath7693000

great, would love to see your application, is it on github?

bjourne

So word vectors solve the problem that two words may never appear in the same context, yet can be strongly correlated. "Python" may never be found close to "Ruby", yet "scripting" is likely to be found in both their contexts so the embedding algorithm will ensure that they are close in some vector space. Except it rarely works well because of the curse of dimensionality.

Perhaps one could represent words as vertices in a graph, rather than as vectors? Suppose you find "Python" and "scripting" in the same context. You draw a weighted edge between them. If you find the same pair again, you reduce the weight of the edge. Then to compute the similarity between two words, just compute the weighted shortest path between their vertices. You could extend it to pair-wise sentence similarity using Steiner trees. Of course it would be much slower than cosine similarity, but probably also much more useful.
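A toy sketch of that co-occurrence graph using networkx; the halving rule and the example pairs below are purely illustrative:

    import networkx as nx

    G = nx.Graph()

    def observe_cooccurrence(a, b):
        # First sighting gets weight 1.0; each repeat halves the edge weight,
        # so frequently co-occurring words end up "closer" in the graph.
        if G.has_edge(a, b):
            G[a][b]["weight"] *= 0.5
        else:
            G.add_edge(a, b, weight=1.0)

    for pair in [("python", "scripting"), ("ruby", "scripting"),
                 ("python", "scripting"), ("java", "compiler")]:
        observe_cooccurrence(*pair)

    # "python" and "ruby" never co-occur, but are connected through "scripting".
    print(nx.shortest_path_length(G, "python", "ruby", weight="weight"))  # 1.5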

jsenn

You might be interested in HippoRAG [1] which takes a graph-based approach similar to what you’re suggesting here.

[1]: https://arxiv.org/abs/2405.14831

tgv

This was called an ontology or a semantic network. See e.g. OpenCyc (although it's rather more elaborate). What you propose is rather different from word embeddings, since it can't compare word features (think: connotations) or handle ambiguity, and discovering similarities symbolically is not a well-understood problem.

yobbo

Embeddings represent more than P("found in the same context").

It is true that cosine similarity is unhelpful if you expect it to be a distance measure.

[0,0,1] and [0,1,0] are orthogonal (cosine 0) but have euclidean distance √2, and 1/3 of vector elements are identical.

It is better if embeddings also encode angles and absolute and relative distances in some meaningful way. Testing only cosine ignores all of that distance information.

OutOfHere

Modern embeddings lie on the surface of a hypersphere, making Euclidean distance equivalent to cosine similarity. And if they don't, I probably wouldn't want to use them.

yobbo

True, on a hypersphere cosine and Euclidean distance are equivalent.

But if random embeddings are Gaussian, they are distributed in a "cloud" around the hypersphere, so the two are not equivalent.
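For the on-sphere case, a quick numeric check of that relationship: for unit-norm vectors, the squared Euclidean distance is 2 - 2·cosine, so the two measures produce the same ranking.

    import numpy as np

    rng = np.random.default_rng(0)
    u, v = rng.normal(size=8), rng.normal(size=8)
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)  # project onto the unit hypersphere

    cos = u @ v
    d2 = np.sum((u - v) ** 2)
    print(d2, 2 - 2 * cos)  # identical up to floating-point error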

bambax

> In the US, word2vec might tell you espresso and cappuccino are practically identical. It is not a claim you would make in Italy.

True, and quite funny. This is an excellent, well-written and very informative article, but this part is wrongly worded:

> Let's have a task that looks simple, a simple quest from our everyday life: "What did I do with my keys?" [and compare it to other notes using cosine similarity]: "Where did I put my wallet" [=> 0.6], "I left them in my pocket" [=> 0.5]

> The best approach is to directly use LLM query to compare two entries, [along the lines of]: "Is {sentence_a} similar to {sentence_b}?"

(bits in brackets paraphrased for quoting convenience)

This will result in the same, or "worse" result, as any LLM will respond that "Where did I put my wallet" is very similar to "What did I do with my keys?", while "I left them in my pocket" is completely dissimilar.

I'm actually not sure what the author was trying to get at here? You could ask an LLM 'is that sentence a plausible answer to the question' and then it would work; but if you ask for pure 'likeness', it seems that in many cases, LLMs' responses will be close to cosine similarity.

stared

Well, "Is {sentence_a} similar to {sentence_b}?" is the correct query when we care about some vague similarity of statements. In this case, we should go with something in the line "Is {answer} a plausible answer to the question {question}".

In any way, I see how the example "Is {sentence_a} similar to {sentence_b}?" breaks the flow. The original example was:

    {question}
    
    # A
    
    {sentence_A}
    
    # B

    {sentence_B}
As I now see, I overzealously simplified that. Thank you for your remark! I edited the article. Let me know if it is clearer for you now.

echoangle

I also don’t see the problem: if I were asked to rank the sentences by similarity to the question, I wouldn’t rank a possible answer first. In what way is an answer to a question similar to the question?

Dewey5001

I believe the intention here was to highlight a use case where cosine similarity falls short, leading into the next section that introduces alternatives. That said, I would appreciate more detail in the 'Extracting the right features' section, if someone has an example I would love to see it.

deepsquirrelnet

> So, what can we use instead?

> The most powerful approach

> The best approach is to directly use LLM query to compare two entries.

Cross encoders are a solution I’m quite fond of: high-performing and much faster. I recently put an STS cross encoder up on huggingface, based on ModernBERT, that performs very well.

stared

Technically speaking, cross encoders are LLMs - they use the last layer to predict similarity (a single number) rather than the probability of the next token. They are faster than generative models only if they are simpler - otherwise, there is no performance gain (the last layer is negligible). In any case, even the simplest cross-encoders are more computationally intensive than those using a dot product from pre-computed vectors.

That said, for many applications, we may be perfectly fine with some version of a fine-tuned BERT-like model rather than using the newest AGI-like SoTA just to compare if two products are vaguely similar, and it is worth putting the other one in suggestions.

deepsquirrelnet

This is true, and I’ve done quite a bit with static embeddings. You can check out my wordllama project if that’s interesting to you.

https://github.com/dleemiller/WordLlama

There’s also model2vec doing some cool things as well in that area. So it’s cool to see recent progress in 2024/5 on simple static embedding models.

On the computational performance note, the cross encoder I trained using ModernBERT-base is on par with the RoBERTa-large model while being about 7-8x faster. Still way more complex than static embeddings, but on benchmark datasets, much more capable too.

sroussey

I had to look that up… for others:

An STS cross encoder is a model that uses the CrossEncoder class to predict the semantic similarity between two sentences. STS stands for Semantic Textual Similarity.

staticautomatic

Link please?

deepsquirrelnet

Here you go!

https://huggingface.co/dleemiller/ModernCE-base-sts

There’s also the large model, which performs a bit better.

janalsncm

Cross encoders still don’t solve the fundamental problem of defining similarity that the author is referring to.

Frankly, the LLM approach the author talks about in the end doesn’t either. What does “similar” mean here?

Given inputs A, B, and C, you have to decide whether A and B are more similar or A and C are more similar. The algorithm (or architecture, depending on how you look at it) can’t do that for you. Dual encoder, cross encoder, bag of words, it doesn’t matter.

deepsquirrelnet

I think what you’re getting at could be addressed a few ways. One is explainability — with an LLM you can ask it to tell you why it chose one or the other.

That’s not practical for a lot of applications, but it can do it.

For the cross encoder I trained, I have a pretty good idea what similar means because I created a semi-synthetic dataset that has variants based on 4 types of similarity.

Perhaps not a perfect solution when you’re really trying to split hairs about what is more similar between texts that are all pretty similar, but not all applications need that level of specificity either.

SubiculumCode

The article is basically saying: if the feature vectors are cryptically encoded, then cosine similarity tells you little.

Cosine similarity of two encrypted images would be useless; decrypt them and it becomes a bit more useful.

The strings are not the territory, in other words; the territory is the semantic constructs cryptically encoded into those strings. You want the similarity of the constructs, not the strings.

j16sdiz

I can't see that in this article at all.

I think what it says is under "Is it the right kind of similarity?":

> Consider books.
>
> For a literary critic, similarity might mean sharing thematic elements. For a librarian, it's about genre classification.
>
> For a reader, it's about emotions it evokes. For a typesetter, it's page count and format.
>
> Each perspective is valid, yet cosine similarity smashes all these nuanced views into a single number — with confidence and an illusion of objectivity.

rglynn

Yes that's a good way to understand it

weitendorf

Cosine similarity and top-k RAG feel so primitive to me, like we are still in the semantic dark ages.

The article is right to point out that cosine similarity is more of an accidental property of data than anything in most cases (but IIUC there are newer embedding models that are deliberately trained for cosine similarity as a similarity measure). The author's bootstrapping approach is interesting, especially because of its ability to map relations other than the identity, but it seems like more of a computational optimization or shortcut (you could just run inference on the input) than a way to correlate unstructured data.

After trying out some RAG approaches and becoming disillusioned pretty quickly, I think we need to solve the problem much more deeply by structuring models so that they can perform RAG during training. Prompting typical LLMs with RAG gives them input that is dissimilar from their training data and relies on heuristics (like the data format) and thresholds (like topK) that live outside the model itself. We could probably greatly improve this by having models define the embeddings, formats, and retrieval processes (i.e. learn their own multi-step or "agentic" RAG while they learn everything else) that best help them model their training data.

I'm not an AI researcher though and I assume the real problem is that getting the right structure to train properly/efficiently is rather difficult.

montebicyclelo

The author asks:

> Has the model ever seen cosine similarity?

Yes - most of the time, at least for deep learning based semantic search. E.g. for semantic search of text, the majority are using SentenceTransformers [1] models, which have been trained to use cosine similarity. Or e.g. for vector representations of images, people are using models like CLIP [2], which has again been trained to use cosine similarity. (Cosine similarity being used in the training loss, so the whole model is fundamentally "tuned" for cosine similarity.)

Articles like these cause confusion. E.g. I've come across people saying "you shouldn't use cosine similarity" and linking articles like this one when they've seen SentenceTransformers being used, when in fact you very much should be using cosine similarity with those models.

[1] https://sbert.net

[2] https://arxiv.org/abs/2103.00020

visarga

My chunk rewriting method is to use an LLM to generate a title, summary, keyword list, topic, parent topic, and grandparent topic. Then I embed the concatenation of all of them instead of just the original chunk. This helps a lot.

One fundamental problem of cosine similarity is that it works at the surface level. For example, "5+5" won't embed close to "10". Or "The 5th word of this phrase" won't be similar to "this".

If there is any implicit knowledge, it won't be captured by simple cosine similarity; that is why we need to draw out those implicit deductions before embedding. Hence my approach of pre-embedding expansion of the chunk's semantic information.

I basically treat text like code, and have to "run the code" to get its meaning unpacked.
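A sketch of that pre-embedding expansion; llm() and embed() are placeholders for whatever generation and embedding calls you use, and the prompt wording is only illustrative:

    def expand_and_embed(chunk, llm, embed):
        # Draw out the implicit knowledge before it hits the vector index.
        expansion = llm(
            "For the following text, produce a title, a short summary, a keyword "
            "list, its topic, its parent topic, and its grandparent topic:\n\n" + chunk
        )
        # Embed the expansion concatenated with the original chunk.
        return embed(expansion + "\n\n" + chunk)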

stared

If you ask, "Is '5+5' similar to '10'?" it depends on which notion of similarity you have - there are multiple differences: different symbols, one is an expression, the other is just a number. But if you ask, "Does '5+5' evaluate to the same number as '10'?" you will likely get what you are looking for.

gavmor

How do you contextualize the chunk at re-write time?

ewild

The original chunk is most likely stored with it in referential format, such as an ID in the metadata used to pull it from a DB, or something along those lines. I do exactly what he does as well: I have an ID metadata value pointing to a row in a DB that holds the text chunks and their respective metadata.

gavmor

The original chunk, sure, but what if the original chunk is full of eg pronouns? This is a problem I haven't heard an elegant solution for, although I've seen it done OK.

What I mean is, how can you derive topics from a chunk that refers to them only obliquely?

DigitalNoumena

HyDE is the way to go! Just ask the model to generate a bunch of hypothetical answers to the question in different formats and do similarity on those.

Or even better, as the OP suggests, standardise the format of the chunks and generate a hypothetical answer in the same format.

anArbitraryOne

Just want to say how great I am for calling this out a few months ago https://news.ycombinator.com/context?id=41470605

stared

It's nice to hear that! And judging from this thread, it's not just the two of us; otherwise, the title wouldn't have resonated with the Hacker News community.

This blog post stemmed from my frustration that people use cosine distance without a second thought. In virtually all tutorials on vector databases, cosine distance is treated as if it were some obvious ground truth.

When questioned about cosine similarity, even seasoned data scientists will start talking about "the curse of dimensionality" or some geometric interpretation, but forget that (more often than not) they are working with a hack.

anArbitraryOne

Your post was much better than my stupid comment, and I like the points you articulated. Cheers.

nejsjsjsbsb

You called it! But it is a pattern as old as the hills in the software industry. "Just add an index." "Put it in the cloud." "Do sprints." One size fits all!

khafra

That was a helpful list, in your second comment downthread. What are your top 3 metrics that perform the best on the greatest number of those features that make cosine distance perform poorly?

anArbitraryOne

Good question. Unfortunately, I'm just a keyboard warrior asshole that bad mouths things without offering solutions

PaulHoule

The real problem with LLMs is that you can't get a probability estimate out of "Is {sentence_a} a plausible answer to {sentence_b}?"

See https://www.sbert.net/examples/applications/cross-encoder/RE...

datadrivenangel

With an open model, you could probably reverse engineer the token probabilities and get that probability estimate.

Something like: "Is {sentence_a} a plausible answer to {sentence_b}? Respond only with a single yes/no token" and then look at the probabilities of those.

wongarsu

If the model is not open, turn up the temperature a bit (if the API allows that) and ask the above question multiple times. The less sure the model is, the more the answers will vary.

danielmarkbruce

Absolutely you can. Rip off the last layer, add a regression layer in its place, fine-tune.

OutOfHere

Of course one can just ask the LLM for the output probability. It will give a reasonably calibrated output, typically a multiple of 0.05. I would ask it for an integer percentage though.

cranium

That's also why HyDE (Hypothetical Document Embeddings) can work better when the context isn't clear. Instead of embedding the user question directly – and risking retrieving chunks that merely look like the question – you ask an LLM to hallucinate an answer and use that to retrieve relevant chunks. Obviously, the hallucinated bits are never used afterwards.
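A minimal HyDE sketch; llm(), embed(), and vector_store.search() are placeholders for your own generation, embedding, and retrieval calls:

    def hyde_retrieve(question, llm, embed, vector_store, k=5):
        # 1. Let the model hallucinate a plausible answer to the question.
        hypothetical = llm(f"Write a short passage that answers: {question}")
        # 2. Embed the hypothetical answer, not the question, and retrieve with it.
        chunks = vector_store.search(embed(hypothetical), top_k=k)
        # 3. The hallucinated text is discarded; only the retrieved chunks go into
        #    the final answering prompt.
        return chunks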

miven

AFAIK retrieving documents that look like the query is more commonly avoided by using a bi-encoder explicitly trained for retrieval; those are generally conditioned to align embeddings of queries with those of relevant documents, with each side getting a dedicated token marker, something like [QUERY] and [DOC], to make the distinction clear. The strong suit of HyDE seems to be settings where the documents and queries you're working with are too niche to be properly understood by a generic retrieval model and you don't have enough concrete retrieval data to fine-tune a specialized model.