
How big are our embeddings now and why?

numlocked

I don’t quite understand. The article says things like:

“With the constant upward pressure on embedding sizes not limited by having to train models in-house, it’s not clear where we’ll slow down: Qwen-3, along with many others is already at 4096”

But aren’t embedding models separate from the LLMs? The size of attention heads in LLMs, etc., isn’t inherently connected to how a lab might train and release an embedding model. I don’t really understand why growth in LLM size fundamentally puts upward pressure on embedding size, as the two are not intrinsically connected.

indeed30

I wouldn’t call the embedding layer "separate" from the LLM. It’s learned jointly with the rest of the network, and its dimensionality is one of the most fundamental architectural choices. You’re right though that, in principle, you can pick an embedding size independent of other hyperparameters like number of layers or heads, so I see where you're coming from.

However, the embedding dimension sets the rank of the token representation space. Each layer can transform or refine those vectors, but it can’t expand their intrinsic capacity. A tall but narrow network is bottlenecked by that width. Width-first scaling tends to outperform pure depth scaling: you want enough representational richness per token before you start stacking more layers of processing.
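
To make that concrete, here’s a minimal PyTorch sketch (toy sizes, stock nn modules standing in for a real decoder stack, not anything from the article): the embedding table fixes d_model once, and every layer after it only maps d_model to d_model, so adding depth never widens the per-token representation.

    import torch
    import torch.nn as nn

    d_model, n_heads, n_layers, vocab_size = 256, 8, 4, 32_000  # toy values; Qwen-3 is at 4096

    embed = nn.Embedding(vocab_size, d_model)        # the width is fixed here, once
    block = nn.TransformerEncoderLayer(
        d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model, batch_first=True
    )
    stack = nn.TransformerEncoder(block, num_layers=n_layers)

    tokens = torch.randint(0, vocab_size, (1, 16))   # (batch, seq_len)
    x = embed(tokens)                                # (1, 16, 256)
    y = stack(x)                                     # still (1, 16, 256), however many layers you add
    print(x.shape, y.shape)
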

So yeah, embedding size doesn’t have to scale up in lockstep with model size, but in practice it usually does, because once models grow deeper and more capable, narrow embeddings quickly become the limiting factor.

svachalek

All LLMs use embeddings; it’s just that embedding models stop there, while for a full chat/completion model that’s only the first step of the process. Embeddings are coordinates in the latent space of the transformer.
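
A small sketch of that split, again in plain PyTorch with illustrative names (the mean pooling is just one common choice, not something the thread specifies): both kinds of model share the token embedding and transformer backbone; an embedding model pools the hidden states and stops, while a chat/completion model projects them back to vocabulary logits to predict the next token.

    import torch
    import torch.nn as nn

    d_model, vocab_size = 256, 32_000                # toy values
    embed = nn.Embedding(vocab_size, d_model)
    block = nn.TransformerEncoderLayer(d_model=d_model, nhead=8,
                                       dim_feedforward=4 * d_model, batch_first=True)
    backbone = nn.TransformerEncoder(block, num_layers=4)
    lm_head = nn.Linear(d_model, vocab_size)         # only the chat/completion model needs this

    tokens = torch.randint(0, vocab_size, (1, 16))
    hidden = backbone(embed(tokens))                 # (1, 16, d_model): coordinates in the latent space

    # Embedding model: stop here and pool one vector per input text.
    text_embedding = hidden.mean(dim=1)              # (1, d_model)

    # Chat/completion model: keep going and predict the next token.
    next_token_logits = lm_head(hidden[:, -1])       # (1, vocab_size)
    print(text_embedding.shape, next_token_logits.shape)
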