Tensor Product Attention Is All You Need
105 comments
January 22, 2025 · carbocation
Zacharias030
If you don’t like the title, wait till you see this acronym: „… we introduce the Tensor ProducT ATTenTion Transformer (T6), a new model architecture…“
imjonse
There is a famous transformer model named T5 from Google, and also S4, S5, and S6 (Mamba) in the LLM space, so it is not unusual naming.
svantana
Yes, but T5 is at least a normal acronym: Text-To-Text Transfer Transformer (albeit a bit forced)
TeMPOraL
That it's not unusual tells us that too many researchers in the field are chasing citations and fame at the expense of doing quality work.
black_puppydog
"... is all you need" isn't unusual either, and yet GGP isn't happy about it (and I understand why)
superjan
I propose T-POT (Tensor Product attentiOn Transformer)
prometheon1
TPOT already exists in the ML field, it was a somewhat popular autoML package a few years ago if I remember correctly and still seems to be around: https://github.com/EpistasisLab/tpot2
bbcc90
(trying to move the critique beyond the title...)
When trying to deploy LLMs with larger context windows in constrained environments, two things start to hurt: a) increased memory footprint from the longer KV cache, and b) slower decoding due to the longer context window. This paper addresses a) only, which is useful, but we are still left with b) (right?)
verdverm
The more meaningful contribution may be (section 3.4)
> These variants illustrate TPA’s versatility in balancing memory cost, computational overhead, and representation power. By choosing which dimensions (heads or tokens) remain contextual and adjusting ranks (R_Q, R_K, R_V), TPA unifies multiple existing attention mechanisms—such as MHA, MQA, and GQA—under one framework, while potentially reducing the KV cache size by an order of magnitude during autoregressive inference.
re: the title, it might be the true one if their proofs hold up
---
I'm now curious if the Element-wise Attention is All You Need preprint can be fit into this framework. Sadly my math is not currently up to the task. It appears to offer even better computational savings during both training and inference while maintaining accuracy, though only tested with a smaller model
hansvm
EA doesn't quite fit under the same umbrella. EA has a constant cache size (it's just another classical recurrent architecture inspired by approximating transformers), whereas this paper gives speedups to a variety of true attention mechanisms which still require caches proportional to the sequence length.
verdverm
very succinct and insightful, thank you!
ashupadhi01
Curious to know what mathematics you are comfortable with. If you are able to understand the papers you mentioned, you must be in the 99th percentile.
verdverm
I was never good at proof writing. I found group theory and algebra interesting, topology and analysis eluded me. It's just been a while since I did any serious math thinking
llm_trw
It addresses b too since decompositions are always smaller than the original tensor. It's usually the case that memory access is also slower than matrix multiplications so this will be faster. Burning flops to save memory movement.
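A rough back-of-envelope sketch of that trade-off (the dimensions and rank below are hypothetical, not taken from the paper): caching small per-token factors instead of the full K/V cuts the memory read per decoded token by roughly an order of magnitude, at the price of extra multiplications to rebuild K and V on the fly.

    # Hypothetical dimensions -- not the paper's configuration.
    def kv_cache_bytes(seq_len, n_heads=32, head_dim=128, bytes_per=2):
        # standard MHA cache: K and V, each (n_heads, head_dim) per token
        return seq_len * 2 * n_heads * head_dim * bytes_per

    def factored_cache_bytes(seq_len, n_heads=32, head_dim=128, rank=2, bytes_per=2):
        # factorized cache: per token, `rank` head-side and `rank` token-side
        # factors for each of K and V; the full K/V are rebuilt on the fly
        return seq_len * 2 * rank * (n_heads + head_dim) * bytes_per

    for n in (4_096, 131_072):
        full, fact = kv_cache_bytes(n), factored_cache_bytes(n)
        print(f"{n:>7} tokens: full {full / 2**20:8.1f} MiB, "
              f"factored {fact / 2**20:6.1f} MiB, ratio {full / fact:.1f}x")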
menaerus
> It's usually the case that memory access is also slower than matrix multiplications so this will be faster. Burning flops to save memory movement.
I haven't read this paper (yet) but isn't this the case that mostly applies to training and not so much to inference? A good example would be flash-attention: it trades higher flops for better memory utilization, but it's mostly irrelevant in inference workloads.
verdverm
They claim an inference time savings to the kv cache
wseqyrku
> (trying to move the critique beyond the title...)
This is kind of a theme on HN now. The top comments are completely beside the point of the article/story/etc.
msoad
I know. It is sad. Naming can also be seen as a way of showing respect to a hugely impactful paper if you want to be positive about it.
whymauri
I really can't with these paper titles anymore, man.
magicalhippo
There's an Ask HN thread going[1] asking about what people have done with small LLMs. This seems like a possible application. I asked Granite 3.1 MOE 3B to generate a title based on the abstract and it came up with:
Tensor Product Attention: A Memory-Efficient Solution for Longer Input Sequences in Language Models
Maybe a Greasemonkey script to pass arXiv abstracts to a local Ollama could be something...
wisty
Clickbait paper titles considered harmful?
moffkalast
Clickbait paper titles cause cancer, study shows
hatthew
Clickbait paper titles cure cancer (if you print them out, set them on fire, and incinerate the cancer cells*)
*side effects TBD
TeMPOraL
in mice
gbnwl
OK I'll admit I chuckled
llm_trw
Only if the paper is handwritten.
anigbrowl
By 2038 all scientific papers will be titled 'Bruh.' While this might at first seem a recipe for confusion, the fundamental interconnectedness of all things as demonstrated by Ollama (Googol 13) highlights the fact that pretty much any insight is as good as any other, and all are descriptions of the same underlying phenomenon. Freed from constraints like survival or the necessity to engage in economic activity, humanity in the 2030s will mainly devote itself to contemplating amusing but fundamentally interchangeable perspectives within increasingly comfy pleasure cubes.
01HNNWZ0MV43FF
As foretold by Joseph Campbell
smlacy
Bruh is all you need
ilove196884
I hate how paper titles are worded like SEO techniques.
spiritplumber
Turn something into a metric and it will be misused. Ever always was
jampekka
Attention is all you need!
TeMPOraL
> Ever always was
Always has been.
silent gunshot
verdverm
This is a riff on the original "attention is all you need" paper; there have been a few of these lately.
Matthyze
A few? A multitude.
LPisGood
Having a catchy title is great for short hand. If it didn’t have such a catchy name I probably wouldn’t remember Flush+Reload, Spectre, or even Attention is All You Need
Upvoter33
On the one hand, sure, it's dumb.
But, on the other hand, it's hard to get researchers to read your paper, esp. in fast-moving areas. Every little thing might be the difference between reading the abstract or not. Reading the abstract might lead to reading the intro. And so on.
So, for better or worse, the competition for human eyeballs is real.
Ironically, in this case, "attention" is all that the authors want.
WesolyKubeczek
Preach, mate. The bloody Beatles and their bloody catchy refrains and hit songs. They sing “Love is all you need” once, and now it’s everywhere! Can’t hide from it. Even scientific papers! Especially scientific papers!
Bloody hell and brimstone. It's been a crazy 57 and a half years already.
amelius
And they don't formally show that the titles are correct, therefore I don't think these papers belong in CS.
sva_
And I can't with the constant off-topic meta-discussions about the titles of papers.
esafak
Tensor decomposition has traditionally suffered from high computational complexity. Is it an issue here?
verdverm
My math is rusty, but it looks to have a higher complexity than the original attention. I cannot say if it is an issue. Generally it seems we are willing to spend more computation at training time if it produces better results at inference time. In this case they are reducing the resources needed at inference time (by an order of magnitude for the KV cache) or enabling longer sequences given the same resources.
There's another paper I saw yesterday, "Element-wise Attention is All You Need" which looks like an early preprint, written by a solo author with a solo A800, and tested on some smaller problems. If the results hold up for language benchmarks, it could reduce resource requirements during training as well. It looks to have a lower complexity when scaling
davmre
They're not proposing to apply tensor decomposition to an existing collection of weights. It's an architecture in which the K, V, and Q tensors are constructed as a product of factors. The model works with the factors directly and you just need to compute their product on the forward pass (and adjoints on the backwards pass), so there's no decomposition.
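For intuition, here's a minimal numpy sketch of that idea (the names, rank, and normalization are my own illustration, not the paper's definitions): each token's K is formed as a sum of outer products of small factors, and only those factors would ever need to be cached.

    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_heads, head_dim, rank = 512, 8, 64, 2

    # illustrative linear maps from the hidden state to the two factor sets
    W_a = rng.standard_normal((d_model, rank * n_heads)) / np.sqrt(d_model)
    W_b = rng.standard_normal((d_model, rank * head_dim)) / np.sqrt(d_model)

    def k_factors(h):
        # h: hidden state of one token, shape (d_model,)
        a = (h @ W_a).reshape(rank, n_heads)   # head-side factors
        b = (h @ W_b).reshape(rank, head_dim)  # token-side factors
        return a, b

    def k_full(a, b):
        # rebuild this token's K, shape (n_heads, head_dim), as a sum of outer products
        return np.einsum('rh,rd->hd', a, b) / rank

    a_t, b_t = k_factors(rng.standard_normal(d_model))
    print(k_full(a_t, b_t).shape)                    # (8, 64)
    print(a_t.size + b_t.size, n_heads * head_dim)   # 144 floats cached vs 512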
absolutelastone
Looks like it's just a matrix decomposition in the paper. I'm guessing anyway. These attention papers are always a painful mix of mathematical, quasi-mathematical, and information retrieval jargon.
There is something in the GitHub repo about higher-order decompositions. I can't find where the method for factoring is given.
verdverm
I chuckled when I read, in S-3.1
> Specifically, for each token t, with a small abuse of notation, we define:
jamessb
"Abuse of notation" is a commonly used term: https://en.wikipedia.org/wiki/Abuse_of_notation
dartos
At a sniff test it would make sense.
Trading computational complexity for space.
jdefr89
Every day there are literally tons of papers titled "XYZ is All You Need". At this point we apparently need thousands of things…
hangonhn
For those of us who are lay people outside of machine learning and AI, what was the critical insight that made “attention all you need” in the original Transformer paper?
yorwba
The abstract https://arxiv.org/abs/1706.03762 explains it well:
"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely."
They did not invent attention, but while previous language models had used attention as an auxiliary mechanism, they removed everything but the attention and the models still worked. Really, the title already says it all.
danielbln
I believe the insight was the introduction of the attention mechanism, which allows the NN to look at all words (well, embeddings) in parallel and make connections between them, instead of processing things purely sequentially.
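For the curious, here's a minimal sketch of scaled dot-product attention (single head, no learned projections, no causal mask) showing that the all-pairs comparison happens in one batched matrix product rather than a step-by-step scan:

    import numpy as np

    def attention(Q, K, V):
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                   # (seq, seq) pairwise similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
        return weights @ V                              # each output mixes all values

    X = np.random.default_rng(0).standard_normal((5, 16))  # 5 token embeddings
    print(attention(X, X, X).shape)                         # (5, 16)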
imtringued
I don't remember the contents of that paper, but I can give you some context based on my knowledge of "traditional" theoretical computer science.
In theoretical CS you have state machines, pushdown automata, and Turing machines. What may surprise you is that the difference between those three does not lie in the way the algorithm for them is represented. They all use state transition diagrams!
Pushdown automata are more powerful than state machines, because they have a stack onto which they can push new data, peek at the top of the stack, or pop data off the stack.
Now here is the kicker! How do you get a Turing machine? You take a pushdown automaton and remove the restriction that you can only push, peek, or pop at the head! You can now move the head pointing at the stack; your stack has turned into a tape!
The key difference lies in the data structure that the transition diagram is manipulating, not the algorithm itself!
The attention mechanism of the transformer architecture is analogous to a tape whose cells can each be written only once, and in theory that alone is enough to emulate a full Turing machine with read, write, rewrite, and delete semantics.
freilanzer
That attention works and is highly parallelisable.
AxesPushPatty
Another approach has had separate physics-informed neural networks learn the tensor product. The authors reformulated the initial optimization problem so that it is structured as tensors. I assume that tensor products could be another factor in improving the actual computations.
cute_boi
> a novel attention mechanism
Why does every paper have to mention the word "novel"? And these titles are getting crazier day by day.
patrick451
Because to publish in a real journal, you typically need both novelty and for your work to be "interesting". The job of the abstract and introduction of a paper (where the word "novel" normally lives) is to sell the reviewer that the paper should be published and to sell you that you should read and cite it.
NitpickLawyer
If your paper is scored / gated on "novel factor" by admission committees, then applicants will over-use that term.
verdverm
There are a number of papers which aim to improve the attention aspect of models, all of them derivations of the original "Attention is All You Need" paper. A pattern of "'blank' Attention is All You Need" has emerged.
LPisGood
This is not new at all, by the way. Bringing novel ideas and techniques is kind of the whole point of research, and explicitly describing what novel thing you did is a good thing to do in an intro/abstract.
sva_
The main contribution of the paper aside, I have to say that the background section 2 is very neatly and succinctly written.
t_mann
> Because memory consumption grows linearly with sequence length, the maximum context window is limited by practical hardware constraints
I thought the number of parameters grows quadratically with context window length - what do they mean?
joshdavham
I'm sorry but can people please stop naming their papers "X is all you need"? It's super annoying.
recursive
Are you saying... you consider it harmful?
joshdavham
> Are you saying... you consider it harmful?
No.
oliverx0
I see what you did there
pepinator
A more precise title would be better, so, yeah, it's harmful.
WithinReason
Consider submitting an article titled "all you need is considered harmful"
edflsafoiewq
Why? It clearly and instantly communicates the genre of result the paper presents.
Deutschland314
It's clearly a play on words, as it looks like a follow-up paper.
thunkingdeep
If you don’t pay to read papers, you don’t get to complain about the titles, imo.
I hate ads, but I’m not paying for YouTube Premium either. That’s how it goes. I get ads.
Vampiero
> I hate ads, but I’m not paying for YouTube Premium either. That’s how it goes. I get ads.
No that's not how it goes. You get Ublock Origin and then you don't get ads. Simple as that.
If you don't like ads and don't fight against them, it means you accept ads and want to see more of them shoved down our collective throat, at least from the perspective of marketers and industries who rely on ads. That's how we ended up in this predicament in the first place. Lazy compliance.
If YouTube isn't sustainable without ads, it should die so that natural selection can take over and a better ecosystem can finally take its place. Every single "YouTuber" hates the platform, mostly because it has zero transparency, being a Google product. The viewers hate it too, because it constantly takes down their favorite videos and creators and because it's full of ads.
The only reason it's (still) the main site for hosting videos is quite literally just ad-fueled inertia due to the intrinsic cost of hosting videos. If ads didn't exist the only sustainable solution would be something less centralized like Peertube. And to me that's a desirable outcome.
philipov
"The forest must burn to make room for new trees."
jampekka
Neither authors nor their institutions get a penny from paywalled papers. Oftentimes they have to pay to publish. Authors choose the titles; sometimes reviewers (not paid a dime either) can demand changes to the title, and the academic editor (paid zilch too) can require the changes.
This doesn't apply to arXiv though, as it is not peer reviewed nor edited and is funded by various institutions.
Admittedly the academic publishing system is so corrupt that it's hard to fathom, so it's easy to misunderstand it.
Typically you do pay for the papers (and publisher profits), either through taxes or inflated product prices.
black_puppydog
The authors are from, let's see, China and California. Then I guess a good chunk of the HN crowd is entitled to bitch about the title?
My kingdom for renaming this paper to something like "Tensor Product Attention is a Memory-Efficient Approach for Long-Sequence Language Modeling"