DeepSeek's multi-head latent attention and other KV cache tricks

evertedsphere

> This blog post is mostly AI-generated using a PySpur workflow with minor human edits.

it's funny that this was clear about 5% in, just due to the classic chatgpt-style format and tone

TeMPOraL

Okay, so this is a PySpur ad, alright. Since I'm interested in this kind of tool, and I see on their GitHub that they don't have loops yet, I have to ask: does anyone know of a similar tool (node/DAG-based) that does support looping?

It seems to be a common problem; so far, I've played with Rivet, n8n, and "LLM Party" nodes for ComfyUI, and they all seem to focus on everything except letting you conveniently loop the flows.

visarga

> so this is a PySpur ad, alright

They should have also posted the PySpur pipeline; it would be interesting to see the agentic flow they used in this article. I am doing a lot of these kinds of workflows manually, copy-pasting stuff, and I'd like to have some tools to design AI flows. PySpur looks pretty interesting visually.

t55

We will push it to our repo very soon :)

t55

We do support loops! :) Just haven't advertised it in the GitHub readme yet.

One of the reasons we started building PySpur is precisely that none of the other tools support loops!

If you need more support, shoot me an email; you can find it at the bottom of the GitHub readme.

EDIT: Just added a screenshot to the readme.

didgeoridoo

It was going pretty well until the exclamation point at the end of the first paragraph.

t55

Replaced the exclamation point with a dot, hope it's better now!

t55

let me know if you have more feedback!

GalaxyNova

It's blatantly obvious; nobody uses so many bullet points.

pdntspa

I do :(

It's a great and concise way to write!

t55

Yep, me too. Always loved writing in bullet points, and in fact I explicitly prompted the LLMs to do so.

t55

True! We love bullet points! :)

Mizza

If this is the startup that finally unleashes AI spam bot articles and comments to the top of HackerNews, I'm gonna quit the internet and move into a log cabin.

carlmr

I loved bullet points before AI was a thing. Now I'm accused of being AI.

t55

Fair point! Do you prefer a different format or tone? We really like the concise bullet point format :)

evertedsphere

it's not the bullet points per se; the general structure of the analysis has a certain vibe to it, at a level deeper than just the visual presentation

but this is something where it's up to you to decide what you want from your ghostwriting. my comments would not a system prompt make

t55

I see! We wanted an internal overview of KV caching so we could quickly understand the latest methods without much fluff; I think it did a great job at that

llmthrow102

I'd rather eat sand than read an AI-generated article. If you don't care enough to write it, I don't care enough to read it.

visarga

I don't much like the bullet-point and listicle formatting, but the contents are pretty good: they cover many papers in a lightweight way, and you can get a decent overview in 10 minutes for what would take hours to research.

t55

> the contents are pretty good: they cover many papers in a lightweight way, and you can get a decent overview in 10 minutes for what would take hours to research.

Exactly, I'm still surprised it works so well.

Also, which formatting do you prefer? I explicitly prompted it to write everything in bullet points because I find it more digestible

visarga

I prefer prose because it has better flow. Usually I have to specify this manually, or the LLMs will happily generate bullet points and listicles.

More recently I prefer chain-of-thought prose to the final answer. I can trigger it with a prompt even on non-reasoning models and it's usually easier to follow.

esperent

How long would it take you to generate your own equally good summary of these papers using Claude? Maybe 30 seconds?

t55

Would be curious to compare the results!

t55

Hi, OP here; this article helped me a lot to better understand KV caches, which is ultimately why I co-wrote it with AI and read it several times before posting

seanvelasco

getting tired of these blog posts that end with "this post is AI-generated" as if it's going to surprise us. it's getting repetitive. imo, articles should say up front whether they're ai-generated, so the reader doesn't feel stupid after reading the whole thing

with that said, i love the content! will be bookmarking for future reference

t55

Hi, OP here. My intention wasn't to "gotcha" anyone by mentioning it at the end; it was simply to be upfront. Many blog posts put out these days are obviously 100% AI-generated, yet it's never mentioned. This one was probably 80%/20% (I still did many manual edits).

Glad you overall liked it!

esperent

At the start of the article there's a very clear profile picture and name of the author: Jean Kaddour.

If you want to be upfront, you should mention at the start that it's written by AI instead of showing this fake author.

This would give people the choice on whether to read it.

Putting it at the end is just to give you plausible deniability. Clearly your intention is to present this as if it were written by this Mr. Kaddour, which is a lie.

EDIT: they removed the fake author in response to this comment

t55

Ah, good catch! You seem to really assume the worst on our side, but fine, this is HN :)

The author was simply there because of the website template we used; by default it wants you to specify an author, so we did. I removed the author now; thanks for making me aware!

spencerf

I feel like we're living in strange times where your comment appears to be AI-generated as well. You complain about the surprise at the end and then offer up a similar structural surprise in your reply.

TeMPOraL

Strange times indeed, given that I naturally write comments structured similarly to GP. Hell, I'm probably even more of an LLM with human face than GP, because I capitalize the first words in my sentences, exactly like ChatGPT does.

amelius

Not sure if I'm getting this. Is this cache implemented as part of the forward pass through the network, in a general Python data structure like a dict? Or is the cache somehow part of the fabric of the neural network?

t55

The KV cache is typically stored in a data structure external to the trained weights—often a buffer or set of tensors kept alongside the model’s forward pass (e.g., in PyTorch, one might store it in a dictionary-like container). It’s not baked into the neural network parameters themselves; instead, it’s an auxiliary memory that holds precomputed key-value pairs so the model doesn’t have to re-encode past tokens on each new inference step.
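
A minimal sketch of that idea in PyTorch (illustrative only; the names and shapes here are made up, not any library's actual API):

    import torch

    # layer index -> (keys, values), each [batch, seen_len, heads, dim];
    # this lives outside the model weights and grows by one token per decode step
    kv_cache = {}

    def append_and_read(layer, k_new, v_new):
        # k_new, v_new: [batch, 1, heads, dim] for the newly generated token
        if layer in kv_cache:
            k, v = kv_cache[layer]
            k = torch.cat([k, k_new], dim=1)  # append instead of re-encoding history
            v = torch.cat([v, v_new], dim=1)
        else:
            k, v = k_new, v_new
        kv_cache[layer] = (k, v)
        return k, v  # attention for the new token runs against the full history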

ahzhou

It’s a tensor stored in GPU memory to improve inference throughput. Check out the PagedAttention paper (which introduced vLLM) for how most systems implement it nowadays.
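
For intuition, here's a toy sketch of the paged idea (purely illustrative; vLLM's real implementation differs): KV memory is carved into fixed-size blocks, and a per-sequence block table maps logical token positions to physical blocks, so a sequence can grow without reserving contiguous GPU memory up front.

    import torch

    BLOCK, HEADS, DIM, NUM_BLOCKS = 16, 8, 64, 1024
    k_pool = torch.empty(NUM_BLOCKS, BLOCK, HEADS, DIM)  # physical key blocks
    v_pool = torch.empty(NUM_BLOCKS, BLOCK, HEADS, DIM)  # physical value blocks
    free_blocks = list(range(NUM_BLOCKS))
    block_table = []  # logical block index -> physical block id (one sequence)

    def append_kv(pos, k, v):
        # k, v: [HEADS, DIM] for the token at position `pos`
        if pos % BLOCK == 0:  # crossed a block boundary: grab a free block
            block_table.append(free_blocks.pop())
        blk = block_table[pos // BLOCK]
        k_pool[blk, pos % BLOCK] = k
        v_pool[blk, pos % BLOCK] = v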

anvuong

Neither. Think of it as something like Redis or memcached: it's external to the program, and the program will run just fine without it, but it avoids a lot of duplicate work.

deepdarkforest

Very clean writeup. On the attention sinks, you mention they enable "infinite-length sequence processing". What does that mean in practice? Isn't DeepSeek still capped at 128k?

t55

Thank you! Great question.

"Infinite-length sequence processing" in StreamingLLM refers to handling much longer sequences than the model's training window (e.g., millions of tokens), by combining a sliding window for recent tokens with fixed attention sinks from the start of the sequence.

I can't speak for DeepSeek, but if I had to guess, I'd say that the infinite context window isn’t practical because storing all past tokens eventually becomes too expensive.
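
The cache policy behind StreamingLLM is simple to sketch (illustrative toy code; the parameter values are made up):

    # Keep the first n_sink tokens ("attention sinks") plus a sliding window
    # of the most recent `window` tokens; evict everything in between.
    def kv_positions_to_keep(seq_len, n_sink=4, window=1020):
        if seq_len <= n_sink + window:
            return list(range(seq_len))
        return list(range(n_sink)) + list(range(seq_len - window, seq_len))

    # e.g. at a million tokens the cache still holds only 1024 positions
    assert len(kv_positions_to_keep(1_000_000)) == 1024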

m348e912

Agreed on the writeup itself. It's beautifully written and presented. Kudos to Jean Kaddour and anyone else who may have been involved in putting it together.

t55

Thank you so much, glad you liked it

spps11

When you say sequence length, does it only count the output tokens or are input tokens also included in that?

Thanks for the post, it was an excellent read!

t55

Thanks for reading! In most contexts (including this one), sequence length encompasses both the initial input (prompt) tokens and the output tokens the model generates; it's the total length of all tokens processed by the model so far. For example, a 1,000-token prompt followed by 500 generated tokens gives a sequence length of 1,500.

maxmolecule

Please do! Seeing that you used multiple research papers to back up this writing inspired me to use this in my current research project for the literature review and eventual write up.

The template will be hugely helpful for a non-programmer like me.

8note

hmm. after my engineering degree put all of the vector math in the form

k = Wx

seeing

k = xW

is jarring. Is there a reason for using horizontal vectors? Common for data science docs?

t55

It’s mostly a convention. In many deep learning frameworks (PyTorch, TensorFlow, etc.), inputs are stored with the “batch × length × hidden-dim” shape, effectively making the token embeddings row vectors. Multiplying “xW” is then the natural shape-wise operation. On the other hand, classical linear algebra references often treat vectors as column vectors and write “Wx.”
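
A quick shape check in PyTorch makes the convention concrete (illustrative):

    import torch

    B, T, d_in, d_out = 2, 10, 512, 64
    W = torch.randn(d_in, d_out)
    x = torch.randn(B, T, d_in)   # batch of token embeddings as row vectors

    k = x @ W                     # "k = xW": [B, T, d_out], no reshaping needed

    # the same map in column-vector notation uses the transposed matrix:
    x_col = x[0, 0].unsqueeze(1)  # one token as a [d_in, 1] column
    k_col = W.T @ x_col           # "k = W^T x": [d_out, 1]
    assert torch.allclose(k_col.squeeze(1), k[0, 0], atol=1e-5)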

anvuong

Isn't batch-first a PyTorch thing? I started with TensorFlow, and it's batch-last.

t55

TFv1 or TFv2? AFAIK it's batch-first in TFv2

quanto

You are in the right here. Row vectors are common in (some) deep learning docs, but column vectors are the literature standard elsewhere.

sifar

It is also more efficient to compute k = xW with the weights stored transposed than to compute k = Wx.

karolist

What's specific to deepseek here that other models do not use, or are you just riding the keyword wave?

t55

DeepSeek proposed the multi-head latent attention technique! :)

As far as I know, they are the only ones using it so far

karolist

Fair point, thanks for the clarification; it seems this was first proposed in https://arxiv.org/pdf/2405.04434? I was confused by your title mentioning DeepSeek while the first paragraph reverts to "...language models like ChatGPT and DeepSeek faster at generating text".

t55

Right, that's a good point. I'll adjust the intro a bit. We wanted to provide a more holistic overview on what MLA is, what came before it, and why it matters :) hope it was useful!

pama

Neat. Can you share the workflow that created this blog? What models did it use?

t55

Thanks! Yes, will push it as a template to our repo (https://github.com/PySpur-Dev/pyspur) soon! We used o1 and Claude 3.5.

narmiouh

How were the images in the blog generated?