Chunking Attacks on File Backup Services Using Content-Defined Chunking [pdf]
26 comments
March 21, 2025
pvg
Less pdf-blobby blog post: https://www.daemonology.net/blog/2025-03-21-Chunking-attacks...
cperciva
Yeah I considered submitting that, but all the interesting details are in the paper.
pvg
The 'uninteresting' metadetails are what get people to reach the interesting details, so your first instinct was probably right. But I don't think it matters much in your case: you have enough local reputation to just blobpost, plus you're around to talk about the stuff in either form.
0cf8612b2e1e
I haven't read the paper: does this impact Restic or Borg, which encrypt chunks?
jszymborski
> The reason rolling hashes are relevant to our topic is that many file backup services use variations of rolling hashes to achieve CDC. This paper will primarily look at Tarsnap [9], a project by the second author, but we will also look at other schemes such as Borg [2] and Restic [6]
> It seems like compression as default (or even required) is important. Without compression, Borg and Restic are susceptible to known plaintext attacks. With compression, we still have theoretically sound (and harder) chosen-plaintext attacks but no known-plaintext attacks. Sadly, compression can also leak information for post-parameter extraction attacks, as shown in Example 4.3.
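For concreteness, here is a minimal sketch of how rolling-hash CDC turns plaintext into chunk boundaries. The gear-style table, mask, and size bounds below are illustrative and are not the actual parameters (or code) of Tarsnap, Borg, or Restic:

```python
import hashlib

# Illustrative gear-style rolling hash, not any real backup tool's chunker.
# The 256-entry table, mask, and size bounds are public and fixed, so chunk
# boundaries, and therefore chunk sizes, are a deterministic function of
# the plaintext alone.
GEAR = [int.from_bytes(hashlib.sha256(bytes([i])).digest()[:8], "big")
        for i in range(256)]
MASK = (1 << 13) - 1               # ~8 KiB average chunk size
MIN_CHUNK, MAX_CHUNK = 2048, 65536

def chunk(data: bytes):
    """Yield content-defined chunks of `data`."""
    start, h = 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + GEAR[b]) & 0xFFFFFFFFFFFFFFFF
        length = i - start + 1
        if (length >= MIN_CHUNK and (h & MASK) == 0) or length >= MAX_CHUNK:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]
```

Because nothing in that boundary decision is secret, anyone who can observe the sizes of the (encrypted, length-preserving) chunks can reason backwards about the plaintext, which is the leak the known-plaintext attacks exploit.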
cperciva
Yes. They have been notified about these attacks.
mananaysiempre
> I'm also exploring possibilities for making the chunking provably secure.
Seems like that's possible[1] to do in a fairly straightforward manner; the question is whether you can do it without computing a PRF for each byte.
[1] Obviously you're always going to leak the total data size and the approximate size of the new data in each transfer.
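As a rough sketch of the "PRF per byte" idea: HMAC-SHA-256 over a short sliding window stands in here for whichever PRF one would actually pick, and the window size, mask, and minimum chunk size are arbitrary:

```python
import hmac
import hashlib

WINDOW = 32           # bytes of context per PRF call (arbitrary)
MASK = (1 << 13) - 1  # ~8 KiB average chunk size (arbitrary)
MIN_CHUNK = 2048

def prf_chunk(data: bytes, key: bytes):
    """Decide each boundary with a keyed PRF over a sliding window.
    Roughly one HMAC call per input byte, which is where the
    performance question comes from."""
    start = 0
    for i in range(len(data)):
        if i - start + 1 < MIN_CHUNK:
            continue
        window = data[i - WINDOW + 1:i + 1]
        tag = hmac.new(key, window, hashlib.sha256).digest()
        if int.from_bytes(tag[:8], "big") & MASK == 0:
            yield data[start:i + 1]
            start = i + 1
    if start < len(data):
        yield data[start:]
```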
cperciva
Right. I need to implement this and see if the performance is too painful.
jszymborski
Would SipHash be too slow? I think it would help mitigate the problem since you can key it to prevent known-plaintext attacks, right?
EDIT: or maybe this keyed rolling hash https://crypto.stackexchange.com/questions/16082/cryptograph...
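A cheaper (though only heuristic, not provably secure) variant of that idea is to derive the chunker's lookup table from a secret key, so the boundary function itself is no longer public. Rough sketch only; BLAKE2b in keyed mode stands in for SipHash because it's in Python's hashlib, and none of this is from the paper:

```python
import hashlib

MASK = (1 << 13) - 1  # ~8 KiB average chunk size (arbitrary)
MIN_CHUNK = 2048

def keyed_gear_table(key: bytes):
    # Per-user secret table: the boundary function is no longer public.
    # Heuristic hardening only, not a security proof.
    return [int.from_bytes(
                hashlib.blake2b(bytes([i]), key=key, digest_size=8).digest(),
                "big")
            for i in range(256)]

def secret_table_chunk(data: bytes, key: bytes):
    """Gear-style CDC driven by a key-derived table instead of a fixed
    public one.  Per-byte cost stays at one table lookup."""
    gear = keyed_gear_table(key)
    start, h = 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + gear[b]) & 0xFFFFFFFFFFFFFFFF
        if i - start + 1 >= MIN_CHUNK and (h & MASK) == 0:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]
```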
masfuerte
Could this be mitigated by randomising the block upload order?
A fresh backup will be uploading thousands of blocks. You don't want to create all the blocks before uploading, but a buffer of a hundred might be enough?
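Something like a bounded shuffle buffer would do it; sketch only, and the buffer_size is just the ballpark figure above, not anything a real tool uses:

```python
import random

_rng = random.SystemRandom()  # OS CSPRNG; unpredictable order is the point

def shuffled_uploads(blocks, buffer_size=100):
    """Yield blocks in a randomized order using a bounded buffer, so the
    order the server sees no longer matches the order in which the
    chunker produced them."""
    buf = []
    for block in blocks:
        buf.append(block)
        if len(buf) >= buffer_size:
            yield buf.pop(_rng.randrange(len(buf)))
    _rng.shuffle(buf)  # drain whatever is left, shuffled
    yield from buf
```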
cperciva
Yes, that's one of the things we're planning on doing. It doesn't help with small archives (or archives which don't contain much new data), of course.
I was originally planning on having that as part of 1.0.41, but the implementation turned out to be harder than I expected.
pbsd
On page 10, should the ring R be GF(2)[X]/(X^32 - 1) and the map p be from {0,1}^{32} to R?
DaSHacka
Apologies for the off-topicness, but what's up with the various bschmidt[0-9]{3} usernames in this thread?
bschmidt301
My reading is that the primary vector is based on the size of the chunks (due to deterministic chunking and length-preserving encryption). Would padding chunks with random-length data (prior to encryption) help mitigate this at the cost of additional storage (and complexity)?
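As a rough sketch of that padding idea (the function names, the ~10% overhead cap, and the length-prefix framing are all illustrative, not from the paper):

```python
import os
import struct

def pad_random(chunk: bytes, max_overhead: float = 0.10) -> bytes:
    """Length-prefix the chunk and append a random number of bytes
    (here up to ~10% of the chunk size) before encryption, so the
    ciphertext length only loosely tracks the plaintext length."""
    cap = max(1, int(len(chunk) * max_overhead))
    pad_len = int.from_bytes(os.urandom(4), "big") % (cap + 1)
    return struct.pack(">I", len(chunk)) + chunk + os.urandom(pad_len)

def unpad(padded: bytes) -> bytes:
    (true_len,) = struct.unpack(">I", padded[:4])
    return padded[4:4 + true_len]
```

One caveat: random padding blurs any single chunk's size, but an attacker who can average over many related chunks can still estimate the underlying sizes, so padding to fixed size buckets arguably leaks less for the same storage overhead.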