
Bzip3: A spiritual successor to BZip2

180 comments

· February 1, 2025

mmastrac

I've studied the Burrows-Wheeler Transform, I understand the transformation, I've re-implemented it countless times for kicks, I see how it improves compressibility, but for the life of me the intuition of _why_ it works has never really clicked.

It's a fantastic bit of algorithmic magic that will always impress me to see it.

unnah

The Burrows-Wheeler transform has been described as a unique algorithmic idea in that there are no non-trivial variations or related algorithms, unlike more conventional compression algorithms, which can be tweaked and improved in so many ways. There is no general compression theory in which BWT could be described as a special case.

It looks to me that the above still holds: Bzip2 and Bzip3 are simply combining more conventional compression algorithms with the BWT, which itself is still the same old transform. Bzip2 does Huffman coding after BWT, and Bzip3 does arithmetic coding.

derf_

> There is no general compression theory in which BWT could be described as a special case.

I would not really say this is true. BWT is spiritually similar to the various Prediction by Partial Matching (PPM) algorithms, except that instead of needing to decide how much context (i.e., preceding symbols) to use to model the next symbol, and carefully learning the probabilities for each unique context, it naturally sorts symbols with the same context (of _any_ length) so they appear next to each other, and relies on adaptation to update your learned statistics as you move from one context to the next, without ever explicitly tracking context boundaries [0].

[0] N.J. Larsson, "The Context Trees of Block Sorting Compression," 1998. https://ieeexplore.ieee.org/abstract/document/672147

unnah

Thanks for the reference, looks interesting.

thesz

It definitely is related to prediction by partial match (PPM).

BWT sorts the rotations of the data, and the effect is that identical suffixes group together:

  ...
  "Bzip2 and Bzip3 are simply combining more"
  "Bzip3 are simply combining moreBzip2 and "
The character preceding the suffix goes to the end and is what gets output. This is much like PPM going backward. There is a PPM* algorithm (unbounded context length) whose authors considered reconstructing contexts from the data, using something like an LZSS search. The same idea is in BWT - the context is reconstructed from the data.

BWT also breaks near dependencies in the data, which is why move-to-front followed by Huffman or arithmetic coding works well there.
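
For anyone who wants to poke at this, here is a toy sketch of the forward transform in Python (assuming a short in-memory string and a naive sort of all rotations; real tools build a suffix array instead of materializing every rotation):

    def bwt_rotations(text: str, sentinel: str = "\0") -> tuple[list[str], str]:
        """Return the sorted rotations and their last column (the BWT output)."""
        s = text + sentinel                       # sentinel marks the original row
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return rotations, "".join(r[-1] for r in rotations)

    if __name__ == "__main__":
        rotations, last = bwt_rotations("Bzip2 and Bzip3 are simply combining more")
        for r in rotations[:5]:   # rotations sharing a prefix sit next to each other
            print(repr(r))
        print(repr(last))         # the clustered last column that MTF + an entropy coder then squeeze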

unnah

Wow, thanks. As always, the best way to learn more on the internet is to be confidently and totally wrong!

altairprime

Can BWT be combined with zstd, which uses asymmetric numeral systems?

CJefferson

Yes, it would actually be interesting to just have a BWT pass which does no compression, so we could then try lots of post-compression options.

gopalv

> BWT be combined with zstd

BWT can be combined with anything that does RLE, and you get a benefit.

What it does is give RLE more to work with.

klodolph

I think you would just need ANS, not the rest of zstd.

thrtythreeforty

Thank you for the reference. I learned something new today. That algorithm is wild. If you had shown me the transform and asked if it had an inverse, I would have said of course it doesn't, it's too weird.

loxias

I always understood it as working because of the predictability of a symbol/letter/token given the previous one.

Sorting all the shifts of a string puts all the characters in order, then looking at the last column shows you all the _preceding_ characters. If there's any predictability there (which there often is), it's now easier to compress. It's sorta like an entropy coder in that way.

I've never thought of it as being that deep, and I've understood it since I was a kid -- building an intuition for "why" the FFT works is much harder -- but that being said, I clicked quickly to reply thinking "that's easy! I can explain this!" then struggled for a while trying to get the picture in my mind into text. :)

crazygringo

But shouldn't Huffman coding already detect that same predictability and compress it the same?

What I don't get isn't the benefits of BWT on its own. It's why BWT should add any additional benefit if you're already doing Huffman.

loxias

> What I don't get isn't the benefits of BWT on its own. It's why BWT should add any additional benefit if you're already doing Huffman.

Ahhhh. Now we're on the same page. :) Seeing how it helps when combined is somewhat subtle/non-obvious. I believe it relates to BWT and Huffman both being approximations of something more optimal. The two transforms could also have different window sizes -- one rarely does BWT on a whole 1GB file -- which introduce inefficiencies. Huffman coding is also only optimal in the very large alphabet and very long data limits. As your data length and alphabet size decrease, it gets less optimal.

Put differently, "I think that's a wonderfully phrased question, this _is_ my specialization/subfield, and I'm gonna need to chew on it for a while."

jltsiren

One way to look at data compression is splitting it into a model and an encoder. The model describes the data, while the encoder encodes (or equivalently predicts) the data according to the model. The compressed output consists of the serialized model and the encoded data. BWT is a model, while Huffman is an encoder.

Huffman takes a probability distribution and a symbol and encodes the symbol according to the distribution. If you encode all symbols independently according to the same distribution, you probably don't get very good compression.

You get a bit better results with a model that has a separate distribution for each context. If the previous symbols were X, Y, and Z, you encode the next symbol according to the distribution for context XYZ. This approach doesn't really scale, because the size of the model grows rapidly (exponentially in a naive implementation) with context length. You get better compression with an adaptive model. You start with a uniform distribution and update the available contexts and distributions after encoding each symbol. On the one hand, you don't have to store the model explicitly. But on the other hand, updating the model is very slow.

Burrows-Wheeler transform is an implicit model. It sorts the symbols according to the context that follows them, and it does that simultaneously for each context length. Because you don't have to store the model explicitly, you can effectively use longer context lengths than with a fixed explicit model. And because you don't have to update an explicit model after encoding each symbol, using the BWT is much faster than using an adaptive model.
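
As a toy illustration of the explicit fixed-context model described above (not how bzip2 or bzip3 work internally; the context length k and the sample string are made up here), a sketch that just counts, per context, how often each next byte follows:

    from collections import Counter, defaultdict

    def order_k_model(data: bytes, k: int = 2) -> dict[bytes, Counter]:
        """Count, for every k-byte context, how often each next byte follows it.
        An encoder (Huffman/arithmetic) would turn these per-context counts into
        code lengths; the number of possible contexts grows roughly as 256**k."""
        model: dict[bytes, Counter] = defaultdict(Counter)
        for i in range(k, len(data)):
            model[data[i - k:i]][data[i]] += 1
        return model

    if __name__ == "__main__":
        m = order_k_model(b"the theme of the thesis", k=2)
        print({chr(b): c for b, c in m[b"th"].items()})   # {'e': 4} -- 'e' dominates after "th"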

palaiologos

Hi, tool author here.

Huffman coding is a static minimum-redundancy code. What this means is that it finds an optimal assignment of bit sequences to letters in the input alphabet (commonly US-ASCII or extensions). This however means that Huffman coding can not exploit redundancies that stem from the concrete sequence of characters. For example, you could easily predict that an `e` comes after `Th`, but Huffman coding can not know that.

Hence, after applying the Burrows-Wheeler transform, you need some sort of higher-order transform (i.e. a transform that considers more than just individual bytes) which can somehow profit from the changed distribution of the transform's output. But we will get to that in a second.

The joke here is that the Burrows-Wheeler transform is closely related to suffix trees and suffix arrays, which are often used in bioinformatics and HPC for full-text search. If you want to find a pattern of length `p` in a text of length `n` and you already have a suffix tree of the original text, the search is linear in the length /of the pattern/ - i.e. O(p). The suffix tree stores all suffixes of a string in a compressed manner (i.e. it has a linear space overhead, approximately O(20n) as given by Gusfield, D. (1997). Algorithms on Strings, Trees and Sequences: Computer Science and Computational Biology. Cambridge University Press), so you can search for a word in it simply by traversing from the root node to an internal or leaf node, following the sequence of bytes that comprise the word.

As such, a suffix tree (and equivalently suffix array and the BWT, which is trivially computed from a suffix array) form something which can be thought of as a static PPM model. Notably real world implementations of PPM use suffix trees as a part of their main storage data structure (e.g. PPMd). What this all means is that given a suffix tree, we can very cheaply give the probability distribution for the next byte that follows a given fixed-order sequence of bytes. This is nice, because then e.g. an order-2 predictor would be able to tell that `Th` is followed by `e` once enough data has been gathered.

As you can probably guess, the more preceding bytes you know, the better will be your estimate for what is the most likely next byte. But the larger your context, the more expensive the searches and computations become due to pointer chasing in the suffix tree.

So how do we remedy this? We notice that the Burrows-Wheeler transform essentially clusters similar contexts together, meaning that a low order predictor (= faster, simpler) on BWT compresses as well as a high order predictor (= slow, complicated) on the original data, at the cost of an extra transformation. This is viable, because the Burrows-Wheeler transform can be quickly computed and there have been recent advancements in running it on the GPU. So what this means is that bzip3 uses BWT + a low order predictor with an arithmetic coder to encode the bytes, meaning that it can make use of high order statistics for compression and performs comparably at a faster speed.
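
To make the "low-order model after BWT" point concrete, here is a rough, non-authoritative sketch (not bzip3's actual pipeline, which, per the author's comments in this thread, uses SAIS-style suffix sorting and an arithmetic coder): it builds the BWT from a naively sorted suffix array, applies move-to-front, and compares order-0 entropy before and after. On repetitive text the post-transform stream codes in far fewer bits per symbol even though the coder sees no context at all.

    import math
    from collections import Counter

    def order0_entropy(seq) -> float:
        """Bits per symbol when each symbol is coded independently, i.e. what a
        plain Huffman/arithmetic coder without context modelling can achieve."""
        counts, n = Counter(seq), len(seq)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    def bwt_via_suffix_array(data: bytes) -> bytes:
        """Append a sentinel, sort the suffixes naively, and read the BWT as the
        byte preceding each suffix (the 'trivially computed from a suffix array'
        part; real code uses linear-time SA-IS instead of this toy sort)."""
        s = data + b"\x00"
        sa = sorted(range(len(s)), key=lambda i: s[i:])
        return bytes(s[i - 1] for i in sa)        # index -1 wraps to the sentinel

    def mtf(data: bytes) -> list[int]:
        """Move-to-front: locally clustered symbols become runs of small indices."""
        table, out = list(range(256)), []
        for b in data:
            i = table.index(b)
            out.append(i)
            table.insert(0, table.pop(i))
        return out

    if __name__ == "__main__":
        text = ("The theme of the thesis is that the transform helps. " * 40).encode()
        print(f"raw text:  {order0_entropy(text):.2f} bits/symbol")
        print(f"BWT + MTF: {order0_entropy(mtf(bwt_via_suffix_array(text))):.2f} bits/symbol")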

eru

> But shouldn't Huffman coding already detect that same predictability and compress it the same?

Huffman coding only works on individual letters. It doesn't know anything about relationships between adjacent letters.

vanviegen

Understanding why increasing predictability helps with compression is not the hard part though. What's hard to grasp is why the transform is reversible.
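
On the reversibility question, a deliberately naive sketch (Python) that rebuilds the sorted rotation table one column at a time; real decoders invert in linear time with the LF-mapping/rank trick, but this version makes the "why" visible: the last column plus sorting is enough to regrow every earlier column.

    def inverse_bwt(last: str, sentinel: str = "\0") -> str:
        """Repeatedly prepend the known last column to the table and re-sort.
        After len(last) rounds the rows are exactly the sorted rotations of the
        original string; the row ending in the sentinel is that string."""
        table = [""] * len(last)
        for _ in range(len(last)):
            table = sorted(last[i] + table[i] for i in range(len(last)))
        for row in table:
            if row.endswith(sentinel):
                return row[:-len(sentinel)]
        raise ValueError("no sentinel found in the input")

    if __name__ == "__main__":
        print(inverse_bwt("annb\0aa"))   # -> "banana" (the BWT of "banana" + sentinel)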

gfody

A word can be factored into the set and frequency of its letters plus the specific permutation. Compressible patterns in either channel seem likely when the underlying words are language-like.

mnw21cam

I remember the lecturer commenting on what sort of sick and twisted mind could come up with such a ridiculously convoluted notion when I was taught it at university.

ajb

Wheeler was also one of the inventors of the "closed subroutine" AKA function, which had to be implemented via a hack as machines of the time did not include ISA support for "return":

https://en.m.wikipedia.org/wiki/Wheeler_Jump

mnw21cam

Yes. He also happened to be in residence just down the hall from the lecture theatre at the time.

BlackLotus89

I really liked the computerphile video about it https://www.youtube.com/watch?v=GYbCttCF25A

maginx

I feel exactly the same, and have also implemented it backwards and forwards. I've thought about it in my sleep, trying to recall how it REALLY works. Happens every few years ;-) I always thought it was probably obvious to everyone else what the "magic" is.

abetusk

Isn't it basically run-length encoding, but on patterns? Sorting all rotations lexicographically means blocks of patterns get grouped together, which means you can do compression more easily, right?

duskwuff

> the intuition of _why_ it works has never really clicked.

In terms of why it aids compression, or why it's reversible?

As far as the first goes: it transforms n-grams into repeated characters.

linsomniac

Highlight (benchmark of Perl source code):

    The results follow:
    xz -T16 -9 -k - 2'056'645'240 bytes (c=12m09s, d=4m40s)
    bzip2 -9 -k   - 3'441'163'911 bytes (c=17m16s, d=9m22s)
    bzip3 -b 256  - 1'001'957'587 bytes (c=7m10s,  d=4m6s? Unclear on source page)
    bzip3 -b 511  -   546'456'978 bytes (c=7m08s,  d=4m6s? Unclear)
    zstd -T12 -16 - 3'076'143'660 bytes (c=6m32s,  d=3m51s)
edit: Adding times and compression levels

zimpenfish

Did a test on a 1.3G text file (output of `find -f -printf ...`); Macbook Pro M3 Max 64GB. All timings are "real" seconds from bash's builtin `time`.

    files.txt         1439563776
    bzip2 -9 -k       1026805779 71.3% c=67 d=53
    zstd --long -19   1002759868 69.7% c=357 d=9
    xz -T16 -9 -k      993376236 69.0% c=93 d=9
    zstd -T12 -16      989246823 68.7% c=14 d=9
    bzip3 -b 256       975153650 67.7% c=174 d=187
    bzip3 -b 256 -j12  975153650 67.7% c=46 d=189
    bzip3 -b 511       974113769 67.6% c=172 d=187
    bzip3 -b 511 -j12  974113769 67.6% c=77 d=186
I'll stick with zstd for now (unless I need to compress the Perl source, I guess.)

(edited to add 12 thread runs of bzip3 and remove superfluous filenames)

zimpenfish

Since I only have 12 perf cores on this Mac, I tried the xz test again with 12 threads.

    xz -T12 -9 -k      993376236 69.0% c=83 d=9
~10% faster for compression with the same size output.

yencabulator

That d=9 sure wins the day there, for me.

wongarsu

Additional benchmarks on the same dataset:

    uncompressed               - 19'291'709'440
    bzip2 -9                   -  3'491'493'993 (sanity check)
    zstd -16 --long            -    593'915'849
    zstd -16 --long=31         -    122'909'756 (requires equivalent argument in decompressor due to needing ~4GB RAM)
    zstd -19 --long            -    505'728'419
    zstd -19 --long=31         -    106'601'594 (requires equivalent argument in decompressor)
    zstd --ultra -22           -    240'330'522
    zstd --ultra -22 --long=31 -     64'899'008 (requires equivalent argument in decompressor)
    rar a -m5 -md4g -s -mt8    -     64'837'044
    
As you may notice, my sanity check actually has a slightly different size. I'm not sure why. The benchmark is a bit underspecified because new Perl versions were released in the interim; I used all releases up to perl-5.37.1 to get to the correct number of files. Just treat all numbers as having about 2% uncertainty to account for this difference.

I can't provide compression/decompression times, but the --long or --long=31 arguments should not have a major impact on speed; they mostly affect memory use. --long=31 requires setting the same in the decompressor, making that option mostly useful for internal use, not archives meant for public consumption.

As you can see, the benchmark chosen by the author mostly comes down to finding similar data that's far away. I wonder if bzip3 can do this better than other algorithms (especially in less memory) or if it simply chose default parameters that use more memory.

Edit: added more benchmarks

linsomniac

My standard test is compressing a "dd" disc image of a Linux install (I use these for work), with unused blocks being zeroed. Results:

    Uncompressed:      7,516,192,768
    zstd:              1,100,323,366
    bzip3 -b 511 -j 4: 1,115,125,019

palaiologos

Hi, tool author here!

Thank you for your benchmark!

As you may be aware, different compression tools fill different data-type niches. In particular, less specialised statistical methods (bzip2, bzip3, PPMd) generally perform poorly on vaguely defined binary data due to the unnatural distribution of the underlying data, which at least in bzip3's case does not lend itself well to suffix sorting.

Conversely, Lempel-Ziv methods usually perform suboptimally on vaguely defined "textual data" due to the fact that the future stages of compression that involve entropy coding can not make good use of the information encoded by match offsets while maintaining fast decompression performance - it's a long story that I could definitely go into detail about if you'd like, but I want to keep this reply short.

All things considered, data compression is more of an art than a science, trying to hit an acceptable spot on the time-to-compression-ratio curve. I created bzip3 as an improvement over the original algorithm, hoping that we can replace some uses of bzip2 with a more modern and worthwhile technology as of 2022. I have included benchmarks against LZMA, zstandard, etc. mostly as a formality; in reality, if you were to choose a compression method, it would depend heavily on what exactly you're trying to compress, but my personal stance is that bzip3 would likely be strictly better than bzip2 in all of those cases.

bzip3 usually operates on bigger block sizes, up to 16 times bigger than bzip2's. Additionally, bzip3 supports parallel compression/decompression out of the box. For fairness, the benchmarks were performed in single-threaded mode, though they still aren't entirely even-handed, as bzip3 uses a much bigger block size. What bzip3 aims to be is a replacement for bzip2 on modern hardware: what used to not be viable decades ago (arithmetic coding, context mixing, SAIS algorithms for BWT construction) has become viable nowadays, as CPU frequencies don't increase much anymore while caches and RAM keep getting bigger and faster.

linsomniac

Thanks for the reply. I just figured I'd try it and see, and the bzip3 results are extremely good. I figured it was worth trying because a fair bit of the data in that image is non-binary (man pages, config files, shell/python code), but probably the bulk of it is binary (kernel images, executables).

andix

Shouldn't a modern compression tool, targeting a high compression rate, try to switch its compression method on the fly depending on the input data?

I have no idea about compression, just a naive thought.

nmz

If the focus is on text, then the best example is probably the SQLite amalgamation file, which is a 9 MB C file.

linsomniac

A couple other data points:

    zstd --long --ultra -2:                                1,062,475,298
    zstd --long=windowLog --zstd=windowLog=31 --ultra -22: 1,041,203,362
So for my use case the additional settings don't seem to make sense.

null

[deleted]

SeptiumMMX

Given that it's BWT, the difference should be the most prominent on codebases with huge amounts of mostly equivalent files. Most compression algorithms won't help if you get an exact duplicate of some block when it's past the compression window (and will be less efficient if near the end of the window).

But here's a practical trick: sort files by extension and then by name before putting them into an archive, and then use any conventional compression. It will very likely put the similar-looking files together, and save you space. Done that in practice, works like a charm.
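
A sketch of that trick in Python (the directory and archive names here are hypothetical; GNU tar users can get a similar effect by feeding a pre-sorted file list via `-T`):

    import os
    import tarfile

    # Sort by (extension, basename) so similar-looking files end up adjacent in
    # the archive, letting a window-based compressor see their redundancy.
    paths = sorted(
        (os.path.join(root, name)
         for root, _dirs, names in os.walk("src")
         for name in names),
        key=lambda p: (os.path.splitext(p)[1], os.path.basename(p)),
    )
    with tarfile.open("sorted.tar", "w") as tar:
        for p in paths:
            tar.add(p, recursive=False)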

hcs

Handy tip for 7-Zip, the `-mqs` command line switch (just `qs` in the Parameters field of the GUI) does this for you. https://7-zip.opensource.jp/chm/cmdline/switches/method.htm#...

ku1ik

Ooh, that’s neat. How much improvement do you get from this? Is it more like a single- or double-digit % difference?

sevg

To make your comment more useful you’ll want to include compression and decompression time.

Using the results from the readme, seems like bzip3 performs competitively with zstd on both counts.

idoubtit

I've experimented a bit with bzip3, and I think the results in the readme are not representative. I think it's a hand-picked case, with an uncommon input and unfair choices of parameters. And it's made with a HDD, which skews the results even more.

For instance, with a 800 MB SQL file, for the same compression time and optimal parameters (within my capacity), bzip3 produced a smaller file (5.7 % compression ratio) than zstd (6.1 % with `--long -15`). But the decompression was about 20× slower (with all cores or just one).

I'm not claiming my stupid benchmark is better or even right. It's just that my results were very different from bzip3's readme. So I'm suspicious.

dralley

Also the compression levels..

linsomniac

I believe the compression levels are included in the list above.

jl6

A 4x improvement over lzma is an extraordinary claim. I see the author has also given a result after applying lrzip (which removes long-range redundancies in large files), and the difference isn’t so great (but bzip3 still wins). Does the amazing result without lrzip mean bzip3 is somehow managing to exploit some of that long-range redundancy natively?

I’d be astonished if such a 4x result generalized to tarballs that aren’t mostly duplicated files.

wongarsu

Currently running my own benchmarks, my preliminary results are that zstd becomes competitive again once you add the --long option (so `zstd --long -16 all.tar` instead of `zstd -16 all.tar`). Which is an option that not everyone might be aware of, but whose usefulness should be intuitive for this benchmark of >200 very similar files.

eqvinox

I'd argue that's actually the lowlight of the README since that is a very poor choice of benchmark. Combining a multitude of versions of the same software massively favors an algorithm good at dealing with this kind of repetitiveness in a way that will not be seen in typical applications.

The "Corpus benchmarks" further down in the README are IMHO much more practically relevant. The compression ratio of bzip3 is not significantly better, but the runtime seems quite a bit lower than lzma at least.

miohtama

In the Linux source benchmark the results are interestingly more equal, with LZMA still holding up well.

What makes the Perl source benchmark special? Deduplication?

mmastrac

An old friend used to say that Perl is line noise that was given sentience.

ape4

This is the source - which is probably C.

pxeger1

The author is super cool. They are one of the very few people to have written any substantial program in Malbolge, a programming language designed to be cryptographically hard to use (or something like that).

matthberg

The Wikipedia page on Malbolge was quite the horrific read; it's downright amazing that someone wrote a Lisp in it.

Here's the program as talked about on HN:

https://news.ycombinator.com/item?id=38850961

https://github.com/kspalaiologos/malbolge-lisp

The cursed language: https://en.wikipedia.org/wiki/Malbolge

And the variant used by the program: https://esolangs.org/wiki/Malbolge_Unshackled

jwr

I still remember going crazy about bzip (the first one) and re-compressing all my data with it.

And then I remember discovering, several years later, that bzip (the first one) is an obsolete format that is now difficult to even decompress.

I learned my lesson and now use horribly sub-optimal formats that I'm sure will stick around for a long time, if not forever.

self_awareness

This is true for every new thing.

New frameworks, new languages, new cars, new heating systems.

If you want to have a stable solution, use "boring" things. They are boring for a reason; not much stuff is changing in them anymore. They are stable.

I use boring stuff all the time.

t43562

  > bzip2 -9 -k all.tar  981.78s user 9.77s system 95% cpu 8M memory 17:16.64 total
  > bzip3 -e -b 256 -j 12 all.tar  2713.81s user 16.28s system 634% cpu 18301M memory 7:10.10 total
The difference in memory usage might be worth noting: 8M vs 18301M.

Groxx

Probably worth noting that bzip2 also did by far the worst in this. ~7x larger files than the best bzip3. Large memory use is generally required for good compression.

I'm curious how well gzip would have handled this though, as it's generally low memory too, and all the others in that list have FAR more than 8M memory used.

t43562

I think gzip was 5M. This all depends on where you are using it; e.g. on a Raspberry Pi Pico I'd probably be using gzip no matter the extra bytes.

thomasmg

Improving BWT is great!

In my view, improving "long range" compression has the biggest potential. There are many, many algorithms and implementations for very short range (Huffman, arithmetic, ANS) and short range (LZ, BWT), but not that much research has gone into "long range" yet. There's deduplication, and large-window LZ / BWT... but not much more yet. What is missing is efficiently (and with little memory) finding similarities in multi-GB data sets. I think sorting by similarity would help there. Or did I miss research in this area?

ggm

Small request: write a header or tail block which records the compression efficiency. Bzip2 doesn't. Gzip does. Knowing the uncompressed size can be vital. Yes, there is a risk of lying and making zip bombs.
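
For reference, gzip stores the uncompressed size as ISIZE in the last 4 bytes of a member (RFC 1952), and only modulo 2^32, which is one of the ways it can "lie". A minimal reader in Python, assuming a single-member file with a hypothetical name:

    import struct

    def gzip_isize(path: str) -> int:
        """Read ISIZE from the gzip trailer: the uncompressed length mod 2**32,
        stored little-endian in the final 4 bytes (after the CRC-32). Only
        meaningful for single-member files."""
        with open(path, "rb") as f:
            f.seek(-4, 2)                        # 2 = os.SEEK_END
            (isize,) = struct.unpack("<I", f.read(4))
        return isize

    if __name__ == "__main__":
        print(gzip_isize("all.tar.gz"))          # hypothetical file name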

Zamiel_Snawley

Shouldn’t knowing how big it’s supposed to be make it easier to stop a zip bomb? Just stop decompressing once you hit the size from the header.
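
That is roughly what a careful decoder can do. A minimal sketch with Python's zlib, where `declared_size` would come from whatever the container records (e.g. gzip's ISIZE) and the slack margin is an arbitrary choice:

    import zlib

    def bounded_gunzip(data: bytes, declared_size: int, slack: int = 1 << 20) -> bytes:
        """Decompress one gzip member, but cap the output at declared_size + slack
        bytes so a lying size field can't turn into an unbounded zip bomb."""
        d = zlib.decompressobj(wbits=31)          # 31 = expect gzip framing
        out = d.decompress(data, declared_size + slack)
        if d.unconsumed_tail:                     # stopped early: the cap was hit
            raise ValueError("stream expands past its declared size")
        return out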

jeroenhd

That only works if the standard actually describes what you're supposed to do with extra data at the end, and everyone agrees.

In practice, there have been antivirus bypasses that made use of AV scanners treating the additional data differently from common extraction software (I believe it was WinRAR?).

One could argue that a text document with a corrupt file-size field should still be decodable. One could also argue that the file is completely corrupt and should be treated as such. Knowing that there will be tools that take the first approach regardless, I'd stick to just decoding the extra data and marking the file as potentially damaged, rather than sending the user down a data recovery rabbit hole for a file that'll decompress just fine.

o11c

IIRC, to decode raw zlib or DEFLATE data using only command-line tools, you have to prepend a gzip header to the stream and ignore the error at EOF (the gzip trailer with its checksum is missing).
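
The underlying point is that a gzip member, a zlib stream, and raw DEFLATE all carry the same DEFLATE body, just with different headers and trailers, which is why the prepend-a-header trick can work. Shown here with Python's zlib instead of shell tools (the wbits values select the framing):

    import zlib

    payload = b"the same bytes, three framings " * 50

    z = zlib.compress(payload)                    # zlib: 2-byte header + body + Adler-32
    raw = z[2:-4]                                 # strip them -> raw DEFLATE body
    g = zlib.compressobj(wbits=31)                # 31 = emit gzip header and trailer
    gz = g.compress(payload) + g.flush()

    assert zlib.decompress(z, wbits=15) == payload     # zlib wrapper
    assert zlib.decompress(raw, wbits=-15) == payload  # raw DEFLATE, no wrapper
    assert zlib.decompress(gz, wbits=31) == payload    # gzip wrapper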

onetom

How come https://www.nongnu.org/lzip/ is not mentioned in its readme?

I agree with other commenters that lzip would be a good baseline to compare it to, besides bzip2.

Nice job though on testing it on so many CPU architectures!

jgalt212

I poke around in this space periodically, but I've never found a compelling reason to move away from gzip.

alkh

I am a huge proponent of zstd after I learned about it on HN. I recently had to compress a 12 GB CSV file. zstd took ~3 sec for compression and ~11 sec for decompression and got the file down to ~1.1 GB. xz took ~3.5 min for compression(!) and the resulting file was ~740 MB (I didn't measure the decompression time). I just realized that in most cases it's more efficient to use zstd, especially for file transfer.

The majority of OSes that people use either have it installed or can get it trivially, so I see no point in using gzip nowadays, unless it is mandated by an API.

eqvinox

Did you confuse gzip and xz? You mention numbers from xz and then suddenly talk about gzip? These two are not related…

alkh

Hey, I didn't confuse them, but I guess I should have been more specific. I addressed 2 main points in my text. 1) In my opinion, for the vast majority of cases you don't need to be worried about compatibility, as you can easily install better alternatives to gzip on the platforms most people use. 2) I think zstd can become a primary replacement for gzip due to its high compression speed and good compression ratio, even when compared to such great algorithms as xz/lzma. Sacrificing some compression ratio for (de)compression speed is worth it, for me at least.

loeg

zstd is faster and provides better compression than gzip at every point on the curve. There is no reason to use gzip these days other than backwards compatibility.

idoubtit

I've reached the same conclusion, and I've been using zstd unless I want extremely fast (de)compression with lz4.

And even when I still have to use gzip (the format), the executables of libdeflate are fully compatible and noticeably faster than gzip/gunzip (the tool).

jrockway

This was on HN a while ago https://jolynch.github.io/posts/use_fast_data_algorithms/ and reaches the same conclusion. I am inclined to use Zstd even in cases where people would use lz4. Zstd is very fast.

hinkley

Do zless, zcat and zgrep support zstd everywhere? And I mean everywhere? VMs? Alpine? FreeBSD? OSX? Openwrt?

Nothing is shittier than sshing into a box that doesn’t understand half of your command line tricks. Or the clever shell script you just tested six ways to Sunday. It’s like fighting with your hands tied behind your back.

sophiebits

> other than backwards compatibility

adastra22

Yes, it's supported on all those platforms.

loeg

Yes, it's been integrated in lots of places, mostly as long ago as the late 2010s.

skrebbel

> There is no reason to use gzip these days other than backwards compatibility

And forward compatibility. The Lindy effect says gzip is likelier to be widely available across platforms and tools in the long term than zstd.

lazide

Zip beats all on that front.

ThatGuyRaion

zstd is a great setup, for sure, but the build system they use is patently awful. Can someone make an autoconf or hell, even CMake build system for them please???

loeg

Just install the binary package from whatever distro you use? Why do you need to build it?

But if it matters, FreeBSD has a pretty trivial BSDmake build of it:

https://github.com/freebsd/freebsd-src/blob/main/lib/libzstd...

https://github.com/freebsd/freebsd-src/blob/main/usr.bin/zst...

You could easily do something similar in GNU make or whatever without the dependencies on the FBSD build system. It's basically just the classic "cc -c a bunch of C files, then link them."

MawKKe

A quick glance at the zstd GitHub repo shows they do provide CMake build scripts?

t43562

> bzip2 -9 -k all.tar 981.78s user 9.77s system 95% cpu 8M memory 17:16.64 total

> bzip3 -e -b 256 -j 12 all.tar 2713.81s user 16.28s system 634% cpu 18301M memory 7:10.10 total

The memory usage is one reason: 8M vs 18301M

loeg

I think you intended to reply somewhere else. This isn't responsive.

Borg3

Same here. Storage is not that expensive so I do NOT care to squeeze every byte out of an archive. Also, I'm more into retro, so portability and memory usage are more important to me :)

palaiologos

Hi, tool author here!

Regarding your first remark: high ratio data compression has its time and place, and I personally understand that to many people it is not very desirable. In a lot of scenarios something as plain and simple as LZ4 generally suffices.

On the other hand, there is an unofficial (= unsupported) port of bzip3 to older (386+) machines running MS-DOS 6.22. I prepared it for a retrocomputing meeting in Ontario that I attended a while back. Let me know what you think :).

https://github.com/kspalaiologos/dev-urandom/blob/main/dos/B...

Borg3

Oh nice. I'll look at it at some point to check things out and to see if I can compile it here and there out of the box :)

jgalt212

After reading this, I see lots of zstd fans around here, and with good reason. That being said, I think our shop will stick with gzip until zstd arrives in the Python Standard Library.

wood_spirit

Strange that bzip3 is not yet listed on the Large Text Compression Benchmark: http://mattmahoney.net/dc/text.html

aidenn0

But it does have many, many BWT-based implementations. Might be interesting to benchmark it against e.g. bsc[1].

1: https://github.com/IlyaGrebnov/libbsc

Scaevolus

How does this compare to existing BWT-based compressors on things like the Large Text Compression Benchmark?

twotwotwo

A random thought that Might Work, Who Knows(tm): first compress long-ish repetitions in the input and store the copies separately from the literals (like zstd). Then run just blocks of literals through the BWT before entropy coding.

The idea is that the BWT can help take advantage of however much context remains after the copies are snipped out, whether that's one byte or a few, and shrinking the input with the LZ-ish step may make it faster. It might be strictly worse than using PPM or basic context modeling like Brotli's; either of those can be "aware" of the preceding bytes even when they come from copies rather than literals.

It's implied in "successor to bzip2" and a lot of the comments, but it's worth highlighting that high-compression algorithms, especially those that are also slow to decompress, are a pretty specialized niche now. Using zstd or brotli at low to medium settings sometimes speeds things up by reducing network or storage transfers more than their CPU use slows you down. (Especially when your storage transfers are network transfers.) Even when compressing isn't a net speedup, you pay fairly little time for the saved space. Even lowish levels of zstd and brotli often eke out decent compression ratios for big inputs since, with modern quantities of RAM, their MBs-long history windows let them take advantage of long-range matches.