Compressors galore: pbzip2, lbzip2, plzip, xz, and lrzip tested on a FASTQ file
28th March 2015
About 2 years ago I reviewed some parallel (and not) compression utilities, settling at that time on pbzip2: it scales quasi-linearly with the number of CPUs/cores, stores compressed data in relatively small 900k blocks, is fast, and has a good compression ratio. pbzip2 was (and still is) a very good choice.
Yesterday I got somewhat distracted, and thus found lbzip2 –
an independent, multi-threaded implementation of bzip2. It is commonly the fastest SMP (and uniprocessor) bzip2 compressor and decompressor
– as the Debian package description puts it. Is it really “commonly the fastest” one? How does it compare to pbzip2? Should I use lbzip2 instead of pbzip2?
This minor distraction grew into a full-scale web search and comparison, adding plzip (a parallel version of lzip), xz, and lrzip to the mix. After reading thousands of characters, all of these were put to a simple test: compressing a roughly 2-gigabyte FASTQ file with default options.
All the external links and benchmarks, as well as my own mini-benchmark results, are provided below.
The conclusion is that, of all the tested compressors, lbzip2 is indeed the best one (for my practical use). It is only slightly better than the trusty pbzip2, which takes second place. All the other compressors performed so poorly that they get no place in my practical rating…
So, let us first ask internet wisdom/foolishness: is lbzip2 or pbzip2 faster/better?
- this askubuntu question shows that lbzip2 is compressing faster (1:43) than pbzip2 (2:34)
- this nice benchmark also confirms that lbzip2 is indeed faster at compressing; lbzip2 also appears to use less RAM and a little bit less CPU during compression; during decompression, lbzip2 (reportedly) uses much more RAM. lbzip2 achieved at least as good (and even marginally better) compression ratios as pbzip2.
- the lbzip2 GitHub page and also this unrelated page both say that lbzip2 is fully cross-compatible with bzip2
- probably most importantly, the lbzip2 GitHub readme says that even bzip2-compressed archives get a decompression speedup (which is definitely not the case with pbzip2)
- lbzip2 also uses 100-900k blocks (900k by default)
- it is not clear whether lbzip2 is somewhat less widely tested than pbzip2
- lbzip2's author has performed some testing (back in 2009, mind you!), and these were the most important results:
- lbzip2 is better when decompressing from a pipe, no matter the producer, and also when the compressed input from a regular file is single-stream
- pbzip2 beats lbzip2 when the compressed input is coming from a regular file and is multi-stream (yes, pbzip2 can decompress even lbzip2's compressed output faster than lbzip2 itself, when it's coming from a regular file). Note: if you check the vbsupport benchmark above, you'll see that lbzip2 has apparently fixed this slight lag behind pbzip2 for regular multi-stream files; the improvement is also confirmed by my testing
So, at least in theory lbzip2 is indeed better than pbzip2, even if only at faster decompression of bzip2-compressed files.
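To make the decompression point concrete, here is a minimal sketch (the file name is invented; -n sets lbzip2's worker-thread count, per its man page):
lbzip2 -d -n 8 reads.fastq.bz2   # parallel decompression even of a single-stream, bzip2-made archive
pbzip2 -d reads.fastq.bz2        # reportedly falls back to a single thread on such files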
While looking for benchmarks, I found this one (old but good), which highly praises the lzop compressor. Apparently, lzop is noticeably faster than even gzip, and compresses only a little bit worse. However, I am not really interested in a faster gzip: I need something with much better compression that is still fast enough for multi-gigabyte files.
Next, I stumbled upon lzip and plzip (.lz). What are these compressors?
- plzip is a parallel version of lzip, and fully lzip-compatible
- lzip is an LZMA compressor
- reading the documentation leaves the impression that [p]lzip achieves better compression, is slower, and needs much more RAM than competing compressors
- there is a special utility called lziprecover, which helps recover data from damaged lzip archives by leveraging, on the one hand, CRC checksums of compressed blocks and, on the other, multiple damaged copies of the archive (if available)
- from the official website:
Lzip is a lossless data compressor with a user interface similar to the one of gzip or bzip2. Lzip is about as fast as gzip, compresses most files more than bzip2, and is better than both from a data recovery perspective.
- the default “member” (compressed block/chunk) size is 4 petabytes, but it can be set to a lower value (minimum 100 kB), mimicking bzip2's chunk size
- supports multiple, independent volumes (losing one volume will still allow recovering data from all the other volumes)
- with multiple cores, plzip creates multi-member files by default (but it is not clear what the size of these members is: the default is said to be twice the dictionary size, yet the default dictionary size is not specified in the manual, so lzip/plzip seem to require an explicit compression level from -1 to -9; see the sketch after this list)
- here lzip compresses a little bit better than xz without the --extreme option
- (l|p)bzip2 should still be faster than either lzip or xz
- I mention xz here because lzip and xz are (at least historically) competing LZMA-based compressors
- a one-year-old opinion makes the following statements about lzip:
- lzip is a marginal archiver with no real benefits since the appearance of xz (note: xz is a successor of lzma-utils)
- xz is more popular, more widely accepted
- xz has a community, while lzip has 1 author
- performance of xz and lzip is comparable
- xz has more features
- but lzip does indeed have a recovery utility that xz doesn’t
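As promised above, here is a hedged sketch of setting lzip's member size and of merging damaged copies with lziprecover, based on my reading of the manuals (file names are invented, and I have not verified the exact multiplier-suffix syntax):
lzip -9 -b 100k big.fastq                                  # smallest member size, mimicking bzip2's blocks
lziprecover -m -o big.fastq.lz bad-copy1.lz bad-copy2.lz   # error-checked merge of damaged copies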
That doesn’t really tell us much about how plzip/lzip compare to, say, pbzip2. But before performance, let us pay some more attention to the long-term storage features of lzip:
The lzip file format is designed for data sharing and long-term archiving, taking into account both data integrity and decoder availability:
- The lzip format provides very safe integrity checking and some data recovery means. The lziprecover program can repair bit-flip errors (one of the most common forms of data corruption) in lzip files, and provides data recovery capabilities, including error-checked merging of damaged copies of a file.
- The lzip format is as simple as possible (but not simpler). The lzip manual provides the code of a simple decompressor along with a detailed explanation of how it works, so that with the only help of the lzip manual it would be possible for a digital archaeologist to extract the data from a lzip file long after quantum computers eventually render LZMA obsolete.
- Additionally, the lzip reference implementation is copylefted, which guarantees that it will remain free forever.
(I really liked the part about the digital archaeologist! And the copyleft, to a lesser extent.)
Looks really attractive! Because what I use compressors for is, essentially, longer-term archiving, with unpredictable needs to sometimes decompress some of the files. And, of course, storage media will fail fully or partially, so recovery is important, too. But what is this xz compressor?.. I’ve seen it before, in contexts with words like “overtake the world” or similar…
xz
- much more complex file format than lzip, but maybe it has some benefits for client programs and/or recovery?
- supports integrity checks and multiple compressed blocks
- according to this post from 2012, xz (single-threaded) both compressed and decompressed much faster than lzip… and lrzip (depends on settings, of course)
- lzip is older than xz, and was better than xz’s predecessor – lzma-utils
- xz is adopted by some linux distributions and software projects for package compression
- xz does not seem to have an equivalent of lziprecover
- tar supports both --lzip and --xz, and also --auto-compress (example below)
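For example (a sketch; archive and directory names are invented):
tar --lzip -cf backup.tar.lz mydir/
tar --xz -cf backup.tar.xz mydir/
tar -caf backup.tar.lz mydir/     # -a/--auto-compress picks the compressor from the extension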
This hasn’t really added any clarity, has it? Moreover, we now have one more unknown – the lrzip compressor. lrzip is a long-range redundancy compressor with LZO, gzip, bzip2, ZPAQ and LZMA back-ends. It is highly efficient for highly redundant data, even if redundancies are separated by long stretches of other data. (FASTQ files are fairly redundant, though bzip2 seems to exploit that fairly well already; can lrzip do better?)
However, what if a part of the archive is damaged? How much information is lost then? Is it at all possible to recover some of the data from damaged .lrz archives?
The author’s benchmarks showcase how good lrzip is at compressing redundant data (although lrzip is multithreaded, so the benchmark’s comparison to non-multithreaded algorithm implementations is not quite fair…). Concerns about damaged-archive recovery would have prevented me from using lrzip anyway, but I was really interested in whether a “long-range redundancy” compressor can do better than the usual, “short-range redundancy” compressors.
My testing setup
- Debian testing 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt7-1 (2015-03-01) x86_64 GNU/Linux
- Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz (4 physical cores with HT enabled: 8 hardware threads)
- 16GiB RAM
- test file name: test.fastq
- test file size: 2 223 860 346 bytes (a little over 2 gigabytes)
- test file was copied once to RAM-mounted /tmp, to exclude any I/O bottleneck effects on compression speeds
- bzip2: 1.0.6
- lbzip2: 2.5
- pbzip2: 1.1.9
- xz: 5.1.0alpha
- plzip: 1.2
- lrzip: 0.616
- command execution time and maximal process RSS memory were measured with /usr/bin/time -f '%C: %e s, %M Kb' compressor arguments (note: this is not bash’s built-in time); please note that memory measurement can be incorrect for multithreaded compressors
The testing results follow below. I have not put them into a single table, but I do comment on the results in a few places. The entire testing followed this pattern (a shell sketch follows the list):
- compress test.fastq, deleting the original
- test compressed archive (note: this was done only for some compressors, not all)
- decompress archive back to test.fastq, delete archive
- if the 3 previous steps were fast enough: repeat them 1-2 more times (but only show the best result below); otherwise continue
- repeat with the next compressor
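For one compressor, the cycle looked roughly like this (a sketch with lbzip2; not the exact script, and the test step was skipped for some tools):
cd /tmp
/usr/bin/time -f '%C: %e s, %M Kb' lbzip2 -v test.fastq        # compress; the original is removed
/usr/bin/time -f '%C: %e s, %M Kb' lbzip2 -tv test.fastq.bz2   # test the archive
/usr/bin/time -f '%C: %e s, %M Kb' lbzip2 -dv test.fastq.bz2   # decompress; the archive is removed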
bzip2: 309 159 275 bytes
bzip2 was used as a baseline, to highlight speed benefits of both lbzip2 and pbzip2.
test.fastq: 7.193:1, 1.112 bits/byte, 86.10% saved, 2223860346 in, 309159275 out.
bzip2 -v test.fastq: 190.63 s, 7608 Kb
bzip2 -v -d test.fastq.bz2: 51.58 s, 4620 Kb
Bzip2 is neither particularly slow nor particularly fast. It also seems to have modest memory requirements.
pbzip2: 310 462 610 bytes
pbzip2 is the currently used reference. For any other compressor to become a successor to pbzip2, that compressor must be either a little faster (while compressing as well as pbzip2), or a little better at compression (while being as fast as pbzip2), or both. Note that the compressed file size is only a tiny bit larger than with bzip2.
"test.fastq.bz2": compression ratio is 1:7.163, space savings is 86.04%
pbzip2 -v test.fastq: 46.22 s, 67436 Kb
pbzip2 -dv test.fastq.bz2: 19.80 s, 46672 Kb
Interestingly, pbzip2 --test uses 1 thread only (but also consumes only 6 MB of RAM), resulting in decompression times similar to those of bzip2. lbzip2 uses all 8 threads during testing as well.
lbzip2: 311 040 543 bytes
lbzip2: compressing "test.fastq" to "test.fastq.bz2"
lbzip2: "test.fastq": compression ratio is 1:7.150, space savings is 86.01%
lbzip2 -v test.fastq: 22.67 s, 49812 Kb
lbzip2: decompressing "test.fastq.bz2" to "test.fastq"
lbzip2: "test.fastq.bz2": compression ratio is 1:7.150, space savings is 86.01%
lbzip2 -vd test.fastq.bz2: 18.86 s, 46652 Kb
I repeated the pbzip2 and lbzip2 tests several times, and lbzip2 consistently compressed the same file about twice as fast… Wow! Decompression speed is about the same, and the compressed file size is marginally larger than with pbzip2. Overall, lbzip2 does look like a new drop-in replacement for bzip2/pbzip2 for me.
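And since lbzip2 is command-line-compatible with bzip2, dropping it into existing pipelines is trivial; a sketch (directory name invented):
tar -c somedir/ | lbzip2 > somedir.tar.bz2
tar --use-compress-program=lbzip2 -xf somedir.tar.bz2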
xz -0 --threads=8: 517 967 372 bytes
I would call this one a major test disappointment. The default setting, -6, was way too slow (an estimated 28 minutes to compress!!!). Even the fastest setting, -0, was still too slow! And here’s one of the reasons, straight from the xz man page:
Multithreaded compression and decompression are not implemented yet, so this option has no effect for now. As of writing (2010-09-27), it hasn’t been decided if threads will be used by default on multicore systems once support for threading has been implemented.
Also, I forgot to use the --block-size=900k option, but that seems to be of no concern given such results:
100 % 492.5 MiB / 2,120.8 MiB = 0.232 18 MiB/s 1:59
xz -0 -v test.fastq: 119.25 s, 4780 Kb
xz --test --verbose --threads=8 test.fastq.xz: 36.00 s, 2568 Kb
100 % 492.5 MiB / 2,120.8 MiB = 0.232 58 MiB/s 0:36
xz -d -v test.fastq.xz: 36.54 s, 2500 Kb
xz -0 was both slower than lbzip2 and pbzip2 and compressed significantly worse. xz -0 was faster than good old bzip2, but, again, compressed significantly worse… Really, a major test disappointment.
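For completeness: with xz 5.2 or later (see the comments below) real multithreading exists; a hypothetical invocation, untested here, would be something like:
xz -0 --threads=8 --block-size=900KiB -v test.fastq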
plzip: between 407 696 562 and 498 708 539 bytes
One more major test disappointment. (Or am I somehow using these compressors the wrong way?…) I haven’t found a way to set the block/member size (for lzip, that would be the -b option). The default speed setting, -6, was also way too slow, but settings -1 to -3 were comparable to pbzip2, so I tested all three.
plzip -1: 498 708 539 bytes
test.fastq: 4.459:1, 1.794 bits/byte, 77.57% saved, 2223860346 in, 498708539 out.
plzip -1 --verbose --threads=8 test.fastq: 30.27 s, 126360 Kb (this seems to be per-thread memory…)
plzip --test --verbose --threads=8 test.fastq.lz: 6.86 s, 11640 Kb
plzip -d --verbose --threads=8 test.fastq.lz: 7.24 s, 11644 Kb
Compression speed and ratio: both worse than lbzip2. But the fastest testing and decompression so far.
plzip -2: 456 301 558 bytes
test.fastq: 4.874:1, 1.641 bits/byte, 79.48% saved, 2223860346 in, 456301558 out.
plzip -2 --verbose --threads=8 test.fastq: 38.81 s, 193416 Kb
plzip --test --verbose --threads=8 test.fastq.lz: 6.26 s, 14828 Kb
plzip -d --verbose --threads=8 test.fastq.lz: 6.38 s, 14736 Kb
Compression time is worse than lbzip2’s and a little better than pbzip2’s, but the compression ratio is worse than either. Testing and decompression are even faster, though.
plzip -3: 407 696 562 bytes
test.fastq: 5.455:1, 1.467 bits/byte, 81.67% saved, 2223860346 in, 407696562 out.
plzip -3 --verbose --threads=8 test.fastq: 63.74 s, 245756 Kb
plzip --test --verbose --threads=8 test.fastq.lz: 5.82 s, 18936 Kb
plzip -d --verbose --threads=8 test.fastq.lz: 6.10 s, 18944 Kb
Even faster testing and decompression! But compression ratio and speed are still worse than lbzip2 and pbzip2.
And the final contestant, lrzip! All 5 back-ends were tested: LZO, gzip, bzip2, LZMA, ZPAQ.
lrzip has several peculiarities which hinder its use as a drop-in replacement for, say, bzip2. Most importantly, when a file is compressed, it is not deleted unless the -D option is specified. Unlike pbzip2 and lbzip2, which use all available CPUs/cores by default, lrzip uses only 2 by default (-p 8 in the results below requests the use of 8 cores). Another unusual feature is that during testing the file is decompressed to a storage medium and then deleted; almost all other compressors only verify the decompressed data stream, which is immediately discarded and never written to a storage medium. A related feature is the -c option, which performs file verification after decompression by reading the decompressed file back from the storage medium and comparing it to the decompressed stream. lrzip also stores MD5 hashes of the data, and allows verifying them. lrzip comes with several helper scripts – for example, one which tarballs and lrzips a chosen directory in a single command. Actually, lrzip is more of an archive utility than just a compressor.
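A sketch of typical invocations, as I understand the man page (directory name invented; lrztar is one of the bundled helper scripts):
lrzip -D -p 8 test.fastq      # default LZMA back-end, delete the original, use 8 threads
lrzip -d -c test.fastq.lrz    # decompress, then verify the file written to disk
lrztar somedir/               # tar + lrzip a directory in one command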
lrzip -D -p 8: 334 504 383 bytes
In this default (LZMA) mode, lrzip starts with 1 thread, but eventually uses more and more cores (though never all 8 – or at least I never noticed it). Decompression seems to use more threads, but that also depends on the back-end used (the slower the back-end, the more threads are used – e.g. ZPAQ versus LZO).
test.fastq – Compression Ratio: 6.648. Average Compression Speed: 3.113MB/s.
Total time: 00:11:21.85
lrzip -D -p 8 test.fastq: 681.84 s, 3331080 Kb
Decompressing…
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 124.706MB/s
[OK] – 2223860346 bytes
Total time: 00:00:17.13
lrzip -t -p 8 test.fastq.lrz: 17.21 s, 2567608 Kb
Decompressing…
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 117.778MB/s
Output filename is: test.fastq: [OK] – 2223860346 bytes
Total time: 00:00:17.59
lrzip -d -p 8 -D test.fastq.lrz: 17.67 s, 2567664 Kb
In the default LZMA mode, lrzip is significantly slower than even bzip2, and has a somewhat worse compression ratio. Yes, this is the 3rd major test disappointment.
gzip back-end: lrzip -g -L 9 -D -p 8: 430 013 769 bytes
Despite the specified -p 8, lrzip mostly operates in 1 thread, and only sometimes in 2 (probably when it invokes the gzip library). Testing is also done with 1 thread only, but is very fast (though slower than plzip). The -L 9 option is supposed to translate into -9 for gzip; as this normally has nearly no effect, it wasn’t used in the following lrzip tests.
test.fastq – Compression Ratio: 5.172. Average Compression Speed: 0.704MB/s.
Total time: 00:50:11.34
lrzip -p 8 -g -L 9 -D test.fastq: 3011.34 s, 2745520 Kb
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 163.077MB/s
[OK] – 2223860346 bytes
Total time: 00:00:12.71
lrzip -t -p 8 test.fastq.lrz: 12.79 s, 2577632 Kb
Decompressing…
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 163.077MB/s
Output filename is: test.fastq: [OK] – 2223860346 bytes
Total time: 00:00:12.88
lrzip -d -p 8 -D test.fastq.lrz: 12.95 s, 2577728 Kb
And again, compression speed and ratio are worse than for bzip2…
LZO back-end: lrzip -l -D -p 8: 766 520 776 bytes
test.fastq – Compression Ratio: 2.901. Average Compression Speed: 4.690MB/s.
Total time: 00:07:32.89
lrzip -l -D -p 8 test.fastq: 452.88 s, 2714452 Kb
Decompressing…
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 212.000MB/s
[OK] – 2223860346 bytes
Total time: 00:00:10.58
lrzip -t -p 8 test.fastq.lrz: 10.66 s, 2582516 Kb
Decompressing…
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 192.727MB/s
Output filename is: test.fastq: [OK] – 2223860346 bytes
Total time: 00:00:11.32
lrzip -d -p 8 -D test.fastq.lrz: 11.39 s, 2582504 Kb
No comments.
bzip2 back-end: lrzip -b -D -p 8: 353 473 476 bytes
test.fastq – Compression Ratio: 6.291. Average Compression Speed: 4.473MB/s.
Total time: 00:07:53.95
lrzip -b -D -p 8 test.fastq: 473.94 s, 2781104 Kb
Decompressing…
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 68.387MB/s
[OK] – 2223860346 bytes
Total time: 00:00:30.69
lrzip -t -p 8 test.fastq.lrz: 30.77 s, 2583156 Kb
Decompressing…
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 66.250MB/s
Output filename is: test.fastq: [OK] – 2223860346 bytes
Total time: 00:00:31.92
lrzip -d -p 8 -D test.fastq.lrz: 32.00 s, 2583108 Kb
Had I not done all of these simple tests myself, by now I would think that this test was rigged to show how good pbzip2 and lbzip2 are at compressing FASTQ files.
ZPAQ back-end: lrzip -z -D -p 8: 292 380 439 bytes
test.fastq – Compression Ratio: 7.606. Average Compression Speed: 2.804MB/s.
Total time: 00:12:36.51
lrzip -z -D -p 8 test.fastq: 756.51 s, 3585740 Kb
Decompressing…
100% 2120.84 / 2120.84 MB 1:100% 2:100% 3:100% 4:100% 5:100% 6:100% 7:100%
Average DeCompression Speed: 3.970MB/s
[OK] – 2223860346 bytes
Total time: 00:08:54.57
lrzip -t -p 8 test.fastq.lrz: 534.65 s, 2583424 Kb
Decompressing…
100% 2120.84 / 2120.84 MB 1:100% 2:100% 3:100% 4:100% 5:100% 6:100% 7:100%
Average DeCompression Speed: 3.759MB/s
Output filename is: test.fastq: [OK] – 2223860346 bytes
Total time: 00:09:24.27
lrzip -d -p 8 -D test.fastq.lrz: 564.36 s, 2583460 Kb
Finally!!! We have compression better than bzip2! But it is also much slower than bzip2 (and some 12 times slower than pbzip2), so not really an option. Alas. And the decompression time is the worst in the test – almost 10 minutes for what plzip does in under 7 seconds! (I do realize that the compression ratio is also different – but not that much.) I wonder if the slow lrzip speeds have anything to do with test.fastq effectively residing in RAM? I do not know if there are any performance penalties to mmapping a file which is already on a RAM-mounted partition.
The test.fastq file that I used was somehow really hard for the tested compressors to tackle as fast and as well as lbzip2 and pbzip2 could…
Questions? Comments? Improvements, including plots of these figures? Comment below.
April 29th, 2015 at 23:46
Version 5.2 of xz is out, which does have multi-thread support. You may have to compile it yourself but it might be worth testing. I haven’t tested it myself yet.
I use xz for non-realtime compression (e.g. overnight backups), because although it’s slow, it’s so much better than bzip2 and, of course, if it’s overnight I don’t care if it takes half an hour or whatever to run.
April 30th, 2015 at 11:36
It is good to know, thanks. I was using the versions currently available in Debian testing. I guess I’ll make another comparison in a year or so.
I must say that even with multithreading, xz with default settings will likely be significantly slower than lbzip2 – on the order of 200+ seconds on the same test file and hardware, even assuming a really good parallelism implementation. For my use this is way too slow, and probably not worth the extra savings. Also, the more complicated xz file format looks like another drawback to me (harder to recover data).
Clearly, everyone’s needs are different, so I’m not saying that lbzip2 is much better overall – but it is for me.
September 30th, 2016 at 21:55
Try pxz – it’s a parallel version of xz and a drop-in replacement in terms of file format.
October 6th, 2016 at 21:17
Thanks Hmage, that sounds interesting. Maybe in my next installment of compressor testing I’ll include pxz, too.
I did eventually try a newer (already parallel, I think) version of xz on genomic data, and had mixed success.
lbzip2 sometimes achieved even better ratios, mostly just a little bit worse, rarely much worse, but was always many times faster.
October 11th, 2016 at 15:02
Could you please try that derivative of zpaq?
http://mattmahoney.net/dc/fastqz/
October 12th, 2016 at 23:15
Trotos,
your comment reminded me that I did mention Fastqz in my previous post on the topic: http://bogdan.org.ua/2013/10/17/favourite-file-compressor-gzip-bzip2-7z.html
Looks like I haven’t actually tested it, because of the concern that data recovery _might_ be too complicated with Fastqz.
For comparison, damage to a single block with bzip2 would only cause the loss of between 100 and 900 K of compressed data, which – for FASTQ files – will probably have negligible effects.
Another reason to not test it was that it is not clear if it will see any future support.
If, for example, a change in compiler makes it impossible to build fastqz without first modifying the code, then it’s… bad.
Maybe I’ll test it anyway – next time.