
Friday, April 1, 2011

Zhuff crowned as best Speed-Oriented compression program

After the recent upgrade of Zhuff to multi-threading, the program has once again been evaluated by the public benchmark website CompressionRatings.

And the results are fairly good: Zhuff takes first place in the Speed-Oriented ranking, leading the chart by a sizable margin.
Looking at the detailed benchmark chart, the lead over the previous king (the excellent 4x4 from Bulat Ziganshin) is especially telling, with +65% compression speed and +45% decoding speed. The difference would be even larger if bandwidth were not effectively limited by the RAM Drive speed. Finally, compression ratios remain fairly close, with a 4% advantage to 4x4.

As a side note, don't be fooled by the illusion that Zhuff and LZ4 are "about the same speed": this result is entirely due to RAM Drive I/O saturation. Without this limitation, LZ4 would fare much better, as can be guessed from its CPU occupation ratio (only 160% to reach saturation, which means 100% for the RAM Drive driver and "just" 60% (less than one core!) for the decoding algorithm).

Which leads us to the next question: how can we know which compressor is best suited for which speed?

Using CompressionRatings data, we can build an interesting graph based on a simple file transmission scenario: the file is compressed, sent over a limited-bandwidth pipe, then decoded. We make the simplifying assumption that there is no other delay in between. The total time is calculated and compared.
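For the curious, the calculation is trivial to reproduce: the small C sketch below simply sums the three stages, as described above (no overlap between them). The compression ratio and speeds used in main() are placeholder values for illustration, not CompressionRatings measurements.

#include <stdio.h>

/* Total time for the scenario above: compress, send over a pipe of limited
   bandwidth, then decompress, with no overlap between the three stages.
   The compressor figures used in main() are made-up placeholders. */
static double total_time(double srcMB, double ratio,
                         double compMBps, double decMBps, double pipeMBps)
{
    double sentMB = srcMB / ratio;          /* size actually pushed through the pipe */
    return srcMB  / compMBps                /* compression */
         + sentMB / pipeMBps                /* transmission */
         + sentMB / decMBps;                /* decompression */
}

int main(void)
{
    /* hypothetical compressor: ratio 2.5, compresses at 300 MB/s, decodes at 800 MB/s */
    for (double bw = 5; bw <= 320; bw *= 2)
        printf("pipe %3.0f MB/s -> %.2f s for a 100 MB file\n",
               bw, total_time(100, 2.5, 300, 800, bw));
    return 0;
}

Running this for each compressor's measured ratio and speeds produces one curve per compressor, which is exactly what the graph below shows.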

(click on graph for larger display)

Looking at the graph, Zhuff is the better choice for pipes ranging from 5 MB/s to 200 MB/s. Above that point, LZ4 takes the lead (the difference would be much more significant if there were no RAM Drive I/O limitation, but well, that's the data we've got). Below that point, compression power matters more, with 4x4 becoming competitive at its -3 level, then FreeArc and Nanozip getting into the fray.
It is also worth noting that more mainstream compressors, such as winrar, gzip, 7zip and even the parallel version pigz, do not reach the top position in this graph.

You can download the latest version of Zhuff at its homepage.

Tuesday, March 29, 2011

Ross Williams' LZRW

Once upon a time...
Well, okay, it's not that far, far away, but still, the early 90's seem like a prehistoric era for personal computing by now.
Ross Williams created a fairly interesting family of LZ compressors, featuring excellent speed and memory properties while still providing a good compression ratio (by the standards of the time), which he modestly called LZRW, for Lempel-Ziv-Ross Williams.

It has since become a classic among early compression algorithms, and it remains competitive even by today's standards. Indeed, it is also the basis of several modern compressors, such as QuickLZ.

The main idea behind LZRW is this: instead of providing an offset to the LZ decoder, provide it with the position of the match pointer within a table.

It may look complex or crazy at first, but it is quite logical when following Ross Williams' reasoning through his successive versions of LZRW:
Back in those days, computers featuring some 64KB of RAM were still commonplace. Better ones could reach an impressive 256KB. Since table sizes were therefore seriously limited, the capability to track all positions in the buffer to find the best match was, well, mostly out of reach.

LZRW does not target maximal compression, but a fast and practical (i.e. memory-efficient) compression level. By accepting a scarcity of references, it creates "holes" in the addressable space, which translate into compression inefficiency.
Hence the main idea: by sending the table position instead of the offset, there are no more holes in the addressable space: as long as the table is completely filled, all positions have roughly the same probability of appearing (well, roughly enough...). Furthermore, all matches, even long-range ones, are described with exactly the same number of bits, making them more efficient to encode.

The downside is that the decoder must maintain and reference the table exactly as the compressor did, making it more complex (and slower) than a more standard LZSS decoder. But this trade-off is quite acceptable, since the speed penalty is not too large.
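To make the mechanism concrete, here is a toy C sketch of the idea, not Ross Williams' actual LZRW format: the compressor transmits a 12-bit table slot instead of an offset, and the decoder replays the exact same table insertions on its own output so that the slot resolves to the same position. The token format, the hash function and the "insert an item start only once 3 bytes of context exist" rule are all illustrative assumptions, and there is no error checking, as befits a toy.

/* toy_lzrw.c -- illustrative only; not Ross Williams' actual LZRW format */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TABLE_BITS 12                 /* 4096 slots; a slot index costs 12 bits */
#define TABLE_SIZE (1u << TABLE_BITS)
#define MINMATCH   3
#define EMPTY      0xFFFFFFFFu

static uint32_t hash3(const uint8_t *p)   /* 3 bytes -> table slot */
{
    uint32_t v = p[0] | ((uint32_t)p[1] << 8) | ((uint32_t)p[2] << 16);
    return (v * 2654435761u) >> (32 - TABLE_BITS);
}

/* Both sides insert item-start positions into the table, but only once
   MINMATCH bytes of context exist, so the decoder can replay the exact
   same insertions on its own output. */
typedef struct { uint32_t table[TABLE_SIZE]; uint32_t pend[4]; int npend; } Ctx;

static void ctx_init (Ctx *c) { memset(c->table, 0xFF, sizeof c->table); c->npend = 0; }
static void ctx_push (Ctx *c, uint32_t itemStart) { c->pend[c->npend++] = itemStart; }
static void ctx_flush(Ctx *c, const uint8_t *buf, uint32_t pos)
{
    while (c->npend > 0 && c->pend[0] + MINMATCH <= pos) {
        c->table[hash3(buf + c->pend[0])] = c->pend[0];
        c->npend--;
        memmove(c->pend, c->pend + 1, c->npend * sizeof c->pend[0]);
    }
}

/* Item format: 0x00 + literal byte, or 0x01 + 2-byte slot index (LE) + 1-byte length */
static size_t toy_compress(const uint8_t *in, size_t len, uint8_t *out)
{
    Ctx c; ctx_init(&c);
    size_t pos = 0, o = 0;
    while (pos < len) {
        ctx_flush(&c, in, (uint32_t)pos);
        uint32_t itemStart = (uint32_t)pos, cand = EMPTY, h = 0;
        if (pos + MINMATCH <= len) { h = hash3(in + pos); cand = c.table[h]; }
        if (cand != EMPTY && memcmp(in + cand, in + pos, MINMATCH) == 0) {
            uint32_t mlen = MINMATCH;
            while (pos + mlen < len && mlen < 255 && in[cand + mlen] == in[pos + mlen]) mlen++;
            out[o++] = 0x01;
            out[o++] = (uint8_t)h; out[o++] = (uint8_t)(h >> 8);  /* slot index, not offset */
            out[o++] = (uint8_t)mlen;
            pos += mlen;
        } else {
            out[o++] = 0x00; out[o++] = in[pos++];
        }
        ctx_push(&c, itemStart);
    }
    return o;
}

static size_t toy_decompress(const uint8_t *in, size_t clen, uint8_t *out)
{
    Ctx c; ctx_init(&c);
    size_t i = 0, pos = 0;
    while (i < clen) {
        ctx_flush(&c, out, (uint32_t)pos);      /* same insertions as the compressor */
        uint32_t itemStart = (uint32_t)pos;
        if (in[i++] == 0x00) {
            out[pos++] = in[i++];
        } else {
            uint32_t slot = in[i] | ((uint32_t)in[i+1] << 8); i += 2;
            uint32_t mlen = in[i++];
            uint32_t src  = c.table[slot];      /* slot resolves to the same position */
            for (uint32_t k = 0; k < mlen; k++) out[pos + k] = out[src + k];  /* overlap-safe */
            pos += mlen;
        }
        ctx_push(&c, itemStart);
    }
    return pos;
}

int main(void)
{
    const char *msg = "abcabcabcabc -- a toy message with toy repetitions: abcabcabc";
    uint8_t comp[256], back[256];
    size_t clen = toy_compress((const uint8_t *)msg, strlen(msg), comp);
    size_t dlen = toy_decompress(comp, clen, back);
    printf("original %zu, compressed %zu, roundtrip %s\n", strlen(msg), clen,
           (dlen == strlen(msg) && memcmp(msg, back, dlen) == 0) ? "OK" : "FAILED");
    return 0;
}

The deferred-insertion rule is just one way to keep both tables in lockstep; the real LZRW variants solve the same synchronization problem with their own update rules, but the principle is identical: the decoder's extra work is rebuilding the table, which is exactly the speed penalty discussed above.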

The LZRW scheme features a "best spot" for optimal compression ratio: the larger the table, the better its match-finding capability, hence a better compression ratio. However, the larger the table, the more bits are required to describe a table slot to the decoder (exactly one extra bit per match each time the table doubles), hence a worse compression ratio.

The "best spot" is found where both forces collide.

Or is it?
One problem is that the optimal number of bits is not the same for every file to be compressed.
For example, with "enwik8", this optimal number is 14 bits.
For "open office", it is 19 bits.
For very small files, this number can go down to 10 or less.
Therefore, the question is: how do we find the optimal table size, the one giving the optimal compression ratio, for any given input?

Alas, that question has no easy answer, and I have yet to find one myself...
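Short of an elegant answer, the obvious (if unsatisfying) fallback is brute force: parse the input once per candidate table size and keep the size that gives the smallest output. The sketch below does exactly that with a rough cost model; the 9-bit literal and (1 + nbBits + 8)-bit match token costs, like the synthetic input, are assumptions for illustration, not LZRW's real format nor real benchmark files.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Greedy hash-table parse of `in`, charging 9 bits per literal and
   (1 + nbBits + 8) bits per match (flag + table slot + length byte).
   These token costs are assumptions of a toy format. */
static uint64_t estimate_bits(const uint8_t *in, size_t len, int nbBits)
{
    size_t tableSize = (size_t)1 << nbBits;
    uint32_t *table = malloc(tableSize * sizeof *table);
    memset(table, 0xFF, tableSize * sizeof *table);      /* all slots empty */
    uint64_t bits = 0;
    size_t pos = 0;
    while (pos + 3 <= len) {
        uint32_t v = in[pos] | ((uint32_t)in[pos+1] << 8) | ((uint32_t)in[pos+2] << 16);
        uint32_t h = (uint32_t)((v * 2654435761u) >> (32 - nbBits));
        uint32_t cand = table[h];
        table[h] = (uint32_t)pos;
        if (cand != 0xFFFFFFFFu && memcmp(in + cand, in + pos, 3) == 0) {
            size_t mlen = 3;
            while (pos + mlen < len && mlen < 255 && in[cand + mlen] == in[pos + mlen]) mlen++;
            bits += 1 + (unsigned)nbBits + 8;   /* match token */
            pos += mlen;
        } else {
            bits += 9;                          /* literal token */
            pos += 1;
        }
    }
    bits += 9 * (uint64_t)(len - pos);          /* trailing literals */
    free(table);
    return bits;
}

int main(void)
{
    /* synthetic, mildly repetitive input; a real test would read enwik8 etc. from disk */
    size_t len = 1u << 20;
    uint8_t *buf = malloc(len);
    for (size_t i = 0; i < len; i++) buf[i] = (uint8_t)('a' + (i % 7) + (i / 4096) % 3);
    int bestBits = 0; uint64_t bestSize = (uint64_t)-1;
    for (int nbBits = 10; nbBits <= 20; nbBits++) {
        uint64_t sz = estimate_bits(buf, len, nbBits) / 8;
        printf("table bits %2d -> ~%llu bytes\n", nbBits, (unsigned long long)sz);
        if (sz < bestSize) { bestSize = sz; bestBits = nbBits; }
    }
    printf("best spot for this input: %d bits\n", bestBits);
    free(buf);
    return 0;
}

Of course, on real inputs this means one full extra pass per candidate size, which is precisely why it does not qualify as an easy answer.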