Monday, December 12, 2011
Advanced Parsing Strategies
Getting a better compression ratio with LZ algorithms requires more than just looking for long matches. It also requires some "parsing".
To explain it simply, one might assume that, once the "best possible match" at the current position has been found, it just has to be encoded before moving forward.
But in fact, there are better possibilities. This naive encoding strategy is called "greedy match". A proven improvement is to check whether, at the next position, a better (i.e. longer) match exists. If it does, the current match is dropped in favor of the next one. This is called "lazy match", and it typically improves the compression ratio by 1-2%, which is not bad considering that the compression format remains unaffected.
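To illustrate, here is a minimal sketch of the lazy match heuristic (illustrative C, not any particular codec's implementation); findLongestMatch(), encodeLiteral() and encodeMatch() are hypothetical helpers:

    #include <stddef.h>
    #define MINMATCH 4

    /* hypothetical helpers provided by the compressor */
    size_t findLongestMatch(const unsigned char* in, size_t pos);
    void   encodeLiteral(unsigned char c);
    void   encodeMatch(const unsigned char* in, size_t pos, size_t length);

    void lazyParse(const unsigned char* in, size_t size)
    {
        size_t pos = 0;
        while (pos < size)
        {
            size_t ml = findLongestMatch(in, pos);
            if (ml < MINMATCH) { encodeLiteral(in[pos]); pos++; continue; }

            /* lazy match : is there a longer match at the next position ? */
            if (pos+1 < size && findLongestMatch(in, pos+1) > ml)
            {
                encodeLiteral(in[pos]);   /* drop the current match in favor of the next one */
                pos++;                    /* the longer match will be picked up on the next loop */
                continue;
            }

            encodeMatch(in, pos, ml);     /* greedy path : encode and move forward */
            pos += ml;
        }
    }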
Lazy match is simple enough to understand, and clearly illustrates the trade-off at stake: spend more searches in order to find better (longer) matches and improve the compression ratio.
At the other extreme, there is a strategy called "Optimal parsing", which consists in computing a series of matches at each position, and then selecting the best ones by a reverse traversal computation (like a "shortest path" algorithm). Optimal parsing requires a huge amount of computation, especially at the search step, since each and every position must be fully verified with a complete list of match candidates in ascending order.
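As a rough illustration of the principle, the reverse traversal can be sketched as a backward "shortest path" pass, where cost[p] is the cheapest way to encode everything from position p to the end; the cost functions and per-position match lists below are hypothetical placeholders:

    /* Illustrative sketch of optimal parsing as a backward "shortest path" pass.
       cost[p] = minimum cost (in bits) to encode the input from position p to the end.
       literalCost(), matchCost(), and the per-position match lists
       (nbMatches, matchLength, matchOffset) are hypothetical placeholders. */
    cost[size] = 0;
    for (ptrdiff_t p = (ptrdiff_t)size - 1; p >= 0; p--)
    {
        cost[p] = literalCost() + cost[p+1];          /* default decision : one literal */
        for (int m = 0; m < nbMatches[p]; m++)        /* try every match candidate at p */
        {
            size_t ml = matchLength[p][m];
            size_t c  = matchCost(matchOffset[p][m], ml) + cost[p + ml];
            if (c < cost[p]) cost[p] = c;             /* keep the cheapest decision, remembered for a forward pass */
        }
    }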
Not willing to pay too much on the CPU side, I've tried to find a middle way, which would mostly emulate the "optimal parsing" result but with a CPU cost closer to lazy match.
The main idea is as follows:
Suppose I find a match of length ML at position P.
The only reason I would not want to encode it immediately is if there exists a better match between P and P+ML.
We assume that, at position P, we have found the best possible match. Then, the smallest possible "better match" must have a length ML+1 and start at position P+1.
Consequently, the "smallest and closest better match" must end at position (P+1) + (ML+1) = P+ML+2.
Therefore, should such a better match exist, I'll find it by searching for the sequence stored at position P+ML+2-MINMATCH. Moreover, should any other better match exist, it will necessarily include the sequence stored at position P+ML+2-MINMATCH. So I will get all of them, and consequently the best of them, by simply searching for this sequence.
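To make the arithmetic concrete with an illustrative example: with MINMATCH = 4, ML = 8 and P = 100, the smallest possible better match has length 9 and starts at 101, so it ends at position 110 = P+ML+2; any better match must therefore contain the 4 bytes stored at positions 106 to 109, i.e. the sequence at position 110 - 4 = P+ML+2-MINMATCH.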
Compared to Optimal Parsing, which requires a full search at each position, this is a huge speed improvement.
However, the search is more complex than it sounds. To begin with, the "longest match" may start at any position between P and P+ML+2-MINMATCH. To find it, it is necessary to check the match length in both the forward and backward directions.
This means it's not possible to use advanced match finders, such as BST or MMC, which tend to eliminate unpromising candidates. They are not suitable here, since part of the total match length may be achieved in the backward direction. Therefore, each and every stream which contains the searched sequence must be checked.
In these circumstances, the search algorithm is basically limited to a Hash Chain, which is only suitable for "short range" searches. That's a first limitation.
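To give an idea of what a candidate check looks like under these constraints, here is a sketch; the hash-chain helpers are illustrative names (not an actual library API), 'target' stands for P+ML+2-MINMATCH, and in/size/P/ML are assumed to be in scope:

    /* Sketch : each hash-chain candidate for the sequence at 'target'
       is extended in both directions, since the better match may start
       well before 'target'. */
    size_t bestLength = ML;                      /* must beat the current match */
    size_t bestStart  = P;
    for (size_t cand = hashChainFirst(in, target);
         cand != NO_CANDIDATE;
         cand = hashChainNext(cand))
    {
        size_t fwd = 0;                          /* forward extension */
        while (target + fwd < size && in[target + fwd] == in[cand + fwd]) fwd++;

        size_t bwd = 0;                          /* backward extension, never past P */
        while (target - bwd > P && cand - bwd > 0
            && in[target - bwd - 1] == in[cand - bwd - 1]) bwd++;

        if (bwd + fwd > bestLength)
        {
            bestLength = bwd + fwd;
            bestStart  = target - bwd;           /* the better match starts before 'target' */
        }
    }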
If no better match is found, I can safely write the current best match, and then restart the search.
If a better match is found, it means we have overlapping matches, with the second one being better than the first. It would be tempting to simply shorten the first match, in order to "make room" for the longer one, and then restart the process, but alas it's more complex than that. The decision will depend on P3.
Let's analyse the situation more thoroughly.
We have 3 overlapping matches, at positions P1, P2 & P3, with:
ML3 > ML2 > ML1
P2 <= E1 (= P1 + ML1)
P3 <= E2 (= P2 + ML2)
If P3 < E1, then P2 is discarded, P3 becomes P2, and we search for a new P3.
If P3 = E1, then P1 can be encoded, P2 is discarded, P3 becomes P1, and we continue the search.
Obviously, the situation is less trivial in the more common case where P3 > E1.
If P3 < E1 + VerySmallLength, then it's very probable that P2 will not pay for itself: a match costs an offset and a length, which can cost more than a few literals. Dealing with this case is almost equivalent to P3 = E1, so we'll consider it closed.
If P3 - P2 > ML1, then we know that the 2nd match will end up longer than the 1st one. So we can shorten P1's length to ML1 = P2 - P1. Then we can encode P1, P2 becomes P1, P3 becomes P2, and we continue the search.
So now we are left with the difficult case. We have 3 consecutive matches, the boundary between P2 and P3 is not yet settled (we would need P4 to decide), and this decision in turn impacts the boundary between P1 and P2.
Let's take an example:
P1 = 0
ML1 = 8
E1 = P1 + ML1 = 8
P2 = 5
ML2 = 10
E2 = P2 + ML2 = 15
P3 = 12
ML3 = 12
P3 - E1 = 4: this is enough to "pay for P2".
But should I do:
ML1 = 8; ML2 = 4
or
ML1 = 5; ML2 = 7 ?
They look equivalent, but they are not, since the entropy difference between 4 and 5 is likely to be higher than the entropy difference between 7 and 8.
Therefore, if we keep the objective of a total length of 12, the first choice is the best one.
However, it's not yet certain that P3 will effectively start at 12. It all depends on P4, which is not yet known.
For example, if P4 = E2 = 15, then obviously P3 disappears, and the better choice is ML1 = 5, ML2 = 10.
But if no P4 is found, then P3 = 12, and the better choice is ML1 = 8, ML2 = 4.
So, if we want to decide now, we have to accept that the best choice is not yet known, and settle for something sub-optimal. Since we are already working on an approximation, this should not be considered a big issue, as long as the approximation is "good enough".
For programming simplicity, we'll assume that the second solution is the better one, shortening ML1 as necessary to make room for P2.
This means we can now encode P1; P2 becomes P1, P3 becomes P2, and we continue the process.
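Putting these rules together, the selection logic could be sketched roughly as follows (illustrative C, not Zhuff's actual code; emitMatch() and VERY_SMALL_LENGTH are placeholder names, and E1 = P1 + ML1):

    /* Simplified sketch of the overlap resolution described above. */
    if (P3 < E1)
    {
        /* P2 is dominated : discard it, P3 takes its place, search a new P3 */
        P2 = P3; ML2 = ML3;
    }
    else if (P3 <= E1 + VERY_SMALL_LENGTH)
    {
        /* P2 will probably not pay for itself : encode P1 in full, discard P2 */
        emitMatch(P1, ML1);
        P1 = P3; ML1 = ML3;
    }
    else
    {
        /* P2 is kept : shorten P1 to make room for it (this covers both the
           clear-cut case P3 - P2 > ML1 and the simplified "difficult" case) */
        emitMatch(P1, P2 - P1);
        P1 = P2; ML1 = ML2;
        P2 = P3; ML2 = ML3;
    }
    /* in every branch, the search then continues for a new P3 */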
So here we have an algorithm, which tries to create a "nearly optimal" match distribution, taking local decisions to favor longer matches.
As a nice property, it searches only N+1 lists of matches per series of N consecutive matches, which is a great time saver compared to Optimal Parsing.
Time for some evaluation. How well does it fare?
Well, compared to the deep lazy match strategy implemented in Zhuff (-c2 mode), surprisingly little. The compression ratio barely improved, by about 1%. At least, it's a gain...
There are several reasons to explain this disappointing result.
First, the deep lazy match strategy of Zhuff v0.8 -c2 is already an advanced parsing algorithm. It heavily favors larger matches, and is complemented by a small memory to re-fill skipped space. So it's actually pretty good, which limits the remaining potential savings.
But the deep lazy match strategy of Zhuff 0.8 requires on average 3xN forward searches per N matches. This contrasts with the strategy in this post, which only requires N+1 searches, albeit more complex ones (forward & backward). As a consequence, the new strategy is 50% faster than the previous deep lazy match one. Now it looks better.
Second, the proposed strategy is based only on maximizing match length, deliberately ignoring any other cost contributor, such as, for example, offset length.
In a variable offset representation, small offsets are very likely to cost less, if not much less, than larger ones. It's still possible to use this property "tactically", by selecting the smallest known offset able to provide the selected match length, but that's just an opportunistic gain, not an advantage that the selection algorithm takes into consideration.
Moreover, the full list of candidates can only be built for the first position P1. For the next ones (P2, P3), only a reduced list is likely to be found, since we only learn the positions of P2 and P3 during the backward search. As a consequence, quite frequently, only a single offset is known (the one of the longest match). This further limits the "opportunistic" offset selection gain.
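For reference, the "tactical" selection mentioned above amounts to something like this (a sketch; the candidate list structure and names are hypothetical):

    /* Among the candidates known to reach the selected length, keep the
       smallest offset, since it is likely the cheapest one to encode. */
    size_t bestOffset = (size_t)-1;
    for (int i = 0; i < nbCandidates; i++)
    {
        if (candidates[i].length >= selectedLength
         && candidates[i].offset < bestOffset)
            bestOffset = candidates[i].offset;
    }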
For Zhuff, the offset cost varies from 9 to 22 bits. More realistically, it hovers between 12 and 21 bits. That's more than one byte of difference. Knowing that a literal is also compressed, with an average cost of about 6 bits per literal (depending on the source file), a long offset (say 21 bits) costs quite a bit more than a small offset plus one literal (e.g. 12 + 6 = 18 bits).
In fact, to select the better choice, it would be necessary to properly estimate, at each step, the cost of each contributor: offset, match length, literals and flags.
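Such an estimation would boil down to a small cost function along these lines (a sketch; offsetCost(), lengthCost() and flagCost() are hypothetical, and the literal figure just reuses the ~6 bits average mentioned above):

    /* Sketch of a per-sequence cost estimation, in bits. */
    static unsigned sequenceCost(unsigned offset, unsigned matchLength, unsigned nbLiterals)
    {
        unsigned cost = nbLiterals * 6;   /* compressed literals, ~6 bits each on average */
        cost += offsetCost(offset);       /* ~9 to 22 bits depending on the offset */
        cost += lengthCost(matchLength);  /* variable-length code for the match length */
        cost += flagCost();               /* sequence descriptor / token */
        return cost;
    }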
Well, that's the whole point of Optimal Parsing...