and is now officially a Facebook Open Source project :
https://code.facebook.com/posts/1658392934479273/smaller-and-faster-data-compression-with-zstandard
Edit : grab latest release at : https://github.com/facebook/zstd/releases
Wednesday, August 31, 2016
Tuesday, July 5, 2016
Specification of Zstandard compression format
With the compression format stabilized in the v0.7.x series, Zstandard now gets a first version of its formal specification :
https://github.com/Cyan4973/zstd/wiki/Zstandard-Compression-Format
If you ever wanted to know how the algorithm works, and / or wanted to create your own version in any language of your choice, this is the place to start.
It is a first version though, with the usual caveats : expect it to be imperfect, requiring a few rounds of feedback and modification before it becomes unambiguous and clear.
This is an open public consultation phase : all feedback is welcome.
It's also the very last chance to review the different choices that made it into the format, to raise questions, and possibly to suggest improvements or simplifications.
I don't expect "big changes", but maybe a collection of very minor things which could, collectively, justify a last polishing pass before pushing to v1.0.
Edit : Indeed, there will be a polishing stage...
Writing the specification made it possible to get a complete view of the multiple choices that made it into the format. Retrospectively, some of these choices are similar yet slightly different. For example, encoding types exist for all symbols, but are not numbered the same way; most fields are little-endian, but some are big-endian; some corner-case optimizations trigger so rarely that they are not worth their complexity; etc.
Therefore, in an effort to properly unify every minor detail of the specification and bring a few simplifications, a last modification round will be performed. It will be released as 0.8. No major change to expect, only a collection of minor ones. But a change is a change, so it's nonetheless a new format.
As usual, 0.8 will be released with a "legacy mode", allowing reading data already compressed with 0.7.x series and before.
Unusually though, we plan to release a "v0.7 transition" version, able to read data created with v0.8, in order to smooth the transition for live systems which depend on listeners / producers, and need to ensure all listeners can read the data sent to them before upgrading to 0.8.
Edit 2 :
v0.8.0 and "transition v0.7.5" have been released
Friday, June 17, 2016
Zstandard reaches Candidate Compression Format
Finally. That was a pretty long journey.
With the release of v0.7, Zstandard has reached an important milestone : the compression format is now stable and complete enough to become a candidate for v1.0.
We don't call it v1.0 yet, because it's safer to spend some time in a "confirmation period", during which the final compression format is field-tested. This period shall confirm the format's ability to meet its objectives, handling all the situations it is designed for.
Then it will be rebranded v1.0.
With the source code out, it's also time to think about other supportive actions, such as documentation. The next priority task is to write a specification of the compression format, so that it can be better exposed, understood and implemented by third parties. The goal is that any third party should be able to create its own version. However, describing an algorithm in a way which is clear and concise is not trivial. It's expected that some paragraphs will need a rewrite to become clearer and more complete, reducing sources of confusion. This effort will therefore benefit from user exposure and feedback.
It's also time to have some more involved discussions around the API.
The current "stable API" is expected to remain, but its scope is also limited, providing mostly the "basics", such as compressing a buffer into another buffer. More complex usages are possible (streaming in memory-constrained environment using a custom allocator for example), but need to access advanced prototypes, exposed in the "experimental" section.
Now will be a good time to seriously consider extending the scope of "stable API".
Just "promoting" a prototype from "experimental" to "stable" is not necessarily the better way to go. It's important that the extended API remain simple enough to understand and use (which is not the main priority of "experimental" section).
After being immersed in the code for so long, some technical complexities can become invisible, while becoming real obstacles to newcomers. Therefore, it's important to "think" extended API properly, to create interfaces easy to use.
For this objective, the key is to listen 3rd parties, in order to better fit natural expectations.
Friday, May 13, 2016
Finalizing a compression format
With Zstandard v1.0 looming ahead, the last major item for zstd to settle is an extended set of features for its frame encapsulation layer.
Quick overview of the design : data compressed by zstd is cut into blocks. A compressed block has a maximum content size (128 KB), so obviously if input data is larger than this, it will have to occupy multiple blocks.
The frame layer organizes these blocks into a single content. It also provides the decoder with a set of properties that the encoder pledges to respect. These properties allow a decoder to prepare the required resources, such as allocating enough memory.
The current frame layer only stores 1 identifier and 2 parameters :
- frame Id : it simply tells which frame and compression formats follow. This is currently used to automatically detect legacy formats (v0.5.x, v0.4.x, etc.) and select the right decoder for them. It occupies the first 4 bytes of a frame.
- windowLog : This is the maximum search distance that will be used by the encoder. It is also the maximum block size, when (1<<windowLog) < MaxBlockSize (== 128 KB). This is enough for a decoder to guarantee successful decoding operation using a limited buffer budget, whatever the real content size is (endless streaming included).
- contentSize : This is the amount of data to decode within this frame. This information is optional. It can be used to allocate the exact amount of memory for the object to decode.
This information may seem redundant.
Indeed, in a few situations, it is : when contentSize < (1<<windowLog). In that case, it's enough to allocate contentSize bytes for decoding, and windowLog is just redundant.
But in all other situations, windowLog is useful : either contentSize is unknown (it wasn't known at the beginning of the frame and was only discovered on frame termination), or windowLog defines a smaller memory budget than contentSize, and can be used to limit the memory budget.
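To illustrate, here is how a decoder might derive its memory budget from these two fields (a minimal sketch; the struct, the function name, and the convention that contentSize == 0 means "unknown" are assumptions for illustration, not the actual zstd implementation) :

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical frame parameters, as described above. */
typedef struct {
    unsigned windowLog;              /* max copy distance = 1 << windowLog */
    unsigned long long contentSize;  /* assumption: 0 encodes "unknown" */
} FrameParams;

/* Memory needed to decode one frame :
 * contentSize when it is known and smaller than the window,
 * otherwise the window size announced by windowLog. */
static size_t decodingBudget(FrameParams p)
{
    unsigned long long const windowSize = 1ULL << p.windowLog;
    if (p.contentSize != 0 && p.contentSize < windowSize)
        return (size_t)p.contentSize;   /* windowLog is redundant here */
    return (size_t)windowSize;          /* bounded budget, even for endless streams */
}
```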
That's all there is for v0.6.x. Arguably, that's a pretty small list.
The intention is to create a more feature complete frame format for v1.0.
Here is a list of features considered, in priority order :
- Content Checksum : objective is to validate that decoded content is correct.
- Dictionary ID : objective is to confirm or detect dictionary mismatch, for files which require a dictionary for correct decompression. Without it, a wrong dictionary could be picked, resulting in silent corruption (or an error).
- Custom content, aka skippable frames : the objective is to allow users to embed custom elements (comments, indexes, etc.) within a file consisting of multiple concatenated frames.
- Custom window sizes, including non power of 2 : extend current windowLog scheme, to allow more precise choices.
- Header checksum : validate that header information is not accidentally distorted.
Content checksum
The goal of this field is obvious : validate that decoded content is correct. But there are many little details to select.
Content checksum only protects against accidental errors (transmission, storage, bugs, etc). It's not an electronic "signature".
1) Should it be enabled or disabled by default (field == 0) ?
Suggestion : disabled by default
Reasoning : there are already a lot of checksums around, in storage, in transmission, etc. Consequently, errors are now pretty rare, and when they happen, they tend to be "large" rather than sparse. Also, zstd is likely to detect errors just by parsing the compressed input anyway.
2) Which algorithm ? Should it be selectable ?
Suggestion : xxh64, with an additional header bit reserved in case another checksum is defined later, but just a single one defined in v1.
Reasoning : we have transitioned to a 64-bit world. 64-bit checksums are faster to generate than 32-bit ones on such systems, so let's use the faster one.
xxh64 also has excellent distribution properties, and is highly portable (no dependency on hardware capability). It can be run in 32-bit mode if need be.
3) How many bits for the checksum ?
The current format defines the "frame end mark" as a 3-byte field, the same size as a block header, which is no accident : it makes parsing easier. This field has a 2-bit header, hence 22 bits free, which can be used for a content checksum. This wouldn't increase the frame size.
22 bits means there is a 1 in 4 million chance of missing an error. Said differently, there are 4194303 chances out of 4194304 to detect a decoding error (on top of all the syntax verifications which are inherent to the format itself). That's more than 99.9999 %. Good enough in my view.
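For illustration, deriving such a 22-bit check from xxh64 could look like this (a sketch using the public xxHash API; which 22 bits of the digest the format actually retains is a spec decision, so keeping the low bits below is just an assumption) :

```c
#include "xxhash.h"   /* https://github.com/Cyan4973/xxHash */
#include <stdint.h>
#include <stddef.h>

/* Fold a 64-bit xxh64 digest down to 22 bits, to fit the free bits
 * of the 3-byte frame-end mark. Keeping the low bits is an
 * illustrative assumption; the spec decides which bits are kept. */
static uint32_t contentCheck22(const void* decoded, size_t size)
{
    uint64_t const h = XXH64(decoded, size, 0);
    return (uint32_t)(h & ((1u << 22) - 1));
}
```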
Dictionary ID
Data compressed using a dictionary needs the exact same dictionary to be regenerated. But no check is done on the dictionary itself, so selecting the wrong dictionary can result in data corruption.
The corruption is likely to be detected by parsing the compressed format (or thanks to the previously described optional content checksum field).
But an even better outcome would be to detect such a mismatch immediately, before starting decompression, and with a clearer error message / id than "corruption", which is too generic.
For that, it would be enough to embed a "Dictionary ID" into the frame.
The Dictionary ID would simply be a random value stored inside the dictionary (or an assigned one, provided the user has a way to ensure the same value isn't re-used multiple times). A comparison between the ID in the frame and the ID in the dictionary is enough to detect a mismatch.
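A sketch of the intended check (the names, and the convention that ID 0 means "no ID recorded", are illustrative assumptions, not the actual API) :

```c
#include <stdint.h>

typedef enum { CHECK_ok = 0, ERROR_dictionary_wrong = 1 } CheckResult;

/* Compare the ID found in the frame header against the ID of the
 * dictionary provided by the application. A mismatch is reported
 * immediately, before any decompression work starts. */
static CheckResult checkDictionary(uint32_t frameDictID, uint32_t providedDictID)
{
    if (frameDictID != 0   /* assumption: 0 means "no ID recorded" */
     && frameDictID != providedDictID)
        return ERROR_dictionary_wrong;   /* clearer than a generic "corruption" */
    return CHECK_ok;
}
```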
A simple question is : how long should this ID be ? 1, 2, 4 bytes ?
In my view, 4 bytes is enough for a random-based ID, since it makes the probability of collision very low. But that's still 4 more bytes to fit into the frame header. In some ways it can be considered an efficiency issue.
Maybe some people will prefer 2 bytes ? or maybe even 1 byte (notably for manually assigned ID values) ? or maybe even 0 bytes ?
It's unclear, and I guess multiple scenarios will have different answers.
So maybe a good solution would be to support all 4 possibilities in the format, and default to a 4-byte ID when using dictionary compression.
Note that if saving header bytes is important for your scenario, it's also possible to use the frame-less block format ( ZSTD_compressBlock(), ZSTD_decompressBlock() ), which removes any frame header, saving 12+ bytes in the process. It looks like a small saving, but when the corpus consists of lots of small messages of ~50 bytes each, it makes quite a difference. The application will then have to save the metadata on its own (which dictionary to use, compressed size, decompressed size, etc.).
Custom content
Embedding custom content can be useful for a lot of unforeseen applications.
For example, it could contain a custom index into compressed content, or a file descriptor, or just some user comment.
The only thing a standard decoder can do is skip this section. Dealing with its content is within the application-specific realm.
The lz4 frame format already defines such a container, as skippable frames. It looks good enough, so let's re-use the same definition.
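For reference, a decoder skips such a frame with just a little arithmetic; here is a sketch following the lz4 definition (a 4-byte little-endian magic number in the range 0x184D2A50 to 0x184D2A5F, then a 4-byte little-endian size, then the user data) :

```c
#include <stdint.h>
#include <stddef.h>

/* Read a 4-byte little-endian value. */
static uint32_t readLE32(const uint8_t* p)
{
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* If `src` starts with a skippable frame, return the number of bytes
 * to skip (header + user data); return 0 otherwise. A standard decoder
 * ignores the content entirely. A real implementation would also check
 * the result against srcSize before jumping. */
static size_t skippableFrameSize(const uint8_t* src, size_t srcSize)
{
    if (srcSize < 8) return 0;
    uint32_t const magic = readLE32(src);
    if ((magic & 0xFFFFFFF0u) != 0x184D2A50u) return 0;  /* not skippable */
    return 8 + (size_t)readLE32(src + 4);                /* magic + size + data */
}
```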
Custom window sizes
The current frame format allows window sizes from 4 KB to 128 MB, all intermediate sizes being strict powers of 2 (8 KB, 16 KB, etc.). It works fine, but maybe some users will find its granularity or limits insufficient.
There are 2 parts to consider :
- Allowing larger sizes : the current implementation will have trouble handling window sizes > 256 MB. That being said, it's an implementation issue, not a format issue. An improved version could likely work with larger sizes (at the cost of some complexity).
From a frame format perspective, allowing larger sizes can be as easy as keeping a reserved bit for later.
- Non-power-of-2 sizes : the good news is that zstd's internals are not tied to a specific power of 2, so the problem is limited to transmitting more precise window sizes. This requires more header bits.
Maybe an unsigned 32-bit value would be good enough for such use.
Note that it doesn't make sense to specify a window size larger than the content size. Such a case should be automatically avoided by the encoder. As for the decoder, it's unclear how it should react : stop and issue an error ? proceed and allocate the larger window size ? or use the smaller content size, and issue an error if the content ends up larger than that ?
Anyway, in many cases, what the user is likely to want is simply enough size for the frame content. In that case, a simple "refer to frame content size" is probably the best solution, with no additional field needed.
Header Checksum
The intention is to catch errors in the frame header before they translate into larger problems for the decoder. Note that only errors can be caught this way : intentional data tampering can simply rebuild the checksum, hence remain undetected.
Suggestion : this is not necessary.
While transmission errors used to be more common a few decades ago, they are much less of a threat today, and when they do happen, they tend to garble large sections (not just a few bits).
An erroneous header can nonetheless be detected just by parsing it, given the number of reserved bits and forbidden values, which must all be validated.
The nail in the coffin is that we no longer trust headers, as they can be abused by remote attackers to deliver an exploit. And that's an area where a header checksum is simply useless. Every field must be validated, and all accepted values must have controllable effects (for example, if an attacker intentionally requests a lot of memory, the decoder shall put a high limit on the accepted amount, and check the allocation result).
So we already are highly protected against errors, by design, because we must be protected against intentional attacks.
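In code, this by-design protection boils down to systematic validation, along these lines (a sketch; the mask and limit constants are illustrative, not the actual format values) :

```c
#include <stdint.h>
#include <stdlib.h>

#define RESERVED_BITS_MASK   0xE0u        /* illustrative position of reserved bits */
#define DECODER_MEMORY_LIMIT (1u << 27)   /* example cap : 128 MB */

typedef enum { HDR_ok = 0, ERROR_reserved = 1, ERROR_memoryTooLarge = 2 } HeaderResult;

/* Validate every header field instead of trusting a checksum :
 * reserved bits must be zero, and any requested memory must fit
 * under a decoder-side limit, with the allocation result checked. */
static HeaderResult validateHeader(uint8_t flags, uint64_t requestedWindowSize,
                                   void** buffer)
{
    if (flags & RESERVED_BITS_MASK) return ERROR_reserved;
    if (requestedWindowSize > DECODER_MEMORY_LIMIT) return ERROR_memoryTooLarge;
    *buffer = malloc((size_t)requestedWindowSize);
    if (*buffer == NULL) return ERROR_memoryTooLarge;  /* allocation checked too */
    return HDR_ok;
}
```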
Future features : forward and backward compatibility
It's also important to design from day 1 a header format able to safely accommodate future features, with regards to version discrepancy.
The basic idea is to keep a number of reserved bits for these features, set to 0 while waiting for some future definition.
It also seems interesting to split these reserved bits into 2 categories :
- Optional and skippable features : these are features which a decoder can safely ignore, without jeopardizing decompression result. For example, a purely informational signal with no impact on decompression.
- Future features, disabled by default (0) : these features can have an unpredictable impact on the compression format, such as adding a new field costing a few more bytes. A non-compatible decoder cannot take the risk of proceeding with decompression. It must stop upon detecting such a reserved bit set to 1, and give an error message.
While it's great to keep room for the future, it should not take too much of a toll on the present. So only a few bits will be reserved. If more are needed, it simply means another frame format is necessary, and it's enough in that case to use a different frame identifier (the first 4 bytes of a frame).
Sunday, April 3, 2016
Working with streaming
Streaming is an advanced and very useful processing mode that a few codecs offer to deal with small data segments. It is great in communication scenarios : for lossless data compression, it makes it possible to send tiny packets for low-latency interaction, while preserving strong compression capabilities by using previously sent data to compress the following packets.
Ideally, on the encoding side, the user should be able to send any amount of data, from the smallest possible (1 byte) to much larger amounts (~MB). It's up to the encoder to decide how to deal with this. It may group several small fields into a single packet, or conversely break larger ones into multiple packets. In order to avoid any unwanted delay, a "flush" command shall be available, so that the user can decide when it's time to send the buffered data.
On the other side, a compatible decoder shall be able to cope with whatever data was sent by the encoder. This obviously requires a bit of coordination, a set of shared rules.
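To make this contract concrete, here is roughly what it looks like with the streaming API that zstd eventually shipped (ZSTD_compressStream2 and ZSTD_e_flush arrived well after this post, so treat this as a present-day sketch of the idea rather than the v0.6 interface) :

```c
#include <zstd.h>     /* link with -lzstd */
#include <string.h>
#include <stdio.h>

/* Compress one small message and force it out immediately :
 * ZSTD_e_flush ends the current block so the receiver can decode
 * everything sent so far, at a small cost in compression ratio. */
static int sendMessage(ZSTD_CCtx* cctx, const char* msg,
                       char* dst, size_t dstCapacity)
{
    ZSTD_inBuffer  in  = { msg, strlen(msg), 0 };
    ZSTD_outBuffer out = { dst, dstCapacity, 0 };
    size_t const remaining = ZSTD_compressStream2(cctx, &out, &in, ZSTD_e_flush);
    if (ZSTD_isError(remaining)) return -1;
    /* out.pos bytes are now ready to be sent on the wire */
    printf("flushed %zu input bytes into %zu compressed bytes\n", in.pos, out.pos);
    return 0;
}
```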
The zip format defines a maximum copy distance (32 KB). Data is sent as a set of blocks, but there is no maximum block size (except non-compressed blocks, which must be <= 64 KB).
A compatible zip decoder must be able to cope with these conditions. It must keep up to 32 KB of previously received data, and be able to break the decoding operation in the middle of a block, should it receive a block too large to fit into its memory buffer.
Thankfully, once this capability is achieved, it's possible to decode with a buffer size of 32 KB + maximum chunk size, with "chunk size" being the maximum size the decoder can decode from a single block. In general, it's a bit more than that, in order to ease a few side-effects, but we won't go into details.
The main take-away is : buffer size is a consequence of maximum copy distance, plus a reasonable amount of data to be decoded in a single pass.
zstd's proposition is to reverse the logic : the size of the decoder buffer is set and announced in its frame header. The decoder can safely allocate the requested amount of memory. It's up to the encoder to respect this condition (otherwise, compressed data is considered corrupted).
In current version of the format, this buffer size can vary from 4 KB to 128 MB. It's a pretty wide range, and crucially, it includes possibilities for small memory footprint. A decoder which can only handle small buffer sizes can immediately detect and discard frames which ask for more than its capabilities.
Once the buffer size is settled, data is sent as "blocks". Each block has a maximum size of 128 KB. So, in theory, a block could be larger than the agreed decoder buffer. What would happen in such a case ?
Following the zip example, one solution would be for the decoder to be able to stop (and then resume) the decoding operation in the middle of a block. This obviously increases decoder complexity. But the benefit is that the only condition the compressor has to respect is a max copy distance <= buffer size.
On the decoder side though, the buffer is only one part of the memory budget. There's no point having a very small decoding buffer if some other memory budget dwarfs it.
The decoding tables are not especially large : they use 5 KB by default, and could be reduced to half, or possibly a quarter of that (but with impact on compression ratio). Not a big budget.
The real issue is the size of the incoming compressed block. A compressed block must be smaller than its original size, otherwise it will be transmitted in uncompressed format. That still makes it possible to have a (128 KB - 1) block size. This is extremely large compared to a 4 KB buffer.
Zip's solution is that it's not necessary to receive the entire compressed block in memory in order to start decompressing it. This is possible because all symbols are entangled in a single bitstream, which is read in forward direction. So input buffer can be a fraction of a block. It simply stops when there is no more information available.
This will be difficult to imitate for zstd : it has multiple independent bitstreams (between 2 and 5) read in backwards direction.
The backward direction is unusual, and a direct consequence of using ANS entropy : encoding and decoding must proceed in opposite directions. FSE's solution is to write forward and read backward.
It could have been a different choice : write backward, read forward, as suggested by Fabian Giesen. But that makes the encoder's API more complex : the destination buffer would be filled from the end instead of the beginning. From a user perspective, it breaks a few common assumptions, and becomes a good recipe for confusion.
Alternatively, the end result could be memmove()'d to the beginning of the buffer, at a small but noticeable speed cost.
But even that wouldn't solve the multiple bitstreams design, which is key to zstd's speed advantage. zstd is fast because it manages to keep multiple cpu execution units busy. This is achieved by reducing or eliminating dependencies between operations. At some point, it implies bitstream independence.
In a zstd block, literals are encoded first, followed by LZ symbols. The bitstreams are not entangled : each one occupies its own memory segment.
Given this setup, the full compressed block must be available to start decoding it (well, more precisely, a few little things could be started in parallel, but it's damn complex and not worth it here).
Barring any last-minute breakthrough on this topic, this direction is a dead-end : any compressed block must be received entirely before its decompression starts.
As a consequence, since a small decoding buffer is a consequence of a constrained memory budget, it looks logical that the size of incoming compressed blocks should be limited too, to preserve memory.
The size limit of a compressed block could be a dedicated parameter, but that would add complexity. A fairly natural assumption is that a compressed block should be no larger than the decoding buffer. So let's use that.
(PS : another potential candidate would be cBlockSize <= bufferSize/2 , but even such a simple division by 2 looks like a recipe for future confusion).
So now, the encoder enforces a maximum block size no larger than the decoding buffer. Fair enough. Multiple smaller blocks also mean multiple headers, which could impact compression efficiency. Thankfully, zstd includes both "default statistics" and experimental "repeat statistics" modes, which can be used to reduce header size to zero, providing some answer to this issue.
But there is more to it.
The problem is, the amount of data previously sent can be any size. The encoder may receive a "flush" order at any time, so each block can be any size (up to the maximum), and doesn't necessarily fill the buffer.
Hence, what happens when we get close to the buffer's end ?
Presuming the decoder doesn't have the capability to stop decompression in the middle of a block, the next block must not cross the limit of the decoder buffer. Hence, if there are 2.5 KB left in the decoder buffer, the next block's maximum size must be 2.5 KB.
It becomes a new condition for the encoder to respect : keep track of the decoder buffer's fill level, never cross the limit, stop at the exact end of the buffer, then restart from zero.
It looks complex, but the compressor knows the size of the decoder buffer : it was specified at the beginning of the frame. So it is manageable.
But is that desirable ?
From an encoder perspective, it seems better to be free of such a restriction : just accept the block size and copy distance limits, and then let the decoder deal with it, even if that requires a complex "stop and resume" capability in the middle of a block.
From a decoder perspective, it looks better to only handle full blocks, and require the encoder to pay attention to never break this assumption.
Classical transfer of complexity.
It makes for an interesting design choice. And as v1.0 gets nearer, one will have to be selected.
-------------------------------------------
Edit : And the final choice is :
Well, a decision was necessary, so here it is :
The selected design only imposes a distance limit and a maximum block size on the encoder, both values being equal, and provided in the frame header.
The encoder doesn't need to track the "fill level" of the decoder buffer.
As stated above, a compliant decoder using the exact buffer size should have the capability to break the decompression operation in the middle of a block, in order to reach the exact end of the buffer, and restart from the beginning.
However, there is a trick ...
Should the decoder not have this capability, it's enough to extend the size of the buffer by the size of a single block (so it's basically 2x bigger for "small" buffer values (<= 128 KB)). In that case, the decoder can safely decode every block in a single step, without breaking the decoding operation in the middle.
Requiring more memory to safely decompress is an "implementation detail", and doesn't impact the spec, which is the real point here.
Thanks to this trick, it's possible to immediately target the final spec, and update the decoder implementation later as a memory optimization. Therefore, it won't delay v1.0.
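A sketch of the resulting buffer-sizing rule (128 KB is the format's maximum block size, exposed as ZSTD_BLOCKSIZE_MAX in today's library; the function itself is illustrative) :

```c
#include <stddef.h>

#define MAX_BLOCK_SIZE (128 * 1024)   /* format-level maximum block size */

/* Decoder buffer sizing, per the trick above :
 * a decoder able to pause mid-block can use exactly windowSize;
 * a simpler decoder adds one full block of slack, so every block
 * can be decoded in a single step before wrapping around. */
static size_t decoderBufferSize(size_t windowSize, int canPauseMidBlock)
{
    size_t const blockMax = windowSize < MAX_BLOCK_SIZE ? windowSize : MAX_BLOCK_SIZE;
    return canPauseMidBlock ? windowSize : windowSize + blockMax;
}
```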
Friday, February 5, 2016
Compressing small data
Data compression is primarily seen as a file compression algorithm. After all, the main objective is to save storage space, isn't it ?
With this background in mind, it's also logical to focus on bigger files : good compression achieved on a single large archive can be worth the savings of countless smaller ones.
However, this is no longer where the bulk of compression happens. Today, compression is everywhere, embedded within systems, achieving its space and transmission savings without user intervention or awareness. The key to these invisible gains is to remain below the end-user perception threshold. To achieve this objective, it's not possible to wait for some large amount of data to accumulate. Instead, data is processed in small amounts.
This would be all well and good if it weren't for a simple observation : the smaller the amount to compress, the worse the compression ratio.
The reason is pretty simple : data compression works by finding redundancy within the processed source. When a new source starts, there is not yet any redundancy to build upon, and it takes time for any algorithm to achieve a meaningful outcome.
Therefore, since the issue comes from starting with a blank history, what about starting from an already populated one ?
Streaming to the rescue
A first solution is streaming : data is cut into smaller blocks, but each block can reference previously sent ones. And it works quite well. In spite of some minor losses at block borders, most of the compression opportunities of a single large data source are preserved, now with the advantage of processing, sending, and receiving tiny blocks on the fly, making the experience smooth.
However, this scenario only works with serial data, a communication channel for example, where order is known and preserved.
For a large category of applications, such as databases and storage, this cannot work : data must remain accessible in a random fashion, with no known "a priori" order. Reaching a specific block should not require decoding all preceding ones just to rebuild the dynamic context.
For such use cases, a common work-around is to create "not too small blocks". Say there are many records of a few hundred bytes each : group them in packs of at least 16 KB. This achieves a nice middle ground between a not-too-poor compression ratio and good-enough random access capability.
This is still not ideal though, since a full block must be decompressed just to get a single record out of it. Therefore, each application will settle for its own middle ground, using block sizes of 4 KB, 16 KB or even 128 KB, depending on its usage pattern.
Dictionary compression
Preserving random access at the record level together with a good compression ratio is hard. But it's achievable, using a dictionary. To summarize, a dictionary is a kind of common prefix, shared by all compressed objects, which makes every compression and decompression operation start from the same populated history.
Dictionary compression has the great property to be compatible with random access. Even for communication scenarios, it can prove easier to manage at scale than "per-connection streaming", since instead of storing one different context per connection, there is always the same context to start from when compressing or decompressing any new data block.
A good dictionary can compress small records into tiny compressed blobs. Sometimes, the current record can be found "as is" entirely within the dictionary, reducing it to a single reference. More likely, some critical redundant elements will be detected (header, footer, keywords), leaving only the variable ones to be described (ID fields, dates, etc.).
For this situation to work properly, the dictionary needs to be tuned for the underlying structure of objects to compress. There is no such thing as a "universal dictionary". One must be created and used for a target data type.
Fortunately, this condition can be met quite often.
Just created some new protocol for a transaction engine or an online game ? It's likely based on a few common important messages and keywords (even binary ones). Have some event or log records ? There is likely a grammar for them (json, xml maybe). The same can be said of digital resources, be it html files, css stylesheets, javascript programs, etc.
If you know what you are going to compress, you can create a dictionary for it.
The key is, since it's not possible to create a meaningful "universal dictionary", one must create one dictionary per resource type.
Example of a structured JSON message
How to create a dictionary from a training set ? Well, even though one could be tempted to create one manually, by compacting all keywords and repeatable sequences into a file, this can be a tedious task. Moreover, there is always a chance that the dictionary will have to be updated regularly due to changing conditions.
This is why, starting from v0.5, zstd offers a dictionary builder capability.
Using the builder, it's possible to quickly create a dictionary from a list of samples. The process is relatively fast (a matter of seconds), which makes it possible to generate and update multiple dictionaries for multiple targets.
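In today's library, this capability is exposed through the `ZDICT` entry points (and on the command line via `zstd --train`); here is a minimal sketch of training a dictionary and then using it, noting that these exact prototypes post-date v0.5 :

```c
#include <zstd.h>
#include <zdict.h>    /* dictionary builder ; link with -lzstd */

/* Train a dictionary from concatenated samples, then use it.
 * `samples` holds all training records back to back, and
 * `sampleSizes[i]` gives the size of record i. */
static size_t buildAndUseDict(void* dictBuf, size_t dictCap,
                              const void* samples, const size_t* sampleSizes,
                              unsigned nbSamples,
                              void* dst, size_t dstCap,
                              const void* record, size_t recordSize)
{
    size_t const dictSize = ZDICT_trainFromBuffer(dictBuf, dictCap,
                                                  samples, sampleSizes, nbSamples);
    if (ZDICT_isError(dictSize)) return dictSize;

    ZSTD_CCtx* const cctx = ZSTD_createCCtx();
    size_t const cSize = ZSTD_compress_usingDict(cctx, dst, dstCap,
                                                 record, recordSize,
                                                 dictBuf, dictSize, 3 /* level */);
    ZSTD_freeCCtx(cctx);
    return cSize;   /* compressed size, or an error code (check with ZSTD_isError) */
}
```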
But what can dictionary compression actually achieve ?
To answer this question, a few tests were run on some typical samples. A flow of JSON records from a probe, some Mercurial log events, and a collection of large JSON documents, provided by @KryzFr.
Collection Name | Direct compression | Dictionary compression | Gains | Average unit | Range
---|---|---|---|---|---
Small JSON records | x1.331 - x1.366 | x5.860 - x6.830 | ~ x4.7 | 300 | 200 - 400
Mercurial events | x2.322 - x2.538 | x3.377 - x4.462 | ~ x1.5 | 1.5 KB | 20 - 200 KB
Large JSON docs | x3.813 - x4.043 | x8.935 - x13.366 | ~ x2.8 | 6 KB | 800 - 20 KB
These compression gains are achieved without any speed loss, and decompression even gets faster. As one can see, it's no "small improvement" : this method can achieve transformative gains, especially for very small records.
Large documents will benefit proportionally less, since dictionary gains are mostly effective in the first few KB. Then there is enough history to build upon, and the compression algorithm can rely on it to compress the rest of the file.
Dictionary compression will work if there is some correlation within a family of small data (common keywords and structure). Hence, deploying one dictionary per type of data will provide the greatest benefits.
Anyway, if you are in a situation where compressing small data can be useful for your use case (databases and contextless communication scenarios come to mind, but there are likely others), you are welcome to have a look at this new open source tool and compression methodology, and to report your experience or feature requests.
Zstd is now getting closer to its v1.0 release; it's a good time to provide feedback so it can be integrated into the final specification.