
Saturday, June 4, 2011

LZ4 for Internet communications

Among the various comments and contributions received for LZ4, an interesting one concerns using LZ4 in a communication scenario.
The idea is simple to describe: data is compressed on the fly before being sent, and decompressed at the destination point. Fair enough. LZ4 requires few resources, especially on the decoding side, so even very small communicating devices can accommodate such a scenario.
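For illustration, here is a minimal sender-side sketch. Only LZ4_compress() comes from the library; send_packet() is a hypothetical transport function, and the buffer margin is simply an assumption to cover incompressible data.


int LZ4_compress(char* source, char* dest, int isize);   /* from lz4.h: returns the compressed size */
int send_packet(const char* data, int size);             /* hypothetical transport function */

#define MSG_MAX 1024

int send_compressed(char* msg, int msgSize)
{
    char compressed[MSG_MAX + MSG_MAX/255 + 16];   /* margin in case the data is incompressible */
    int csize;

    if (msgSize > MSG_MAX) return -1;              /* keep the example simple */
    csize = LZ4_compress(msg, compressed, msgSize);
    return send_packet(compressed, csize);
}
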

However, this use case comes with new threats of its own: what if the packet containing the compressed data was in fact maliciously crafted, in an attempt to create a buffer overflow?

Indeed. The Internet is a dangerous place, and security comes to mind when using it. The easy answer would be "strengthen your encryption and pray that no one will break it", but that's too cheap. The main point here is to make sure that the LZ4 decoder never tries to write beyond the provided destination buffer. After all, not all malformed packets are malicious attacks; it could simply be a transmission error.

This protection is provided by a new decoding function, now available at Google Code:


int LZ4_uncompress(char* source, char* dest, int osize);


I've used this opportunity to also address a long-standing comment that "compress" and "decode" were inconsistent names, so the pair is now compress / uncompress.
LZ4_decode is still available, for compatibility reasons, but flagged as "deprecated".

The main difference lies in the last parameter: it is osize, for "output size" (as opposed to isize in LZ4_decode). Yes, the decoder needs to know how many bytes it is allowed to write, to ensure it never goes beyond the boundary.

The other difference is the result: when everything goes well, LZ4_uncompress returns the number of input bytes read. On the other hand, if the compressed stream tries to write beyond the output buffer, indicating a malformed stream, the result is negative to signal an error, and indicates the byte position of the faulty instruction.

As a side bonus, LZ4_uncompress never writes beyond the destination size (as opposed to LZ4_decode, which could write up to 3 bytes beyond the limit even in normal situations).
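To illustrate the intended usage on the receiving side, here is a minimal sketch; decode_packet() and its surroundings are hypothetical, and only LZ4_uncompress() is the actual function:


int LZ4_uncompress(char* source, char* dest, int osize);   /* from lz4.h */

int decode_packet(char* packet, char* msg, int msgSize)
{
    int readBytes = LZ4_uncompress(packet, msg, msgSize);
    if (readBytes < 0)
    {
        /* malformed (possibly malicious) stream: the result points to the
           faulty instruction in the input; nothing was written beyond
           msg + msgSize, so the packet can simply be discarded */
        return -1;
    }
    return readBytes;   /* number of compressed bytes consumed */
}
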
But the really good news is this: in spite of the extra checking, decoder speed is barely affected. At about a couple of percent, this is well within tolerance limits, and the improved safety and memory protection are well worth the change. The main reason is that the extra checks have been successfully nested, so they are not triggered in the "main loop".

Since the compressed stream format remains unchanged, it is 100% compatible with already compressed data. As a consequence, this is a recommended upgrade for LZ4 users; LZ4_decode remains available for compatibility and for maximum performance.


------------------------------------------------------------------------------------
Now for the more complex consequences: stop reading here if this was already long enough to handle :)

Obviously, the file compression program needs to evolve.
The main reason is that the decompression routine now needs to provide each block's output size to the decoding function, which is easy enough for every block except the last one.

Without prior knowledge of the file size, there is no way to tell the decoder how many bytes it needs to write for this last block. One solution would be to change the file format in order to store this value somewhere. I've never paid much attention to the file format, which is only one possible use case of the LZ4 compression routines, but it has remained stable since the first version of LZ4, and I wouldn't like to change it for such a small reason, especially when the problem can be solved another way.

And a solution there is. Remember that we can recover the decoded block size from the input (compressed) size; all it takes is to ensure that we never write beyond the output buffer size. So an (advanced) function is dedicated to this role:


int LZ4_uncompress_unknownOutputSize(char* source, char* dest, int isize, int maxOutputSize);


This one is a bit longer. It is effectively a secure version of the previous LZ4_decode function: it ensures that it never writes beyond dest + maxOutputSize. Its result is the size of the decoded data (osize) when the stream is successfully decoded. Otherwise, the result is negative and indicates the byte position of the faulty instruction in the input stream.
This function is however slower, by up to 10%, mostly because it has to check both the input and output buffer size limits, so it is not recommended for general use.
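To make the file use case concrete, here is a minimal sketch for that last block; decode_last_block() and the block framing around it are hypothetical, and only LZ4_uncompress_unknownOutputSize() is the actual function:


int LZ4_uncompress_unknownOutputSize(char* source, char* dest, int isize, int maxOutputSize);

int decode_last_block(char* in, int inSize, char* out, int outBufferSize)
{
    /* the decoded size of the last block is unknown, so the output buffer
       size (typically the block size used at compression time) acts as the
       hard limit */
    int decodedSize = LZ4_uncompress_unknownOutputSize(in, out, inSize, outBufferSize);
    if (decodedSize < 0) return -1;   /* malformed stream: reject the block */
    return decodedSize;               /* actual number of bytes written into out */
}
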

Well, in most circumstances, it is expected that the decoder knows the size of the object to be decoded, if only to allocate the right amount of memory. Therefore, the first LZ4_uncompress is expected to be the most useful one.

You can grab the latest source version at Google Code.


------------------------------------------------------------------------------------

As a last note for today:
LZ4 has been ported to the Go language, thanks to Branimir Karadzic.
He hosts this new project at GitHub:
https://github.com/bkaradzic/go-lz4
