The attempt is more complex this time, because LZ4_compress() is a much larger function, typically unfit for inlining. Moreover, the different variants use different types (a table of U16, U32, or const BYTE*, depending on circumstances).
The result is somewhat mixed, although I now have a working version using inline functions instead of macros. Here are a few lessons from this experiment:
1) No allocation within inline functions
It's of course possible to declare a few variables, but the point is not to allocate big tables on the stack inside an inline function. Since an inline function can be inlined several times, each copy would bloat the stack memory requirement.
This is an issue for LZ4, since its hash table can be allocated either on the heap or on the stack, depending on the "#define HEAPMODE" trigger. This choice will have to be made outside of the main inlined function.
The good news is that if you allocate on the stack and then call the inline function, the inlined code keeps the benefit of accessing stack variables and tables (most importantly speed), even though the source says it's working through pointers.
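As a hedged sketch of this pattern (the table size, names, and toy "hash" below are made up for illustration; this is not LZ4's actual code), the heap-vs-stack decision lives in the outer function, while the inline worker only ever sees a pointer:

```c
#include <stdlib.h>
#include <string.h>

#define HASH_SIZE 4096   /* hypothetical table size, for illustration */
/* #define HEAPMODE 1 */ /* uncomment to allocate the table on the heap */

/* The inline worker receives a pointer. When the caller allocated the
 * table on its own stack, the inlined code still benefits from fast
 * stack accesses, even though it only manipulates a pointer. */
static inline int compress_core(const char* src, int len, unsigned* hashTable)
{
    int collisions = 0;
    for (int i = 0; i < len; i++) {
        unsigned h = (unsigned char)src[i] % HASH_SIZE;  /* toy "hash" */
        if (hashTable[h]) collisions++;
        hashTable[h] = (unsigned)(i + 1);
    }
    return collisions;
}

/* The allocation choice is made here, outside the inlined function. */
int compress(const char* src, int len)
{
    int r;
#ifdef HEAPMODE
    unsigned* hashTable = (unsigned*)calloc(HASH_SIZE, sizeof(unsigned));
    r = compress_core(src, len, hashTable);
    free(hashTable);
#else
    unsigned hashTable[HASH_SIZE];
    memset(hashTable, 0, sizeof(hashTable));
    r = compress_core(src, len, hashTable);
#endif
    return r;
}
```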
2) Inlining big functions is generally not a win
Inlining big functions is not "natural" for the compiler.
Inline functions were initially created as a safer replacement for macros, for small snippets of code. The main idea is that calling a function requires some push/pop/jmp operations, costing both CPU cycles and stack memory, which can be avoided. In general, function calls are fast enough and don't require such optimization. But for small sections of code inside hot loops, called millions of times, it can make a sensible performance difference.
The capability to "remove branches" inside such code thanks to optimizations is merely a consequence, not the initial objective.
As a result, the language doesn't even take it into consideration, and only considers the "function call overhead" as the variable to be optimized. Hence, when a function is too large, the compiler's usual choice is to not inline it.
Therefore, to inline big functions, you have to "force" it (using compiler-specific extensions).
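A typical portable wrapper for such forced inlining looks like the sketch below (the function add_shift is a made-up example; the macro pattern itself uses MSVC's __forceinline and GCC's always_inline attribute, which do exist):

```c
/* Portable "force inline" wrapper:
 * - __forceinline for Visual Studio,
 * - __attribute__((always_inline)) for GCC and compatible compilers,
 * - plain static inline as a fallback elsewhere. */
#if defined(_MSC_VER)
#  define FORCE_INLINE static __forceinline
#elif defined(__GNUC__)
#  define FORCE_INLINE static inline __attribute__((always_inline))
#else
#  define FORCE_INLINE static inline
#endif

/* Hypothetical hot-path helper, forced inline at every call site. */
FORCE_INLINE int add_shift(int a, int b)
{
    return (a + b) << 1;
}
```

Note that even always_inline is a strong request rather than an absolute guarantee in every situation, which is part of why results must be measured per compiler and per target.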
The situation differs sharply depending on whether we are compiling for 32-bit or 64-bit systems.
- 64-bit programs tend to inline big functions fairly well. A small performance boost can even be expected; apparently the compiler is able to find even more optimizations.
- 32-bit programs, in contrast, don't like such (forced) inlining. Here also the situation differs depending on the compiler, with Visual doing a much better job than GCC.
I tentatively attribute this difference to the number of available registers. In 32-bit (x86) mode, registers are scarce (6 that can be considered general-purpose nowadays, though they were previously specialized). In 64-bit mode, they are plentiful (16). This makes it much easier for the compiler to combine all this code, with enough room to keep important variables in registers and avoid memory accesses.
If this hypothesis is correct, it means the issue is related more to the number of available registers and compiler cleverness than to 32 vs 64 bits per se. This can be important for ARM systems, which tend to be 32-bit but have 16 general-purpose registers at their disposal (like x64). They should suffer less from inlining issues.
Coming back to LZ4: sometimes the performance loss on 32-bit x86 is so large that it's better not to inline the function, even if that results in more branching. That's the current choice for LZ4_compressHC().
3) Type selection is okay
I've been hiding the different types inside another inline function, which selects the right pointer type and the appropriate operation depending on a parameter. The compiler does a fine job of eliminating the useless branches, keeping only the path with the correct type to be inlined.
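A minimal sketch of this idea (names, the table size, and the stand-in "hash" are hypothetical, though inspired by LZ4's U16/U32/const BYTE* variants): one inline helper hides the table type behind a selector parameter, and when that parameter is a compile-time constant at the call site, the compiler keeps only the matching case.

```c
#include <stdint.h>

typedef enum { byU16, byU32, byPtr } tableType_t;

#define TABLE_SIZE 64   /* hypothetical, for illustration */

/* One inline helper for all table types. Called with a constant
 * tableType, only the selected case survives after inlining. */
static inline void putPosition(const uint8_t* p, void* tableBase,
                               tableType_t tableType, const uint8_t* srcBase)
{
    uint32_t h = (uint32_t)(p - srcBase) % TABLE_SIZE; /* stand-in for a real hash */
    switch (tableType) {
    case byU16: ((uint16_t*)tableBase)[h] = (uint16_t)(p - srcBase); break;
    case byU32: ((uint32_t*)tableBase)[h] = (uint32_t)(p - srcBase); break;
    case byPtr: ((const uint8_t**)tableBase)[h] = p;                 break;
    }
}
```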
4) Clear selection signal is mandatory
There must be no room for confusion.
As a simple example, suppose we've got this code:
inline int functionA(int parameter)
{
    /* ... bunch of code with one branch depending on parameter ... */
}

int functionB()
{
    int parameter = 1;
    return functionA(parameter);
}
You may be surprised to notice that the branch is not necessarily removed, even though the parameter has a perfectly clear value.
This seems to be a limitation of the compiler, which doesn't want to "bet" on the value of parameter. Therefore, prefer something along these lines:
int functionB()
{
    return functionA(1);
}
This version ensures that only the path corresponding to parameter==1 is kept, eliminating the branch.
5) Predictable branches don't cost that much
Modern CPUs are fairly complex beasts, and one thing they do fairly well is correctly guess the outcome of a branch, anticipating the next operations by filling the pipeline.
This is even more true when the outcome is always the same.
So even if a branch is not eliminated but always leads down the same path, the CPU quickly learns this, and will speculatively start the right branch even before the result of the test is known.
As a consequence, the cost of such a branch is minimal (only the test operation itself and a jmp), with no pipeline stall.
This means it's not always necessary to worry about every branch. Only those with an (almost) random pattern take a big hit on performance. Predictable ones are fairly fast.
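As a toy illustration of a perfectly predictable branch (the function and data are made up; actual costs depend on the CPU and must be measured), the test below takes the same path on every iteration, so after a few iterations the predictor always guesses right:

```c
#include <stddef.h>

/* The branch inside this loop always goes the same way when the input
 * contains only non-negative values: the predictor learns the pattern,
 * so each iteration pays little more than the compare + jmp itself.
 * With randomly-signed input, the same loop would mispredict often
 * and run noticeably slower. */
long sum_if_nonnegative(const int* data, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (data[i] >= 0)   /* always taken for non-negative input */
            sum += data[i];
    }
    return sum;
}
```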
So, the result:
The current developer version is a mixed bag. It brings a small performance boost in 64-bit mode (as compared to the macro version), and a small performance loss in 32-bit mode.
Using a Core i5-3340M, on LZ4_compress() (fast variant):
- 64-bit GCC: +2% (Ubuntu)
- 32-bit Visual: -1% (Win7)
- 32-bit GCC: -5% (Ubuntu)
On the other hand, it eliminates the need for external files (lz4_encoder.h & lz4hc_encoder.h), making the source code both easier to read and easier to integrate into user projects. And that was the main objective all along.