LZ4 - Extremely fast compression

LZ4 is a lossless compression algorithm, providing compression speed > 500 MB/s per core, scalable with multi-core CPUs. It features an extremely fast decoder, with speed in multiple GB/s per core, typically reaching RAM speed limits on multi-core systems.

Speed can be tuned dynamically, selecting an "acceleration" factor which trades compression ratio for faster speed. On the other end, a high compression derivative, LZ4_HC, is also provided, trading CPU time for improved compression ratio. All versions feature the same decompression speed.

LZ4 also supports dictionary compression, at both the API and CLI levels. It can ingest any input file as a dictionary, though only the final 64 KB are used. This capability can be combined with the Zstandard Dictionary Builder, in order to drastically improve compression performance on small files.

The LZ4 library is provided as open-source software under a BSD 2-Clause license.



The benchmark uses lzbench, from @inikep, compiled with GCC v8.2.0 on Linux 64-bits (Ubuntu 4.18.0-17). The reference system uses a Core i7-9700K CPU @ 4.9GHz (w/ turbo boost). The benchmark evaluates the compression of the reference Silesia Corpus in single-thread mode.

Compressor                  Ratio   Compression   Decompression
memcpy                      1.000   13700 MB/s    13700 MB/s
LZ4 default (v1.9.0)        2.101     780 MB/s     4970 MB/s
LZO 2.09                    2.108     670 MB/s      860 MB/s
QuickLZ 1.5.0               2.238     575 MB/s      780 MB/s
Snappy 1.1.4                2.091     565 MB/s     1950 MB/s
Zstandard 1.4.0 -1          2.883     515 MB/s     1380 MB/s
LZF v3.6                    2.073     415 MB/s      910 MB/s
zlib deflate 1.2.11 -1      2.730     100 MB/s      415 MB/s
LZ4 HC -9 (v1.9.0)          2.721      41 MB/s     4900 MB/s
zlib deflate 1.2.11 -6      3.099      36 MB/s      445 MB/s

LZ4 is also compatible with and optimized for x32 mode, where it provides additional speed performance.


make
make install     # this command may require root permissions

LZ4's Makefile supports standard Makefile conventions, including staged installs, redirection, and command redefinition. It is compatible with parallel builds (-j#).

Building LZ4 - Using vcpkg

You can download and install LZ4 using the vcpkg dependency manager:

git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg integrate install
./vcpkg install lz4

The LZ4 port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please create an issue or pull request on the vcpkg repository.


The raw LZ4 block compression format is detailed in lz4_Block_format.

Arbitrarily long files or data streams are compressed using multiple blocks, for streaming requirements. These blocks are organized into a frame, defined in lz4_Frame_format. Interoperable versions of LZ4 must also respect the frame format.

Other source versions

Beyond the C reference source, many contributors have created versions of lz4 in multiple languages (Java, C#, Python, Perl, Ruby, etc.). A list of known source ports is maintained on the LZ4 Homepage.

  • [fastMode] No LZ4_compress_fast_limitedOutput


    I was just looking at the fastMode branch, and I noticed that LZ4_compress_fast takes a maxOutputSize parameter, unlike compress and compressHC, and that there is no LZ4_compress_fast_limitedOutput.

    I don't really mind there being no unsafe version, but you might want to consider renaming compress_fast to compress_fast_limitedOutput for two reasons: (1) consistency with existing APIs, and (2) gives you room to add an unsafe compress_fast in the future without weird naming (although LZ4_compress_faster would be funny…)

    opened by nemequ 39
  • Where is lz4hc?


    The LZ4 benchmarks show the transfer + decompression time of LZ4 HC -9. Googling around I see some folks discussing an lz4hc command line app. Installing this repo, I only have lz4, lz4c, and lz4cat command line apps.

    I see that there is a deprecated -hc option but it is not clear to me if this is the same as lz4hc. Is "lz4 -9" now the same as what used to be "lz4 -hc -9" and "lz4hc -9"?

    opened by cafarm 28
  • Benchmark in readme is really too misleading


    Disclaimer: I'm the author of density

    I know benchmarks are always a subject for debate, which is why I'm not really bothered when results vary from platform to platform, compiler to compiler, or even between file types. But in this case I honestly think that your benchmark is grossly misleading, and for at least two reasons:

    It is incomplete. What I don't understand is that you use fsbench (without providing the source code of the lz4 library version you're using), which includes many more codecs. Where are LZF, wfLZ, and density, for example? It's nice to add zlib as a reference, but is it really useful? That's not really what lz4 competes against. If you want an up-to-date version of fsbench with source code of the codecs, I maintain one here.

    Also, you only benchmark the file silesia, and you omit text files like enwik8 or database files etc. In my case (density) people were concerned that I benchmarked against enwik8 only so I added silesia.

    I know you're aware of the fantastic benchmark by @nemequ which can be seen here; it's probably the best there is today (lots of platforms, files, codecs)... so why is there no link to it on your project's page?

    All of this gives the impression that lz4 is in a class of its own when one considers high-speed compression, but that is clearly not the case if you actually compare it to libraries designed with the same purpose in mind.

    It is biased. That's actually a direct consequence of the previous problem. I suspect, for example, that lz4 was optimized against silesia; there's nothing wrong with that, as long as you also benchmark it against something for which it wasn't optimized (enwik8?). Simple use case: when you run "lz4 fast 17" on enwik8, the compression ratio is terrible.

    Another example: I think (maybe I'm mistaken) that you use lz4_decompress_fast in your tests, which is inherently unsafe, whereas other libraries' decompress functions are safe... it's not really fair to compare apples with oranges.

    Well, that's it, enough ranting from me :wink:, but it really is disappointing to see this kind of thing on such a classy project as lz4.

    opened by k0dai 25
  • Add gh-pages branch to make "LZ4 Homepage" on GitHub


    This is first approximation to solve "Add list of LZ4 variants on Github".



    My fork is https://github.com/t-mat/lz4/tree/gh-pages. Please see index.html. It is almost the same as the current README.md.

    gh-pages branch and GitHub pages

    gh-pages is a special branch for GitHub. You can see how gh-pages works with the following commands. Read "Creating Project Pages manually" for the details.

    cd /your/workspace/gh-pages/
    git clone https://github.com/Cyan4973/lz4.git
    cd lz4
    git checkout --orphan gh-pages
    git rm -rf .
    echo "LZ4 gh-pages testing 1,2,3" > index.html
    git add index.html
    git commit -a -m "First gh-pages commit"
    git push origin gh-pages

    Here, you can open https://Cyan4973.github.io/lz4/ in your browser.

    After that, you could merge my gh-pages branch.

    Further evolution

    For simplicity, I use Strapdown.js for Markdown rendering. But obviously it is a JS-heavy page, so in the long term it would be nice to introduce a static page generator. For example, GitHub recommends Jekyll.

    See also

    opened by t-mat 24
  • Fixup meson build


    The meson build had gotten a little out of hand. It needed to be cleaned up and have its errors fixed. This should enable lz4 to switch to Meson at any time should the need ever arise.

    opened by tristan957 23
  • Makefile build system does not work with MSYS and CYGWIN, and has issues with MINGW-W64


    When attempting to build lz4 for MINGW using the MSYS2 bash shell, I encountered some issues. The install does not work, the export library has a .lib suffix instead of the standard .dll.a (using gcc for mingw), and the dll has a version number in it (not usual with the MSYS2 MINGW-W64 distribution). If building for MSYS2 itself, the Makefile generates the liblz4-$version name instead of msys-lz4-$version.dll. It's not unusual for this stuff to be patched by the MSYS2 and Cygwin maintainers. Ideally, I would like to see a fix here, since cmake depends indirectly on lz4.

    There's several issues that I see here:

    1. The proper library suffixes and prefixes are NOT specified in the Makefiles.
    2. The Makefile does not take into account POSIX-like environments such as MSYS2 and Cygwin, which have a version of the uname command. The OS environment variable would probably always be Windows_NT even in those environments. I prefer using "uname" rather than $(OS) in a Makefile. Interestingly enough, the MSYS2 version of uname will return a different value based on the MSYSTEM environment variable. Examples are:

    MSYSTEM=MINGW32 uname


    MSYSTEM=MINGW64 uname




    3. The Makefiles probably would not be able to compile binaries for Windows on a Unix-based operating system where the target is NOT the same as the host.
    4. Cygwin and MSYS2 binaries are compiled similarly to MINGW in that a separate .dll export is usually generated and the .dll is placed in the bin directory instead of the lib directory.
    5. The MSYS, MINGW, and CYGWIN uname values might vary based upon the version of Windows they are being built on.

    I have attached a diff I made showing what I did to try to address these things. Incidentally, I would also think it's possible to embed version info in the .EXE's and .DLL's, as well as an icon for the .EXE's. It's a good practice on Windows.


    build issue 
    opened by JPeterMugaas 23
  • Idea to improve decompression performance for repeated sequences with short period


    I am testing the performance of the LZ4 library through the JNI binding provided by jpountz. Currently the Java binding uses r123. I have observed quite a bad performance degradation when the decompressed data consists of a short byte sequence repeated many times (with a period of 1-32 bytes). In reviewing the memory operations used for decompression, I noticed that unaligned memory access is applied, which at least my Intel Core i7 does quite slowly when source and target are close together. As a specific example, for a period length of 101 decompression reaches 16 GB/s, but for periods of 3, 5, or 7 the speed drops to 1.8 GB/s. A simple stretch of the same byte value works at 5 GB/s.

    To alleviate this issue I have written a wild copying routine which does no reading at all in the copy loop for periods up to 8 and for wider periods uses word-aligned writing. For periods of 3, 5, and 7 I get at least 6 GB/s; for periods of 1, 2, 4, 8 I get ~30 GB/s (vs. 5 for the JNI version). For wider periods, say 13, I get 3.6 GB/s vs. 1.8 GB/s native.

    I have opened this issue in case there might be some interest in adopting this kind of approach in the native LZ4 project. My code was presented at https://github.com/jpountz/lz4-java/issues/72. It is specific to a little-endian architecture, but is trivial to adapt to big-endian (swapping all >>> and << ops).

    opened by mtopolnik 23
  • alignment checks added to lz4hc.c don't work even on linux


    This check in lz4hc.c (e.g. in LZ4_initStreamHC()):

    #ifndef _MSC_VER   /* for some reason, Visual fails the alignment test on 32-bit x86 :
                        * it reports an alignment of 8-bytes,
                        * while actually aligning LZ4_streamHC_t on 4 bytes. */
        assert(((size_t)state & (LZ4_streamHC_t_alignment() - 1)) == 0);   /* check alignment */
    #endif

    doesn't work even on non-MS Windows 32 bit builds.

    The problem is that, while the alignment is checked, no measures are taken to ensure that the buffers passed in are aligned. On 64-bit builds it's OK, since (apparently) malloc() returns memory aligned on 8 bytes. But on 32-bit it returns memory aligned on 4 bytes, and that is not enough for the structure checked here. E.g. for the wrong allocation, look at:

    937 LZ4_streamHC_t* LZ4_createStreamHC(void)
    938 {
    939     LZ4_streamHC_t* const LZ4_streamHCPtr = (LZ4_streamHC_t*)ALLOC(sizeof(LZ4_streamHC_t));
    940     if (LZ4_streamHCPtr==NULL) return NULL;
    941     LZ4_initStreamHC(LZ4_streamHCPtr, sizeof(LZ4_streamHCPtr));   /* full initialization, malloc'ed buffer can be full of garbage */
    942     return LZ4_streamHCPtr;
    943 }

    opened by gkodinov 22
  • Add multiframe report to --list command


     » ~/lz4/tests ## list * make listTest
    python3 test-lz4-list.py
    2019/05/10 16:52:59 - Generating /tmp/test_list_5M
    2019/05/10 16:52:59 - > ./datagen -g5M > /tmp/test_list_5M
    2019/05/10 16:52:59 - > /home/gabriel/lz4/tests/../lz4 --content-size /tmp/test_list_5M /tmp/test_list_5M-lz4f-1f--content-size.lz4
    2019/05/10 16:52:59 - > /home/gabriel/lz4/tests/../lz4 -BI /tmp/test_list_5M /tmp/test_list_5M-lz4f-1f-BI.lz4
    2019/05/10 16:52:59 - > /home/gabriel/lz4/tests/../lz4 -BD /tmp/test_list_5M /tmp/test_list_5M-lz4f-1f-BD.lz4
    2019/05/10 16:52:59 - > /home/gabriel/lz4/tests/../lz4 -BX /tmp/test_list_5M /tmp/test_list_5M-lz4f-1f-BX.lz4
    2019/05/10 16:52:59 - > /home/gabriel/lz4/tests/../lz4 --no-frame-crc /tmp/test_list_5M /tmp/test_list_5M-lz4f-1f--no-frame-crc.lz4
    2019/05/10 16:52:59 - > /home/gabriel/lz4/tests/../lz4 -l /tmp/test_list_5M /tmp/test_list_5M-legc-1f.lz4
    2019/05/10 16:53:00 - Generating /tmp/test_list_20M
    2019/05/10 16:53:00 - > ./datagen -g20M > /tmp/test_list_20M
    2019/05/10 16:53:00 - > /home/gabriel/lz4/tests/../lz4 --content-size /tmp/test_list_20M /tmp/test_list_20M-lz4f-1f--content-size.lz4
    2019/05/10 16:53:00 - > /home/gabriel/lz4/tests/../lz4 -BI /tmp/test_list_20M /tmp/test_list_20M-lz4f-1f-BI.lz4
    2019/05/10 16:53:00 - > /home/gabriel/lz4/tests/../lz4 -BD /tmp/test_list_20M /tmp/test_list_20M-lz4f-1f-BD.lz4
    2019/05/10 16:53:01 - > /home/gabriel/lz4/tests/../lz4 -BX /tmp/test_list_20M /tmp/test_list_20M-lz4f-1f-BX.lz4
    2019/05/10 16:53:01 - > /home/gabriel/lz4/tests/../lz4 --no-frame-crc /tmp/test_list_20M /tmp/test_list_20M-lz4f-1f--no-frame-crc.lz4
    2019/05/10 16:53:01 - > /home/gabriel/lz4/tests/../lz4 -l /tmp/test_list_20M /tmp/test_list_20M-legc-1f.lz4
    2019/05/10 16:53:01 - > /home/gabriel/lz4/tests/../lz4 --list -m /tmp/test_list_*.lz4
        Frames           Type Block  Compressed  Uncompressed     Ratio   Filename
             1    LegacyFrame     -      11.73M             -         -   test_list_20M-legc-1f.lz4
             1       LZ4Frame   B7D      11.72M             -         -   test_list_20M-lz4f-1f-BD.lz4
             1       LZ4Frame   B7I      11.73M             -         -   test_list_20M-lz4f-1f-BI.lz4
             1       LZ4Frame   B7I      11.73M             -         -   test_list_20M-lz4f-1f-BX.lz4
             1       LZ4Frame   B7I      11.73M        20.00M     58.66%  test_list_20M-lz4f-1f--content-size.lz4 
             1       LZ4Frame   B7I      11.73M             -         -   test_list_20M-lz4f-1f--no-frame-crc.lz4
             1 SkippableFrame     -      20.01K             -         -   test_list_20M-skip-1f.lz4
             2       LZ4Frame   B7I      14.69M        25.00M     58.76%  test_list_25M-lz4f-2f--content-size.lz4 
             1    LegacyFrame     -       2.96M             -         -   test_list_5M-legc-1f.lz4
             1       LZ4Frame   B7D       2.96M             -         -   test_list_5M-lz4f-1f-BD.lz4
             1       LZ4Frame   B7I       2.96M             -         -   test_list_5M-lz4f-1f-BI.lz4
             1       LZ4Frame   B7I       2.96M             -         -   test_list_5M-lz4f-1f-BX.lz4
             1       LZ4Frame   B7I       2.96M         5.00M     59.20%  test_list_5M-lz4f-1f--content-size.lz4 
             1       LZ4Frame   B7I       2.96M             -         -   test_list_5M-lz4f-1f--no-frame-crc.lz4
             1 SkippableFrame     -       5.01K             -         -   test_list_5M-skip-1f.lz4
            16              -     -     102.84M             -         -   test_list_concat-all.lz4
    test_block (__main__.TestNonVerbose) ... ok
    test_compressed_size (__main__.TestNonVerbose) ... ok
    test_frame_types (__main__.TestNonVerbose) ... ok
    test_frames (__main__.TestNonVerbose) ... ok
    test_ratio (__main__.TestNonVerbose) ... ok
    test_uncompressed_size (__main__.TestNonVerbose) ... ok
    2019/05/10 16:53:01 - > /home/gabriel/lz4/tests/../lz4 --list -m -v /tmp/test_list_concat-all.lz4 /tmp/test_list_*M-lz4f-2f--content-size.lz4
    *** LZ4 command line interface 64-bits v1.9.1, by Yann Collet ***
         Frame           Type Block Checksum           Compressed         Uncompressed     Ratio
             1       LZ4Frame   B7I    XXH32             12301254             20971520     58.66%
             2       LZ4Frame   B7I    XXH32              3103548              5242880     59.20%
             3    LegacyFrame     -        -             12297076                    -         -
             4 SkippableFrame     -        -                20488                    -         -
             5       LZ4Frame   B7I        -             12301242                    -         - 
             6       LZ4Frame   B7I    XXH32             12301266                    -         - 
             7       LZ4Frame   B7D    XXH32             12287289                    -         - 
             8       LZ4Frame   B7I    XXH32             12301246                    -         - 
             9       LZ4Frame   B7I    XXH32             12301254             20971520     58.66%
            10    LegacyFrame     -        -              3099290                    -         -
            11 SkippableFrame     -        -                 5128                    -         -
            12       LZ4Frame   B7I        -              3103536                    -         - 
            13       LZ4Frame   B7I    XXH32              3103548                    -         - 
            14       LZ4Frame   B7D    XXH32              3099314                    -         - 
            15       LZ4Frame   B7I    XXH32              3103540                    -         - 
            16       LZ4Frame   B7I    XXH32              3103548              5242880     59.20%
         Frame           Type Block Checksum           Compressed         Uncompressed     Ratio
             1       LZ4Frame   B7I    XXH32             12301254             20971520     58.66%
             2       LZ4Frame   B7I    XXH32              3103548              5242880     59.20%
    test_block (__main__.TestVerbose) ... ok
    test_checksum (__main__.TestVerbose) ... ok
    test_compressed (__main__.TestVerbose) ... ok
    test_filename (__main__.TestVerbose) ... ok
    test_frame_number (__main__.TestVerbose) ... ok
    test_frame_type (__main__.TestVerbose) ... ok
    test_ratio (__main__.TestVerbose) ... ok
    test_uncompressed (__main__.TestVerbose) ... ok
    Ran 14 tests in 0.013s
    opened by gabrielstedman 22
  • Origin/r129/multiple inputs patch



    Pardon my mix-up if I've made this PR backward or something silly. I'm not an expert in C either so please forgive oddities.

    Google code issue 151 resulted in the addition of the -m switch which allows the lz4 cli compressor to act like others (gzip/bzip2/xz). However, it does not work when decompressing with -d. I made a sub-branch of an r129 branch to create a function which will perform the decompression of multiple files (in short: ./lz4 -m -d file1.lz4 file2.lz4 now works). This is the first commit.

    The second change was to support missing input files. When a compressor (gzip et al) encounters a missing input file specified on command line it simply prints a warning to STDERR and continues operating on the remaining files. My second commit accomplishes this by attempting an fopen() on each input file when -m is specified. This leaves other file opening/reading/writing errors to the actual compression function (which is fatal, and probably should be). fopen() may not be ideal, but it's what I knew. Oh, and missing_files is now tracked and will change the return code to '1' when missing inputs are discovered, which also mimics other compressors' default behavior.

    If you need me to repackage these as different branches or whatever just let me know. Or if you want you can just rip the three files out and make commits on your account; I'm not really concerned with credit or whatever.

    Thanks! Kyle

    opened by KyleJHarper 22
  • Question regarding the ptr arithmetic in lz4.c


    Dear Developers, while compiling the lz4 library on OS/400 (the OS running on the IBM midrange AS/400 computers) we found a questionable pointer arithmetic construct in lib/lz4.c:

    We guess that line 1046 of lz4/lib/lz4.c should read

    (LZ4_dict->dictionary + LZ4_dict->currentOffset > src)) {   /* address space overflow */

    instead of

    ((uptrval)LZ4_dict->currentOffset > (uptrval)src)) {   /* address space overflow */

    Attached is a patch file (lz4.c.diff.zip) lz4.c.diff.zip

    After applying the modification we successfully ran all the tests contained in the lz4/tests directory (make test).

    Best Regards, Joachim Kern and Joachim Schneider, SAP SE

    opened by JoachimSchneider 21


    Part of #1071.

    This changeset introduces a new compile-time switch macro, LZ4_STATIC_LINKING_ONLY_DISABLE_MEMORY_ALLOCATION, which removes the following functions when it is defined.

    // lz4.c
    LZ4_create              // legacy
    // lz4hc.c
    LZ4_createHC            // legacy
    LZ4_freeHC              // legacy

    These functions use dynamic memory allocation functions such as malloc() and free(). The switch will be useful for freestanding environments which don't have these allocation functions.

    Since this change breaks the API, the macro is only valid when lz4 is linked as a static object.

    opened by t-mat 0
  • Can't find lz4.lib file error in Windows with meson


    Describe the bug

    When trying to build lz4 in Windows with meson I am getting the following error:

    (tar) Extracting C:\gtk-build\src\lz4-1.9.3.tar.gz
    Forcing extraction of C:\gtk-build\src\lz4-1.9.3.tar.gz
    Extracting C:\gtk-build\src\lz4-1.9.3.tar.gz to C:\gtk-build\build\Win32\release\lz4
    Copying files from C:\gtk-build\github\gvsbuild\patches\lz4 to C:\gtk-build\build\Win32\release\lz4
    (tar) Exporting lz4
    (tar) Exporting C:\gtk-build\src\lz4-1.9.3.tar.gz
    Building project lz4 (1.9.3)
    Generating meson directory
    The Meson build system
    Version: 0.59.4
    Source dir: C:\gtk-build\build\Win32\release\lz4
    Build dir: C:\gtk-build\build\Win32\release\lz4\_gvsbuild-meson
    Build type: native build
    Project name: lz4
    Project version: DUMMY
    C compiler for the host machine: cl (msvc 19.31.31105 "Microsoft (R) C/C++ Optimizing Compiler Version 19.31.31105 for x86")
    C linker for the host machine: link link 14.31.31105.0
    Host machine cpu family: x86
    Host machine cpu: x86
    Program GetLz4LibraryVersion.py found: YES (C:\gtk-build\tools\pythonx86.3.10.4\tools\python.exe C:\gtk-build\build\Win32\release\lz4\meson\GetLz4LibraryVersion.py)
    Message: Project version is now: 1.9.3
    Compiler for C supports arguments -Wcast-qual: NO 
    Compiler for C supports arguments -Wcast-align: NO 
    Compiler for C supports arguments -Wshadow: NO 
    Compiler for C supports arguments -Wswitch-enum: NO 
    Compiler for C supports arguments -Wdeclaration-after-statement: NO 
    Compiler for C supports arguments -Wstrict-prototypes: NO 
    Compiler for C supports arguments -Wundef: NO 
    Compiler for C supports arguments -Wpointer-arith: NO 
    Compiler for C supports arguments -Wstrict-aliasing=1: NO 
    Compiler for C supports arguments -DLZ4_DEBUG=1: YES 
    Build targets in project: 2
    Option buildtype is: debugoptimized [default: release]
    Found ninja-1.8.2 at C:\gtk-build\tools\ninja-1.8.2\ninja.EXE
    [6/7] Installing files.
    Installing meson\lib\lz4.dll to C:\gtk-build\gtk\Win32\release\bin
    Traceback (most recent call last):
      File "C:\gtk-build\tools\meson-0.59.4\mesonbuild\mesonmain.py", line 228, in run
        return options.run_func(options)
      File "C:\gtk-build\tools\meson-0.59.4\mesonbuild\minstall.py", line 720, in run
      File "C:\gtk-build\tools\meson-0.59.4\mesonbuild\minstall.py", line 512, in do_install
        self.install_targets(d, dm, destdir, fullprefix)
      File "C:\gtk-build\tools\meson-0.59.4\mesonbuild\minstall.py", line 615, in install_targets
        raise RuntimeError(f'File {t.fname!r} could not be found')
    RuntimeError: File 'meson\\lib\\lz4.lib' could not be found
    FAILED: meson-install 
    "C:\gtk-build\tools\pythonx86.3.10.4\tools\python.exe" "C:\gtk-build\tools\meson-0.59.4\meson.py" "install" "--no-rebuild"
    ninja: build stopped: subcommand failed.
    Error: lz4 build failed

    Expected behavior meson build completes.

    To Reproduce

    meson setup --buildtype=debugoptimized -Ddefault_library=shared _gvsbuild-meson
    cd _gvsbuild-meson
    ninja install

    System (please complete the following information):

    • OS: Windows
    • Version: 11
    • Compiler: Visual Studio 2022
    • Build System: meson
    • Other hardware specs: Core i5


    build issue help wanted 
    opened by danyeaw 8
  • New release?


    A bunch of significant issues were fixed since 1.9.3 was released (such as various sanitizer errors), it would be nice if a 1.9.4 version could be released soon.

    opened by pitrou 4
  • please support Visual C++ 6.0


    please support Visual C++ 6.0


    help wanted 
    opened by ohyeah521 7
  • LZ4Frame: Allow the user to inject data into the decompression stream


    Imagine a protocol that disallows the compressor to inflate the packet size and we want to use LZ4Frame streaming compression.

    Whenever we compress a packet, and it gets larger, we must send it uncompressed because of the protocol restrictions. We now need to reset the compressor and decompressor, since the decompressor will never receive the uncompressed block, and the compressor has already processed it (we just threw out the result).

    We will still send the uncompressed data to the decompressor. It would be nice to have a function like LZ4F_injectUncompressedBlock(LZ4F_dctx* dctx, void const* src, size_t srcSize). That way we don't have to reset the stream.

    Alternatively, the user could prepend a fake LZ4F block header to the uncompressed data, and pass that to the normal decompression function. This works with the current LZ4 version.

    feature request 
    opened by terrelln 0
  • v1.9.3 (Nov 16, 2020)

    LZ4 v1.9.3 is a maintenance release, with more than 200 commits fixing multiple corner cases and build scenarios. Updating is recommended. The existing liblz4 API is not modified, so it should be a drop-in replacement.

    Faster Windows binaries

    On the build side, multiple rounds of improvements, thanks to contributors such as @wolfpld and @remittor, make this version generate faster binaries for Visual Studio. It is also expected to better support a broader range of VS variants. Speed benefits can be substantial. For example, on my laptop, compared with v1.9.2, this version built with VS2019 compresses at 640 MB/s (from 420 MB/s), and decompression reaches 3.75 GB/s (from 3.3 GB/s). So this is definitely perceptible.

    Other notable updates

    Among the visible fixes, this version improves the _destSize() variants, an advanced API which reverses the usual logic: it targets an a-priori compressed size, and tries to shove as much data as possible into that budget. The high-compression variant LZ4_compress_HC_destSize() would miss some important opportunities in highly compressible data, resulting in less than optimal compression (detected by @hsiangkao). This is fixed in this version. Even the "fast" variant receives some gains (albeit very small).

    Also, the corresponding decompression function, LZ4_decompress_safe_partial(), officially supports a scenario where the input (compressed) size is unknown (but bounded), as long as the requested amount of data to regenerate is smaller than or equal to the block's content. This function used to require the exact compressed size, and would sometimes support the above scenario "by accident", but could then also break it by accident. This is now firmly controlled, documented and tested.

    Finally, replacing memory functions (malloc(), calloc(), free()), typically for freestanding environments, is now a bit easier. It used to require a small direct modification of the lz4.c source code, but can now be achieved with the build macro LZ4_USER_MEMORY_FUNCTIONS at compilation time. In that case, liblz4 no longer includes <stdlib.h>, and instead requires the functions LZ4_malloc(), LZ4_calloc() and LZ4_free() to be implemented somewhere in the project and available at link time.

    Changes list

    Here is a more detailed list of updates introduced in v1.9.3 :

    • perf: highly improved speed in kernel space, by @terrelln
    • perf: faster speed with Visual Studio, thanks to @wolfpld and @remittor
    • perf: improved dictionary compression speed, by @felixhandte
    • perf: fixed LZ4_compress_HC_destSize() ratio, detected by @hsiangkao
    • perf: reduced stack usage in high compression mode, by @Yanpas
    • api : LZ4_decompress_safe_partial() supports unknown compressed size, requested by @jfkthame
    • api : improved LZ4F_compressBound() with automatic flushing, by Christopher Harvie
    • api : can (de)compress to/from NULL without UBs
    • api : fix alignment test on 32-bit systems (state initialization)
    • api : fix LZ4_saveDictHC() in corner case scenario, detected by @IgorKorkin
    • cli : compress multiple files using the legacy format, by Filipe Calasans
    • cli : benchmark mode supports dictionary, by @rkoradi
    • cli : fix --fast with large argument, detected by @picoHz
    • build: link to user-defined memory functions with LZ4_USER_MEMORY_FUNCTIONS
    • build: contrib/cmake_unofficial/ moved to build/cmake/
    • build: visual/* moved to build/
    • build: updated meson script, by @neheb
    • build: tinycc support, by Anton Kochkov
    • install: Haiku support, by Jerome Duval
    • doc : updated LZ4 frame format, clarify EndMark

    Known issues :

    • Some people have reported a broken liblz4_static.lib file in the package lz4_win64_v1_9_3.zip. This is probably a mingw / msvc compatibility issue. If you have issues using this file, the solution is to rebuild it locally from sources with your target compiler.
    • The standard Makefile in v1.9.3 doesn't honor CFLAGS when passed through an environment variable. This is fixed in more recent versions on the dev branch. See #958 for details.
    Source code(tar.gz)
    Source code(zip)
    lz4_win32_v1_9_3.zip(327.09 KB)
    lz4_win64_v1_9_3.zip(586.30 KB)
  • v1.9.2(Aug 20, 2019)

    This is primarily a bugfix release, driven by the bugs found and fixed since LZ4's recent integration into Google's oss-fuzz, initiated by @cmeister2. The new capability was put to good use by @terrelln, dramatically expanding the number of scenarios covered by the profile-guided fuzzer. These scenarios were already covered by unguided fuzzers, but a few bugs require a combination of factors that unguided fuzzers are unable to produce in a reasonable timeframe.

    Due to these fixes, an upgrade of LZ4 to its latest version is recommended.

    • fix : out-of-bound read in exceptional circumstances when using decompress_partial(), by @terrelln
    • fix : slim opportunity for out-of-bound write with compress_fast() with a large enough input and when providing an output smaller than recommended (< LZ4_compressBound(inputSize)), by @terrelln
    • fix : rare data corruption bug with LZ4_compress_destSize(), by @terrelln
    • fix : data corruption bug when Streaming with an Attached Dict in HC Mode, by @felixhandte
    • perf: enable LZ4_FAST_DEC_LOOP on aarch64/GCC by default, by @prekageo
    • perf: improved lz4frame streaming API speed, by @dreambottle
    • perf: speed up lz4hc on slow patterns when using external dictionary, by @terrelln
    • api: better in-place decompression and compression support
    • cli : --list supports multi-frames files, by @gstedman
    • cli: --version outputs to stdout
    • cli : add option --best as an alias of -12 , by @Low-power
    • misc: Integration into oss-fuzz by @cmeister2, expanded list of scenarios by @terrelln
    Source code(tar.gz)
    Source code(zip)
    lz4_win32_v1_9_2.zip(248.58 KB)
    lz4_win64_v1_9_2.zip(412.23 KB)
  • v1.9.1(Apr 23, 2019)

    This is a point release, whose main objective is to fix a read out-of-bound issue reported in the decoder of v1.9.0. Upgrading to this version is recommended.

    A few other improvements were also merged during this time frame (listed below). A visible user-facing one is the introduction of a new command --list, started by @gabrielstedman, which makes it possible to peek at the internals of a .lz4 file. It will provide the block type, checksum information, compressed and decompressed sizes (if present). The command is limited to single-frame files for the time being.


    • fix : decompression functions were reading a few bytes beyond input size (introduced in v1.9.0, reported by @ppodolsky and @danlark1)
    • api : fix : lz4frame initializers compatibility with c++, reported by @degski
    • cli : added command --list, based on a patch by @gabrielstedman
    • build: improved Windows build, by @JPeterMugaas
    • build: AIX, by Norman Green

    Note : this release has an issue when compiling liblz4 dynamic library on Mac OS-X. This issue is fixed in : https://github.com/lz4/lz4/pull/696 .

    Source code(tar.gz)
    Source code(zip)
    lz4_v1_9_1_win32.zip(240.93 KB)
    lz4_v1_9_1_win64.zip(427.06 KB)
  • v1.9.0(Apr 16, 2019)

    Warning : this version has a known bug in the decompression function which makes it read a few bytes beyond input limit. Upgrade to v1.9.1 is recommended.

    LZ4 v1.9.0 is a performance focused release, also offering minor API updates.

    Decompression speed improvements

    Dave Watson (@djwatson) managed to carefully optimize the LZ4 decompression hot loop, offering substantial speed improvements on x86 and x64 platforms.

    Here are some benchmarks run on a Core i7-9700K, with sources compiled using gcc v8.2.0 on Ubuntu 18.10 "Cosmic Cuttlefish" (Linux 4.18.0-17-generic) :

    | Version | v1.8.3 | v1.9.0 | Improvement |
    | --- | --- | --- | --- |
    | enwik8 | 4090 MB/s | 4560 MB/s | +12% |
    | calgary.tar | 4320 MB/s | 4860 MB/s | +13% |
    | silesia.tar | 4210 MB/s | 4970 MB/s | +18% |

    Given that decompression speed has always been a strong point of lz4, the improvement is quite substantial.

    The new decoding loop is automatically enabled on x64 and x86. For other cpu types, since our testing capabilities are more limited, the new decoding loop is disabled by default. However, anyone can manually enable it, by using the build macro LZ4_FAST_DEC_LOOP, which accepts values 0 or 1. The outcome will vary depending on the exact target and build chain. For example, in our limited tests with ARM platforms, we found that benefits vary strongly depending on cpu manufacturer, chip model, and compiler version, making it difficult to offer a "generic" statement. The ARM situation may prove extreme, though, due to the proliferation of available variants. Other cpu types may prove easier to assess.

    API updates


    The _destSize() compression variants have been promoted to stable status. These variants reverse the logic, by trying to fit as much input data as possible into a fixed memory budget. This is used for example in WiredTiger and EroFS, which cram as much data as possible into the size of a physical sector, for improved storage density.


    When compressing small inputs, the fixed cost of clearing the compression's internal data structures can become a significant fraction of the compression cost. In v1.8.2, new LZ4 entry points were introduced to perform this initialization at effectively zero cost. LZ4_resetStream_fast() and LZ4_resetStreamHC_fast() are now promoted to stable status.

    They are supplemented by new entry points, LZ4_initStream() and its corresponding HC variant, which must be used on any uninitialized memory segment that will be converted into an LZ4 state. After that, only reset*_fast() is needed to start some new compression job re-using the same context. This proves especially effective when compressing a lot of small data.


    The decompress*_fast() variants have been moved into the deprecated section. While they offer slightly faster decompression speed (~+5%), they are also unprotected against malicious inputs, making them a security liability. There are some limited cases where this property could prove acceptable (a perfectly controlled environment, same producer / consumer), but in most cases, the risk is not worth the benefit. We want to discourage such usage as clearly as possible, by pushing the _fast() variants into the deprecation area. For the time being, they will not yet generate deprecation warnings when invoked, to give existing applications time to move towards decompress*_safe(). But this is the next stage, and it is likely to happen in a future release.

    LZ4_resetStream() and LZ4_resetStreamHC() have also been moved into the deprecated section, to emphasize the preference for LZ4_resetStream_fast(). Their true equivalents are actually LZ4_initStream() and LZ4_initStreamHC(), which are more generic (they can accept any memory area to initialize) and safer (they control size and alignment). The naming also makes it clearer when to use initStream() and when to use resetStream_fast().

    Changes list

    This release brings an assortment of small improvements and bug fixes, as detailed below :

    • perf: large decompression speed improvement on x86/x64 (up to +20%) by @djwatson
    • api : changed : _destSize() compression variants are promoted to stable API
    • api : new : LZ4_initStream(HC), replacing LZ4_resetStream(HC)
    • api : changed : LZ4_resetStream_fast(HC) as recommended reset function, for better performance on small data
    • cli : support custom block sizes, by @blezsan
    • build: source code can be amalgamated, by Bing Xu
    • build: added meson build, by @lzutao
    • build: new build macros : LZ4_DISTANCE_MAX, LZ4_FAST_DEC_LOOP
    • install: MidnightBSD, by @laffer1
    • install: msys2 on Windows 10, by @vtorri
    Source code(tar.gz)
    Source code(zip)
    lz4_v1_9_0_win32.zip(240.51 KB)
    lz4_v1_9_0_win64.zip(424.83 KB)
  • v1.8.3(Sep 11, 2018)

    This is a maintenance release, mainly triggered by issue #560: a data corruption bug that can only occur in v1.8.2, at level 9 (only), for some "large enough" data blocks (> 64 KB) featuring a fairly specific data pattern, improbable enough that multiple CPUs running various fuzzers non-stop over a period of several weeks were not able to find it. Big thanks to @Pashugan for finding and sharing a reproducible sample.

    Due to this fix, v1.8.3 is a recommended update.

    A few other minor features were already merged, and are therefore bundled in this release too.

    Should lz4 prove too slow, it's now possible to use the --fast=# command, by @jennifermliu. This is equivalent to the acceleration parameter in the API, in which the user forfeits some compression ratio for the benefit of better speed.

    The verbose CLI has been fixed, and now displays the real amount of time spent compressing (instead of cpu time). It also shows a new indicator, cpu load %, so that users can determine if the limiting factor was cpu or I/O bandwidth.

    Finally, an existing function, LZ4_decompress_safe_partial(), has been enhanced to make it possible to decompress only the beginning of an LZ4 block, up to a specified number of bytes. Partial decoding can be useful to save CPU time and memory, when the objective is to extract a limited portion from a larger block.

    Source code(tar.gz)
    Source code(zip)
    lz4_v1_8_3_win32.zip(289.94 KB)
    lz4_v1_8_3_win64.zip(578.46 KB)
  • v1.8.2(May 7, 2018)

    LZ4 v1.8.2 is a performance focused release, featuring important improvements for small inputs, especially when coupled with dictionary compression.

    General speed improvements

    LZ4 decompression speed has always been a strong point. In v1.8.2, it gets even better: decompression speed improves by about 10%, thanks in large part to a suggestion from @svpv.

    For example, on a Mac OS-X laptop with an Intel Core i7-5557U CPU @ 3.10GHz, running lz4 -bsilesia.tar compiled with default compiler llvm v9.1.0:

    | Version | v1.8.1 | v1.8.2 | Improvement |
    | --- | --- | --- | --- |
    | Decompression speed | 2490 MB/s | 2770 MB/s | +11% |

    Compression speed also receives a welcome boost, though the improvement is not evenly distributed: higher levels benefit quite a lot more.

    | Version | v1.8.1 | v1.8.2 | Improvement |
    | --- | --- | --- | --- |
    | lz4 -1 | 504 MB/s | 516 MB/s | +2% |
    | lz4 -9 | 23.2 MB/s | 25.6 MB/s | +10% |
    | lz4 -12 | 3.5 MB/s | 9.5 MB/s | +170% |

    Should you aim for best possible decompression speed, it's possible to request LZ4 to actively favor decompression speed, even if it means sacrificing some compression ratio in the process. This can be requested in a variety of ways depending on interface, such as using command --favor-decSpeed on CLI. This option must be combined with ultra compression mode (levels 10+), as it needs careful weighting of multiple solutions, which only this mode can process. The resulting compressed object always decompresses faster, but is also larger. Your mileage will vary, depending on file content. Speed improvement can be as low as 1%, and as high as 40%. It's matched by a corresponding file size increase, which tends to be proportional. The general expectation is 10-20% faster decompression speed for 1-2% bigger files.

    | Filename | decompression speed | --favor-decSpeed | Speed Improvement | Size change |
    | --- | --- | --- | --- | --- |
    | silesia.tar | 2870 MB/s | 3070 MB/s | +7 % | +1.45% |
    | dickens | 2390 MB/s | 2450 MB/s | +2 % | +0.21% |
    | nci | 3740 MB/s | 4250 MB/s | +13 % | +1.93% |
    | osdb | 3140 MB/s | 4020 MB/s | +28 % | +4.04% |
    | xml | 3770 MB/s | 4380 MB/s | +16 % | +2.74% |

    Finally, the variant LZ4_compress_destSize() also receives a ~10% speed boost, since it now redirects internally to the primary implementation of LZ4's fast mode, rather than relying on a separate custom implementation. This allows it to take advantage of all the optimization work that has gone into the main implementation.

    Compressing small contents

    When compressing small inputs, the fixed cost of clearing the compression's internal data structures can become a significant fraction of the compression cost. This release adds a new way, under certain conditions, to perform this initialization at effectively zero cost.

    New, experimental LZ4 APIs have been introduced to take advantage of this functionality in block mode:

    • LZ4_resetStream_fast()
    • LZ4_compress_fast_extState_fastReset()
    • LZ4_resetStreamHC_fast()
    • LZ4_compress_HC_extStateHC_fastReset()

    More detail about how and when to use these functions is provided in their respective headers.

    LZ4 Frame mode has been modified to use this faster reset whenever possible. LZ4F_compressFrame_usingCDict() prototype has been modified to additionally take an LZ4F_CCtx* context, so it can use this speed-up.

    Efficient Dictionary compression

    Support for dictionaries has been improved in a similar way: they can now be used in-place, which avoids the expense of copying the context state from the dictionary into the working context. Users are expected to see a noticeable performance improvement for small data.

    Experimental prototypes (LZ4_attach_dictionary() and LZ4_attach_HC_dictionary()) have been added to the LZ4 block API for using a loaded dictionary in-place. LZ4 Frame API users should benefit from this optimization transparently.

    The previous two changes, when taken advantage of, can provide meaningful performance improvements when compressing small data. Both changes have no impact on the produced compressed data. The only observable difference is speed.

    Linux git compression ratio vs speed

    This is a representative graphic of the sort of speed boost to expect. The red lines are the speeds seen for an input blob of the specified size, using the previous LZ4 release (v1.8.1) at compression levels 1 and 9 (those being fast mode and the default HC level). The green lines are the equivalent observations for v1.8.2. This benchmark was performed on the Silesia Corpus. Results for the dickens text are shown; other texts and compression levels saw similar improvements. The benchmark was compiled with GCC 7.2.0 with -O3 -march=native -mtune=native -DNDEBUG under Linux 4.6 and run on an Intel Xeon CPU E5-2680 v4 @ 2.40GHz.

    lz4frame_static.h Deprecation

    The content of lz4frame_static.h has been folded into lz4frame.h, hidden by a macro guard "#ifdef LZ4F_STATIC_LINKING_ONLY". This means lz4frame.h now matches lz4.h and lz4hc.h. lz4frame_static.h is retained as a shell that simply sets the guard macro and includes lz4frame.h.

    Changes list

    This release also brings an assortment of small improvements and bug fixes, as detailed below :

    • perf: faster compression on small files, by @felixhandte
    • perf: improved decompression speed and binary size, by Alexey Tourbin (@svpv)
    • perf: faster HC compression, especially at max level
    • perf: very small compression ratio improvement
    • fix : compression compatible with low memory addresses (< 0xFFFF)
    • fix : decompression segfault when provided with NULL input, by @terrelln
    • cli : new command --favor-decSpeed
    • cli : benchmark mode more accurate for small inputs
    • fullbench : can bench _destSize() variants, by @felixhandte
    • doc : clarified block format parsing restrictions, by Alexey Tourbin (@svpv)
    Source code(tar.gz)
    Source code(zip)
    lz4_v1_8_2_win32.zip(288.51 KB)
    lz4_v1_8_2_win64.zip(654.93 KB)
  • v1.8.1.2(Jan 14, 2018)

    LZ4 v1.8.1's most visible new feature is its support for dictionary compression. This was already somewhat possible, but in a complex way, requiring knowledge of internal workings. Support is now more formally added on the API side within lib/lz4frame_static.h. It's early days, and this new API is tagged "experimental" for the time being.

    Support is also added in the command line utility lz4, using the new command -D, implemented by @felixhandte. The behavior of this command is identical to zstd's, should you already be familiar with it.

    lz4 doesn't specify how to build a dictionary; all it requires is that the dictionary be a file, of which only up to 64 KB is used. This approach is compatible with the zstd dictionary builder, which can be instructed to create a 64 KB dictionary with this command :

    zstd --train dirSamples/* -o dictName --maxdict=64KB

    LZ4 v1.8.1 also offers improved performance at ultra settings (levels 10+). These levels receive a new implementation, called the optimal parser, available in lib/lz4_opt.h. Compared with the previous version, the new parser uses less memory (256 KB instead of 384 KB), runs faster, compresses a little bit better (not much, as it was already close to the theoretical limit), and resists pathological patterns which could destroy performance (see #339).

    For comparison, here are some quick benchmark using LZ4 v1.8.0 on my laptop with silesia.tar :

    ./lz4 -b9e12 -v ~/dev/bench/silesia.tar
    *** LZ4 command line interface 64-bits v1.8.0, by Yann Collet ***
    Benchmarking levels from 9 to 12
     9#silesia.tar       : 211984896 ->  77897777 (2.721),  24.2 MB/s ,2401.8 MB/s
    10#silesia.tar       : 211984896 ->  77852187 (2.723),  16.9 MB/s ,2413.7 MB/s
    11#silesia.tar       : 211984896 ->  77435086 (2.738),   7.1 MB/s ,2425.7 MB/s
    12#silesia.tar       : 211984896 ->  77274453 (2.743),   3.3 MB/s ,2390.0 MB/s

    and now using LZ4 v1.8.1 :

    ./lz4 -b9e12 -v ~/dev/bench/silesia.tar
    *** LZ4 command line interface 64-bits v1.8.1, by Yann Collet ***
    Benchmarking levels from 9 to 12
     9#silesia.tar       : 211984896 ->  77890594 (2.722),  24.4 MB/s ,2405.2 MB/s
    10#silesia.tar       : 211984896 ->  77859538 (2.723),  19.3 MB/s ,2476.0 MB/s
    11#silesia.tar       : 211984896 ->  77369725 (2.740),  10.1 MB/s ,2478.4 MB/s
    12#silesia.tar       : 211984896 ->  77270146 (2.743),   3.7 MB/s ,2508.3 MB/s

    The new parser is also directly compatible with lower compression levels, which brings additional benefits :

    • Compatibility with LZ4_*_destSize() variant, which reverses the logic by trying to fit as much data as possible into a predefined limited size buffer.
    • Compatibility with Dictionary compression, as it uses the same tables as regular HC mode

    In the future, this compatibility will also allow dynamic on-the-fly changes of compression level, but this feature is not implemented at this stage.

    The release also provides a set of small bug fixes and improvements, listed below :

    • perf : faster and stronger ultra modes (levels 10+)
    • perf : slightly faster compression and decompression speed
    • perf : fix bad degenerative case, reported by @c-morgenstern
    • fix : decompression failed when using a combination of extDict + low memory address (#397), reported and fixed by Julian Scheid (@jscheid)
    • cli : support for dictionary compression (-D), by Felix Handte @felixhandte
    • cli : fix : lz4 -d --rm preserves timestamp (#441)
    • cli : fix : do not modify /dev/null permission as root, by @aliceatlas
    • api : new dictionary api in lib/lz4frame_static.h
    • api : _destSize() variant supported for all compression levels
    • build : make and make test compatible with parallel build -jX, reported by @mwgamera
    • build : can control LZ4LIB_VISIBILITY macro, by @mikir
    • install: fix man page directory (#387), reported by Stuart Cardall (@itoffshore)

    Note : v1.8.1.2 is the same as v1.8.1, with the version number fixed in source code, as notified by Po-Chuan Hsieh (@sunpoet).

    Source code(tar.gz)
    Source code(zip)
    lz4_v1_8_1_win32.zip(256.23 KB)
    lz4_v1_8_1_win64.zip(568.41 KB)
  • v1.8.1(Jan 13, 2018)

    Prefer using v1.8.1.2. It's the same as v1.8.1, but the version number in source code has been fixed, thanks to @sunpoet. The version number is used in cli and documentation display, to create the full name of dynamic library, and can be requested via LZ4_versionNumber().

    Source code(tar.gz)
    Source code(zip)
  • v1.8.0(Aug 18, 2017)

    • cli : fix : do not modify /dev/null permissions, reported by @Maokaman1
    • cli : added GNU separator -- specifying that all following arguments are only files
    • cli : restored -BX command enabling block checksum
    • API : added LZ4_compress_HC_destSize(), by @remittor
    • API : added LZ4F_resetDecompressionContext()
    • API : lz4frame : negative compression levels trigger fast acceleration, request by @llchan
    • API : lz4frame : can control block checksum and dictionary ID
    • API : fix : expose obsolete decoding functions, reported by @cyfdecyf
    • API : experimental : lz4frame_static.h : new dictionary compression API
    • build : fix : static lib installation, by @ido
    • build : dragonFlyBSD, OpenBSD, NetBSD supported
    • build : LZ4_MEMORY_USAGE can be modified at compile time, through external define
    • doc : updated LZ4 Frame format to v1.6.0, restoring Dictionary-ID field in header
    • doc : lz4's API manual in .html format, by @inikep

    Source code(tar.gz)
    Source code(zip)
    lz4_v1_8_0_win32.zip(239.55 KB)
    lz4_v1_8_0_win64.zip(486.88 KB)
  • v1.7.5(Jan 3, 2017)

    • lz4hc : new high compression mode, by @inikep : levels 10-12 compress more (and slower), 12 is the highest level
    • lz4cat : fix : works with relative path (#284) and stdin (#285) (reported by @beiDei8z)
    • cli : fix minor notification when using -r recursive mode
    • API : lz4frame : LZ4F_compressBound(0) provides upper bound of *flush() and *End() (#290, #280)
    • doc : markdown version of man page, by @t-mat (#279)
    • build : Makefile : fix make -jX lib+exe concurrency (#277)
    • build : cmake : improvements by @mgorny (#296)

    Update : earlier versions of pre-compiled Windows binaries had a bug which made them unable to decode files > 2 GB. The new binaries available below fix this issue.

    Source code(tar.gz)
    Source code(zip)
    lz4_v1_7_5_win32-fix.zip(412.07 KB)
    lz4_v1_7_5_win64-fix.zip(458.88 KB)
  • v1.7.4.2(Nov 22, 2016)

  • v1.7.4(Nov 22, 2016)

  • v1.7.3(Nov 16, 2016)

    • Changed : moved to unified versioning : package, cli and library have the same version number
    • Improved: small decompression speed boost
    • Improved: small compression speed improvement on 64-bits systems
    • Improved: small compression ratio and speed improvement on small files
    • Improved: significant speed boost on ARMv6 and ARMv7
    • Fix : better ratio on 64-bits big-endian targets
    • Improved cmake build script, by @nemequ
    • New liblz4-dll project, by @inikep
    • Makefile : generates object files (*.o) for faster (re)compilation on low power systems
    • cli : new : --rm and --help commands
    • cli : new : preserved file attributes, by @inikep
    • cli : fix : crash on some invalid inputs
    • cli : fix : -t correctly validates lz4-compressed files, by @terrelln
    • cli : fix : detects and reports fread() errors, thanks to @iyokan report #243
    • cli : bench : new : -r recursive mode
    • lz4cat : can cat multiple files in a single command line (#184)
    • Added : doc/lz4_manual.html, by @inikep
    • Added : dictionary compression and frame decompression examples, by @terrelln
    • Added : Debianization, by @bioothod

    Source code(tar.gz)
    Source code(zip)
    lz4_v1_7_3_win32.zip(218.71 KB)
    lz4_v1_7_3_win64.zip(323.48 KB)
  • r131(Jun 30, 2015)

    • New : Dos/DJGPP target, thanks to Louis Santillan (#114)
    • Added : example using lz4frame library, by Zbigniew Jędrzejewski-Szmek (#118)
    • Changed: liblz4 : xxhash symbols are dynamically changed (namespace emulation) to avoid symbol conflicts
    • Changed: liblz4.a (static library) no longer compiled with -fPIC by default

    Source code(tar.gz)
    Source code(zip)
  • r130(May 29, 2015)

    Hotfix, solving issues with lz4cat.

    In detail :

    • Fixed : incompatibility sparse mode vs console, reported by Yongwoon Cho (#105)
    • Fixed : LZ4IO exits too early when frame crc not present, reported by Yongwoon Cho (#106)
    • Fixed : incompatibility sparse mode vs append mode, reported by Takayuki Matsuoka (#110)
    • Performance fix : big compression speed boost for clang (+30%)
    • New : cross-version test, by Takayuki Matsuoka

    Source code(tar.gz)
    Source code(zip)
  • r129(May 11, 2015)

    • New : LZ4_compress_fast()
    • Changed: new lz4 and lz4hc compression API. Previous function prototypes still supported.
    • Changed: sparse file support enabled by default
    • New : LZ4 CLI improved performance compressing/decompressing multiple files (#86, kind contribution from Kyle J. Harper & Takayuki Matsuoka)
    • Added : LZ4_compress_destSize()
    • Fixed : GCC 4.9+ vector optimization, reported by Markus Trippelsdorf, Greg Slazinski & Evan Nemerson
    • Changed: enums converted to LZ4F_ namespace convention, by Takayuki Matsuoka
    • Added : AppVeyor CI environment, for Visual tests, suggested by Takayuki Matsuoka
    • Modified: obsolete functions generate warnings, suggested by Evan Nemerson, contributed by Takayuki Matsuoka
    • Fixed : bug #75 (unfinished stream), reported by Yongwoon Cho
    • Updated: documentation converted to Markdown format

    Source code(tar.gz)
    Source code(zip)
  • r128(Mar 31, 2015)

    • New : lz4cli sparse file support (requested by Neil Wilson, contributed by Takayuki Matsuoka)
    • New : command -m, to compress multiple files in a single command (suggested by Kyle J. Harper)
    • Fixed : restored lz4hc compression ratio (slightly lower since r124)
    • New : lz4 cli supports long commands (suggested by Takayuki Matsuoka)
    • New : lz4frame & lz4cli frame content size support
    • New : lz4frame supports skippable frames, as requested by Sergey Cherepanov
    • Changed: default "make install" directory is /usr/local, as notified by Ron Johnson
    • New : lz4 cli supports "pass-through" mode, requested by Neil Wilson
    • New : datagen can generate sparse files
    • New : scan-build tests, thanks to kind help by Takayuki Matsuoka
    • New : g++ compatibility tests
    • New : arm cross-compilation test, thanks to kind help by Takayuki Matsuoka
    • Fixed : fuzzer + frametest compatibility with NetBSD (issue #48, reported by Thomas Klausner)
    • Added : Visual project directory
    • Updated: man page & specification

    Source code(tar.gz)
    Source code(zip)
  • r127(Jan 2, 2015)

  • r126(Dec 24, 2014)

    • New : lz4frame API is now integrated into liblz4
    • Fixed : GCC 4.9 bug on highest performance settings, reported by Greg Slazinski
    • Fixed : bug within LZ4 HC streaming mode, reported by James Boyle
    • Fixed : older compilers don't like nameless unions, reported by Cheyi Lin
    • Changed : lz4 is C90 compatible
    • Changed : added -pedantic option, fixed a few minor warnings

    Source code(tar.gz)
    Source code(zip)
  • r125(Dec 13, 2014)

    • New 32/64 bits, little/big endian and strict/efficient align detection routines (internal)
    • New directory structure
    • Small decompression speed improvement
    • Fixed a bug into LZ4_compress_limitedOutput(), thanks to Christopher Speller
    • lz4 utility uses lz4frame library (lz4io modified)
    Source code(tar.gz)
    Source code(zip)
  • r124(Nov 8, 2014)

    • New : LZ4 HC streaming mode
    • Fixed : LZ4F_compressBound() using null preferencesPtr
    • Updated : xxHash to r38
    • Updated : library number, to 1.4.0

    Source code(tar.gz)
    Source code(zip)
  • r123(Sep 25, 2014)

    • Added : experimental lz4frame API; special thanks to Takayuki Matsuoka and Christopher Jackson for testing and suggestions
    • Fix : s390x support, thanks to Nobuhiro Iwamatsu

    Source code(tar.gz)
    Source code(zip)
  • r122(Aug 28, 2014)

    • Fix : AIX & AIX64 support (SamG)
    • Fix : mips 64-bits support (lew van)
    • Added : examples directory, using code examples from Takayuki Matsuoka
    • Updated : framing specification, to v1.4.1

    Source code(tar.gz)
    Source code(zip)
  • r121(Aug 7, 2014)

    • Fix : make install for OS-X and BSD, thanks to Takayuki Matsuoka
    • Added : make install for kFreeBSD and Hurd (Nobuhiro Iwamatsu)
    • Fix : LZ4 HC streaming bug

    Source code(tar.gz)
    Source code(zip)
  • r120(Jul 24, 2014)

    • Modified : streaming API, using strong types
    • Added : LZ4_versionNumber(), thanks to Takayuki Matsuoka
    • Fix : OS-X : library install name, thanks to Clemens Lang
    • Updated : Makefile : synchronize library version number with lz4.h, thanks to Takayuki Matsuoka
    • Updated : Makefile : stricter compilation flags
    • Added : pkg-config, thanks to Zbigniew Jędrzejewski-Szmek (issue 135)
    • Makefile : lz4-test only tests native binaries, as suggested by Michał Górny (issue 136)
    • Updated : xxHash to r35

    Source code(tar.gz)
    Source code(zip)
  • r119(Jul 2, 2014)

  • r118(Jun 26, 2014)

    • New : LZ4 streaming API (fast version), special thanks to Takayuki Matsuoka
    • New : datagen : parametrable synthetic data generator for tests
    • Improved : fuzzer, supports more test cases, more parameters, ability to jump to a specific test
    • fix : support ppc64le platform (issue 131)
    • fix : issue 52 (malicious address space overflow in 32-bits mode when using custom format)
    • fix : Makefile : minor issue 130 : header files permissions

    Source code(tar.gz)
    Source code(zip)
  • r117(Apr 22, 2014)

    • Added : man pages for lz4c and lz4cat
    • Added : automated tests on Travis, thanks to Takayuki Matsuoka!
    • fix : block-dependency command line (issue 127)
    • fix : lz4fullbench (issue 128)

    Source code(tar.gz)
    Source code(zip)

Xingbo Wu 75 Jul 22, 2022
dwm is an extremely fast, small, and dynamic window manager for X.

dwm - dynamic window manager dwm is an extremely fast, small, and dynamic window manager for X. My Patches This is in the order that I patched everyth

Christian Chiarulli 30 Jun 27, 2022
Typesafe, Generic & Extremely fast Dictionary in C 🚀

CDict.h Typesafe, Generic, and Extremely Fast Dictionary in C ?? Key Features Extremely fast non-cryptographic hash algorithm XXHash Complete Typesafe

Robus Gauli 14 May 20, 2022
An extremely fast FEC filing parser written in C

FastFEC A C program to stream and parse FEC filings, writing output to CSV. This project is in early stages but works on a wide variety of filings and

The Washington Post 54 Jul 25, 2022
libmdbx is an extremely fast, compact, powerful, embedded, transactional key-value database, with permissive license

One of the fastest embeddable key-value ACID database without WAL. libmdbx surpasses the legendary LMDB in terms of reliability, features and performance.

Леонид Юрьев (Leonid Yuriev) 1k Apr 13, 2022
Heavily optimized zlib compression algorithm

Optimized version of longest_match for zlib Summary Fast zlib longest_match function. Produces slightly smaller compressed files for significantly fas

Konstantin Nosov 117 May 10, 2022
CComp: A Parallel Compression Algorithm for Compressed Word Search

The goal of CComp is to achieve better compressed search times while achieving the same compression-decompression speed as other parallel compression algorithms. CComp achieves this by splitting both the word dictionaries and the input stream, processing them in parallel.

Emir Öztürk 4 Sep 30, 2021
A simple C library implementing the compression algorithm for isosceles triangles.

orvaenting Summary A simple C library implementing the compression algorithm for isosceles triangles. License This project's license is GPL 2 (as of J

Kevin Matthes 0 Apr 1, 2022
Better lossless compression than PNG with a simpler algorithm

Zpng Small experimental lossless photographic image compression library with a C API and command-line interface. It's much faster than PNG and compres

Chris Taylor 201 Jun 28, 2022