Advanced DXTc texture compression and transcoding library


crunch/crnlib v1.04 - Advanced DXTn texture compression library

Public Domain - Please see license.txt.

Portions of this software make use of public domain code originally written by Igor Pavlov (LZMA), RYG (crn_ryg_dxt*), and Sean Barrett (stb_image.c).

If you use this software in a product, an acknowledgment in the product documentation would be highly appreciated but is not required.

Overview

crnlib is a lossy texture compression library for developers that ship content using the DXT1/5/N or 3DC compressed color/normal map/cubemap mipmapped texture formats. It was written by the same author as the open source LZHAM compression library.

It can compress mipmapped 2D textures and cubemaps to approx. 1-1.25 bits/texel, and normal maps to 1.75-2 bits/texel. The actual bitrate depends on the complexity of the texture itself, the specified quality factor/target bitrate, and ultimately on the desired quality needed for a particular texture. (For reference, plain DXT1 is 4 bits/texel, so a 1024x1024 DXT1 texture occupies 512 KB; at roughly 1.25 bits/texel the same texture takes about 160 KB.)

crnlib differs significantly from other approaches because its compressed texture data format was carefully designed to be quickly transcodable directly to DXTn with no intermediate recompression step. The typical (single-threaded) transcode-to-DXTn rate is generally between 100-250 megatexels/sec. The current library supports PC (Win32/x64) and Xbox 360. Fast random access to individual mipmap levels is supported.

crnlib can also generate standard .DDS files at a specified quality setting, which results in files that are much more compressible by LZMA/Deflate/etc. than files generated by standard DXTn texture tools (see below). This feature allows easy integration into any engine or graphics library that already supports .DDS files.

The .CRN file format supports the following core DXTn texture formats: DXT1 (but not DXT1A), DXT5, DXT5A, and DXN/3DC.

It also supports several popular swizzled variants (several are also supported by AMD's Compressonator): DXT5_XGBR, DXT5_xGxR, DXT5_AGBR, and DXT5_CCxY (experimental luma-chroma YCoCg).

Recommended Software

AMD's Compressonator tool is recommended to view the .DDS files created by the crunch tool and the included example projects.

Note: Some of the swizzled DXTn .DDS output formats (such as DXT5_xGBR) read/written by the crunch tool or examples deviate from the DX9 DDS standard, so DXSDK tools such as DXTEX.EXE won't load them at all or they won't be properly displayed.

Compression Algorithm Details

The compression process employed in creating both .CRN and clustered .DDS files utilizes a very high quality, scalable DXTn endpoint optimizer capable of processing any number of pixels (instead of the typical hard coded 16), optional adaptive switching between several macroblock sizes/configurations (currently any combination of 4x4, 8x4, 4x8, and 8x8 pixel blocks), endpoint clusterization using top-down cluster analysis, vector quantization (VQ) of the selector indices, and several custom algorithms for compressing the resulting endpoint/selector codebooks and macroblock indices. Multiple feedback passes are performed between the clusterization and VQ steps to optimize quality, and several steps use a brute force refinement approach to improve quality. The majority of compression steps are multithreaded.

The .CRN format currently utilizes canonical Huffman coding for speed (similar to Deflate but with much larger tables), but the next major version will also utilize adaptive binary arithmetic coding and higher order context modeling, using already-developed tech from my LZHAM compression library.

Supported File Formats

crnlib supports two compressed texture file formats. The first format (clustered .DDS) is simple to integrate into an existing project (typically, no code changes are required), but it doesn't offer the highest quality/compression ratio that crnlib is capable of. The second, higher quality custom format (.CRN) requires a few typically straightforward engine modifications to integrate the .CRN->DXTn transcoder header file library into your tools/engine.

.DDS

crnlib can compress textures to standard DX9-style .DDS files using clustered DXTn compression, which is a subset of the approach used to create .CRN files. (For completeness, crnlib also supports vanilla, block-by-block DXTn compression, but that's not very interesting.) Clustered DXTn compressed .DDS files are much more compressible than files created by other libraries/tools. Apart from increased compressibility, the .DDS files generated by this process are completely standard, so they should be fairly easy to add to a project with little to no code changes.

To actually benefit from clustered DXTn .DDS files, your engine needs to further losslessly compress the .DDS data generated by crnlib using a lossless codec such as zlib, lzo, LZMA, LZHAM, etc. Most likely, your engine does this already. (If not, you definitely should because DXTn compressed textures generally contain a large amount of highly redundant data.)
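
For illustration only (this step is outside crnlib; the sketch below simply assumes zlib is available in your build), packing the raw .DDS bytes before they go into your archive could look like this:

    // Minimal sketch (not part of crnlib): deflate an in-memory .DDS file produced
    // by crnlib/crunch before storing it in an archive. Assumes zlib is linked.
    #include <zlib.h>
    #include <cstdint>
    #include <vector>

    std::vector<uint8_t> pack_dds(const std::vector<uint8_t> &dds_bytes)
    {
        uLongf packed_size = compressBound(static_cast<uLong>(dds_bytes.size()));
        std::vector<uint8_t> packed(packed_size);
        if (compress2(packed.data(), &packed_size,
                      dds_bytes.data(), static_cast<uLong>(dds_bytes.size()),
                      Z_BEST_COMPRESSION) != Z_OK)
            packed.clear();              // compression failed
        else
            packed.resize(packed_size);  // shrink to the actual deflated size
        return packed;
    }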

Clustered .DDS files are intended to be the simplest/fastest way to integrate crnlib's tech into a project.

.CRN

The second, better option is to compress your textures to .CRN files using crnlib. To read the resulting .CRN data, you must add the .CRN transcoder library (the single-file, stand-alone header inc/crn_decomp.h) to your application. .CRN files provide noticeably higher quality at the same effective bitrate compared to clustered DXTn compressed .DDS files. Also, .CRN files don't require further lossless compression because they're already highly compressed.

.CRN files are a bit more difficult/risky to integrate into a project, but the resulting compression ratio and quality are superior to clustered .DDS files.

.KTX

crnlib and crunch can read/write the .KTX file format in various pixel formats. Rate distortion optimization (clustered DXTc compression) is not yet supported when writing .KTX files.

The .KTX file format is just like .DDS, except it's a fairly well specified standard created by the Khronos Group. Unfortunately, almost all of the tools I've found that support .KTX are fairly (to very) buggy, or are limited to only a handful of pixel formats, so there's no guarantee that the .KTX files written by crnlib can be reliably read by other tools.

Building the Examples

This release contains the source code and projects for three simple example projects:

crn_examples.2008.sln is a Visual Studio 2008 (VC9) solution file containing projects for Win32 and x64. crnlib itself also builds with VS2005, VS2010, and gcc 4.5.0 (TDM GCC+MinGW). A Code::Blocks 10.05 workspace and project file are also included, but compiling crnlib this way hasn't been tested much.

example1

Demonstrates how to use crnlib's high-level C-helper compression/decompression/transcoding functions in inc/crnlib.h. It's a fairly complete example of crnlib's functionality.

example2

Shows how to transcode .CRN files to .DDS using only the functionality in inc/crn_decomp.h. It does not link against crnlib.lib or depend on it in any way. (Note: The complete source code, approx. 4800 lines, to the CRN transcoder is included in inc/crn_decomp.h.)

example2 is intended to show how simple it is to integrate CRN textures into your application.

example3

Shows how to use the regular, low-level DXTn block compressor functions in inc/crnlib.h. This functionality is included for completeness. (Your engine or toolchain most likely already has its own DXTn compressor. crnlib's compressor is typically very competitive or superior to most available closed and open source CPU-based compressors.)

Creating Compressed Textures from the Command Line (crunch.exe)

The simplest way to create compressed textures using crnlib is to integrate the bin\crunch.exe (or bin\crunch_x64.exe) command line tool into your texture build toolchain or export process. It can write DXTn compressed 2D/cubemap textures to regular DXTn compressed .DDS, clustered (or reduced entropy) DXTn compressed .DDS, or .CRN files. It can also transcode or decompress files to several standard image formats, such as TGA or BMP. Run crunch.exe with no options for help.

The .CRN files created by crunch.exe can be efficiently transcoded to DXTn using the included CRN transcoding library, located in full source form under inc/crn_decomp.h.

Here are a few example crunch.exe command lines:

  1. Compress blah.tga to blah.dds using normal DXT1 compression:
  • crunch -file blah.tga -fileformat dds -dxt1
  2. Compress blah.tga to blah.dds using clustered DXT1 at an effective bitrate of 1.5 bits/texel, display image statistics:
  • crunch -file blah.tga -fileformat dds -dxt1 -bitrate 1.5 -imagestats
  3. Compress blah.tga to blah.dds using clustered DXT1 at quality level 100 (from [0,255]), with no mipmaps, display LZMA statistics:
  • crunch -file blah.tga -fileformat dds -dxt1 -quality 100 -mipmode none -lzmastats
  4. Compress blah.tga to blah.crn using clustered DXT1 at a bitrate of 1.2 bits/texel, no mipmaps:
  • crunch -file blah.tga -dxt1 -bitrate 1.2 -mipmode none
  5. Decompress blah.dds to a .tga file:
  • crunch -file blah.dds -fileformat tga
  6. Transcode blah.crn to a .dds file:
  • crunch -file blah.crn
  7. Decompress blah.crn, writing each mipmap level to a separate .tga file:
  • crunch -split -file blah.crn -fileformat tga

crunch.exe can do a lot more, like rescale/crop images before compression, convert images from one file format to another, compare images, process multiple images, etc.

Note: I would have included the full source to crunch.exe, but it still has some low-level dependencies on crnlib internals which I didn't have time to address. This version of crunch.exe has some reduced functionality compared to an earlier eval release. For example, XML file support is not included in this version.

Using crnlib

The most flexible and powerful way of using crnlib is to integrate the library into your editor/toolchain/etc. and directly supply it your raw/source texture bits. See the C-style APIs and comments in inc/crnlib.h.

To compress, you basically fill in a few structs and call one function:

void *crn_compress( const crn_comp_params &comp_params,
                    crn_uint32 &compressed_size,
                    crn_uint32 *pActual_quality_level = NULL,
                    float *pActual_bitrate = NULL);

Or, if you want crnlib to also generate mipmaps, you call this function:

void *crn_compress( const crn_comp_params &comp_params,
                    const crn_mipmap_params &mip_params,
                    crn_uint32 &compressed_size,
                    crn_uint32 *pActual_quality_level = NULL,
                    float *pActual_bitrate = NULL);

You can also transcode/uncompress .DDS/.CRN files to raw 32bpp images using crn_decompress_crn_to_dds() and crn_decompress_dds_to_images().
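
As a rough usage sketch (the field and enum names below are assumptions based on inc/crnlib.h; check the header before relying on them), compressing a single 2D image to a .CRN block in memory might look like this:

    // Rough sketch only: field/enum names are assumptions, verify against inc/crnlib.h.
    #include "crnlib.h"

    void *compress_rgba_to_crn(const crn_uint32 *pRGBA_pixels, crn_uint32 width, crn_uint32 height,
                               crn_uint32 &compressed_size)
    {
        crn_comp_params params;
        params.m_width = width;
        params.m_height = height;
        params.m_pImages[0][0] = pRGBA_pixels;   // face 0, mip level 0
        params.m_format = cCRNFmtDXT1;           // target DXTn format
        params.m_file_type = cCRNFileTypeCRN;    // or cCRNFileTypeDDS for clustered .DDS
        params.m_quality_level = 128;            // quality range is [0,255]

        // Returns a heap block containing the whole .CRN file; free it with crn_free_block().
        return crn_compress(params, compressed_size);
    }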

Internally, crnlib just uses inc/crn_decomp.h to transcode textures to DXTn. If you only need to transcode .CRN format files to raw DXTn bits at runtime (and not compress), you don't actually need to compile or link against crnlib at all. Just include inc/crn_decomp.h, which contains a completely self-contained CRN transcoder in the "crnd" namespace. The crnd_get_texture_info(), crnd_unpack_begin(), crnd_unpack_level(), etc. functions are all you need to efficiently get at the raw DXTn bits, which can be directly supplied to whatever API or GPU you're using. (See example2.)
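
Sketched below is the general shape of a runtime transcode; the signatures are paraphrased, so treat inc/crn_decomp.h and example2 as the authoritative reference:

    // Rough sketch: signatures paraphrased from inc/crn_decomp.h, see example2 for
    // the real thing. Unpacks mip level 0 of an in-memory .CRN file to raw DXTn blocks.
    #include "crn_decomp.h"

    bool transcode_level0(const void *pCRN_data, crnd::uint32 crn_data_size,
                          void *pDXT_blocks, crnd::uint32 dxt_blocks_size_in_bytes,
                          crnd::uint32 row_pitch_in_bytes)
    {
        crnd::crn_texture_info tex_info;
        if (!crnd::crnd_get_texture_info(pCRN_data, crn_data_size, &tex_info))
            return false;   // not a valid .CRN file

        crnd::crnd_unpack_context pContext = crnd::crnd_unpack_begin(pCRN_data, crn_data_size);
        if (!pContext)
            return false;

        // crnd_unpack_level() takes an array of per-face destination pointers
        // (a single face for a plain 2D texture).
        void *pFaces[1] = { pDXT_blocks };
        const bool success = crnd::crnd_unpack_level(pContext, pFaces, dxt_blocks_size_in_bytes,
                                                     row_pitch_in_bytes, 0);

        crnd::crnd_unpack_end(pContext);
        return success;
    }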

Important note: When compiling under Native Client, be sure to define the PLATFORM_NACL macro before including the inc/crn_decomp.h header file library.

Known Issues/Bugs

  • crnlib currently assumes you'll be further losslessly compressing its output .DDS files using LZMA. However, some engines use weaker codecs such as LZO, zlib, or custom codecs, so crnlib's bitrate measurements will be inaccurate. It should be easy to allow the caller to plug in custom lossless compressors for bitrate measurement.

  • Compressing to a desired bitrate can be time consuming, especially when processing large (2k or 4k) images to the .CRN format. There are several high-level optimizations employed when compressing to clustered DXTn .DDS files using multiple trials, but not so for .CRN.

  • The .CRN compressor does not currently use 3 color (transparent) DXT1 blocks at all, only 4 color blocks. So it doesn't support DXT1A transparency, and its output quality suffers a little due to this limitation. (Note that the clustered DXTn compressor used when writing clustered .DDS files does not have this limitation.)

  • The clustered DXT5/DXT5A compressor is only able to group DXT5A blocks into clusters if they use absolute (black/white) selector indices. This hurts performance at very low bitrates because too many bits are effectively given to alpha.

  • DXT3 is not supported when writing .CRN or clustered DXTn .DDS files; you'll get DXT5 files if you request DXT3. However, DXT3 is supported by the regular DXTn block compressor. (DXT3's 4bpp fixed alpha is poor versus DXT5 alpha blocks, so I don't see this as a big deal.)

  • The DXT5_CCxY format uses a simple YCoCg encoding that is workable but hasn't been tuned for max. quality yet.

  • Clustered (or rate distortion optimized) DXTc compression is only supported when writing to .DDS, not .KTX. Also, only plain block by block compression is supported when writing to ETC1, and .CRN does not support ETC1.

Compile to Javascript with Emscripten

Download and install Emscripten: http://kripken.github.io/emscripten-site/docs/getting_started/downloads.html

From the root directory, run:

    emcc -O3 emscripten/crn.cpp -I./inc -s EXPORTED_FUNCTIONS="['_malloc', '_free', '_crn_get_width', '_crn_get_height', '_crn_get_levels', '_crn_get_dxt_format', '_crn_get_bytes_per_block', '_crn_get_uncompressed_size', '_crn_decompress']" -s NO_EXIT_RUNTIME=1 -s NO_FILESYSTEM=1 -s ELIMINATE_DUPLICATE_FUNCTIONS=1 -s ALLOW_MEMORY_GROWTH=1 --memory-init-file 0 -o crunch.js
Comments
  • Improve default documentation

    Change the README to be Markdown so it is much more friendly looking on Github versus plain text.

    Adds a "canonical" CHANGELOG.md, again for a more friendly project overview in Github.

    opened by Jake-Shadle 2
  • Please consider CC0 license instead

    Hi! Some colleagues of mine work on VR gaming, and they were excited that you chose to dedicate this work to the public domain. Thank you!

    I'm a license nerd, since I worked for years on getting Wikipedia's image database properly licensed. Unfortunately, the status of "public domain dedications" is a bit fuzzy. It might hold in the USA and EU, but worldwide, there's no such standard. Even at small companies and non-profits, there are legal departments that have to be careful about these matters.

    It's true, Public Domain declarations are relatively low risk. But if you want to give the worldwide community maximum rights in a legally tested way, the current best option is to use the Creative Commons Zero license.

    opened by neilk 1
  • Missing break;

    https://github.com/BinomialLLC/crunch/blob/80c087dbc90a12d1e47309679cc89c1fc2cf8650/crnlib/crnlib.cpp#L408

    There is a missing break; in the code. It will fall through to the XY path and break the decompressed pixels.

    opened by sagaceilo 1
  • Texture distribution

    Hi. I would like to show you my tool www.Photopea.com . You can use it as a viewer of .DDS files (works even on your phone). It supports BC1, BC2, BC3 and BC7 (DX10) compressions.

    I also have a question about the strategy of the texture distribution. I am new to this area.

    First, we want textures to be small "on the wire" (on a DVD / HDD / delivered over the internet). Next, we want them to be small in the GPU memory. I think it is clear that any non-GPU-ish lossy compression (such as JPG or WebP) can achieve a much better quality/size ratio than any DXTx format (even zipped DXTx). So JPG or WebP is more suitable for use "on the wire".

    I often see developers directly distributing textures in DXTx format (DDS files) "on the wire". The usual excuse is that decoding JPG and encoding it into DXTx (at the moment of using the texture) would be too time-consuming (while DXTx can be copied to the GPU without any modifications).

    I implemented a very naive DXT1 compression into Photopea (File - Export - DDS) and it is surprisingly fast (1 MPx texture takes 80 ms to encode). So I feel like compressing textures (to DXTx) right before sending them to the GPU makes sense. So what is the purpose of the DDS format? Why do developers distribute textures in the DDS "on the wire", when there are better compression methods?

    opened by photopea 1
  • Ambiguous calls prevent compilation (gcc, clang)

    crn_vector.cpp:26:53: error: call of overloaded ‘next_pow2(size_t&)’ is ambiguous
              new_capacity = math::next_pow2(new_capacity);
                                                         ^
    In file included from crn_core.h:173:0,
                     from crn_vector.cpp:3:
    crn_math.h:84:21: note: candidate: crnlib::uint32 crnlib::math::next_pow2(crnlib::uint32)
           inline uint32 next_pow2(uint32 val)
                         ^~~~~~~~~
    crn_math.h:95:21: note: candidate: crnlib::uint64 crnlib::math::next_pow2(crnlib::uint64)
           inline uint64 next_pow2(uint64 val)
    
    crn_vector.cpp:25:60: error: call of overloaded ‘is_power_of_2(size_t&)’ is ambiguous
           if ((grow_hint) && (!math::is_power_of_2(new_capacity)))
                                                                ^
    In file included from crn_core.h:173:0,
                     from crn_vector.cpp:3:
    crn_math.h:59:19: note: candidate: bool crnlib::math::is_power_of_2(crnlib::uint32)
           inline bool is_power_of_2(uint32 x) { return x && ((x & (x - 1U)) == 0U); }
                       ^~~~~~~~~~~~~
    crn_math.h:60:19: note: candidate: bool crnlib::math::is_power_of_2(crnlib::uint64)
           inline bool is_power_of_2(uint64 x) { return x && ((x & (x - 1U)) == 0U); }
                       ^~~~~~~~~~~~~
    

    Is there a need to keep both uint32/uint64 versions at the same time?
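
    A possible workaround (a sketch only, not an official patch) is to cast the size_t value to crnlib's 32-bit uint at the call sites so overload resolution is unambiguous:

        // Sketch of a cast-based workaround in crn_vector.cpp; assumes a 32-bit
        // capacity is acceptable here.
        if ((grow_hint) && (!math::is_power_of_2(static_cast<uint32>(new_capacity))))
            new_capacity = math::next_pow2(static_cast<uint32>(new_capacity));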

    opened by illwieckz 1
  • [Suggestion] Drop VS-specific files and use CMake to generate them instead

    Pushing Visual Studio files seems like a bad idea as it enforces a specific IDE and keeps Crunch from being used in a wider project using any other IDE. Also by using CMake to generate your project's files you don't have to worry about maintaining your solution as new versions of VS comes out.

    opened by Gpinchon 0
  • Issue with converting KTX file to DDS

    Hi, I have this KTX file: 84a41266.zip

    It seems to be a valid KTX2 file according to the specification http://wiki.xentax.com/index.php/KTX_Image

    I've also checked it with "ktxinfo.exe" tool from official Khronos Software https://github.com/KhronosGroup/KTX-Software and it seems to print some info:

    identifier: «KTX 20»\r\n\x1A\n
    vkFormat: VK_FORMAT_UNDEFINED
    typeSize: 1
    pixelWidth: 2048
    pixelHeight: 2048
    pixelDepth: 0
    layerCount: 0
    faceCount: 1
    levelCount: 12
    supercompressionScheme: KTX_SS_ZSTD
    dataFormatDescriptor.byteOffset: 0x170
    dataFormatDescriptor.byteLength: 44
    keyValueData.byteOffset: 0x19c
    keyValueData.byteLength: 132
    supercompressionGlobalData.byteOffset: 0
    supercompressionGlobalData.byteLength: 0

    But it can't be converted correctly with crunch.

    I'm getting "Error: Unable to read KTX file" error while trying to parse it.

    Can you add support for this file format to crunch?

    opened by bartlomiejduda 0
  • non-square textures missing 1x1 mip level, which WebGL needs to render

    I was unable to get a non-square texture with mipmaps to render in WebGL unless I squared it myself beforehand. The non-square texture coming out of crunch looks healthy to me.

    The idea came from someone who had this issue with ETC textures, and their solution seems to work (for android, where ETC is most common): https://github.com/google/etc2comp/issues/31

    PVRTC of course strictly requires square textures, so the bug is simply not possible there.

    opened by bunnybones1 1
  • CRNLIB_ASSERT(num_threads <= cMaxThreads) fails on many-proc computers

    I managed to "fix" this locally by bumping the cMaxThreads value in crn_threading_win32.h to 64. Is it safe to bump this? Is it just for a sanity check, or are other things driven by it somehow?

    Note: This could be a bug in an old version of the library. I haven't tried to repro on the latest main branch here, since this is a very legacy project.

    The offending line seems to be: https://github.com/BinomialLLC/crunch/blob/master/crnlib/crn_image_utils.cpp#L605 It's not using the crn_get_max_helper_threads function when calling task_pool tp; tp.init(g_number_of_processors - 1);
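
    One possible local workaround (a sketch only; it assumes task_pool::init() simply takes a helper-thread count and that cMaxThreads is visible at the call site) is to clamp the count rather than raise the limit:

        // Hypothetical workaround sketch: clamp the helper-thread count to cMaxThreads
        // instead of passing the raw processor count.
        uint num_helpers = g_number_of_processors - 1;
        if (num_helpers > cMaxThreads)
            num_helpers = cMaxThreads;

        task_pool tp;
        tp.init(num_helpers);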

    opened by shinymerlyn 0
  • Fuzzing set up and OSS-Fuzz integration

    Hi!

    I have been working on getting fuzzing into crunch and it would be great to get it aligned with OSS-Fuzz. I have set up an initial integration with OSS-Fuzz here: https://github.com/google/oss-fuzz/pull/6056. In order to integrate with OSS-Fuzz, I would just need an email address and to get this PR merged; following that I can clean up the PR on the OSS-Fuzz repo.

    Let me know what you think.

    opened by DavidKorczynski 1
  • Adaptive Size

    Hi, the size of the endpoint and selector codebooks is calculated from the total number of blocks in the image, the quality parameter, and the image format, while the actual complexity of the image isn't evaluated or taken into account. I want to control the codebook sizes by the complexity of the image (the lower the complexity of the image, the smaller the codebooks). Could you give me some suggestions?

    thanks

    opened by chenyangchenyang 0