Overview

Animation Compression Library

Animation compression is a fundamental aspect of modern video game engines. Not only is it important to keep the memory footprint down, but it is also critical to keep animation clip sampling fast.

The more memory an animation clip consumes, the slower it will be to sample it and extract a character pose at runtime. For these reasons, any game that attempts to push the boundaries of what the hardware can achieve will at some point need to implement some form of animation compression.

While some degree of compression can easily be achieved with simple tricks, achieving high compression ratios and fast decompression without compromising the accuracy of the resulting compressed animation requires a great deal of care.

Goals

This library has four primary goals:

  • Implement state-of-the-art, production-ready animation compression algorithms
  • Be easy to integrate into modern video game engines
  • Serve as a benchmark to compare various techniques against one another
  • Document what works and what doesn't

Algorithms are optimized with a focus on (in this particular order):

  • Minimizing the compression artifacts in order to reach high cinematographic quality
  • Fast decompression on all our supported hardware
  • A small memory footprint to lower memory pressure at runtime as well as reducing disk and network usage

Decompression speed will not be sacrificed for a smaller memory footprint nor will accuracy be compromised under any circumstances.

Philosophy

Much thought was put into designing the library so that it is as flexible and powerful as possible.

Supported platforms

  • Windows (VS2015) x86 and x64
  • Windows (VS2017, VS2019) x86, x64, and ARM64
  • Windows (VS2019 with clang 9) x86 and x64
  • Linux (gcc 5 to 10) x86 and x64
  • Linux (clang 4 to 11) x86 and x64
  • OS X (Xcode 10.3) x86 and x64
  • OS X (Xcode 11.2) x64
  • Android (NDK 21) ARMv7-A and ARM64
  • iOS (Xcode 10.3, 11.2) ARM64
  • Emscripten (1.39.11) WASM

The supported platform list above is only what is tested with every release; if the library compiles on your platform, it should run just fine.

Note: VS2017 and VS2019 compile for ARM64 on AppVeyor, but I have no device to test the binaries with.

The Unreal Engine is supported through a plugin found here.

Getting started

This library is 100% header files; as such, you just need to include them in your own project to start using it. However, if you wish to run the unit tests or regression tests, to contribute to ACL, or to use it for research, head on over to the getting started section to set up your environment, and make sure to check out the contributing guidelines.

If you would like to integrate ACL into your own game engine, follow the integration instructions here.
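As a taste of what an integration looks like, here is a rough compression sketch modeled on the ACL 2.x documentation. The names used (compress_track_list, qvvf_transform_error_metric, error_result, and friends) follow the public docs, but treat this as an illustrative sketch and verify the exact signatures against the headers of the version you use:

    #include <acl/compression/compress.h>
    #include <acl/compression/transform_error_metrics.h>

    // Sketch: compress an already populated raw transform track list.
    // Error handling is trimmed for brevity.
    acl::compressed_tracks* compress_example(acl::iallocator& allocator,
                                             const acl::track_array_qvvf& raw_tracks)
    {
        acl::compression_settings settings = acl::get_default_compression_settings();

        acl::qvvf_transform_error_metric error_metric;
        settings.error_metric = &error_metric;

        acl::output_stats stats;
        acl::compressed_tracks* compressed = nullptr;
        acl::error_result result = acl::compress_track_list(allocator, raw_tracks, settings, compressed, stats);
        if (result.any())
            return nullptr; // compression failed, inspect result.c_str()

        return compressed;
    }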

You can install nfrechette-acl with Conan.

Performance metrics

External dependencies

You don't need anything else to get started: everything is self-contained. See here for details.

License, copyright, and code of conduct

This project uses the MIT license.

Copyright (c) 2017 Nicholas Frechette & Animation Compression Library contributors

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.

Contributors

Thanks goes to these wonderful people (emoji key):


  • CodyDWJones: 💻 🔣 🚧 🔧 🚇 🤔
  • Meradrin: 💻
  • Martin Turcotte: 💻 🔧 🤔
  • vjeffh: 💻
  • Romain-Piquois: 🐛
  • Michał Janiszewski: 💻 🔧 🚧 🚇
  • Raymond Barbiero: 🤔
  • ddeadguyy: 💻 🤔
  • Yoann Potinet: 🚇

This project follows the all-contributors specification. Contributions of any kind welcome!

Comments
  • Release/1.3

    I'm obviously a GitHub noob. Perhaps this pull request will be smoother sailing. I'll respond to the points you made in the old pull request here once I find the time. For now, this version should allow me to sign the CLA.

    We didn't expect to push this into a new version of ACL immediately, especially not if 2.0 is almost done. This is more for preliminary info sharing. We're happy to reintegrate our edits into the next release, and consider full integration from there.

    I made the regression test compatible with ACL_BIND_POSE this time. I also filled in the code paths we didn't use for track animation, so I can run the regression test. Finally, note how I've disabled ACL_BIT_RATE until I figure out why it fails regression :|

    opened by ddeadguyy 9
  • Problem with default build on Japanese Windows 10. VC2019, latest CMake : code page 932 issues.

    Hello. I'm having a problem building ACL for Windows. I'll probably fix it by disabling warnings-as-errors, but I made an issue FYI:

    Platform: VC 2019. CMake: updated to the latest version before building. Properly did git submodule update --init first, as documented.

    My machine runs Japanese Windows 10. (Note: my OS language is set to English, but I bought and set up the machine while in Japan.)

    I'm getting a lot of these: W:\acl\external\catch2\single_include\catch2\catch.hpp(3243): warning C4819: The file contains a character that cannot be represented in the current code page (932). Save the file in Unicode format to prevent data loss (compiling source file W:\acl\tests\sources\core\test_bit_manip_utils.cpp) [W:\acl\build\tests\main_generic\acl_unit_tests.vcxproj]

    (Warnings treated as errors obviously stop the build.)

    I temporarily removed warnings-as-errors and now end up with just the following warnings: W:\acl\tools\acl_compressor\sources\acl_compressor.cpp(631): warning C4996: 'rtm::scalar_near_equal': was declared deprecated W:\acl\external\rtm\includes\rtm/scalarf.h(368): note: see declaration of 'rtm::scalar_near_equal' W:\acl\tools\acl_compressor\sources\acl_compressor.cpp(632): warning C4996: 'rtm::scalar_near_equal': was declared deprecated

    Warmest regards, Romain

    bug 
    opened by Romain-Piquois 9
  • feat: optimized compression and expanded bit rates

    https://github.com/nfrechette/acl/issues/353 https://github.com/nfrechette/acl/issues/373

    ACL_SJSON_FIX

    Fix that supports testing of sjson_file_type::raw_track_list, instead of only sjson_file_type::raw_clip.

    I needed this to test my edge case animation within ACL.

    ACL_COMPRESSION_OPTIMIZED

    Prevent low-magnitude channels from becoming constant when worst-case shell distance and object space distance are exceeded. Apply error correction after constant and default tracks are processed. Propagate shell distance and object space distance through ancestors before compression.

    Massive memory and quality upgrade. In our use case, there's a 28% reduction in memory overall (33% in humans, 22% in creatures). Our worst edge case used to compress in 30 seconds and failed regression with an error of nearly 2 meters; now regression passes well within precision settings, and it compresses 10x faster. Compression is generally faster, as seen in py make.py -regression_test:

    uniformly_sampled_database.config.sjson: 04.04s -> 03.38s
    uniformly_sampled_database_4kb.config.sjson: 04.03s -> 03.33s
    uniformly_sampled_database_4kb_mixed.config.sjson: 04.73s  -> 04.04s
    uniformly_sampled_database_mixed.config.sjson: 04.64s -> 03.99s
    uniformly_sampled_mixed_var_0.config.sjson: 01.14s -> 01.26s
    uniformly_sampled_mixed_var_1.config.sjson: 01.61s -> 01.35s
    uniformly_sampled_quant_bind_relative.config.sjson: 01.87s -> 01.29s
    uniformly_sampled_quant_high.config.sjson: 02.27s -> 01.33s
    uniformly_sampled_quant_highest.config.sjson: 03.82s -> 01.27s
    uniformly_sampled_quant_medium.config.sjson: 01.68s -> 01.27s
    uniformly_sampled_quant_mtx_error.config.sjson: 01.68s -> 01.27s
    uniformly_sampled_raw.config.sjson: 01.19s -> 01.20s
    

    I also tried a more accurate version of propagation which compared every bone with every ancestor. It resulted in smaller distances, but it was more complex (O(N^2)), compression took longer, and the memory savings shrank.

    I'm confident that introducing error correction within segment compression would improve memory and compression time further. It seems unlikely that most of the logic in find_optimal_bit_rates would be required anymore. Perhaps I'll experiment with this later, but the current state of ACL_COMPRESSION_OPTIMIZED is fast enough, and high-quality enough, for our immediate needs.

    ACL_BIT_RATE_EXPANSION

    Expand the bit rate options from [3..19] to [1..23].

    Note that these results assume ACL_COMPRESSION_OPTIMIZED is enabled; I don't recommend trying ACL_BIT_RATE_EXPANSION without it. It yields an additional 2% reduction in memory overall (3% in humans, 2% in creatures). There are no changes to py make.py -regression_test timings worth reporting, but some of our creature compression does get slower. Edge cases are rare, and they max out at 7 seconds this time.

    opened by ddeadguyy 7
  • Allow game engines to define what a default sub-track should return/represent

    See this PR for original inspiration: https://github.com/nfrechette/acl/pull/348 by @ddeadguyy.

    Animations are typically stored in one of two formats: relative to the identity transform or relative to some base pose. I've already explored how to store relative to the bind pose here.

    As illustrated by that blog post, many joints end up being equal or close to the bind pose (the default character pose) which makes sense: not all joints are animated (and not all sub-tracks for joints that are animated).

    ACL currently considers sub-tracks to be in one of three possible states: animated (with N samples), constant (with 1 sample), or default (equal to the identity, 0 samples).

    In practice, animations that aren't relative to some base pose rarely have sub-tracks equal to the identity. Instead, it would be best if the default sub-track value could be something provided by the game engine: the bind pose. This would allow ACL to store sub-tracks that are equal to the bind pose with just two bits, since we would avoid storing the bind pose per clip (it's often used for many things at runtime and could easily be provided by the game engine).

    However, this is a lossy process and special care must be taken. If a clip is compressed with that assumption and the bind pose isn't provided during decompression, then the default sub-tracks ACL returns will not hold the right value. Either those sub-tracks must be skipped during decompression or the bind pose must be provided and looked up.

    ACL 2.0 reworked the compression API and removed the RigidSkeleton, which previously contained the bind_transform for debugging purposes. We would have to re-introduce the concept. Since this is optional, a separate track_qvvf should be provided to the compression API. Compression can continue normally, except that when we detect whether a track is default, we check against the bind pose (if provided) instead of the identity. The compressed clip will contain a new metadata flag to mark it as 'bind pose needed/aware'. During decompression, new optional functions on the pose writer will be added to handle this. Together with the metadata flag, we'll be able to assert at runtime that proper support is in place.

    Decompression can be handled in one of two ways by the engine:

    • Skip default tracks and output nothing
    • Read the bind pose value and output it

    The first case would be used when the engine pre-fills the output buffer with the bind pose. We would skip default sub-tracks to avoid overwriting the value.

    The second case is more efficient.

    To this end, a new function on the pose writer will be added: handle_default_rotation(uint32_t track_index, rtm::quatf_arg0) (along with translation and scale variants). ACL will continue to provide the identity transform as an argument, and it will be up to the game engine to handle default sub-tracks through its pose writer implementation.
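    As a rough illustration of the proposal, an engine-side pose writer might look like the sketch below. The handle_default_rotation hook is the proposed API described above, not something that exists in ACL today, and m_bind_pose is a hypothetical engine-owned array:

        #include <acl/decompression/decompress.h>
        #include <cstdint>

        // Hypothetical pose writer: default sub-tracks are redirected to the
        // engine's bind pose instead of the identity. The handle_default_*
        // hook is the proposed extension, not current ACL API.
        struct bind_pose_writer final : public acl::track_writer
        {
            const rtm::qvvf* m_bind_pose = nullptr; // engine-owned, one transform per joint
            rtm::qvvf* m_output = nullptr;          // pose buffer being written

            // Animated and constant sub-tracks are written as usual.
            void RTM_SIMD_CALL write_rotation(uint32_t track_index, rtm::quatf_arg0 rotation)
            {
                m_output[track_index].rotation = rotation;
            }

            // Proposed hook: ACL passes the identity, the engine substitutes the bind pose.
            void RTM_SIMD_CALL handle_default_rotation(uint32_t track_index, rtm::quatf_arg0 /*identity*/)
            {
                m_output[track_index].rotation = m_bind_pose[track_index].rotation;
            }
        };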

    This branch will be used for the development work: feat/improve-bind-pose-handling

    enhancement 
    opened by nfrechette 6
  • Add benchmark

    Please consider using a tool like Google benchmark for providing measurements.

    See also

    • https://www.bfilipek.com/2016/05/google-benchmark-library.html
    • https://www.bfilipek.com/2016/01/micro-benchmarking-libraries-for-c.html
    enhancement help wanted 
    opened by janisozaur 6
  • Support of UWP ARM64 platform

    Trying to generate a Visual Studio 2017 project with CMake for the ARM64 platform, but it fails to compile with: "warning C4324: 'acl::RigidBone': structure was padded due to alignment specifier".

    Any way to get this to work?

    /Johan

    bug enhancement 
    opened by johanlindfors 5
  • Investigate fixed point arithmetic

    Range reduction sometimes causes accuracy loss. Investigate fixed point arithmetic to see if it can improve accuracy.

    Perhaps a mix of fixed point/float32 arithmetic should be used for optimal results? Also keep in mind performance implications for the decompression.

    http://x86asm.net/articles/fixed-point-arithmetic-and-tricks/

    https://en.wikipedia.org/wiki/Fixed-point_arithmetic

    http://www-inst.eecs.berkeley.edu/~cs61c/sp06/handout/fixedpt.html

    Use a rotation track from CMU for a segment, 16 rotations. Compare with current float32 code path. Compare with float64 code path. Compare with fixed point code path (possibly various precision settings).

    Exhaustive comparison for every possible bit rate?

    https://software.intel.com/en-us/forums/intel-isa-extensions/topic/301988

    http://codesuppository.blogspot.ca/2015/02/sse2neonh-porting-guide-and-header-file.html

    bug research 
    opened by nfrechette 5
  • Port to GCC on Linux

    ===============================================================================
    All tests passed (170 assertions in 6 test cases)
    

    Consider this a proof of concept, you shouldn't really be using conio.h in 2017…

    #include <windows.h> is also a bit iffy in this case, but I'll let you solve that yourself.

    opened by janisozaur 5
  • Add appveyor.yml

    Please enable AppVeyor, a free-for-open-source CI service that hosts (among others) MSVC toolchains. Once done, you can merge this to automatically build each commit and verify it against MSVC2015 and MSVC2017.

    opened by janisozaur 5
  • Optimize decompression seek

    When decompressing, we first seek with the decompression context. This performs a linear search to find the segments needed to sample at time T. This can be slow for longer clips and isn't terribly cache friendly with the current memory layout (pointer offsets add noise that we have to skip over).

    A better idea would be to split the segment header into two parts. The first part stores each segment's first sample index contiguously, with 16 or 32 bits per index (most clips are small and fit in 16 bits; should we do 8 bits?). This allows for cache-efficient searching: a binary search can be implemented for long clips and a SIMD search for short clips. The second part contains the necessary offsets and can be trivially indexed by the segment index.
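    A sketch of that layout with hypothetical structures (names are illustrative only, not ACL internals):

        #include <algorithm>
        #include <cstdint>

        // Hypothetical split header layout: the first sample index of every
        // segment is stored contiguously so the search touches dense data,
        // while per-segment offsets live in a second array indexed afterwards.
        struct segment_headers
        {
            uint32_t num_segments = 0;
            const uint16_t* first_sample_indices = nullptr; // sorted, 16 bits per segment
            const uint32_t* segment_offsets = nullptr;      // second part, indexed by segment
        };

        // Binary search for the segment that contains sample_index.
        // Assumes the first segment starts at sample 0.
        inline uint32_t find_segment(const segment_headers& headers, uint16_t sample_index)
        {
            const uint16_t* begin = headers.first_sample_indices;
            const uint16_t* end = begin + headers.num_segments;

            // First segment that starts after sample_index, minus one.
            const uint16_t* it = std::upper_bound(begin, end, sample_index);
            return uint32_t(it - begin) - 1;
        }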

    enhancement 
    opened by nfrechette 4
  • GCC8 for travis

    It seems some tests fail:

    $ python3 make.py -clean -build -unit_test -${COMPILER} -Debug -x86
    <…snip…>
    /home/travis/build/janisozaur/acl/tests/sources/core/test_memory_utils.cpp:108: FAILED:
      REQUIRE( padding0 == 4 )
    with expansion:
      0 == 4
    
    opened by janisozaur 4
  • Implement a velocity/acceleration based error metric

    We currently use a position based error metric. This simulates the skinning process as described here.

    However, this does not really account for the error perceived by the end user on screen. For example, under high velocity a larger error can be tolerated because it isn't as visible on account of the fast movement. An error of 5cm might be very visible when the velocity is low, but entirely invisible when it is sufficiently high. In the same vein, a large positional error could be tolerated on limbs not in contact with the environment or other objects, as there is no frame of reference for the user to evaluate accuracy against. An error of 5cm on a floating hand is hard to see, but easy to spot if the hand is reaching for a door knob.

    The general idea is that the first derivative (velocity) of our data might provide more relevant information when it comes to measuring error. Similarly, perhaps the second derivative (acceleration) can be of use as well.
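    To make the idea concrete, a velocity-based metric could compare finite-difference derivatives instead of raw values. A minimal sketch over a single positional channel (hypothetical helper, not the metric ACL uses):

        #include <algorithm>
        #include <cmath>

        // Hypothetical sketch: measure error on the first derivative
        // (velocity) of a positional channel via forward differences,
        // instead of on the positions themselves.
        float max_velocity_error(const float* raw, const float* lossy,
                                 int num_samples, float sample_rate)
        {
            float max_error = 0.0f;
            for (int i = 0; i + 1 < num_samples; ++i)
            {
                // dt = 1 / sample_rate, so velocity = delta * sample_rate
                const float raw_velocity = (raw[i + 1] - raw[i]) * sample_rate;
                const float lossy_velocity = (lossy[i + 1] - lossy[i]) * sample_rate;
                max_error = std::max(max_error, std::fabs(raw_velocity - lossy_velocity));
            }
            return max_error;
        }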

    David Goodhue has published a paper where he uses velocity to compress animation data; however, I believe part of that work has been patented. See here: https://dl.acm.org/doi/10.1145/3102163.3102236

    enhancement research 
    opened by nfrechette 0
  • Remap pointers in `decompression_context` and `database_context`?

    Is it possible to remap the pointers in decompression_context and database_context after the associated compressed_tracks or compressed_database get relocated?

    Given that both compressed_tracks and compressed_database are just plain bytes that can be memcpy'd around, a natural way to reduce runtime memory footprint is to compress them (losslessly) when the clip/database is not actively used and decompress when they get hot again.

    The problem is that we cannot reuse the same decompression/database context after the data gets decompressed again, because there is no way to update the compressed_tracks/database pointers inside the contexts. The only option I found is to destroy and re-initialize the contexts (and re-stream-in the database bulk data), which is much more expensive than it needs to be. Note: the content of the new tracks/database is exactly the same as before, i.e. they are merely relocated as far as ACL is concerned.

    Conceptually, we need a cheap way to remap the pointers in decompression/database_context. Is it possible to add such functionality? Another option is to replace every pointer in the context with a ptr_offset, but I am not sure how this would impact performance.
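    For illustration, the ptr_offset idea could look something like this (hypothetical types, not part of ACL):

        #include <cstdint>

        // Hypothetical offset-based reference: the context stores byte offsets
        // relative to the compressed buffer instead of raw pointers, so a
        // relocated buffer only requires updating the base pointer.
        template<typename T>
        struct ptr_offset
        {
            uint32_t offset = 0; // byte offset from the buffer base

            T* resolve(uint8_t* base) const { return reinterpret_cast<T*>(base + offset); }
        };

        struct relocatable_context
        {
            uint8_t* base = nullptr;            // start of the compressed_tracks buffer
            ptr_offset<uint32_t> sample_table;  // example interior reference

            // Remapping after relocation becomes a single assignment.
            void relocate(uint8_t* new_base) { base = new_base; }
        };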

    enhancement 
    opened by EAirPeter 3
  • Treat values as fixed point when converting to bit rates during quantization

    When we try bit rates, we decay our values by simulating quantization to the desired number of bits. Perhaps instead we could quantize once to the highest bit rate, and shift the least significant bits off.

    Can we treat the values as fixed point since the range is fixed?

    If we can, can we leverage this somehow during decompression or by streaming those truncated bits?
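    A minimal sketch of the truncation idea, assuming the current highest bit rate of 19 bits (hypothetical helpers):

        #include <cstdint>

        // Hypothetical sketch: quantize a normalized value once at the highest
        // bit rate, then derive any lower bit rate by shifting the least
        // significant bits off, treating the value as fixed point.
        constexpr uint32_t k_max_bits = 19;

        inline uint32_t quantize_at_max_rate(float normalized)
        {
            const float scale = float((1u << k_max_bits) - 1);
            return uint32_t(normalized * scale + 0.5f);
        }

        inline uint32_t decay_to_bit_rate(uint32_t max_rate_value, uint32_t num_bits)
        {
            // Approximates re-quantizing at num_bits by simple truncation.
            return max_rate_value >> (k_max_bits - num_bits);
        }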

    enhancement research 
    opened by nfrechette 0
  • Implement individual clip streaming support

    The current database streaming works in bulk: all clips within a database stream together at the desired quality level. This makes it quite hard to have fine-grained control over quality. It forces runtimes to group only a few clips together and to use many databases. In turn, this harms the ability of the algorithm to globally optimize for quality when targeting a specific memory footprint.

    Streaming has been implemented in bulk to optimize for older disk based hardware where seeks and reads are expensive and must thus be minimized. However, with flash memory on mobile and SSD/NVMe on PC and consoles, seeks and reads are dramatically cheaper. As such, it makes sense to extend the ability to stream individual clips within a database for those platforms.

    Bulk streaming is nice because it requires little metadata: all chunks are read from disk linearly and they contain the necessary metadata. Streaming individual clips will require extra metadata; we need to know where in the bulk buffer each clip lives for each quality level. This metadata will need to be present in memory with the rest of the database runtime metadata.

    Streaming a clip would then first look up which offset to read from disk for the desired quality level. A read from disk would be performed, and we'd update the runtime metadata.

    Each quality level would perform an individual seek and read. We could later optimize this by swizzling the data such that the data for all quality levels of a clip lives contiguously on disk. This would allow a single read for all quality levels. Optimizing the data this way would, however, make it much slower to stream/unstream the whole database at a specified quality level.

    Streaming out no longer makes sense if we manipulate individual clips. Streaming out a single clip would not allow us to reduce the memory footprint as we can't allocate memory for individual clips.

    A different paradigm is necessary when managing memory at the clip level. We will need to allocate memory in chunks of a fixed size, and clips will populate them in the order we stream them in. Because our runtime metadata stores 32-bit offsets, we need to be careful where things live and how we manage them.

    We could reserve virtual memory up front in a single buffer large enough to accommodate multiple times the size of the database, to account for fragmentation. When we need to stream out, we'd mark the allocation as free in its chunk. Later, a garbage compaction phase would kick off and reclaim unused space by compacting things into existing or new chunks. This can be done entirely asynchronously and in a thread-safe way since we atomically update our offsets.

    Reserving virtual memory could also be avoided if we split our 32-bit offset into two parts: a chunk index (8 bits) and an offset into the chunk (24 bits). This would allow us to have non-contiguous chunks. If we do this, we need a heuristic to figure out how many chunk indices to reserve. We could reserve enough indices to store the whole database plus some slack for fragmentation. If fragmentation exceeds our expectations because compaction hasn't run in a while, we could force a compaction when we run out of indices. Something along those lines.
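    The 8/24 split could be encoded along these lines (hypothetical handle type, purely illustrative):

        #include <cstdint>

        // Hypothetical 32-bit handle: 8 bits of chunk index and 24 bits of
        // byte offset into the chunk, allowing up to 256 non-contiguous
        // chunks of at most 16 MB each (2^24 bytes).
        struct clip_handle
        {
            uint32_t packed = 0;

            uint32_t chunk_index() const { return packed >> 24; }
            uint32_t chunk_offset() const { return packed & 0x00FFFFFF; }

            static clip_handle make(uint32_t chunk_index, uint32_t chunk_offset)
            {
                return clip_handle{ (chunk_index << 24) | (chunk_offset & 0x00FFFFFF) };
            }
        };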

    enhancement 
    opened by nfrechette 0
  • Investigate different segment range packing

    Currently, segment samples are normalized and thus the range values are also normalized. We currently encode the range as [min, extent] where max = min + extent. This allows us to represent the full range of [0.0, 1.0].

    An alternative could be to store it as [centre, half extent]. This has the benefit that the half extent needs one less bit to encode, since it is applied in both the + and - directions and thus spans at most half the range. The downside is that we can only represent [0.0, 1.0). This might be a good fit for bit-shifting float coercion where we just line up the mantissa. This boundary condition can also be handled by the clip range since we already pad it to handle rounding.
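    Side by side, the two encodings look like this (hypothetical illustration of the idea, not ACL's actual layout):

        // Two encodings of a normalized segment range. With [centre, half
        // extent], the half extent covers at most half the range, which is
        // where the one-bit saving comes from.
        struct range_min_extent
        {
            float min;    // in [0.0, 1.0]
            float extent; // max = min + extent
        };

        struct range_centre_half_extent
        {
            float centre;      // in [0.0, 1.0)
            float half_extent; // at most 0.5

            float min() const { return centre - half_extent; }
            float max() const { return centre + half_extent; }
        };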

    If we drop the number of bits for the extent portion, we can save up to 8 bits per component. However, because segments have ~16 samples, if the bit rate increases by 1 bit as a result, we'll end up with a net loss of 8 bits. We'd only win if we can maintain the same bit rate.

    If the range bounds aren't tight, it leaves room for sample values to encode values that are out of the possible range of values. As such, we'd have a portion of our sample encoded range that would be unused. This would decrease the percentage used.

    We could add fixed padding to the extent and use fewer bits.

    We could also leverage the fact that segment ranges are packed in groups of 4 sub-tracks. We currently store 12+12 bytes for [min, extent]. We could do 12+8, or 12+9, etc.

    Can we leverage the group as well? Sort by min/centre and reduce the range of possible values that way? Store the sub-track order on 4-6 bits (2 bits per sub-track, last one can be dropped and reconstructed). Can we do something clever with this? This would still allow for fast single track decompression.

    Can we bucket ranges?

    If we need to, we could add metadata etc but ideally we have to keep the decompression path lean and simple.

    enhancement research 
    opened by nfrechette 0
Releases (latest: v2.0.6)
  • v2.0.6 (Jul 10, 2022)

  • v2.0.5 (Jul 1, 2022)

    • Add support for clang 12, 13, and 14
    • Add support for GCC 11
    • Add support for Xcode 12 and 13
    • Add support for Arm64 development on OS X and Linux
    • Misc CI improvements
    • Update to RTM v2.1.4
    • Update to Catch2 v2.13.7

  • v2.0.4 (May 8, 2022)

  • v2.0.3 (May 7, 2022)

    • Update sjson-cpp to v0.8.2
    • Update rtm to v2.1.3
    • Add versioned namespace to allow multiple versions to coexist within a binary
    • Fix database sampling interpolation when using a rounding mode other than none
    • Other minor fixes

  • v2.0.2 (Feb 15, 2022)

  • v2.0.1 (Sep 6, 2021)

  • v2.0.0 (May 1, 2021)

    • Unified and cleaned up APIs
    • Cleaned up naming convention to match the C++ stdlib and boost
    • Introduced streaming database support
    • Decompression profiling now uses Google Benchmark
    • Decompression has been heavily optimized
    • Compression has been heavily optimized
    • First release to support backwards compatibility going forward
    • Migrated all math to Realtime Math
    • Clips now support 4 billion samples/tracks
    • WebAssembly support added through emscripten
    • Many other improvements

  • v1.3.5 (Sep 25, 2020)

  • v1.3.4 (Aug 21, 2020)

  • v1.3.3 (Aug 3, 2020)

    • Fix single track decompression when scale is present with more than one segment
    • Gracefully fail compression when we have more than 65535 samples

  • v1.3.2 (May 28, 2020)

    • Fix crash when compressing with an empty track array
    • Strip unused code when stat logging isn't required
    • Fix CompressedClip hash to be deterministic

  • v1.3.1 (May 6, 2020)

    • Fix bug with scalar track decompression where garbage could be returned
    • Fix scalar track quantization to properly check the resulting error
    • Fix scalar track creation to properly copy the sample size
    • Other minor fixes and improvements

  • v1.3.0 (Nov 17, 2019)

    • Added support for VS2019, GCC 9, clang 7, and Xcode 11
    • Updated sjson-cpp and added a dependency on Realtime Math (RTM)
    • Optimized compression and decompression significantly
    • Added support for multiple root bones
    • Added support for scalar track compression
    • Many bug fixes and improvements

  • v1.2.1 (Sep 10, 2019)

  • v1.2.0 (Apr 16, 2019)

    • Added support for GCC 8, clang 6, Xcode 10, and Windows ARM64
    • Updated catch2 and sjson-cpp
    • Integrated SonarCloud
    • Added a compression level setting and changed the default to Medium
    • Various bug fixes, minor optimizations, and cleanup

  • v1.1.0 (Sep 8, 2018)

    • Added proper ARM NEON support
    • Properly detect SSE2 with MSVC if AVX isn't used
    • Lots of decompression performance optimizations
    • Minor fixes and cleanup

  • v1.0.0 (Jul 21, 2018)

  • v0.8.0 (May 12, 2018)

    • Improved error handling
    • Added additive clip support
    • Added acl_decompressor tool to profile and test decompression
    • Increased warning levels to highest possible
    • Many more improvements and fixes

  • v0.7.0 (Apr 2, 2018)

    • Added full support for Android and iOS
    • Added support for GCC6 and GCC7 on Linux
    • Downgraded C++ version from 14 to 11
    • Added regression tests
    • Added lots of unit tests for core and math headers
    • Many more improvements and fixes

  • v0.6.0 (Jan 10, 2018)

    • Hooked up continuous build integration
    • Added support for VS2017 on Windows
    • Added support for GCC5, Clang4, and Clang5 on Linux
    • Added support for Xcode 8.3 and Xcode 9.2 on OS X
    • Added support for x86 on Windows, Linux, and OS X
    • Better handling of scale with the built-in error metrics
    • Many more improvements and fixes

  • v0.5.0 (Nov 23, 2017)

    • Added support for 3D scale
    • Added partial support for Android (works in Unreal 4.15)
    • Fixed the variable bit rate optimization algorithm
    • Added a CLA system
    • Refactoring to better support multiple algorithms
    • More changes and additions to stat logging
    • Many more improvements and fixes

  • v0.4.0 (Sep 10, 2017)

    • Lots of math performance, accuracy, and consistency improvements
    • Implemented segmenting support in the uniformly sampled algorithm
    • Added support for range reduction per segment
    • Minor fixes to fbx2acl
    • Optimized variable quantization considerably
    • Major changes to which stats are dumped and how they are processed
    • Many more improvements and fixes

  • v0.3.0 (Jul 28, 2017)

    • Added CMake support
    • Improved error measuring and accuracy
    • Improved variable quantization
    • Converted most math to float32 to improve accuracy and performance
    • Many more improvements and fixes

  • v0.2.0 (Jul 8, 2017)

    • Added clip_writer to create ACL files from a game integration
    • Added some unit tests and moved them into their own project
    • Added basic per-track variable quantization
    • Added CMake support
    • Lots of cleanup, minor changes, and fixes

  • v0.1.0 (Jun 26, 2017)

    • Uniformly sampled algorithm
    • Various rotation and vector formats
    • Clip range reduction
    • ACL SJSON file format
    • Custom allocator interface
    • Assert overrides
    • Custom math types and functions
    • Various tools to test the library
    • Visual Studio 2015 support, x86 and x64
Owner
Nicholas Frechette
I am a freelance video game programmer passionate about animation compression and performance.