AI4Animation: Deep Learning, Character Animation, Control

Overview

This project explores the opportunities of deep learning for character animation and control as part of my Ph.D. research at the University of Edinburgh in the School of Informatics, supervised by Taku Komura. Over the last couple of years, this project has grown into a modular and stable framework for data-driven character animation, covering data processing, network training and runtime control, developed in Unity3D / TensorFlow / PyTorch. This repository enables using neural networks for animating biped locomotion, quadruped locomotion, and character-scene interactions with objects and the environment, as well as movements for sports games. Further advances in this research will continue to be added to this project.


SIGGRAPH 2021
Neural Animation Layering for Synthesizing Martial Arts Movements
Sebastian Starke, Yiwei Zhao, Fabio Zinno, Taku Komura, ACM Trans. Graph. 40, 4, Article 92.

Interactively synthesizing novel combinations and variations of character movements from different motion skills is a key problem in computer animation. In this research, we propose a deep learning framework to produce a large variety of martial arts movements in a controllable manner from raw motion capture data. Our method imitates animation layering using neural networks with the aim to overcome typical challenges when mixing, blending and editing movements from unaligned motion sources. The system can be used for offline and online motion generation alike, provides an intuitive interface to integrate with animator workflows, and is relevant for real-time applications such as computer games.

- Video - Paper -


SIGGRAPH 2020
Local Motion Phases for Learning Multi-Contact Character Movements
Sebastian Starke, Yiwei Zhao, Taku Komura, Kazi Zaman. ACM Trans. Graph. 39, 4, Article 54.

Not sure how to align complex character movements? Tired of phase labeling? Unclear how to squeeze everything into a single phase variable? Don't worry, a solution exists!

Controlling characters to perform a large variety of dynamic, fast-paced and quickly changing movements is a key challenge in character animation. In this research, we present a deep learning framework to interactively synthesize such animations in high quality, both from unstructured motion data and without any manual labeling. We introduce the concept of local motion phases, and show our system being able to produce various motion skills, such as ball dribbling and professional maneuvers in basketball plays, shooting, catching, avoidance, multiple locomotion modes as well as different character and object interactions, all generated under a unified framework.
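As a rough intuition for what a local motion phase is, the following sketch derives a per-bone phase from a binary contact signal by advancing the phase linearly from 0 to 2*pi between successive contact onsets. This is an illustration only, not the paper's actual procedure (which fits sinusoids to the contact signal); the example contact pattern is made up.

```python
# Illustrative sketch: a per-bone "local phase" from a binary contact signal.
# Not the paper's fitting procedure; a simplified stand-in for intuition.
import numpy as np

def local_phase(contacts):
    contacts = np.asarray(contacts, dtype=bool)
    # Frames where a new contact begins (off -> on transition).
    onsets = np.flatnonzero(np.diff(contacts.astype(int), prepend=0) == 1)
    phase = np.zeros(len(contacts))
    for a, b in zip(onsets[:-1], onsets[1:]):
        # Advance the phase linearly over one contact cycle.
        phase[a:b] = np.linspace(0.0, 2 * np.pi, b - a, endpoint=False)
    return phase

# Example: a foot touching down every 4 frames.
p = local_phase([1, 0, 0, 0, 1, 0, 0, 0, 1, 0])
print(np.round(p, 2))
```

Because each bone gets its own phase from its own contacts, unaligned skills (dribbling with one hand while running, for example) no longer need to share a single global phase variable.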

- Video - Paper - Code (finally working on it now) -


SIGGRAPH Asia 2019
Neural State Machine for Character-Scene Interactions
Sebastian Starke+, He Zhang+, Taku Komura, Jun Saito. ACM Trans. Graph. 38, 6, Article 178.
(+Joint First Authors)

Animating characters can be an easy or difficult task - interacting with objects is one of the latter. In this research, we present the Neural State Machine, a data-driven deep learning framework for character-scene interactions. The difficulty in such animations is that they require complex planning of periodic as well as aperiodic movements to complete a given task. Creating them in a production-ready quality is not straightforward and often very time-consuming. Instead, our system can synthesize different movements and scene interactions from motion capture data, and allows the user to seamlessly control the character in real-time from simple control commands. Since our model directly learns from the geometry, the motions can naturally adapt to variations in the scene. We show that our system can generate a large variety of movements, including locomotion, sitting on chairs, carrying boxes, opening doors and avoiding obstacles, all from a single model. The model is responsive, compact and scalable, and is the first of such frameworks to handle scene interaction tasks for data-driven character animation.

- Video - Paper - Code & Demo - Mocap Data -


SIGGRAPH 2018
Mode-Adaptive Neural Networks for Quadruped Motion Control
He Zhang+, Sebastian Starke+, Taku Komura, Jun Saito. ACM Trans. Graph. 37, 4, Article 145.
(+Joint First Authors)

Animating characters can be a pain, especially those four-legged monsters! This year, we will be presenting our recent research on quadruped animation and character control at SIGGRAPH 2018 in Vancouver. The system can produce natural animations from real motion data using a novel neural network architecture, called Mode-Adaptive Neural Networks. Instead of optimising a fixed group of weights, the system learns to dynamically blend a group of weights into a further neural network, based on the current state of the character. As a result, the system does not require labels for the phase or locomotion gaits, but can learn from unstructured motion capture data in an end-to-end fashion.
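The dynamic weight blending described above can be sketched in a few lines of NumPy. This is an illustrative sketch only: the layer sizes, the number of experts (K=4), and the gating input are made-up values for demonstration, not the ones used in the paper.

```python
# Minimal sketch of a Mode-Adaptive Neural Network (MANN) forward pass.
# All sizes are illustrative, not the paper's configuration.
import numpy as np

rng = np.random.default_rng(0)
K, IN, H, OUT = 4, 16, 32, 8            # experts, input, hidden, output sizes

# Expert weights: K candidate parameter sets for the motion network.
W1 = rng.standard_normal((K, H, IN)) * 0.1
W2 = rng.standard_normal((K, OUT, H)) * 0.1

# Gating network (here a single linear layer for brevity): maps the
# character state to blend coefficients over the K experts.
Wg = rng.standard_normal((K, IN)) * 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mann_forward(x):
    alpha = softmax(Wg @ x)                 # blend coefficients, sum to 1
    # Blend the expert weights into one concrete network for this state.
    w1 = np.tensordot(alpha, W1, axes=1)    # (H, IN)
    w2 = np.tensordot(alpha, W2, axes=1)    # (OUT, H)
    h = np.maximum(w1 @ x, 0.0)             # ReLU hidden layer
    return w2 @ h, alpha

y, alpha = mann_forward(rng.standard_normal(IN))
print(y.shape, round(alpha.sum(), 6))
```

The key point is that the blend happens in weight space, per frame: the gating output selects a different effective network depending on the character's current state, which is what removes the need for explicit phase or gait labels.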

- Video - Paper - Code - Mocap Data - Windows Demo - Linux Demo - Mac Demo - ReadMe -


SIGGRAPH 2017
Phase-Functioned Neural Networks for Character Control
Daniel Holden, Taku Komura, Jun Saito. ACM Trans. Graph. 36, 4, Article 42.

This work builds on the original PFNN (Phase-Functioned Neural Networks) for character control. A demo in Unity3D using the original weights for terrain-adaptive locomotion is contained in the Assets/Demo/SIGGRAPH_2017/Original folder. Another demo on flat ground using the Adam character is contained in the Assets/Demo/SIGGRAPH_2017/Adam folder. To run them, download the neural network weights from the link provided in the Link.txt file, extract them into the /NN folder, and store the parameters via the custom inspector button.
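The core PFNN idea is that the network weights are generated as a cyclic function of the motion phase, interpolated by a Catmull-Rom cubic spline over four control weight sets. The sketch below illustrates just that phase function; the weight-matrix sizes are arbitrary placeholders.

```python
# Hedged sketch of the PFNN phase function: a cyclic Catmull-Rom spline
# over four control weight sets, indexed by phase. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
H, IN = 8, 5
control = rng.standard_normal((4, H, IN))   # four control weight sets

def phase_function(phase):
    """Interpolate weights for phase in [0, 2*pi) with a cyclic cubic spline."""
    t = (phase / (2 * np.pi)) * 4.0         # map phase to spline parameter
    k1 = int(t) % 4                         # segment start control point
    w = t - int(t)                          # position within the segment
    k0, k2, k3 = (k1 - 1) % 4, (k1 + 1) % 4, (k1 + 2) % 4
    a0, a1, a2, a3 = control[k0], control[k1], control[k2], control[k3]
    # Catmull-Rom interpolation of the weight matrices.
    return (a1
            + w * (0.5 * a2 - 0.5 * a0)
            + w**2 * (a0 - 2.5 * a1 + 2.0 * a2 - 0.5 * a3)
            + w**3 * (1.5 * a1 - 0.5 * a0 - 1.5 * a2 + 0.5 * a3))

W = phase_function(1.3)
print(W.shape)  # same shape as one control weight set
```

At phase values that land exactly on a control point (e.g. phase 0 or pi), the spline returns that control weight set exactly, and in between it blends smoothly, so the effective network changes continuously over the gait cycle.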

- Video - Paper - Code (Unity) - Windows Demo - Linux Demo - Mac Demo -


Processing Pipeline

In progress. More information will be added soon.

Copyright Information

This project is only for research or education purposes, and not freely available for commercial use or redistribution. The intellectual property for the different scientific contributions belongs to the University of Edinburgh, Adobe Systems and Electronic Arts. Licensing for commercial use is possible upon request. For scientific use, please reference this repository together with the relevant publications above.

The motion capture data is available only under the terms of the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.

Issues
  • How to train a fight animation?

    Hello, I compiled the Android version and found that it cannot be built successfully because of a missing library dependency. Could you provide the source code of the dependency? I found your paper last year and have been waiting for the demo ever since; recently I saw that you posted it to GitHub. I am very interested in applications of AI in games, but I don't know much about neural networks, so I would like to start by learning from your demo.

    opened by hanbim520 14
  • Train failed

    I used "D1_001_KAN01_001.bvh" (only that one file) to train the model. This is my Motion Editor setting (image). I wanted to test whether the Motion Exporter output data can be trained on correctly. However, I failed. Can you help me find the problem? This is my scene (image); it is upside down.

    opened by yh8899 9
  • How do I create my own training data?

    I have three questions about the SIGGRAPH 2017 paper:

    1. How do I create my own training data, given that I have VR devices like leg bands, hand controller, and headset?
    2. Is it possible to include hand and head input from VR devices into the model too (instead of only Up/Down/Left/Right input)?
    3. What is the training set size for the model provided in this repository?

    I will need to read more about the paper. Any advice is appreciated. Thank you!

    opened by off99555 8
  • Giving the AI spline waypoints

    How can I make the AI move the character along spline waypoints like the ones you showed at the end of your video? Do you have a Slack group or Gitter channel, or something like that?

    opened by siamaksalman 7
  • No Motion Editor found in scene in PFNN unity project

    Hi! I am new to Unity and this project, and I have a problem: after importing a BVH file using Data Processing/BVH Importer in demo_adam (and demo_original), I wanted to export it to Input.txt and Output.txt as training data, but the Motion Exporter just shows "No Motion Editor found in scene". I have no clue how to solve this. Is there a scene.unity file missing from this project?

    opened by AndyVerne 6
  • In SIGGRAPH_2017, I tried to import, build and run the Unity project, but the character can't be controlled by pressing the W, A, S, D keys.

    Hello, I even tried to build and run the SIGGRAPH_2017 Unity project directly, without editing anything, but in the built program the character does not respond to my input. One more question: how can I use the weights I trained with PFNN in the Unity project? Thank you so much. By the way, I really appreciate your work on AI4Animation. Hope you have a nice day : )

    opened by AndyVerne 6
  • Wrong Data Processing Results for MANN

    Hi, I tried to convert the raw motion capture data in MANN into the format for neural network input, using the data processing scripts you provided in Unity (BVHImporter, MotionExporter etc.). However, the resulting Input.txt and Output.txt do not align with the ones you provided, in both the trajectory and bone parameter fields. Training MANN with my own txts also produced weird results, suggesting that the generated files are wrong.

    I looked through the code and everything seems to be right. I haven't altered any of the code you released; I only clicked the "Export" button and added a Trajectory Module for each clip in the Motion Editor panel (leaving out the Style Module, because the annotation of styles does not affect other parameters).

    Do you have any idea where things might go wrong?

    Thank you! :)

    opened by Fiona730 5
  • How do I try the project?

    Hello, I saw a video about this project and would like to try it, but found no instruction on how to do it.

    I assume it is played in the Unity game engine, but when I tried "Add"-ing the "Adam" folder on the Projects screen, it said the path was invalid.

    Can you please provide some instructions?

    With regards, John

    opened by addeps3 5
  • Wolf not moving

    Hi! I tried to check your demo project, but after I followed your instructions and stored the network parameters, the wolf character does not move when I press the control buttons. I am new to Unity; maybe it is something trivial. Do you have any suggestions? (Windows 10, Unity 2018.2.0f2)

    opened by zovathio 4
  • Network weights for Adam in PFNN

    Hi, thanks for your awesome work! I noticed that there are neither pretrained weights nor training data for the Adam model in PFNN. I just want to confirm whether these data are unreleased, in case I missed them somewhere.

    opened by Fiona730 3
  • Ballroom dancing dog?

    Not sure what's going on here, but I've managed to... well, mangle the dog. Or make him dance, not entirely sure.

    1. Make the dog fully sit (hold V for a few seconds)
    2. Release V
    3. Wait 2 seconds
    4. Tap V
    5. Repeat steps 3 and 4.

    Is this an issue with the network, or just the Unity/demo code?

    opened by Qix- 3
  • consult opencode for your paper

    Hi wanyue: I am studying the paper "Self-Supervised Global-Local Structure Modeling for Point Cloud Domain Adaptation with Reliable Voted Pseudo Labels" and applaud your good results. I want to follow your experiment but cannot find the code. Have you published the code somewhere else, and could you mail it to me ([email protected])? Thank you for your help.

    opened by mengyays 0
  • Question about AdamW optimizer implementation in NSM

    Hi! Thanks for providing the code.

    For the Neural State Machine, the paper mentions that you use the AdamWR optimizer. However, in the code (https://github.com/sebastianstarke/AI4Animation/blob/master/AI4Animation/SIGGRAPH_Asia_2019/TensorFlow/NSM/Lib_Optimizer/AdamW.py), why is weight decay (wdc) only used in _apply_sparse and not in _apply_dense? Is this optimizer just the vanilla Adam optimizer?

    opened by RosettaWYzhang 0
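For context on the issue above, the defining difference between vanilla Adam and AdamW is where the weight decay term enters the update. The sketch below is a generic NumPy illustration of a single decoupled-decay step (following Loshchilov & Hutter), not the repository's TensorFlow implementation; all hyperparameters are the usual illustrative defaults.

```python
# One AdamW step in NumPy. With wd = 0 this reduces to vanilla Adam;
# the decoupled decay term "wd * w" is what AdamW adds outside the
# adaptive gradient scaling.
import numpy as np

def adamw_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=1e-2):
    m = b1 * m + (1 - b1) * g            # first-moment estimate
    v = b2 * v + (1 - b2) * g * g        # second-moment estimate
    m_hat = m / (1 - b1 ** t)            # bias correction
    v_hat = v / (1 - b2 ** t)
    # Adam update plus *decoupled* weight decay applied directly to w.
    w = w - lr * (m_hat / (np.sqrt(v_hat) + eps) + wd * w)
    return w, m, v

w = np.array([1.0, -2.0]); m = np.zeros(2); v = np.zeros(2)
w, m, v = adamw_step(w, np.array([0.1, -0.1]), m, v, t=1)
print(w)
```

If the decay term is absent from one of the apply paths (as the issue observes for _apply_dense), dense variables would indeed be updated with plain Adam.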
  • How do I add the status of a quadruped and use it in unity?

    In the 2020 BasketballDemo scene, I am trying to export new styles for the quadruped, such as sitting. I trained on the exported data, but the results cannot be applied in the QuadrupedDemo scene: the quadruped's sitting state is wrong. What could be the reason? Is there a problem somewhere?

    opened by YLe1201 0
  • how to process different sequences in export period

        // foreach(Sequence seq in Editor.GetData().Sequences) {
        Sequence seq = Editor.GetData().GetUnrolledSequence(); {

    Hey, in the MotionExport.cs file you commented out the top line and used the bottom one instead. I noticed that you split a whole motion sequence into several sequences; the top line would process them separately, while the bottom one ignores the split and processes the whole sequence. Why did you choose to do that? Is there any problem if I use the top line?

    opened by xjturobocon 0
  • Mocap data for 2019 NSM or 2017 PFNN

    opened by j-void 0
Owner
Sebastian Starke
Ph.D. Student in Character Animation @ The University of Edinburgh, AI Scientist @ Electronic Arts, Formerly @ Adobe Research