AI4Animation: Deep Learning, Character Animation, Control

Overview

This project explores the opportunities of deep learning for character animation and control as part of my Ph.D. research at the University of Edinburgh in the School of Informatics, supervised by Taku Komura. Over the last couple of years, this project has grown into a modular and stable framework for data-driven character animation, covering data processing, network training and runtime control, developed in Unity3D / TensorFlow / PyTorch. This repository enables using neural networks for animating biped and quadruped locomotion, as well as character-scene interactions with objects and the environment, and movements for sports games. Further advances in this research will continue to be added to this project.


SIGGRAPH 2021
Neural Animation Layering for Synthesizing Martial Arts Movements
Sebastian Starke, Yiwei Zhao, Fabio Zinno, Taku Komura, ACM Trans. Graph. 40, 4, Article 92.

Interactively synthesizing novel combinations and variations of character movements from different motion skills is a key problem in computer animation. In this research, we propose a deep learning framework to produce a large variety of martial arts movements in a controllable manner from raw motion capture data. Our method imitates animation layering using neural networks with the aim to overcome typical challenges when mixing, blending and editing movements from unaligned motion sources. The system can be used for offline and online motion generation alike, provides an intuitive interface to integrate with animator workflows, and is relevant for real-time applications such as computer games.
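
For readers unfamiliar with the term, classical animation layering blends an overlay motion onto a base motion joint by joint using a blend mask. The sketch below (NumPy, with illustrative array shapes and a hypothetical per-joint weight mask) shows that classical operation; the paper's contribution is to learn such mixing and editing with neural networks instead of hand-tuned masks.

    import numpy as np

    def nlerp(q0, q1, w):
        # Normalized linear interpolation between unit quaternions.
        # Flip hemisphere first so the blend takes the shorter arc.
        q1 = np.where(np.sum(q0 * q1, axis=-1, keepdims=True) < 0.0, -q1, q1)
        q = (1.0 - w) * q0 + w * q1
        return q / np.linalg.norm(q, axis=-1, keepdims=True)

    def layer_poses(base_pose, overlay_pose, joint_weights):
        # base_pose, overlay_pose: (num_joints, 4) unit quaternions (local joint rotations)
        # joint_weights: (num_joints,) blend mask, e.g. 1.0 for upper-body joints only
        return nlerp(base_pose, overlay_pose, joint_weights[:, None])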

- Video - Paper -


SIGGRAPH 2020
Local Motion Phases for Learning Multi-Contact Character Movements
Sebastian Starke, Yiwei Zhao, Taku Komura, Kazi Zaman. ACM Trans. Graph. 39, 4, Article 54.

Not sure how to align complex character movements? Tired of phase labeling? Unclear how to squeeze everything into a single phase variable? Don't worry, a solution exists!

Controlling characters to perform a large variety of dynamic, fast-paced and quickly changing movements is a key challenge in character animation. In this research, we present a deep learning framework to interactively synthesize such animations in high quality, both from unstructured motion data and without any manual labeling. We introduce the concept of local motion phases, and show our system being able to produce various motion skills, such as ball dribbling and professional maneuvers in basketball plays, shooting, catching, avoidance, multiple locomotion modes as well as different character and object interactions, all generated under a unified framework.
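
As a rough intuition for what a local motion phase is: instead of one global phase for the whole body, each bone carries its own phase derived from its contact pattern. The snippet below is a simplified stand-in (not the paper's curve-fitting procedure) that extracts such a per-bone phase from a binary contact signal via a smoothed Hilbert transform; the function name and the sigma value are made up for illustration.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from scipy.signal import hilbert

    def local_phase_from_contacts(contacts, sigma=5.0):
        # contacts: (num_frames,) array of 0/1 contact labels for a single bone
        # returns:  (num_frames,) phase values in [0, 2*pi)
        smoothed = gaussian_filter1d(contacts.astype(float), sigma=sigma)
        centered = smoothed - smoothed.mean()
        analytic = hilbert(centered)  # analytic signal; its angle gives an instantaneous phase
        return np.mod(np.angle(analytic), 2.0 * np.pi)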

- Video - Paper - Code (finally working on it now) -


SIGGRAPH Asia 2019
Neural State Machine for Character-Scene Interactions
Sebastian Starke+, He Zhang+, Taku Komura, Jun Saito. ACM Trans. Graph. 38, 6, Article 178.
(+Joint First Authors)

Animating characters can be an easy or difficult task - interacting with objects is one of the more difficult ones. In this research, we present the Neural State Machine, a data-driven deep learning framework for character-scene interactions. The difficulty in such animations is that they require complex planning of periodic as well as aperiodic movements to complete a given task. Creating them in production-ready quality is not straightforward and often very time-consuming. Instead, our system can synthesize different movements and scene interactions from motion capture data, and allows the user to seamlessly control the character in real-time from simple control commands. Since our model directly learns from the geometry, the motions can naturally adapt to variations in the scene. We show that our system can generate a large variety of movements, including locomotion, sitting on chairs, carrying boxes, opening doors and avoiding obstacles, all from a single model. The model is responsive, compact and scalable, and is the first of such frameworks to handle scene interaction tasks for data-driven character animation.
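
To give a concrete picture of "learning directly from the geometry", the sketch below rasterizes scene points around an interaction object into a small occupancy grid that a network could consume alongside the character state. The resolution, extent and function name are illustrative assumptions, not the paper's exact environment or interaction sensors.

    import numpy as np

    def voxel_occupancy(points, center, extent, resolution=8):
        # points: (N, 3) sample positions on the surrounding scene geometry
        # center: (3,) center of the sensing volume, e.g. the object pivot
        # extent: (3,) half-size of the sensing volume along each axis
        center, extent = np.asarray(center), np.asarray(extent)
        grid = np.zeros((resolution, resolution, resolution), dtype=np.float32)
        local = (points - center + extent) / (2.0 * extent)  # map into [0, 1)
        inside = np.all((local >= 0.0) & (local < 1.0), axis=-1)
        idx = np.floor(local[inside] * resolution).astype(int)
        grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
        return grid.ravel()  # flattened occupancy vector as additional network input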

- Video - Paper - Code & Demo - Mocap Data -


SIGGRAPH 2018
Mode-Adaptive Neural Networks for Quadruped Motion Control
He Zhang+, Sebastian Starke+, Taku Komura, Jun Saito. ACM Trans. Graph. 37, 4, Article 145.
(+Joint First Authors)

Animating characters can be a pain, especially those four-legged monsters! This year, we will be presenting our recent research on quadruped animation and character control at SIGGRAPH 2018 in Vancouver. The system can produce natural animations from real motion data using a novel neural network architecture, called Mode-Adaptive Neural Networks. Instead of optimising a fixed group of weights, the system learns to dynamically blend a group of expert weights into the weights of a second neural network, based on the current state of the character. As a result, the system does not require labels for the phase or locomotion gaits, but can learn from unstructured motion capture data in an end-to-end fashion.
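
A minimal PyTorch sketch of this idea, assuming a small gating network and a three-layer motion network whose weights are blended from a set of experts; the layer sizes, expert count, and class names are placeholders rather than the released training code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BlendedLinear(nn.Module):
        # A linear layer whose weights are a convex combination of K expert weight sets.
        def __init__(self, num_experts, in_dim, out_dim):
            super().__init__()
            self.weight = nn.Parameter(0.01 * torch.randn(num_experts, out_dim, in_dim))
            self.bias = nn.Parameter(torch.zeros(num_experts, out_dim))

        def forward(self, x, alpha):
            # alpha: (batch, K) blending coefficients from the gating network
            W = torch.einsum('bk,koi->boi', alpha, self.weight)  # per-sample blended weights
            b = torch.einsum('bk,ko->bo', alpha, self.bias)
            return torch.einsum('boi,bi->bo', W, x) + b

    class ModeAdaptiveNet(nn.Module):
        def __init__(self, gating_dim, motion_dim, out_dim, num_experts=8, hidden=512):
            super().__init__()
            self.gating = nn.Sequential(nn.Linear(gating_dim, 32), nn.ELU(),
                                        nn.Linear(32, num_experts))
            self.l1 = BlendedLinear(num_experts, motion_dim, hidden)
            self.l2 = BlendedLinear(num_experts, hidden, hidden)
            self.l3 = BlendedLinear(num_experts, hidden, out_dim)

        def forward(self, motion_x, gating_x):
            alpha = F.softmax(self.gating(gating_x), dim=-1)  # how much each expert contributes
            h = F.elu(self.l1(motion_x, alpha))
            h = F.elu(self.l2(h, alpha))
            return self.l3(h, alpha)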

- Video - Paper - Code - Mocap Data - Windows Demo - Linux Demo - Mac Demo - ReadMe -


SIGGRAPH 2017
Phase-Functioned Neural Networks for Character Control
Daniel Holden, Taku Komura, Jun Saito. ACM Trans. Graph. 36, 4, Article 42.

This work builds on the original PFNN (Phase-Functioned Neural Networks) research for character control. A demo in Unity3D using the original weights for terrain-adaptive locomotion is contained in the Assets/Demo/SIGGRAPH_2017/Original folder. Another demo on flat ground using the Adam character is contained in the Assets/Demo/SIGGRAPH_2017/Adam folder. In order to run them, you need to download the neural network weights from the link provided in the Link.txt file, extract them into the /NN folder, and store the parameters via the custom inspector button.
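
For reference, the phase function in the PFNN blends four learned control weight sets with a cyclic Catmull-Rom cubic, indexed by the phase; the same interpolation is applied to the biases. A minimal NumPy sketch of that blending, with placeholder shapes, looks roughly like this:

    import numpy as np

    def cubic(y0, y1, y2, y3, mu):
        # Catmull-Rom cubic interpolation between y1 and y2, with mu in [0, 1].
        return ((-0.5 * y0 + 1.5 * y1 - 1.5 * y2 + 0.5 * y3) * mu ** 3
                + (y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3) * mu ** 2
                + (-0.5 * y0 + 0.5 * y2) * mu
                + y1)

    def phase_function(control_weights, phase):
        # control_weights: (4, out_dim, in_dim) learned control weight sets
        # phase: scalar in [0, 2*pi); returns the blended (out_dim, in_dim) weight matrix
        p = 4.0 * phase / (2.0 * np.pi)
        k, mu = int(p) % 4, p - int(p)
        y0, y1, y2, y3 = (control_weights[(k + i - 1) % 4] for i in range(4))
        return cubic(y0, y1, y2, y3, mu)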

- Video - Paper - Code (Unity) - Windows Demo - Linux Demo - Mac Demo -


Processing Pipeline

In progress. More information will be added soon.

Copyright Information

This project is only for research or education purposes, and not freely available for commercial use or redistribution. The intellectual property for the different scientific contributions belongs to the University of Edinburgh, Adobe Systems and Electronic Arts. Licensing is possible if you want to use the code commercially. For scientific use, please reference this repository together with the relevant publications.

The motion capture data is available only under the terms of the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.

Comments
  • How to train a fight animation?

    Hello, I compiled the Android version and found that it cannot be compiled successfully due to a missing library dependency. Can you provide the dependent source code? Since finding your paper last year, I have been waiting for your demo, and only recently saw that you posted it to GitHub. I am very interested in the application of AI in games, but I don't know much about neural networks, so I would like to learn from your demo first.

    opened by hanbim520 14
  • Train failed

    I used only the "D1_001_KAN01_001.bvh" file to train the model. These are my Motion Editor settings (screenshot). I wanted to test whether the Motion Exporter output data can be trained correctly. However, I failed. Can you help me find the problem? This is my scene (screenshot): it is upside down.

    opened by yh8899 9
  • How do I create my own training data?

    There are 3 questions about the SIGGRAPH 2017 paper:

    1. How do I create my own training data, given that I have VR devices like leg bands, hand controller, and headset?
    2. Is it possible to include hand and head input from VR devices into the model too (instead of only Up/Down/Left/Right input)?
    3. What is the training set size for the model provided in this repository?

    I will need to read more about the paper. Any advice is appreciated. Thank you!

    opened by off99555 8
  • Giving the AI spline waypoints

    How can I let the AI move the character using spline waypoints like the one you showed at the end of your video? Do you have any Slack group or Gitter or something like that?

    opened by siamaksalman 7
  • No Motion Editor found in scene in PFNN unity project

    Hi! I am new to Unity and this project, and I have one problem I need help with :( After I imported a BVH file using Data Processing/BVH Importer in demo_adam (and demo_original), I wanted to export it to input.txt and output.txt as training data. The Motion Exporter just shows "no motion editor found in scene". I have no clue how to solve this. Is there a scene.unity file missing from this project?

    opened by AndyVerne 6
  • In SIGGRAPH_2017, after importing, building and running the Unity project, the character can't be controlled with the W, A, S, D keys.

    Hello, I even tried to directly build and run the Unity project in SIGGRAPH_2017 without editing anything, but in the built program the character doesn't respond to my input. One more question: how can I use the weights I trained with PFNN in the Unity project? Thank you so much. By the way, I really appreciate your work on AI4Animation. Hope you have a nice day : )

    opened by AndyVerne 6
  • Wrong Data Processing Results for MANN

    Hi, I tried to convert the raw motion capture data in MANN into the format for neural network input, using the data processing scripts you provided in Unity (BVHImporter, MotionExporter etc.). However, the resulting Input.txt and Output.txt do not align with the ones you provided, in both the trajectory and bone parameter fields. Training MANN with my own txts also displayed weird results, suggesting that the generated files are wrong.

    I looked through the code and everything seems to be right. I haven't altered any of the code you released; I only clicked the "Export" button and added a Trajectory Module for each clip in the Motion Editor panel (leaving the Style Module alone, because the annotation of styles does not affect the other parameters).

    Do you have any idea where things might go wrong?

    Thank you! :)

    opened by Fiona730 6
  • How do I try the project?

    Hello, I saw a video about this project and would like to try it, but found no instruction on how to do it.

    I assume it is played in the Unity game engine. I tried "Adding" the "Adam" folder in the Projects screen, but it said that it was an invalid path.

    Can you please provide some instructions?

    With regards, John

    opened by addeps3 5
  • Wolf not moving

    Hi! I tried to check your demo project, but after I followed your instructions and stored the network parameters, the wolf character does not move when I press the control buttons. I am new to Unity, so maybe it is a trivial issue. Do you have any suggestions? (Windows 10, Unity 2018.2.0f2)

    opened by zovathio 4
  • Network weights for Adam in PFNN

    Hi, thanks for your awesome work! I noticed that there are neither pretrained weights nor training data for the Adam model in PFNN. I just want to confirm whether these data are unreleased, in case I missed them somewhere.

    opened by Fiona730 3
  • Ballroom dancing dog?

    Not sure what's going on here, but I've managed to... well, mangle the dog. Or make him dance, not entirely sure.

    1. Make the dog fully sit (hold V for a few seconds)
    2. Release V
    3. Wait 2 seconds
    4. Tap V
    5. Repeat steps 3 and 4.

    Is this an issue with the network, or just the Unity/demo code?

    opened by Qix- 3
  • Siggraph Asia 2019 Motion Data Processing

    Hi Sebastian,

    Thank you for posting this amazing work!

    I am trying to reproduce the results from scratch (the raw .bvh files -> asset data -> input.txt/output.txt for training). But the results I reproduced have some problems:

    1. The agent cannot turn left or right when running: when I press Shift+W+E/Q at the same time, the agent's motion breaks;
    2. The posture of the agent is unnatural when sitting down, and there is always an offset from the marked contact points. When I instead export your data directly as training data by pressing the Export Data button in the MotionExporter, the results don't have these problems.

    This is my data processing:

    1. Use the BVH Importer to import the .bvh files that come from MotionCapture.zip. When importing some of the .bvh files, such as Jump, RunTurn, WalkTurn etc., the Flip option is checked. The data is saved in Assets/MotionCapture_reproduce/;
    2. Copy the *.unity files in Assets/MotionCapture/ and overwrite the *.unity files in Assets/MotionCapture_reproduce/. I found that many actions need objects to be created manually, such as Amchair, Avoid, Sit etc., which would be time-consuming to do by hand, so I directly copied them;
    3. Click the Import option (see screenshot), and in 'public void Import()' in MotionEditor.cs I added code to import the Modules, Sequences, Export, Framerate, MirrorAxis and Offset parameters from Assets/MotionCapture/ and copy them to the data in Assets/MotionCapture_reproduce/;
    4. Use the MotionExporter in the Unity scene to export my own data.

    At the same time, I also noticed that there is far more data in Assets/MotionCapture/ than there are .bvh files in MotionCapture.zip, but I don't know where the extra data came from, so I only used the data in MotionCapture.zip to reproduce the results.

    So, is my data processing missing something? Or which step is wrong? Where does the extra data in Assets/MotionCapture/ come from? Can you elaborate on your data processing from the raw .bvh files to the asset data?

    Need your help. Thanks a lot!

    opened by walkerwjt 0
  • Velocity details in Deepphase

    Hello, Thank you for your great work!

    Can you give details on how to extract the velocity, or point to the code? It seems that it is the acceleration that is extracted in the README?

    opened by YoungSeng 0
  • Is it possible to run at 30 Hz? (SIGGRAPH_2017)

    This code is included with a comment saying it is for 60 Hz (BioAnimation_Adam.cs):

    //Trajectory for 60 Hz framerate
    private const int Framerate = 60;        //trajectory sampling rate in frames per second
    private const int Points = 111;          //total trajectory points = PastPoints + FuturePoints + 1
    private const int PointSamples = 12;     //points fed to the network = (Points - 1) / PointDensity + 1
    private const int PastPoints = 60;       //one second of past trajectory at 60 Hz
    private const int FuturePoints = 50;     //~0.83 seconds of future trajectory at 60 Hz
    private const int RootPointIndex = 60;   //index of the current (root) point = PastPoints
    private const int PointDensity = 10;     //only every 10th point is sampled for the network input
    

    Is it possible to change these values to have it run at 30 Hz? If so, aside from changing the 'Framerate' variable to 30, what other values need to be changed?

    opened by benjaminnoer2112 0
  • Possible mistakes in the paper "DeepPhase"

    Hello Sebastian,

    I've sent you two emails but have not received any response. I am not sure if there was a connection problem, so I am trying to contact you via GitHub issues.

    I found there might be two mistakes in the paper.

    1. In Eq. 5, the range of S should be $(-\pi, \pi)$, so I think S should be moved out of the innermost scope: $A \cdot \sin(2\pi(F \cdot \tau - S)) + B$ (Eq. 5) should become $A \cdot \sin(2\pi(F \cdot \tau) - S) + B$ (Eq. 5.1), or S should be divided by $2\pi$ before being passed into Eq. (5). I found it hard to predict an S similar to the curve in Fig. 3 using Eq. 5 directly. I tested Eq. (5) and Eq. (5.1) and plotted the curves after training for 5 epochs; a small numerical check of the difference between the two forms follows this list.

    2. In Eq. 9, the next phase is calculated by interpolating two phases and multiplying the result by $A_{t+\Delta t}$. However, the phases $P_t$ and $P_{t+\Delta t}$ already contain the information about the amplitudes, so the expected prediction of $A_{t+\Delta t}$ should be very close to one, which means it merely scales the interpolated result. I am not sure whether the amplitudes should instead be predicted from the difference between two frames, just like the frequency is?
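
    A quick numerical check of the point raised in item 1, under the assumption that S is the predicted phase shift: the two parameterizations differ only by whether S is scaled by $2\pi$ inside the sine.

    import numpy as np

    A, F, B = 1.0, 2.0, 0.0              # amplitude, frequency, bias (arbitrary test values)
    S = 0.25                             # predicted phase shift
    tau = np.linspace(-1.0, 1.0, 241)    # 2-second time window

    eq5 = A * np.sin(2.0 * np.pi * (F * tau - S)) + B   # Eq. 5: shift is scaled by 2*pi
    eq51 = A * np.sin(2.0 * np.pi * F * tau - S) + B    # Eq. 5.1: shift applied directly in radians

    # Eq. 5 with shift S is identical to Eq. 5.1 with shift 2*pi*S:
    assert np.allclose(eq5, A * np.sin(2.0 * np.pi * F * tau - 2.0 * np.pi * S) + B)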

    Besides, I am not sure how to handle the loss on the phase. I employ the MSE loss between the predicted phase in Eq. 9 and the ground-truth phase, but the frequency of the predicted phase is inaccurate. Do we have to employ a loss on the amplitudes and frequencies additionally? Which method do you use?

    I would appreciate it if you could correct me if I am wrong. I look forward to hearing from you.

    Best regards,

    Xiangjun Tang

    opened by yuyujunjun 1
  • Questions about "DeepPhase: Periodic Autoencoders for Learning Motion Phase Manifolds"

    Questions about "DeepPhase: Periodic Autoencoders for learning motion phase manifolds"

    Hi, Sebastian

    How are you? We are closely following your research on applying deep learning to character animation, and I want to say it is great work! We are reading your SIGGRAPH 2022 paper "DeepPhase: Periodic Autoencoders for Learning Motion Phase Manifolds" and trying to reproduce the work, but got stuck on some questions. I am wondering if you could help me with these detailed questions.

    1. What's the kernel size of the convolutional layer?
    2. What method did you use to initialize the weights?
    3. What validation/test loss did you achieve after you finished training?
    4. If I change the kernel size, there are quite a few occasions where the loss becomes NaN. Do you know what could be the reason for this?
    5. In the paper, does every channel connect to a unique fully connected layer? What's the activation function of the fully connected layer?
    6. Does the FFT layer have weights to learn as well?
    7. The sampling time for a time window is 2 seconds, correct? And the T in "f" in formula (3) is also 2 seconds, right?

    We used your dataset from the paper "Neural State Machine for Character-Scene Interactions", but the lowest loss we could get is 0.2. We think it is too high and can't find a way to reduce it. Can you shed some light on this?

    Avoid: 18863 (5.24 min)
    Carry: 53094 (14.75 min)
    Crouch: 7659 (2.13 min)
    Door: 58479 (16.24 min)
    Jump: 4511 (1.25 min)
    Loco: 59859 (16.63 min)
    Sit: 199472 (55.41 min)
    Total: 401937 (111 min)

    Thanks a lot!

    opened by wengn 0