Nimble: Physics Engine for Deep Learning


pip3 install nimblephysics


Use physics as a non-linearity in your neural network. A single timestep, nimble.timestep(world, state, control_forces), is a valid PyTorch function.

Forward pass illustration

We support an analytical backward pass that works even through contact and friction.

Backpropagation illustration

It's as easy as:

from nimble import timestep

# Everything is a PyTorch Tensor, and this is differentiable!!
next_state = timestep(world, current_state, control_forces)

Nimble started life as a fork of the popular DART physics engine, with analytical gradients and a PyTorch binding. We've worked hard to maintain as much backwards compatibility as we can, so many simulations that worked in DART should translate directly to Nimble.

Check out our website for more information.

  • Issue with Rajagopal example

    Thanks a lot for the amazing work! When I tried to run the example, I encountered the error below:

    ❯ python
    Msg [NameManager::issueNewName] (default) The name [Joint_rot_z] is a duplicate, so it has been renamed to [Joint_rot_z(1)]
    Msg [NameManager::issueNewName] (default) The name [hip_r_z] is a duplicate, so it has been renamed to [hip_r_z(1)]
    Msg [NameManager::issueNewName] (default) The name [hip_l_z] is a duplicate, so it has been renamed to [hip_l_z(1)]
    Msg [NameManager::issueNewName] (default) The name [back_z] is a duplicate, so it has been renamed to [back_z(1)]
    Msg [NameManager::issueNewName] (default) The name [acromial_r_z] is a duplicate, so it has been renamed to [acromial_r_z(1)]
    Msg [NameManager::issueNewName] (default) The name [acromial_l_z] is a duplicate, so it has been renamed to [acromial_l_z(1)]
    Traceback (most recent call last):
      File "", line 12, in <module>
    TypeError: addSkeleton(): incompatible function arguments. The following argument types are supported:
        1. (self: nimblephysics_libs._nimblephysics.simulation.World, skeleton: nimblephysics_libs._nimblephysics.dynamics.Skeleton) -> str
    Invoked with: <nimblephysics_libs._nimblephysics.simulation.World object at 0x7f95df2df880>, <nimblephysics_libs._nimblephysics.biomechanics.OpenSimFile object at 0x7f95df2df8b8>

    I have version 0.6.1 installed via pip. How can I solve this problem?

    opened by AlleUndKalle 3
  • How to reset the joint DOF

    I am loading a humanoid from a URDF file. It seems that we cannot directly define a ball joint in URDF. So I wonder: can we load a URDF with a 1-DOF joint and then change the DOF to 3 after loading the model into Nimble?


    opened by zhangy76 2
  • Joint degree of freedom

    The object is described via generalized coordinates. I wonder how to get the joint DOF of each link? The question may be very basic... So I wonder if there are any documents, besides the examples in the current repository, that introduce the basic functions?


    opened by zhangy76 2
  • getMultipleContactInverseDynamicsOverTime



    I am able to run all the examples related to inverse dynamics but fail to call the "getMultipleContactInverseDynamicsOverTime" function. Specifically, I pass the arguments following the requirements, but the program still indicates the input format is wrong. So I am wondering if anyone has used this function successfully?


    opened by zhangy76 1
  • segmentation fault when calling `getJoints()`

    The following is the behavior that I noticed in Python:

        world = nimble.simulation.World()
        skel: nimble.dynamics.Skeleton = world.loadSkeleton(
    opened by michguo 1
  • `integrateVelocitiesFromImpulses` and `integratePositions`

    Round 3 Review

    • Add default arg for integrateVelocitiesFromImpulses.

    Round 2 Review

    • Add python bindings.

    Round 1 Review

    • Separate position integration from impulse velocity integration.
    • Create functions for velocity (impulse) integration and position integration.
    opened by michguo 1
  • Drop frames in excess of 100fps on C++ end, before hitting web GUI

    If we try to display a 1000fps simulation in real time in the browser, right now the C++ will blindly attempt to send JSON packets to the browser at 1000fps. That's too much for the browser to handle, and it slows everything down. So the C++ should really rate-limit itself to 100fps, and silently drop/batch updates that arrive faster than that on the C++ side.
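
That rate-limiting logic can be sketched as follows (Python for illustration only; the real implementation would live on the C++ side, and FrameRateLimiter is a hypothetical name):

```python
import time

# Sketch of sender-side rate limiting: drop any frame that arrives less
# than 1/100 s after the last frame that was actually sent, so the
# browser never sees more than ~100 fps.
class FrameRateLimiter:
    def __init__(self, max_fps=100.0, clock=time.monotonic):
        self.min_interval = 1.0 / max_fps
        self.clock = clock
        self.last_sent = float("-inf")

    def should_send(self):
        now = self.clock()
        if now - self.last_sent >= self.min_interval:
            self.last_sent = now
            return True
        return False  # silently drop this frame
```

Batching (rather than dropping) would instead accumulate the skipped updates and flush them with the next frame that passes the check.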

    opened by keenon 1
  • dart_layer needs update setForces -> setExternalForces

    File "/home/yannis/anaconda3/envs/pytorch/lib/python3.7/site-packages/diffdart/", line 96, in dart_layer
        return DartLayer.apply(world, pos, vel, torque, pointer)  # type: ignore
    File "/home/yannis/anaconda3/envs/pytorch/lib/python3.7/site-packages/diffdart/", line 35, in forward
        world.setForces(torque.detach().numpy())
    AttributeError: 'diffdart_libs._diffdart.simulation.World' object has no attribute 'setForces'

    opened by iexarchos 1
  • renderTrajectoryLines produces jittery lines

    If you call gui.stateMachine().renderTrajectoryLines(...) multiple times with the same input, you'll get visually different results. This suggests some kind of data corruption or race condition hidden in here that we need to ferret out.

    opened by keenon 1
  • Capsule-Floor penetration

    We're seeing penetration in Yannis's demos:

    Specifically, HalfCheetah and Reacher

    This is probably an issue with capsule-box collisions.

    opened by keenon 1
  • Support differentiating through "Constraint Force Mixing"

    Our LCPs are only guaranteed to be solvable if A is positive-semidefinite and there are no force bounds. To increase the stability of our LCPs, we can use a trick called "Constraint Force Mixing" (CFM). In practice, this means multiplying the elements on the diagonal of A by 1.0 + eps, where eps is some small positive value. This ensures A isn't singular, and in general reduces the "singular-ness" of A.

    You can turn constraint force mixing on and off with void World::setConstraintForceMixingEnabled(bool enable). Currently, CFM is turned off by default, because our Jacobians don't support it.

    This ticket is about supporting it.

    The matrix A is computed in dart/constraint/BoxedLcpConstraintSolver.cpp in the method solveConstrainedGroup(). The A matrix it computes is in Open Dynamics Engine format, meaning row-major order, where each row's length is rounded up to the nearest multiple of 4 (to allow vectorization) and the padding entries are ignored. This calls out to the individual constraints to populate A. For each constraint, it applies a unit impulse and then measures the change in relative velocity at the constraint. The method that applies the CFM is ContactConstraint::getVelocityChange(), towards the bottom.

    Supporting this in our differentiation means tracking the CFM constant for each element of the diagonal of A, and storing them for later. These are constants with respect to differentiation, but we need to ensure that we scale A's diagonal, and the gradient of A's diagonal, by these constants wherever A is computed.
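
The bookkeeping described above can be sketched as follows (plain Python, not Nimble's actual API; apply_cfm and scale_grad_diagonal are hypothetical names):

```python
# Hypothetical sketch of CFM bookkeeping: scale each diagonal entry of
# the LCP matrix A by (1 + eps), and record the scale factors so the
# gradient of A's diagonal can later be scaled by the same constants.
def apply_cfm(A, eps=1e-3):
    scales = [1.0 + eps] * len(A)
    for i, s in enumerate(scales):
        A[i][i] *= s  # reduces "singular-ness" of A
    return A, scales

def scale_grad_diagonal(dA, scales):
    # The gradient of A's diagonal must be scaled by the same stored
    # constants, since they are constants w.r.t. differentiation.
    for i, s in enumerate(scales):
        dA[i][i] *= s
    return dA

A = [[2.0, 1.0], [1.0, 2.0]]
A, scales = apply_cfm(A)
```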

    opened by keenon 1
  • Wrong interpretation of OpenSim polynomial curves?

    I used AddBiomechanics to process data with this model. This model is based on the Rajagopal OpenSim model, but I modified the knee joint definitions to use polynomials instead of SimmSplines. It looks great in the Nimble viewer but wrong in the OpenSim GUI, suggesting a mismatch between OpenSim and Nimble Physics. When replacing the polynomials with the original SimmSplines, it looks good, confirming that the polynomials are the problem and are probably interpreted incorrectly by Nimble Physics.

    For reference, here is the OpenSim PolynomialFunction class. If I were to guess, I would first check whether you use the same order for the coefficients. OpenSim uses decreasing order (highest power first).
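
To illustrate the coefficient-order guess (a standalone sketch with made-up coefficients; eval_decreasing and eval_increasing are hypothetical helpers):

```python
# The same coefficient list interpreted in decreasing vs. increasing
# order of powers evaluates to different curves. A parser assuming the
# wrong convention would silently produce a different joint coupling.
coeffs = [2.0, 3.0, 1.0]  # made-up coefficients for illustration

def eval_decreasing(c, x):
    # OpenSim convention (highest power first): c[0]*x^2 + c[1]*x + c[2]
    y = 0.0
    for ci in c:  # Horner's method
        y = y * x + ci
    return y

def eval_increasing(c, x):
    # Opposite convention (lowest power first): c[0] + c[1]*x + c[2]*x^2
    return sum(ci * x**i for i, ci in enumerate(c))

print(eval_decreasing(coeffs, 0.5), eval_increasing(coeffs, 0.5))
# prints: 3.0 3.75
```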

    opened by antoinefalisse 0
  • Confusing Gradients on a Simple Scene

    Confusing Gradient Output

    I got confusing output gradients from Nimble on a simple scene. The scene consists of two balls with the same mass undergoing a fully elastic collision. In this scene, Nimble gives gradients that are inconsistent with the analytical gradients.

    Scene description


    Two balls are allowed to move horizontally. There is no friction or gravity. The two balls have the same mass of 1 kg and the same radius r = 0.1 m. In the beginning, the left ball at x1 = 0 (shown in blue) moves at v0 = 1 m/s to the right, while the right ball at x2 = 0.52 m (shown in green) has velocity u0 = 0. Since there is no friction, the blue ball moves at a uniform velocity. At t = 0.32 s, when the centers are 2r apart, the two balls collide. The two balls then exchange their velocities, since they have the same mass and the collision is fully elastic. The blue ball stays still from then on, while the green ball moves at 1 m/s to the right. At t = T = 1 s, the green ball arrives at xf = 1.2 m.

    Gradients computation

    It is easy to show that the closed-form expression for xf in terms of (x1, x2, v0, u0) is: xf = v0 T + x1 + 2r

    So the analytical gradient of xf w.r.t. (x1, x2, v0, u0) is (1, 0, 1, 0). However, the output gradients from Nimble are (0.75, 0.25, 0.92, 0.08), which is clearly inconsistent with the analytical gradient.


    System configuration:

    • OS: Ubuntu 20.04 LTS
    • CPU: AMD Ryzen Threadripper 3970X 32-Core Processor
    • GPU: NVIDIA GeForce RTX 3090
    • Nimblephysics version: 0.8.38
    • Pytorch version: 1.13.0
    • Python: 3.9.13

    Source code:

    import nimblephysics as nimble
    import torch
    def create_ball(radius, color):
        ball = nimble.dynamics.Skeleton()
        sphereJoint, sphereBody = ball.createTranslationalJoint2DAndBodyNodePair() 
        sphereShape = sphereBody.createShapeNode(nimble.dynamics.SphereShape(radius))
        sphereVisual = sphereShape.createVisualAspect()
        sphereVisual.setColor([i / 255.0 for i in color])
        return ball
    def create_world():
        world = nimble.simulation.World()
        world.setGravity([0, 0, 0]) # No gravity
        radius = 0.1
        world.addSkeleton(create_ball(radius, [68, 114, 196]))
        world.addSkeleton(create_ball(radius, [112, 173, 71]))
        return world
    def simulate_and_backward(world, x1, x2, v0, u0):
        # Ball 1 is initialized to be at x1 on the x-axis, with velocity v0.
        # Ball 2 is initialized to be at x2 on the x-axis, with velocity u0.
        # The zeros below mean that the vertical positions and velocities are all zero,
        # so the balls will only move in the horizontal direction.
        init_state = torch.tensor([x1, 0, x2, 0, v0, 0, u0, 0], requires_grad=True)
        control_forces = torch.zeros(4) # No external forces
        total_simulation_time = 1.0 # simulate for 1 second
        num_time_steps = 1000       # split into 1000 discrete small time steps
        # Each time step has length 0.001 seconds
        world.setTimeStep(total_simulation_time / num_time_steps)
        state = init_state
        states = [state]
        for i in range(num_time_steps):
            state = nimble.timestep(world, state, control_forces)
            states.append(state)
        # xf is the final x-coordinate of ball 2
        xf = state[2]
        # Backpropagate so that init_state.grad is populated
        xf.backward()
        # The gradients on the y-axis are irrelevant, so we exclude them.
        grad = init_state.grad[0:8:2]
        print(f"xf = {xf.detach().item()}")
        print(f"gradients of xf = {grad}")
        return states
    if __name__ == "__main__":
        world = create_world()
        gui = nimble.NimbleGUI(world)
        states = simulate_and_backward(world, x1=0, x2=0.52, v0=1, u0=0)

    Execution results:

    xf = 1.1989999809264922
    gradients of xf = tensor([0.7500, 0.2500, 0.9197, 0.0803])
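
As a sanity check, the analytical gradient above can be reproduced with a central finite difference on the scene's closed-form solution (a standalone sketch, independent of Nimble; xf_closed_form and grad_fd are hypothetical helpers):

```python
# Central finite-difference check of d(xf)/d(x1, x2, v0, u0), using the
# closed-form solution of the scene (valid as long as the balls still
# collide within the horizon T).
def xf_closed_form(x1, x2, v0, u0, r=0.1, T=1.0):
    t_c = (x2 - x1 - 2 * r) / (v0 - u0)  # time of collision
    # Equal masses + fully elastic collision: the balls swap velocities,
    # so ball 2 moves at u0 until t_c and at v0 afterwards.
    return x2 + u0 * t_c + v0 * (T - t_c)

def grad_fd(f, args, eps=1e-6):
    g = []
    for i in range(len(args)):
        hi = list(args); hi[i] += eps
        lo = list(args); lo[i] -= eps
        g.append((f(*hi) - f(*lo)) / (2 * eps))
    return g

grad = grad_fd(xf_closed_form, [0.0, 0.52, 1.0, 0.0])
# grad is approximately (1, 0, 1, 0), matching the analytical gradient
```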
    opened by Wilbur-Django 3
  • SliderJoint support in the OpenSim parser

    Is your feature request related to a problem? Please describe. When trying to import the Tug of War osim model, I got a segmentation fault, due to the SliderJoint not being supported by the osim parser (I was using v0.8.34 in Python 3.8). Sidenote: I got a segmentation fault instead of an error message saying that the joint was not yet implemented.

    Describe the solution you'd like

    • Support the opensim SliderJoint with the dart/dynamics/PrismaticJoint
    • Normal error message instead of a segmentation fault when trying to parse an unsupported joint.

    Describe alternatives you've considered None

    Additional context I would expect implementing the SliderJoint (PrismaticJoint) to be quite easy, since I expect the syntax is basically the same as the already supported PinJoint (RevoluteJoint).

    opened by TJStienstra 0
  • gui not working, mac OS

    • Nimble physics version: master, v0.8.34
    • OS name and version name(or number): [macOS]
    • Browser : Safari, Firefox

    Expected Behavior

    showing cheetah GUI in web browser with some random motion

    Current Behavior

    1. The states update loop runs correctly.

    2. I get this warning while running: Warning [BodyNode.cpp:619] [BodyNode] A negative or zero mass [0] is set to BodyNode [h_pelvis_aux2]

    3. But the GUI in the browser shows only an empty page (no matter whether Safari or Firefox is used); JavaScript is enabled in my browser.

    loaded page source : page_html

    javascript : bundle

    Code to Reproduce

    Used the example code:

    opened by michalnand 0
  • Body node setScale() leads to incorrect origin of the child link

    Bug Report


    Ubuntu 18.04, GCC 7.4.0

    Expected Behavior

    Current Behavior

    When I use setScale(), only the shape of the current node is scaled, while the origin of the child link is moved to zero.

    Steps to Reproduce

    Code to Reproduce

        robot_path = "~.urdf"
        world: nimble.simulation.World = nimble.simulation.World()
        skel: nimble.dynamics.Skeleton = world.loadSkeleton(robot_path)
        world.getBodyNodeByIndex(node).setScale([1, 0.8, 1])

    opened by zhangy76 0
  • add getLocalVertices()

    Before creating a pull request

    • [ ] Document new methods and classes
    • [ ] Format new code files using clang-format

    Before merging a pull request

    • [ ] Set version target by selecting a milestone on the right side
    • [ ] Summarize this change in
    • [ ] Add unit test(s) for this change
    opened by michguo 0
  • v0.8.51(Nov 24, 2022)

    Eliminates GRF-coverage blips in all calls that modify GRF coverage, and stabilizes the smoothing on accelerations/GRFs when detecting unmeasured external forces.

    Source code(tar.gz)
    Source code(zip)
  • v0.8.50(Nov 22, 2022)

    DynamicsFitter was showing different results in production versus on the development server, so we're trying to go back to a single-threaded version of getLinearTrajectoryLinearSystem() to check whether race conditions are the problem.

    Source code(tar.gz)
    Source code(zip)
  • v0.8.49(Nov 22, 2022)

  • v0.8.48(Nov 20, 2022)

  • v0.8.47(Nov 18, 2022)

  • v0.8.46(Nov 18, 2022)

    This exposes setNumThreads() on the DynamicsFitProblemConfig, and allows DynamicsFitter to spread gradient and loss computations over multiple threads.

    Source code(tar.gz)
    Source code(zip)
  • v0.8.45(Nov 17, 2022)

  • v0.8.44(Nov 17, 2022)

  • v0.8.43(Nov 17, 2022)

  • v0.8.42(Nov 16, 2022)

  • v0.8.41(Nov 15, 2022)

  • v0.8.40(Nov 11, 2022)

  • v0.8.39(Nov 11, 2022)

    A number of improvements to DynamicsFitter allow it to handle larger sets of longer trials, including zeroLinearResidualsAndOptimizeAngular().

    Source code(tar.gz)
    Source code(zip)
  • v0.8.38(Nov 2, 2022)

    DynamicsFitter gets a method (recalibrateForcePlates) to automatically shift the force plates' spatial calibration to the optimal match with the marker data, once you have a zero-residual trajectory.

    Source code(tar.gz)
    Source code(zip)
  • v0.8.37(Nov 2, 2022)

  • v0.8.36(Oct 29, 2022)

  • v0.8.35(Oct 24, 2022)

    This fixes a few bugs in Anthropometrics. Most importantly, unavailable measurements no longer default to 0; instead, they default to the mean of the dataset, so we don't accidentally skew covariances.

    Source code(tar.gz)
    Source code(zip)
  • v0.8.34(Oct 24, 2022)

    This adds first-class support to the body of Nimble for scaling of biological joints:

    • ScapulathoracicJoint
    • EllipsoidJoint
    • ConstantCurvatureJoint
    • ConstantCurvatureIncompressibleJoint
    Source code(tar.gz)
    Source code(zip)
  • v0.8.33(Oct 4, 2022)

  • v0.8.32(Sep 15, 2022)

  • v0.8.31(Sep 15, 2022)

  • v0.8.30(Sep 14, 2022)

    The biomechanics package now includes a DynamicsFitter object, whose job is to take the output of MarkerFitter and make the dynamics consistent.

    Source code(tar.gz)
    Source code(zip)
  • v0.8.29(Aug 8, 2022)

  • v0.8.28(Aug 5, 2022)

    This is a very minor patch to the MJCF exporter. Rather than exporting a locked joint with a zero range of motion (which MuJoCo doesn't like), it now removes the joint altogether in the exported skeleton.

    Source code(tar.gz)
    Source code(zip)
  • v0.8.27(Aug 4, 2022)

  • v0.8.26(Aug 2, 2022)

  • v0.8.25(Jul 26, 2022)

  • v0.8.24(Jul 25, 2022)

  • v0.8.23(Jul 25, 2022)

    biomechanics.SkeletonConverter.convertMotion(...) will no longer quit early if it runs into a bad IK solve on an intermediate frame, since that behavior isn't appropriate for production use cases.

    Source code(tar.gz)
    Source code(zip)
  • v0.8.22(Jul 25, 2022)
