MPlib: a Lightweight Motion Planning Package

Overview

MPlib

MPlib is a lightweight Python package for motion planning that is decoupled from ROS and easy to set up. With a few lines of Python code, one can achieve most of the motion planning functionalities needed in robot manipulation.

Installation

Pre-built pip packages support Ubuntu 18.04+ with Python 3.6-3.9.

pip install mplib

Usage

See our tutorial for detailed usage and examples.
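
The snippet below is a minimal sketch of a typical planning call, modeled on the Planner arguments that appear in the issues further down this page. The asset paths, goal pose, and joint limits are illustrative assumptions, and the result fields are assumed to follow the MPlib tutorial's convention.

import numpy as np
import mplib

# Asset paths and limits are assumptions for illustration; substitute your own robot.
planner = mplib.Planner(
    urdf="./assets/robot/panda/panda.urdf",
    srdf="./assets/robot/panda/panda.srdf",
    move_group="panda_hand",
    joint_vel_limits=np.ones(7),
    joint_acc_limits=np.ones(7))

# Goal poses are 7-dim: position [x, y, z] followed by quaternion [qw, qx, qy, qz].
goal_pose = np.array([0.4, 0.0, 0.3, 0.0, 1.0, 0.0, 0.0])
result = planner.plan(goal_pose=goal_pose, current_qpos=np.zeros(9))
if result["status"] == "Success":
    print(result["position"])  # planned joint-space waypoints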

Issues
  • MPlib installation issue on Mac

    When I try to install MPlib using pip install mplib I get:

    ERROR: Could not find a version that satisfies the requirement mplib (from versions: none)
    ERROR: No matching distribution found for mplib
    

    I am on macOS with Python 3.7 and pip 21.3.1. Any ideas?

    opened by arjung128 2
  • Remove the requirements of <effort> tag and SRDF

    Since the <effort> tag is often not provided in simulations, it would be better to allow URDFs without it. It would also be better to include a default SRDF collision-pair generator; a rough sketch follows.
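
    As a rough illustration of the second request, a default generator could parse the URDF's joint tree and emit an SRDF that disables collision checks between adjacent links. The sketch below is an assumption about the simplest useful behavior, not existing MPlib code:

    import xml.etree.ElementTree as ET

    def default_srdf(urdf_path, robot_name="robot"):
        # Links that share a joint touch by construction, so their
        # collision pairs are disabled with reason="Adjacent".
        urdf = ET.parse(urdf_path).getroot()
        srdf = ET.Element("robot", name=robot_name)
        for joint in urdf.findall("joint"):
            parent = joint.find("parent").get("link")
            child = joint.find("child").get("link")
            ET.SubElement(srdf, "disable_collisions",
                          link1=parent, link2=child, reason="Adjacent")
        return ET.tostring(srdf, encoding="unicode")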

    enhancement 
    opened by fbxiang 2
  • Collision checking doesn't seem to work as expected

    Does self.planning_world.collide_full() (from here) check for collisions with the pointcloud added via update_point_cloud (from here), or does it only check for collisions between links of the robot? Additionally, update_point_cloud takes in a pointcloud of the scene, as obtained from the gym environment but centered around the robot's base, correct?

    I tried to construct the following simple example: the scene consists of just a robot and a box (loaded from a custom .obj file -- I know the Sapien collision avoidance tutorial creates the box directly, but I want to be able to do this with custom .obj files). I place the robot so that it collides with the box, and I would like to detect this collision. I obtain a 360-degree pointcloud (as done in the ManiSkill gym environment), center the pointcloud around the robot's base, and then call update_point_cloud, but self.planning_world.collide_full() still returns no collisions:

    import sapien.core as sapien
    from sapien.utils import Viewer
    import numpy as np
    from scipy.spatial.transform import Rotation as R
    from PIL import Image
    from copy import deepcopy
    from ManiSkill.mani_skill.env.camera import CombinedCamera, read_images_from_camera, read_pointclouds_from_camera
    
    # these first four functions are from ManiSkill/mani_skill/env/base_env.py, modified to not use 'self'
    def _load_camera(cam_info, agent, scene):
            cam_info = deepcopy(cam_info)
            if 'parent' in cam_info:
                    if cam_info['parent'] == 'robot':
                            parent = agent.get_base_link()
                    else:
                            # only the 'robot' parent is used in this repro
                            assert False
                    camera_mount_actor = parent
                    del cam_info['parent']
            else:
                camera_mount_actor = scene.create_actor_builder().build_kinematic()
            pose = sapien.Pose(cam_info['position'], cam_info['rotation'])
            del cam_info['position'], cam_info['rotation']
            camera = scene.add_mounted_camera(
                actor=camera_mount_actor, pose=pose, **cam_info, fovx=0
            )
            return camera
    
    def render(mode='color_image', depth=False, seg=None, camera_names=None, scene=None, cameras=None):
            scene.update_render()
            if mode == 'human':
                    # the interactive branch still relied on 'self'; not needed for this repro
                    raise NotImplementedError()
            else:
                    if seg is not None:
                            if seg == 'visual':
                                    seg_idx = 0
                            elif seg == 'actor':
                                    seg_idx = 1
                            elif seg == 'both':
                                    seg_idx = [0, 1]
                            else:
                                    raise NotImplementedError()
                    else:
                            seg_idx = None
                    if camera_names is None:
                            pass  # use the cameras passed in as an argument
                    else:
                            cameras_new = []
                            for camera in cameras:
                                    if camera.get_name() in camera_names:
                                            cameras_new.append(camera)
                            cameras = cameras_new
                    if mode == 'color_image' or mode == 'pointcloud':
                            views = {}
                            get_view_func = read_images_from_camera if mode == 'color_image' else read_pointclouds_from_camera
                            for cam in cameras:
                                    cam.take_picture()
                            for cam in cameras:
                                    if isinstance(cam, CombinedCamera):
                                            view = cam.get_combined_view(mode, depth, seg_idx) # list of dict for image, dict for pointcloud
                                    else:
                                            view = get_view_func(cam, depth, seg_idx) # dict
                                    views[cam.get_name()] = view
                            return views
    
    def _post_process_view(view_dict, robot_link_ids=None):
            actor_id_seg = view_dict['seg'] # (n, m, 1)
            mask = np.zeros(actor_id_seg.shape, dtype=bool)  # np.bool is removed in modern NumPy
            for actor_id in robot_link_ids:
                mask = mask | ( actor_id_seg == actor_id )
    
            view_dict['seg'] = mask
    
    def post_processing(obs, obs_mode, robot_link_ids):
            views = obs[obs_mode]
            for cam_name, view in views.items():
                    if isinstance(view, list):
                            for view_dict in view:
                                    _post_process_view(view_dict, robot_link_ids=robot_link_ids)
                            combined_view = {}
                            for key in view[0].keys():
                                    combined_view[key] = np.concatenate([view_dict[key] for view_dict in view], axis=-1)
                            views[cam_name] = combined_view
                    else: # view is a dict
                            _post_process_view(view, robot_link_ids=robot_link_ids)
                    if len(views) == 1:
                            view = next(iter(views.values()))
                            obs[obs_mode] = view
            return obs
    
    def main():
            # setup
            engine = sapien.Engine()
            renderer = sapien.VulkanRenderer()
            engine.set_renderer(renderer)
    
            scene = engine.create_scene()
            scene.set_timestep(1 / 100.0)
    
            rscene = scene.get_renderer_scene()
            rscene.set_ambient_light([0.5, 0.5, 0.5])
            rscene.add_directional_light([0, 1, -1], [0.5, 0.5, 0.5], shadow=True)
            rscene.add_point_light([1, 2, 2], [1, 1, 1], shadow=True)
            rscene.add_point_light([1, -2, 2], [1, 1, 1], shadow=True)
            rscene.add_point_light([-1, 0, 1], [1, 1, 1], shadow=True)
    
            # viewer
            viewer = Viewer(renderer)  # Create a viewer (window)
            viewer.set_scene(scene)  # Bind the viewer and the scene
            viewer.set_camera_xyz(x=-4, y=0, z=2)
            viewer.set_camera_rpy(r=0, p=-np.arctan2(2, 4), y=0)
            viewer.window.set_camera_parameters(near=0.05, far=100, fovy=1)
    
            # load box object
            loader: sapien.URDFLoader = scene.create_urdf_loader()
            urdf = 'box.urdf'
            asset = loader.load(urdf)
            rotation = np.array([1, 0, 0, 0])
            translation = np.array([-1, -1, 0])
            asset.set_pose(sapien.Pose(p=translation, q=rotation))
    
            # load and place robot (using code from the gym environment so we can obtain pointclouds)
            import pathlib
            from mani_skill.utils.config_parser import (
                    preprocess,
                    process_variables,
                    process_variants,
            )
            import importlib
            config_file = pathlib.Path('ManiSkill/mani_skill/assets/config_files/open_cabinet_drawer.yml')
            yaml_config = preprocess(config_file)
            config = deepcopy(yaml_config)
            _level_rng = np.random.RandomState(seed=0)
            variant_config = {'partnet_mobility_id': '1004'}
            config = process_variables(config, _level_rng)
            level_config, level_variant_config = process_variants(
                    config, _level_rng, variant_config
            )
            agent_config = level_config['agent']
            module_name, class_name = agent_config['agent_class'].rsplit('.', 1)
            module = importlib.import_module(module_name)
            AgentClass = getattr(module, class_name)
            agent = AgentClass(engine, scene, agent_config)
    
            # choose robot configuration such that arm is colliding with box
            cur_state = agent.get_state()
            cur_state[12:14] = np.array([-1, 0])
            cur_state[18:28] = np.zeros(10)
            cur_state[19] = 1.45
            cur_state[20] = -0.7 # 1.75
            agent.set_state(cur_state)
    
            # get pointcloud from this position
            robot_link_ids = agent.get_link_ids()
            cam_infos = [level_config['render']['cameras'][1]] # only keep robot cameras, no world camera
            cameras = []
            for cam_info in cam_infos:
                    if 'sub_cameras' in cam_info:
                            sub_cameras = [_load_camera(sub_cam_info, agent, scene) for sub_cam_info in cam_info['sub_cameras']]
                            combined_camera = CombinedCamera(cam_info['name'], sub_cameras)
                            cameras.append(combined_camera)
                    else:
                            # only combined sub-cameras are used in this repro
                            assert False
            seg='actor'
            obs = {'agent': agent.get_state(with_controller_state=False), 'pointcloud': render(mode='pointcloud', camera_names=['robot'], seg=seg, scene=scene, cameras=cameras)}
            obs = post_processing(obs, obs_mode='pointcloud', robot_link_ids=robot_link_ids)
            np.save('pointcloud_test.npy', obs['pointcloud'])
    
            # adjust pointcloud to be centered around robot
            obs['pointcloud']['xyz'][:, 0] += 1
    
            '''
            # collision detection
            import mplib
            link_names = ['panda_link0', 'panda_link1', 'panda_link2', 'panda_link3', 'panda_link4', 'panda_link5', 'panda_link6', 'panda_link7', 'panda_link8', 'panda_hand', 'panda_leftfinger', 'panda_rightfinger']
            joint_names = ['panda_joint1', 'panda_joint2', 'panda_joint3', 'panda_joint4', 'panda_joint5', 'panda_joint6', 'panda_joint7', 'panda_finger_joint1', 'panda_finger_joint2']
            planner = mplib.Planner(
                    urdf="./assets/robot/panda/panda.urdf",
                    srdf="./assets/robot/panda/panda.srdf",
                    user_link_names=link_names,
                    user_joint_names=joint_names,
                    move_group="panda_hand",
                    joint_vel_limits=np.ones(7),
                    joint_acc_limits=np.ones(7))
            planner.update_point_cloud(obs['pointcloud']['xyz'])
            path = planner.plan(goal_pose=np.zeros(9),  # note: mplib goal poses are 7-dim [x, y, z, qw, qx, qy, qz]
                                current_qpos=agent.get_state(by_dict=True)['qpos'][1:],
                                use_point_cloud=True)
            '''
    
            while not viewer.closed:  # Press key q to quit
                viewer.render()
    
    
    if __name__ == '__main__':
            main()
    

    With the collision detection portion at the end commented out, we can see in the viewer that the robot is clearly in collision with the box. However, when we uncomment that part, self.planning_world.collide_full() still comes back empty -- this is not the expected behavior, is it?
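
    For reference, a much smaller repro that skips the simulator entirely might look like the sketch below. The asset paths and the joint configuration are made up, set_qpos_all is assumed from the Planner source linked above, and the point cloud is expressed directly in the robot's base frame (mplib knows nothing about the simulator's world frame):

    import numpy as np
    import mplib

    planner = mplib.Planner(
        urdf="./assets/robot/panda/panda.urdf",
        srdf="./assets/robot/panda/panda.srdf",
        move_group="panda_hand",
        joint_vel_limits=np.ones(7),
        joint_acc_limits=np.ones(7))

    # Synthetic cloud of a 0.3 m box half a meter in front of the robot, base frame.
    pts = np.random.uniform(-0.15, 0.15, size=(2000, 3)) + np.array([0.5, 0.0, 0.15])
    planner.update_point_cloud(pts)

    # A made-up configuration reaching toward the box.
    qpos = np.array([0.0, 0.8, 0.0, -1.0, 0.0, 1.8, 0.0])
    planner.planning_world.set_qpos_all(qpos)  # assumed internal API, as used in planner.py
    print(planner.planning_world.collide_full())  # expected: non-empty list of collision pairs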

    For completeness, box.urdf:

    <?xml version="1.0" ?>
    <robot name="box">
            <link name="link_box">
                    <visual name="box">
                            <origin xyz="0 0 0"/>
                            <geometry>
                                    <mesh filename="box.obj"/>
                            </geometry>
                    </visual>
                    <collision>
                            <origin xyz="0 0 0"/>
                            <geometry>
                                    <mesh filename="box.obj"/>
                            </geometry>
                    </collision>
            </link>
    </robot>
    

    and box.obj:

    o Mesh
    v -0.15 -0.15 -0.25
    v 0.15 -0.15 -0.25
    v 0.15 0.15 -0.25
    v -0.15 0.15 -0.25
    v -0.15 -0.15 0.5
    v 0.15 -0.15 0.5
    v 0.15 0.15 0.5
    v -0.15 0.15 0.5
    vn 0 -1 0.5
    vn 0 1 0.5
    vn -1 0 0.5
    vn 1 0 0
    vn 0 0 1
    vn 0 0 -1
    vt 0 0
    vt 1 0
    vt 0 1
    vt 1 1
    f 1/1/1 2/2/1 6/4/1
    f 1/1/1 6/4/1 5/3/1
    f 3/1/2 4/2/2 8/4/2
    f 3/1/2 8/4/2 7/3/2
    f 4/1/3 1/2/3 5/4/3
    f 4/1/3 5/4/3 8/3/3
    f 2/1/4 3/2/4 7/4/4
    f 2/1/4 7/4/4 6/3/4
    f 5/1/5 6/2/5 7/4/5
    f 5/1/5 7/4/5 8/3/5
    f 4/1/6 3/2/6 2/4/6
    f 4/1/6 2/4/6 1/3/6
    

    Apologies for the incredibly long post.

    opened by arjung128 1
  • Refactor mplib structure and add development utilities

    Improvement

    • Add Dockerfile
    • Add dev scripts under dev/
    • Add basic unittest (a sketch of such a test follows the Changes list)
    • Add docstring for Planner

    Changes

    • Simplify CMakeLists
    • Rename _mplib to mplib.pymp
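
    A basic test along the lines of the third item might look like this sketch (asset paths and the smoke-test assertion are assumptions, not the PR's actual test):

    import unittest
    import numpy as np
    import mplib

    class TestPlanner(unittest.TestCase):
        def test_plan_smoke(self):
            # Build a planner for the bundled Panda assets (paths assumed).
            planner = mplib.Planner(
                urdf="./assets/robot/panda/panda.urdf",
                srdf="./assets/robot/panda/panda.srdf",
                move_group="panda_hand",
                joint_vel_limits=np.ones(7),
                joint_acc_limits=np.ones(7))
            goal_pose = np.array([0.4, 0.0, 0.3, 0.0, 1.0, 0.0, 0.0])
            result = planner.plan(goal_pose=goal_pose, current_qpos=np.zeros(9))
            self.assertEqual(result["status"], "Success")

    if __name__ == "__main__":
        unittest.main()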
    opened by Jiayuan-Gu 0
  • improve searching for .convex.stl

    Maybe you could check whether the file already ends with .convex.stl, and also check if the convex file exists?

    https://github.com/haosulab/MPlib/blob/main/src/fcl_model.cpp#L32
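
    A sketch of the suggested lookup, written in Python for consistency with the rest of this page even though the linked code is C++ (the append-suffix convention is inferred from the issue):

    from pathlib import Path

    def resolve_convex_mesh(mesh_path: str) -> str:
        # If the caller already passed a convex hull, don't append the suffix again.
        if mesh_path.endswith(".convex.stl"):
            return mesh_path
        # Prefer a precomputed convex hull, but only if it actually exists;
        # otherwise fall back to the original mesh.
        candidate = mesh_path + ".convex.stl"
        return candidate if Path(candidate).exists() else mesh_path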

    opened by fbxiang 0