A Multi-sensor Fusion Odometry via Smoothing and Mapping.

Overview

LVIO-SAM

A multi-sensor fusion odometry, LVIO-SAM, which fuses LiDAR, stereo camera, and inertial measurement unit (IMU) via smoothing and mapping.

  • The code is still being integrated. We will release it in the future.

Demo Youtube · Demo Bilibili · Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Simulation environment
  3. How to run
  4. Roadmap
  5. Contributing
  6. License
  7. Contact
  8. Acknowledgements

About The Project

   This project provides a multi-sensor fusion odometry, LVIO-SAM, which fuses LiDAR, stereo camera, and inertial measurement unit (IMU) via smoothing and mapping.

!!!Important Notes!!!

   The code is still being integrated; we will release it in the future. In the meantime, you can try the simulation environment and the Docker demo below.

Simulation environment

   We modify the Gazebo world proposed here and add our own sensors to test the proposed method. We use the Husky as the base robot and modify its URDF. The robot is equipped with a Velodyne VLP-16 LiDAR, a stereo camera (640x480), and an IMU (50 Hz).
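
After the simulation is launched (steps below), you can quickly confirm that the sensors are publishing by checking topic rates. As a hint, /points_raw and /imu/data are the LiDAR and IMU topic names that appear later in this README; the stereo image topic names depend on the camera plugin configuration.

rostopic hz /points_raw
rostopic hz /imu/data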

Download the CMU campus model to sim_env/husky_gazebo/mesh/

cd YOUR_WORK_PATH/LVIO-SAM/sim_env/husky_gazebo/mesh/
unzip autonomus_exploration_environments.zip

Copy the campus model to ~/.gazebo/models/.

cd autonomus_exploration_environments/
cp -r campus ~/.gazebo/models/

You can launch Gazebo and look for the campus model to check that it is OK.
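
A quick sanity check from the shell (assuming the copy above succeeded) is to list the installed model files:

ls ~/.gazebo/models/campus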

git clone https://github.com/TurtleZhong/LVIO-SAM.git

cd YOUR_PATH/LVIO-SAM
catkin build -DCMAKE_BUILD_TYPE=Release
source devel/setup.bash
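
If the catkin build command is not found, it comes from the catkin_tools package, which is not installed with ROS by default (a common setup step; the README does not list prerequisites):

sudo apt-get install python-catkin-tools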

roslaunch husky_gazebo husky_campus.launch

It will take a few minutes to load the world. Please start a new terminal and launch the Husky and sensor model.

roslaunch husky_gazebo spawn_husky.launch

If everything is OK, you will get this:

[Logo]

If you want to control the robot, you can use the keyboard (i, j, k, l, etc.):

rosrun teleop_twist_keyboard teleop_twist_keyboard.py
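
teleop_twist_keyboard publishes geometry_msgs/Twist on the cmd_vel topic. If the Husky controller in this setup listens on a different topic, you can remap it; the target topic below is only an example assumption, not verified against this URDF:

rosrun teleop_twist_keyboard teleop_twist_keyboard.py cmd_vel:=/husky_velocity_controller/cmd_vel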

How to run in Docker

   Since our code is still being integrated, we will release it in the future. In the meantime, we provide a Docker environment for users, so Docker should be correctly installed.

  Step 1. Prepare Datasets

  1. KITTI datasets
wget https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_30_drive_0027/2011_09_30_drive_0027_sync.zip
wget https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_30_drive_0027/2011_09_30_drive_0027_extract.zip
wget https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_30_calib.zip
unzip 2011_09_30_drive_0027_sync.zip
unzip 2011_09_30_drive_0027_extract.zip
unzip 2011_09_30_calib.zip
python kitti2bag.py -t 2011_09_30 -r 0027 raw_synced .

That's it. You have a bag that contains your data.
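
The kitti2bag script used above is a third-party converter and is not part of this repository; one way to obtain it (an assumption, since the README does not pin a source or version) is from PyPI:

pip install kitti2bag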

$ rosbag info kitti_2011_09_30_drive_0027_synced.bag
path:        kitti_2011_09_30_drive_0027_synced.bag
version:     2.0
duration:    1:55s (115s)
start:       Sep 30 2011 12:40:25.07 (1317357625.07)
end:         Sep 30 2011 12:42:20.41 (1317357740.41)
size:        6.0 GB
messages:    35278
compression: none [4435/4435 chunks]
types:       geometry_msgs/TwistStamped [98d34b0043a2093cf9d9345ab6eef12e]
             sensor_msgs/CameraInfo     [c9a58c1b0b154e0e6da7578cb991d214]
             sensor_msgs/Image          [060021388200f6f0f447d0fcd9c64743]
             sensor_msgs/Imu            [6a62c6daae103f4ff57a132d6f95cec2]
             sensor_msgs/NavSatFix      [2d3a8cd499b9b4a0249fb98fd05cfa48]
             sensor_msgs/PointCloud2    [1158d486dd51d683ce2f1be655c3c181]
topics:      /gps/fix                                 1106 msgs    : sensor_msgs/NavSatFix     
             /gps/vel                                 1106 msgs    : geometry_msgs/TwistStamped
             /imu_correct                            11556 msgs    : sensor_msgs/Imu           
             /imu_raw                                11556 msgs    : sensor_msgs/Imu           
             /kitti/camera_color_left/camera_info     1106 msgs    : sensor_msgs/CameraInfo    
             /kitti/camera_color_left/image_raw       1106 msgs    : sensor_msgs/Image         
             /kitti/camera_color_right/camera_info    1106 msgs    : sensor_msgs/CameraInfo    
             /kitti/camera_color_right/image_raw      1106 msgs    : sensor_msgs/Image         
             /kitti/camera_gray_left/camera_info      1106 msgs    : sensor_msgs/CameraInfo    
             /kitti/camera_gray_left/image_raw        1106 msgs    : sensor_msgs/Image         
             /kitti/camera_gray_right/camera_info     1106 msgs    : sensor_msgs/CameraInfo    
             /kitti/camera_gray_right/image_raw       1106 msgs    : sensor_msgs/Image         
             /points_raw                              1106 msgs    : sensor_msgs/PointCloud2

Other source files can be found on the KITTI raw data page.

  2. sim_env datasets

You can record datasets from our simulation environment or download the sample dataset from the BaiduYun Link; the extraction code is f8to.
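
To record your own dataset, you can run rosbag record while the simulation is up. /points_raw and /imu/data are the LiDAR and IMU topics used elsewhere in this README; add the stereo image topics of your camera plugin as needed. A minimal sketch:

rosbag record -O sim_dataset.bag /points_raw /imu/data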

  Get the Docker image and create your own datasets.

docker pull xinliangzhong/ubuntu-18.04-novnc-lvio-sam:v1

Use docker images to check that the image is OK.

docker run -it --rm -p 8080:80 xinliangzhong/ubuntu-18.04-novnc-lvio-sam:v1
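
If you want bags on your host to be visible inside the container, you can additionally mount a volume when starting it (a standard Docker option; the /root/data path is just an example):

docker run -it --rm -p 8080:80 -v $HOME/data:/root/data xinliangzhong/ubuntu-18.04-novnc-lvio-sam:v1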

Then open the Chrome browser and go to http://127.0.0.1:8080/.

Open three terminals and run:

cd /root
source .bashrc
cd work/ws_lvio/
source devel/setup.bash

roslaunch husky_gazebo husky_campus.launch

It will take a few minutes to load the world. Please start a new terminal and launch the Husky and sensor model.

roslaunch husky_gazebo spawn_husky.launch

  

roslaunch husky_viz view_robot.launch

If everything is OK, you will see this in your Chrome browser:

[Logo]

Run LVIO-SAM in Docker

Follow the steps above to get the Docker image, and open it in the browser:

[Logo]

cd /root
source .bashrc
cd work/ws_lvio/
source devel/setup.bash

roslaunch lvio_sam run_kitti_debug_test_vo_between_factor.launch #for kitti dataset.
roslaunch lvio_sam run_kitti_debug_test_vo_between_factor.launch #for sim dataset.

We prepared two sample bags in the Docker image; you can use them directly.

rosbag play kitti_2011_09_30_drive_0027_synced.bag --pause --clock #for kitti dataset.
rosbag play 2021-08-04-09-49-56.bag --pause --clock #for sim dataset.
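
Because the bags are played with --clock, any node that should follow bag time needs the use_sim_time parameter; if the launch files above do not already set it (we have not verified this), you can set it before launching the nodes:

rosparam set use_sim_time true

Note that --pause starts playback paused; press the space key in the rosbag terminal to begin.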

If everything is OK, you will see this in your Chrome browser:

[Logo]

Roadmap

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License.

Contact

Xinliang Zhong - @zxl - [email protected]

Project Link: https://github.com/TurtleZhong/LVIO-SAM

Acknowledgements

Comments
  • Run with a slam system

    Hi @TurtleZhong,
    Thanks for sharing. I'm trying to run the Gazebo environment with LIO-SAM. When the robot is stationary at the origin, LIO-SAM begins to drift along the z axis. I am not familiar with Gazebo, so I am not sure I set the params correctly. Looking forward to your suggestions, and thanks in advance. I modified these params:

    • Comment out joystick teleop.launch in spawn_husky.launch.
    • Disable enable_ekf in husky_control/launch/control.launch.
    • Configure the VLP-16 params in husky.urdf.xacro: <VLP-16 parent="base_link" name="velodyne" topic="/points_raw" hz="10" samples="1800" min_range="0.1" gpu="false" organize_cloud="true">.
    • Look up the extrinsic between the laser and the IMU; /imu/data is in the base_link frame_id:

    ➜ [/home/jxl] rosrun tf tf_echo velodyne base_link
    At time 0.000
    - Translation: [-0.081, 0.000, -0.913]
    - Rotation: in Quaternion [0.000, 0.000, 0.000, 1.000]
                in RPY (radian) [0.000, -0.000, 0.000]
                in RPY (degree) [0.000, -0.000, 0.000]

    • Modify the IMU noise and add yaw noise in husky.urdf.xacro:
        <plugin name="imu_controller" filename="libhector_gazebo_ros_imu.so">
          <robotNamespace>$(arg robot_namespace)</robotNamespace>
          <updateRate>50.0</updateRate>
          <bodyName>base_link</bodyName>
          <topicName>imu/data</topicName>
          <accelDrift>0.005 0.005 0.005</accelDrift>
          <accelGaussianNoise>0.005 0.005 0.005</accelGaussianNoise> <!--default: 0.05 -->
          <!-- <rateDrift>0.005 0.005 0.005 </rateDrift> -->
          <!-- <rateGaussianNoise>0.005 0.005 0.005 </rateGaussianNoise> -->
          <!-- <headingDrift>0.005</headingDrift> -->
          <!-- <headingGaussianNoise>0.005</headingGaussianNoise> -->
          <yawDrift>0.0005 0.0005 0.0005</yawDrift>
          <yawGaussianNoise>0.005 0.005 0.005</yawGaussianNoise> 
          <gaussianNoise>0.005</gaussianNoise>
        </plugin>
    
    • Set the LIO-SAM params; the IMU noise is the default:

    sensor: velodyne
    N_SCAN: 16
    Horizon_SCAN: 1800
    imuAccNoise: 3.9939570888238808e-03
    imuGyrNoise: 1.5636343949698187e-03
    imuAccBiasN: 6.4356659353532566e-05
    imuGyrBiasN: 3.5640318696367613e-05
    imuGravity: 9.80511
    imuRPYWeight: 0.01
    extrinsicTrans: [-0.081, 0.000, -0.913]
    extrinsicRot: [1, 0, 0, 0, 1, 0, 0, 0, 1]
    extrinsicRPY: [1, 0, 0, 0, 1, 0, 0, 0, 1]

    Best regards, narutojxl
