LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping

Overview

This repository contains code for a lidar-visual-inertial odometry and mapping system, which combines the advantages of LIO-SAM and VINS-Mono at a system level.



Dependency

  • ROS (Tested with Kinetic and Melodic)
  • gtsam (Georgia Tech Smoothing and Mapping library)
    wget -O ~/Downloads/gtsam.zip https://github.com/borglab/gtsam/archive/4.0.2.zip
    cd ~/Downloads/ && unzip gtsam.zip -d ~/Downloads/
    cd ~/Downloads/gtsam-4.0.2/
    mkdir build && cd build
    cmake -DGTSAM_BUILD_WITH_MARCH_NATIVE=OFF ..
    sudo make install -j4
    
  • Ceres (C++ library for modeling and solving large, complicated optimization problems)
    sudo apt-get install -y libgoogle-glog-dev
    sudo apt-get install -y libatlas-base-dev
    wget -O ~/Downloads/ceres.zip https://github.com/ceres-solver/ceres-solver/archive/1.14.0.zip
    cd ~/Downloads/ && unzip ceres.zip -d ~/Downloads/
    cd ~/Downloads/ceres-solver-1.14.0
    mkdir ceres-bin && cd ceres-bin
    cmake ..
    sudo make install -j4
    

Compile

You can use the following commands to download and compile the package.

cd ~/catkin_ws/src
git clone https://github.com/TixiaoShan/LVI-SAM.git
cd ..
catkin_make

Datasets


The datasets used in the paper can be downloaded from Google Drive. The data-gathering sensor suite includes: Velodyne VLP-16 lidar, FLIR BFS-U3-04S2M-CS camera, MicroStrain 3DM-GX5-25 IMU, and Reach RS+ GPS.

https://drive.google.com/drive/folders/1q2NZnsgNmezFemoxhHnrDnp1JV_bqrgV?usp=sharing

Note that the images in the provided bag files are compressed, so a decompression command is added at the last line of launch/module_sam.launch. If your own bag records raw image data, please comment that line out.
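For reference, decompression in a launch file is typically done with image_transport's republish node. The snippet below is a hedged sketch; the topic and node names are placeholders, not necessarily the ones used in module_sam.launch:

```xml
<!-- Republish the compressed image stream as raw images.
     Topic names are placeholders; check launch/module_sam.launch for the real ones. -->
<node pkg="image_transport" type="republish" name="image_republish"
      args="compressed in:=/camera/image_raw raw out:=/camera/image_raw" />
```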



Run the package

  1. Configure parameters:
     Configure sensor parameters in the .yaml files in the `config` folder.
  2. Run the launch file:
     roslaunch lvi_sam run.launch
  3. Play existing bag files:
     rosbag play handheld.bag

Paper

If you use any of this code or the datasets, please cite our paper:

@inproceedings{lvisam2021shan,
  title={LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping},
  author={Shan, Tixiao and Englot, Brendan and Ratti, Carlo and Rus, Daniela},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  pages={to-be-added},
  year={2021},
  organization={IEEE}
}

Acknowledgement

  • The visual-inertial odometry module is adapted from VINS-Mono.
  • The lidar-inertial odometry module is adapted from LIO-SAM.
Comments
  • Some problems about LVI SAM data evaluation

    Hello, I have a few questions, thank you:

    1. I exported the GPS and IMU data from the handheld dataset and found that the starting point and the ending point do not coincide. How should the accuracy of the method be evaluated?

    2. The SLAM path is in a relative frame. How can it be aligned with the ENU frame of the GPS path?

    3. The sampling rates of the GPS and the SLAM path differ, and their time axes are offset. How is the RMSE w.r.t. GPS evaluated?

    opened by liu19980515 16
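The alignment questions above (2 and 3) are usually handled by interpolating the GPS track onto the SLAM timestamps and then solving for the rigid transform that best aligns the two trajectories (Umeyama alignment) before computing the RMSE; evaluation tools such as evo follow the same recipe. Below is a minimal sketch, assuming both trajectories are given as timestamped Nx3 position arrays; the function names are illustrative, not part of LVI-SAM:

```python
import numpy as np

def associate(t_est, p_est, t_gt, p_gt):
    """Resample the GPS/ground-truth track onto the estimate's timestamps."""
    p_ref = np.stack([np.interp(t_est, t_gt, p_gt[:, i]) for i in range(3)], axis=1)
    return p_est, p_ref

def umeyama_align(p, q):
    """Least-squares rotation R and translation t such that R @ p_i + t ~ q_i."""
    mu_p, mu_q = p.mean(axis=0), q.mean(axis=0)
    cov = (q - mu_q).T @ (p - mu_p) / len(p)
    U, _, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    t = mu_q - R @ mu_p
    return R, t

def ate_rmse(p_est, p_ref):
    """Absolute trajectory error (RMSE) after rigid alignment."""
    R, t = umeyama_align(p_est, p_ref)
    err = (p_est @ R.T + t) - p_ref
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))
```

Because the SLAM path is expressed in a local frame, aligning it to the ENU frame this way makes the RMSE independent of the arbitrary starting pose.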
  • About external parameters

    Hello, the data is from the KITTI dataset. I provided the sensor information and the intrinsic and extrinsic parameters in the yaml files as required, but every time I enable lidar depth info in params_camera.yaml, the VIO part crashes with the following error. Is this a problem with the extrinsic parameter settings?

    opened by winnnda 7
  • Does LVI-SAM really improve performance?

    After reading your paper and code, thank you very much for your work. But I have the following questions:

    1. The laser odometry only uses the result of the visual odometry as the initial optimization value. This should be loose coupling, not tight coupling.
    2. Compared with LIO-SAM, LVI-SAM only uses the visual-odometry result as the initial value for scan matching and does not otherwise improve the laser odometry. Can LVI-SAM's performance really be significantly improved?
    3. The paper performs factor-graph optimization over the laser-odometry, visual-odometry, and IMU-preintegration results, but the code only uses the laser odometry and the IMU. Since the laser odometry is more accurate than the visual odometry, wouldn't adding the visual-odometry result to the factor graph reduce the optimization accuracy?
    opened by robosu12 7
  • Process has died

    Hello, when I run LVI-SAM on a Jetson NX development board, catkin_make completes successfully, but when I try roslaunch I encounter the following node-death problem:

    [ INFO] [1636441314.154527795]: ----> Visual Odometry Estimator Started.
    [ INFO] [1636441314.487589783]: ----> Visual Loop Detection Started.
    [ INFO] [1636441314.585487422]: ----> Visual Feature Tracker Started.
    [lvi_sam_visual_odometry-9] process has died [pid 13242, exit code -11, cmd /home/nvidia/catkin_sam/devel/lib/lvi_sam/lvi_sam_visual_odometry __name:=lvi_sam_visual_odometry __log:=/home/nvidia/.ros/log/e982fd3c-412a-11ec-ad02-48b02d3da899/lvi_sam_visual_odometry-9.log]. log file: /home/nvidia/.ros/log/e982fd3c-412a-11ec-ad02-48b02d3da899/lvi_sam_visual_odometry-9*.log
    [lvi_sam_visual_odometry-9] restarting process
    process[lvi_sam_visual_odometry-9]: started with pid [13528]
    [ INFO] [1636441314.871099244]: ----> Visual Odometry Estimator Started.
    [ INFO] [1636441314.993614341]: ----> Lidar IMU Preintegration Started.
    [lvi_sam_visual_loop-10] process has died [pid 13249, exit code -11, cmd /home/nvidia/catkin_sam/devel/lib/lvi_sam/lvi_sam_visual_loop __name:=lvi_sam_visual_loop __log:=/home/nvidia/.ros/log/e982fd3c-412a-11ec-ad02-48b02d3da899/lvi_sam_visual_loop-10.log]. log file: /home/nvidia/.ros/log/e982fd3c-412a-11ec-ad02-48b02d3da899/lvi_sam_visual_loop-10*.log
    [lvi_sam_visual_loop-10] restarting process
    [ INFO] [1636441315.091523468]: ----> Lidar Cloud Deskew Started.
    process[lvi_sam_visual_loop-10]: started with pid [13703]
    [ INFO] [1636441315.169702089]: ----> Lidar Feature Extraction Started.
    [lvi_sam_visual_feature-8] process has died [pid 13231, exit code -11, cmd /home/nvidia/catkin_sam/devel/lib/lvi_sam/lvi_sam_visual_feature __name:=lvi_sam_visual_feature __log:=/home/nvidia/.ros/log/e982fd3c-412a-11ec-ad02-48b02d3da899/lvi_sam_visual_feature-8.log]. log file: /home/nvidia/.ros/log/e982fd3c-412a-11ec-ad02-48b02d3da899/lvi_sam_visual_feature-8*.log

    My environment is: Ubuntu 18.04, ROS Melodic, gtsam 4.0.2, ceres 1.14.0, OpenCV 4.1.1, PCL 1.8.

    How can I solve the problem?

    opened by Stephen1e 6
  • large velocity or bias, reset IMU-preintegration!

    There is a significant drop in the z-axis after around 2 minutes of running, after which LVI-SAM becomes totally unstable. Has anybody else come across this issue?

    opened by SnowCarter 5
  • Question about "q_lidar_to_cam q_lidar_to_cam_eigen"

    Hi, thanks for your great work. The code works well with your data, but the VINS program does not seem to work well with mine. I am confused about what "q_lidar_to_cam q_lidar_to_cam_eigen" means in the code. Hoping for your reply, thanks again. @TixiaoShan

    opened by HeXu1 5
  • malloc(): memory corruption

    malloc(): memory corruption
    [lvi_sam_mapOptmization-6] process has died [pid 13598, exit code -6, cmd /home/mwy/lvisam/devel/lib/lvi_sam/lvi_sam_mapOptmization __name:=lvi_sam_mapOptmization __log:=/home/mwy/.ros/log/f460d3d2-9099-11ec-a978-9061ae86e6b5/lvi_sam_mapOptmization-6.log]. log file: /home/mwy/.ros/log/f460d3d2-9099-11ec-a978-9061ae86e6b5/lvi_sam_mapOptmization-6*.log

    How can I solve this problem? I installed gtsam following your instructions, and I am sure I used cmake -DGTSAM_BUILD_WITH_MARCH_NATIVE=OFF ..
    My PCL version is 1.8.

    opened by DavidNY123 4
  • Is deskewing correct?

      void findPosition(double relTime, float *posXCur, float *posYCur, float *posZCur)
      {
          *posXCur = 0; *posYCur = 0; *posZCur = 0;
    
          // if (cloudInfo.odomAvailable == false || odomDeskewFlag == false)
          //     return;
    
          // float ratio = relTime / (timeScanNext - timeScanCur);
    
          // *posXCur = ratio * odomIncreX;
          // *posYCur = ratio * odomIncreY;
          // *posZCur = ratio * odomIncreZ;
      }
    

    As far as I understand, this results in the point cloud being deskewed in rotation only. Is this correct?

    opened by juliangaal 4
  • Can't reproduce result with handheld.bag

    Thank you for sharing this awesome work.

    I can't reproduce the result with the shared dataset (https://drive.google.com/drive/folders/1q2NZnsgNmezFemoxhHnrDnp1JV_bqrgV?usp=sharing).

    It works fine at the beginning; however, at several points the warning messages below are displayed and the trajectory drifts. I also checked the library versions (gtsam-4.0.2, ceres-solver-1.14.0).

    Could PC specs have an impact? I installed Linux on a MacBook Pro A1707. Processor: 2.6 GHz Intel Core i7 (i7-6700HQ), RAM: 16 GB, GPU: Radeon Pro 450.

    Warning messages: "Large bias, reset IMU-preintegration!" "Large velocity, reset IMU-preintegration!"

    opened by smwgf 4
  • Test bag(more than 10G) couldn't be downloaded from the google drive

    Hello @TixiaoShan, thanks for your great work! I couldn't download the test data from Google Drive because it is larger than 10 GB. Could you consider uploading your data to Baidu Netdisk as well? That would be convenient for people in China.

    opened by wwtinwhu 4
  • Experiment on NTU VIRAL datasets

    Hi Tixiao,

    I am trying to run some experiments with LVI-SAM on the NTU VIRAL public dataset (download page: https://ntu-aris.github.io/ntu_viral_dataset/), so that I can include LVI-SAM in the list of applicable methods on the NTU VIRAL website.

    However, LVI-SAM diverges quickly after a few minutes. This does not happen if the visual nodes are disabled. Could you please take a look at the configurations and suggest the best ones?

    The forked repository can be found here: https://github.com/brytsknguyen/LVI-SAM

    Working examples of LIO SAM and VINs-Mono on NTU VIRAL datasets can be found here: https://github.com/brytsknguyen/LIO-SAM https://github.com/brytsknguyen/VINS-Mono

    opened by brytsknguyen 3
  • Moving Objects in SLAM Output

    Hi there!

    Does anyone have any thoughts or suggestions on how / whether to use LVI-SAM in more dynamic environments (lots of moving objects including urban environments or indoor scenes)?

    I saw previously that with LIO-SAM you can utilize Scan Context + Removert, and there was a recent paper, RF-LIO, that focused on object removal. However, I would like to utilize the additional VIO data for better map quality while also being able to filter out moving objects (ideally I would also get a track for each moving object, or at least the associated points).

    Any thoughts on this?

    opened by bfan1256 1
  • Add the save trajectory function

    @TixiaoShan
    Hi, thanks for your great work.
    I added some functions to save the trajectory to a txt file.
    The file is in TUM format, so it can be visualized directly using the evo tool.

    Below is a visualization of the results of the M2DGR dataset using evo tool.


    I hope this PR is helpful.

    Thanks,

    opened by Taeyoung96 4
  • difference between code and article

    The related articles state that "The constraints from visual odometry, lidar odometry, IMU preintegration, and loop closure are optimized jointly in the factor graph," but it seems that the visual and inertial constraints are not included in the SAM optimization in mapOptimization.cpp.

    opened by wjf1997 1
  • calibration of fisheye and Lidar

    Hi! Thanks for your excellent work. I am running your algorithm, but I do not know how to calibrate the fisheye camera and the lidar. Could you please tell me how you performed the calibration and which algorithm you used?

    Many thanks! :)

    opened by kakghiroshi 3
  • Question about Lidar-Inertial System Fail detection

    @TixiaoShan
    Thanks for your great work!
    In the LVI-SAM paper, the algorithm includes LIS (lidar-inertial system) failure detection.

    However, maybe because I'm a newbie, I couldn't find it in the code. Which part of the code implements the LIS failure detection?

    Thanks,

    opened by Taeyoung96 2