R3LIVE - A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package


News

[Dec 31, 2021] Release of code: Our code is now available in this repo; please kindly follow our instructions to launch our package ^_^. If you meet any bug or problem, please feel free to open an issue and I will respond ASAP.

[Dec 29, 2021] Release of datasets: Our datasets for evaluation can now be accessed from Google drive or Baidu-NetDisk [百度网盘] (extraction code: wwxw). We have released a total of 9 rosbag files for evaluating r3live; an introduction to these datasets can be found on this page.

1. Introduction

R3LIVE is a novel LiDAR-Inertial-Visual sensor fusion framework, which takes advantage of the measurements of LiDAR, inertial, and visual sensors to achieve robust and accurate state estimation. R3LIVE is built upon our previous work R2LIVE and consists of two subsystems: the LiDAR-inertial odometry (LIO) and the visual-inertial odometry (VIO). The LIO subsystem (FAST-LIO) uses the measurements from the LiDAR and inertial sensors to build the geometric structure of the global map (i.e., the positions of the 3D points). The VIO subsystem utilizes the data of the visual-inertial sensors to render the map's texture (i.e., the color of the 3D points).
  Our preprint paper is available here, and our accompanying videos are available on YouTube and on Bilibili (part 1, part 2).


2. What can R3LIVE do?

2.1 Strong robustness in various challenging scenarios

R3LIVE is robust enough to work well in various LiDAR-degenerate scenarios:

It even works in environments that are simultaneously LiDAR-degenerate and visually texture-less (see Experiment-1 of our paper).


2.2 Real-time RGB maps reconstruction

R3LIVE is able to reconstruct precise, dense, 3D, RGB-colored maps of the surrounding environment in real time (watch this video).


2.3 Ready for 3D applications

To make R3LIVE more extensible, we also provide a series of offline utilities for reconstructing and texturing meshes, which further reduce the gap between R3LIVE and various 3D applications (watch this video).


3. Prerequisites

3.1 ROS

Follow this ROS Installation to install ROS and its additional packages:

sudo apt-get install ros-XXX-cv-bridge ros-XXX-tf ros-XXX-message-filters ros-XXX-image-transport ros-XXX-image-transport*

NOTICE: remember to replace "XXX" in the above command with your ROS distribution; for example, if you use ROS Kinetic, the command should be:

sudo apt-get install ros-kinetic-cv-bridge ros-kinetic-tf ros-kinetic-message-filters ros-kinetic-image-transport*
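
If a ROS environment is already sourced in your shell, the $ROS_DISTRO variable holds the distribution name, so (as a convenience sketch, not part of the original instructions) you can let the shell fill it in:

# assumes a ROS setup.bash has been sourced, so $ROS_DISTRO is set
sudo apt-get install ros-$ROS_DISTRO-cv-bridge ros-$ROS_DISTRO-tf \
     ros-$ROS_DISTRO-message-filters ros-$ROS_DISTRO-image-transport*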

3.2 livox_ros_driver

Follow this livox_ros_driver Installation.
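
As a rough sketch of what that installation usually involves (the linked page is authoritative; note that the driver depends on the Livox-SDK, which must be installed first):

# clone the driver into its own catkin workspace and build it
git clone https://github.com/Livox-SDK/livox_ros_driver.git ws_livox/src
cd ws_livox
catkin_make
source ./devel/setup.bash

Also see Section 6 below: for recording your own data, we suggest replacing the official driver with our modified version.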

3.3 CGAL and pcl_viewer (optional)

sudo apt-get install libcgal-dev pcl-tools

3.4 OpenCV >= 3.3

You can use the following command to check your OpenCV version. If it is lower than 3.3 and you meet errors when compiling our code, we recommend you update your OpenCV. Otherwise, skip this step ^_^

pkg-config --modversion opencv

We have successfully tested our algorithm with versions 3.3.1, 3.4.16, 4.2.1, and 4.5.3.
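
On OpenCV 4.x the pkg-config module is usually registered as opencv4 rather than opencv, so if the command above prints nothing, the variants below may help (the second assumes the Python cv2 bindings are installed):

# OpenCV 4.x registers its pkg-config module as "opencv4"
pkg-config --modversion opencv4
# or query the version through the Python bindings, if present
python3 -c "import cv2; print(cv2.__version__)"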

4. Build R3LIVE on ROS

Clone this repository and catkin_make:

cd ~/catkin_ws/src
git clone https://github.com/hku-mars/r3live.git
cd ../
catkin_make
source ~/catkin_ws/devel/setup.bash
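
Optionally, append the source line to your shell startup file so every new terminal picks up the workspace (this assumes the default ~/catkin_ws path used above):

# persist the workspace setup across shell sessions
echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc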

5. Run our examples

5.1 Download our rosbag files (r3live_dataset)

Our datasets for evaluation can be downloaded from our Google drive or Baidu-NetDisk [百度网盘] (extraction code: wwxw). We have released a total of 9 rosbag files for evaluating r3live; an introduction to these datasets can be found on this page.

5.2 Run our examples

After you have downloaded our bag files, you can now run our example ^_^

roslaunch r3live r3live_bag.launch
rosbag play YOUR_DOWNLOADED.bag

If everything is correct, you will get results that match our paper and the results posted on this page.
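
As a concrete example, assuming you downloaded the hkust_campus_seq_00 sequence (any of the 9 released bags works the same way), run in two terminals:

roslaunch r3live r3live_bag.launch        # terminal 1: start r3live
rosbag play hkust_campus_seq_00.bag       # terminal 2: replay the recorded data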

5.3 Save the maps to your disk

R3LIVE allows you to save the maps you build at any time you want: just click on the "Control panel" window and press the 'S' or 's' key.


5.4 Reconstruct and texture your mesh

After you have saved your offline map to disk (by default in the directory ${HOME}/r3live_output), you can launch our utility to reconstruct and texture your mesh.

roslaunch r3live r3live_reconstruct_mesh.launch
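
Once meshing finishes, the outputs should sit next to the saved point cloud. The file names below match the examples in the next subsection; the exact set of files may vary:

ls ${HOME}/r3live_output
# e.g. rgb_pt.pcd  textured_mesh.ply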

5.5 Visualize your saved maps

By default, your offline map (and reconstructed mesh) are saved in the directory ${HOME}/r3live_output; you can open them with pcl_viewer (and MeshLab).

Install pcl_viewer and meshlab:

sudo apt-get install pcl-tools meshlab

Visualizing your offline point cloud maps (with suffix *.pcd):

cd ${HOME}/r3live_output
pcl_viewer rgb_pt.pcd
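
The saved cloud can be fairly dense, so a larger point size often renders more legibly. The -ps flag sets the point size in pcl_viewer (assuming a stock pcl-tools build):

# render with point size 2 for a denser-looking cloud
pcl_viewer -ps 2 rgb_pt.pcd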

Visualizing your reconstructed mesh (with suffix *.ply):

cd ${HOME}/r3live_output
meshlab textured_mesh.ply

6. Sample and run your own data

The LiDAR and IMU data published by the official Livox-ros-driver carry the LiDAR's own timestamps (starting from 0 in each recording), while image timestamps are usually recorded with the operating-system time. To make them work on the same time base, we modified the source code of Livox-ros-driver, which is available at here. We suggest you replace the official driver with it when sampling your own data for R3LIVE.
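
When in doubt, a quick sanity check on your own recording is to confirm that all sensor topics share one time base. The topic names below follow r3live's defaults as an assumption; adjust them to your setup:

rosbag info YOUR_RECORDED.bag                                       # list topics, types, and counts
rostopic echo -b YOUR_RECORDED.bag --noarr /livox/imu | head -n 20  # inspect the header stamps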

Report problems and bugs

We know our package might not be totally stable at this stage, and we keep working on improving the performance and reliability of our code. So, if you meet any bug or problem, please feel free to open an issue and I will respond ASAP.

  When reporting problems and bugs, please attach both your hardware and software environment if possible (as printed by R3LIVE at startup), which will be a great help for me in locating your problem.


Acknowledgments

In the development of R3LIVE, we stand on the shoulders of the following repositories:

  1. R2LIVE: A robust, real-time tightly-coupled multi-sensor fusion package.
  2. FAST-LIO: A computationally efficient and robust LiDAR-inertial odometry package.
  3. ikd-Tree: A state-of-the-art dynamic KD-Tree for 3D kNN search.
  4. LOAM-Livox: A robust LiDAR Odometry and Mapping (LOAM) package for Livox-LiDAR.
  5. openMVS: A library for computer-vision scientists and especially targeted to the Multi-View Stereo reconstruction community.
  6. VCGlib: An open source, portable, header-only Visualization and Computer Graphics Library.
  7. CGAL: A C++ Computational Geometry Algorithms Library.

License

The source code is released under GPLv2 license.

We are still working on improving the performance and reliability of our code. For any technical issues, please contact me via email: Jiarong Lin < ziv.lin.ljrATgmail.com >.

If you use any code of this repo in your academic research, please cite at least one of our papers:

[1] Lin, Jiarong, and Fu Zhang. "R3LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package." 
[2] Xu, Wei, et al. "Fast-lio2: Fast direct lidar-inertial odometry."
[3] Lin, Jiarong, et al. "R2LIVE: A Robust, Real-time, LiDAR-Inertial-Visual tightly-coupled state Estimator and mapping." 
[4] Xu, Wei, and Fu Zhang. "Fast-lio: A fast, robust lidar-inertial odometry package by tightly-coupled iterated kalman filter."
[5] Cai, Yixi, Wei Xu, and Fu Zhang. "ikd-Tree: An Incremental KD Tree for Robotic Applications."
[6] Lin, Jiarong, and Fu Zhang. "Loam-livox: A fast, robust, high-precision LiDAR odometry and mapping package for LiDARs of small FoV."

For commercial use, please contact me < ziv.lin.ljrATgmail.com > and Dr. Fu Zhang < fuzhangAThku.hk >.

Comments
  • Transformation matrix


    Hello,

    Thanks for sharing the code. In the config YAML file, there is an extrinsic between the camera and the LiDAR; I want to know whether the matrix is camera-to-LiDAR or LiDAR-to-camera.

    And is there any reference for fine calibration between camera and lidar?

    Thank you

    opened by aditdoshi333 56
  • Have you encountered this problem during testing?


    ================================================================================
    REQUIRED process [r3live_mapping-3] has died!
    process has died [pid 7193, exit code -11, cmd /home/crz/SLAM/R3Live/devel/lib/r3live/r3live_mapping __name:=r3live_mapping __log:=/home/crz/.ros/log/b600fbac-6ad9-11ec-b8ae-2cf05d2b732e/r3live_mapping-3.log].
    log file: /home/crz/.ros/log/b600fbac-6ad9-11ec-b8ae-2cf05d2b732e/r3live_mapping-3*.log
    Initiating shutdown!
    ================================================================================

    After compiling, I run r3live_bag.launch with the bag you provided and this error appears. My ROS is Melodic.

    opened by ToutDonner 46
  • Large drift issue!


    Hello @ziv-lin,

    I have an issue while mapping: sometimes I get large drift using r3live, but if I use vanilla FAST-LIO2 there is no visible drift. I was wondering how that is possible, since for LIO r3live also uses FAST-LIO, right? Do I have to make any config changes? In some cases, r3live performs exceptionally well. And I don't think there is any compute limitation, as I am using 64 cores and 512 GB of RAM.

    Thank you

    opened by aditdoshi333 23
  • Support for external IMU and Fisheye cameras


    Hi

    First I'd like to thank you for the source code release, it appears to be running well with the provided datasets. I'm now trying to run r3live using a Livox MID-70 and Realsense T265 (and its internal IMU).

    Could you please let me know if the following is possible:

    • Custom IMU->LiDAR extrinsics. Due to packaging constraints, my LiDAR is not mounted aligned with the IMU.
    • Fisheye correction on input images from the T265.

    Also, is it possible to use this in pure localization mode - i.e. disable RGB map generation for real-time operation?

    Many thanks!

    opened by tlin442 18
  • r3live runs on arm architecture


    hi, I want to run r3live on a Jetson AGX Xavier, but I get errors: 'cpuid.h not found' and 'c++: unrecognized command line option -msse2 -msse3'. How can I solve this? Or does it currently not support running on ARM?

    opened by tgj891 15
  • lidar update


    Happy New Year! The update formula in the paper is shown in the attached screenshot, but in the code there is: solution = K * ( meas_vec - Hsub * vec.block< 6, 1 >( 0, 0 ) ); Does this approximate the paper's Jk as the identity matrix? Also, the I in the paper's (I - KH) does not seem to have a counterpart in the code. What am I misunderstanding? Thanks!

    opened by dongfangzhou1108 14
  • The height error in mapping


    Hello @ziv-lin ,

    Thanks for sharing your great work. The accuracy and detail of R3LIVE's real-time mapping are impressive, but there are still some problems. I built R3LIVE according to the instructions and ran "hkust_campus_seq_00.bag"; the results I got are not consistent with the PCD file published on GitHub. In the screenshots attached to this issue, the upper picture is the result I got and the lower picture is the PCD result published on GitHub. There is an obvious height error in my test. The height-error problem also appears in my tests of FAST-LIO2. Did I miss something?


    Best regards

    opened by OliverShaoPT 14
  • Reconstruct and texture mesh failed!


    Hi, Lin. It's really a great job, especially the reconstruction module. However, I ran into some weird problems.

    =============================================================
    App name   : R3LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package
    Build date : Jan 7 2022 14:23:18
    CPU infos  : 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz
    RAM infos  : 31.21GB Physical Memory 24.81GB Virtual Memory
    OS infos   : Linux 5.4.0-92-generic (x86_64)
    =============================================================
    +++++++++++++++++++++++++++++++++++++++++++++++
    Here is the your software environments:
    GCC version    : 8.4.0
    Boost version  : 1.65.1
    Eigen version  : 3.3.4
    OpenCV version : 3.4.16
    +++++++++++++++++++++++++++++++++++++++++++++++

    I can't reconstruct and texture the mesh when I use hkust_campus_seq_01.bag, and sometimes I get the error: catkin_r3live_ws/src/r3live/r3live/src/loam/IMU_Processing.cpp, 132, check_state fail !!!! -8.28157 -10.0624 -6.91589

    It seems that there is only the RGB_map_0 point cloud.

    After saving the map, the point cloud is very sparse.


    The final result (including rgb_pt.pcd) is shown in the screenshots attached to this issue.

    opened by Leonard-Yao 11
  • IMU path drift problem


    Hello! When I run r3live with the device stationary, it generates a correct point cloud without problems, but as soon as I move the device slightly, a large path drift appears. As shown in the attached screenshot, after only a tiny movement the path displayed by r3live starts jumping around. How can I solve this? The LiDAR is a Livox Mid-40 without a built-in IMU, the camera is an Azure Kinect DK, and I use the Kinect's IMU.

    opened by fl840249479 10
  • Support of spinning LIDARs?

    Support of spinning LIDARs?

    Great paper and videos 😍 Can't wait to try out the pipeline. Especially to build textured meshes, loving it!

    I was wondering if R3LIVE will have support for systems with a camera, an IMU, and spinning LiDARs (Ouster, Velodyne). I didn't find any clue about that in the paper...

    The only clue I found was this R2LIVE thread, where the author said that spinning LiDARs would be supported in the next iteration of the pipeline, which I assume is R3LIVE...

    So, my questions are:

    • does R3LIVE support spinning LIDARs?
    • if not, is it because the architecture of the system doesn't allow it, or because it is just not in your plans?

    Once again, thank you so much for the contribution, great job 👏

    opened by jobesu14 9
  • livox lidar data type error


    Hi, I use my Livox Mid-70 and a RealSense D435 to run r3live, but the LiDAR data type is always mismatched: when I use sensor_msgs/PointCloud2, it tells me I should use the Livox custom message, and when I change the type, it tells me it needs sensor_msgs. The error logs look like this: [ERROR] [1646964020.047520342]: Client [/r3live_LiDAR_front_end] wants topic /livox/lidar to have datatype/md5sum [livox_ros_driver/CustomMsg/e4d6829bdfe657cb6c21a746c86b21a6], but our version has [sensor_msgs/PointCloud2/1158d486dd51d683ce2f1be655c3c181]. Dropping connection.

    [ERROR] [1646963951.610199012]: Client [/r3live_mapping] wants topic /livox/lidar to have datatype/md5sum [sensor_msgs/PointCloud2/1158d486dd51d683ce2f1be655c3c181], but our version has [livox_ros_driver/CustomMsg/e4d6829bdfe657cb6c21a746c86b21a6]. Dropping connection.

    My system is Ubuntu 20.04, and the livox_driver is the r2live version. Here is the config file: config, and here is the launch file: r3launch

    opened by gongyue666 8