LIO-Livox (A Robust LiDAR-Inertial Odometry for Livox LiDAR)

This repository implements a robust LiDAR-inertial odometry system for Livox LiDAR. The system uses only a single Livox LiDAR with its built-in IMU. It has a robust initialization module that is independent of the sensor motion: it can be initialized from a static state, a dynamic state, or a mixture of the two. The system is highly robust: it can pass through a 4 km tunnel and run on the highway at high speed (about 80 km/h) using a single Livox Horizon. Moreover, it is also robust to dynamic objects such as cars, bicycles, and pedestrians, and it maintains high localization precision even in traffic jams. The mapping result remains accurate even when most of the FOV is occluded by vehicles. Demonstration videos can be found on YouTube and Bilibili. NOTE: Images are used for demonstration only; they are not used by the system.

Developer: Livox

System architecture

The system consists of two ROS nodes: ScanRegistration and PoseEstimation.

  • The class "LidarFeatureExtractor" of the node "ScanRegistration" extracts corner features, surface features, and irregular features from the raw point cloud.
  • In the node "PoseEstimation", the main thread estimates sensor poses, while another thread in the class "Estimator" uses the class "MapManager" to build and manage the feature maps.

The system is mainly designed for car platforms in large-scale outdoor environments. Users can easily run the system with a Livox Horizon LiDAR.

The system starts with the node "ScanRegistration", where feature points are extracted. Before feature extraction, dynamic objects are removed from the raw point cloud, since urban scenes usually contain many dynamic objects that affect system robustness and precision. For the dynamic object filter, we use a fast point cloud segmentation method: Euclidean clustering groups points into clusters, and the raw point cloud is divided into ground points, background points, and foreground points. Foreground points are considered dynamic objects and are excluded from the feature extraction process. Thanks to the dynamic object filter, the system remains highly robust in dynamic scenes.
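The filtering step above can be sketched in a few lines. This is a minimal, self-contained illustration (plain height-threshold ground removal plus grid-accelerated Euclidean clustering), not the repository's actual implementation; the function name and all thresholds are hypothetical.

```python
import numpy as np
from collections import deque

def segment_cloud(points, ground_z=-1.5, cluster_radius=0.5):
    """Toy ground/object split. Points below ground_z are treated as ground;
    the rest are grouped by Euclidean clustering over a voxel hash grid.
    Cluster-level heuristics (size, height, position) would then decide
    which clusters are static background and which are dynamic foreground."""
    ground_mask = points[:, 2] < ground_z
    objects = points[~ground_mask]
    # Hash every non-ground point into a voxel grid for fast neighbor lookup.
    grid = {}
    for i, p in enumerate(objects):
        grid.setdefault(tuple((p // cluster_radius).astype(int)), []).append(i)
    labels = np.full(len(objects), -1, dtype=int)
    next_label = 0
    for seed in range(len(objects)):
        if labels[seed] != -1:
            continue
        labels[seed] = next_label
        queue = deque([seed])
        while queue:  # BFS over points within cluster_radius of each other
            i = queue.popleft()
            cx, cy, cz = tuple((objects[i] // cluster_radius).astype(int))
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        for j in grid.get((cx + dx, cy + dy, cz + dz), []):
                            if labels[j] == -1 and np.linalg.norm(objects[i] - objects[j]) < cluster_radius:
                                labels[j] = next_label
                                queue.append(j)
        next_label += 1
    return ground_mask, labels
```

The repository does this in C++ for real-time rates; the sketch only shows the ground/cluster decomposition the text describes.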

In open scenarios, usually only a few features can be extracted, leading to degeneracy on certain degrees of freedom. To tackle this problem, we developed a feature extraction process that makes the distribution of feature points wide and uniform. A wide, uniform distribution provides more constraints on all six degrees of freedom, which helps eliminate degeneracy. Besides, some irregular points also provide information in feature-poor scenes, so we extract irregular features as an additional class for point cloud registration. Feature points are classified into three types (corner features, surface features, and irregular features) according to their local geometric properties. We first extract points with large curvature and isolated points on each scan line as corner points. Then principal component analysis (PCA) is performed to classify surface features and irregular features, as shown in the following figure. For points at different distances, the thresholds are set to different values in order to make the spatial distribution of points as uniform as possible.
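As a rough sketch of the two tests described above (with illustrative names and thresholds, not the repository's code), the scan-line curvature and the PCA-based surface/irregular split might look like this:

```python
import numpy as np

def scanline_curvature(scan, k=5):
    """LOAM-style curvature: for each point on a scan line, the norm of the
    sum of differences to its k neighbors on each side. Large values mark
    corner candidates; isolated points would be handled separately."""
    n = len(scan)
    c = np.zeros(n)
    for i in range(k, n - k):
        diff = scan[i - k:i + k + 1].sum(axis=0) - (2 * k + 1) * scan[i]
        c[i] = np.linalg.norm(diff)
    return c

def classify_neighborhood(neighbors, plane_ratio=0.1):
    """PCA over a local neighborhood: if the smallest eigenvalue of the
    covariance is much smaller than the largest, the patch is planar and
    the point is a surface feature; otherwise it is kept as irregular."""
    centered = neighbors - neighbors.mean(axis=0)
    eigvals = np.sort(np.linalg.eigvalsh(centered.T @ centered / len(neighbors)))
    if eigvals[2] < 1e-12:
        return "irregular"  # degenerate neighborhood, no plane to fit
    return "surface" if eigvals[0] / eigvals[2] < plane_ratio else "irregular"
```

A flat patch yields one near-zero eigenvalue (surface), while a volumetric blob yields three comparable eigenvalues (irregular); the actual system additionally varies its thresholds with point distance, which this sketch omits.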

In the node "PoseEstimation", the motion distortion of the point cloud is compensated using IMU preintegration or a constant velocity model. Then the IMU initialization module runs. If initialization finishes successfully, the system switches to LIO mode; otherwise, it runs in LO mode while initializing the IMU states. In LO mode, we use frame-to-model point cloud registration to estimate the sensor pose. Inspired by ORB-SLAM3, a maximum a posteriori (MAP) estimation method is adopted to jointly initialize the IMU biases, velocities, and gravity direction. This method does not need a careful initialization procedure: the system can be initialized under arbitrary motion. It also takes sensor uncertainty into account, obtaining the optimum in the sense of maximum posterior probability, and achieves efficient, robust, and accurate performance. After initialization, a tightly coupled sliding-window sensor fusion module estimates the IMU poses, biases, and velocities within the sliding window. Simultaneously, an extra thread builds and maintains the global map in parallel.
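For intuition, the static special case of the gravity-direction part of initialization can be sketched as below. This only aligns the mean accelerometer reading with the world z-axis under a no-motion assumption; the repository's MAP method jointly estimates biases, velocities, and gravity under arbitrary motion, which this sketch does not attempt.

```python
import numpy as np

def gravity_align(acc_samples):
    """Return a rotation R such that R @ g_body = world z-axis, where g_body
    is the normalized mean accelerometer reading (the measured specific
    force points opposite to gravity when the sensor is static). Uses the
    Rodrigues formula for the rotation taking one unit vector onto another."""
    g_body = np.mean(acc_samples, axis=0)
    g_body = g_body / np.linalg.norm(g_body)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(g_body, z)   # rotation axis (unnormalized)
    c = float(g_body @ z)     # cosine of the rotation angle
    if np.linalg.norm(v) < 1e-12:          # already (anti-)aligned
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)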



Build: clone the repository into your catkin workspace and compile it:

cd ~/catkin_ws/src
git clone https://github.com/Livox-SDK/LIO-Livox
cd ..
catkin_make

Run with bag files:

Run the launch file:

cd ~/catkin_ws
source devel/setup.bash
roslaunch lio_livox horizon.launch

Play your bag files:

rosbag play YOUR_ROSBAG.bag

Run with your device:

Run your LiDAR with livox_ros_driver

cd ~/catkin_ws
source devel/setup.bash
roslaunch livox_ros_driver livox_lidar_msg.launch

Run the launch file:

cd ~/catkin_ws
source devel/setup.bash
roslaunch lio_livox horizon.launch


The current version of the system is only adapted for the Livox Horizon. In theory, it should be able to run directly with a Livox Avia, but we have not done enough tests. Besides, the system does not provide an interface for the Livox Mid series. If you want to use a Mid-40 or Mid-70, you can try livox_mapping.

The topic of point cloud messages is /livox/lidar and its type is livox_ros_driver/CustomMsg.
The topic of IMU messages is /livox/imu and its type is sensor_msgs/Imu.

There are some parameters in launch files:

  • IMU_Mode: choose the IMU information fusion strategy; there are 3 modes:
    • 0 - do not use IMU information: pure LiDAR odometry, with motion distortion removed using a constant velocity model
    • 1 - use IMU preintegration to remove motion distortion
    • 2 - tightly couple IMU and LiDAR information
  • Extrinsic_Tlb: the extrinsic parameter between the LiDAR and the IMU, in SE3 form. If you want to use an external IMU, you need to calibrate your own sensor suite and set this parameter to your own extrinsic.
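To illustrate how a 4x4 SE3 extrinsic such as Extrinsic_Tlb is applied (the direction assumed here, LiDAR frame to body/IMU frame, is an illustration only; check the parameter's use in the code for the exact convention):

```python
import numpy as np

def apply_se3(T, points):
    """Apply a 4x4 SE3 transform T = [[R, t], [0, 1]] to an (N, 3) array of
    points: p' = R @ p + t for every point."""
    R, t = T[:3, :3], T[:3, 3]
    return points @ R.T + t

def invert_se3(T):
    """Inverse of an SE3 transform: [R, t]^-1 = [R^T, -R^T @ t]."""
    Ti = np.eye(4)
    R, t = T[:3, :3], T[:3, 3]
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti
```

With a calibrated extrinsic, points measured in the LiDAR frame can be expressed in the IMU frame before fusion, and the inverse maps them back.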

There are also some parameters in the config file:

  • Use_seg: choose the segmentation mode for dynamic object filtering; there are 2 modes:
    • 0 - do not use the segmentation method; you can choose this mode if there are few dynamic objects in your data
    • 1 - use the segmentation method to remove dynamic objects


Thanks to the following works:

  • LOAM (LOAM: Lidar Odometry and Mapping in Real-time)
  • VINS-Mono (VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator)
  • LIO-mapping (Tightly Coupled 3D Lidar Inertial Odometry and Mapping)
  • ORB-SLAM3 (ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM)
  • LiLi-OM (Towards High-Performance Solid-State-LiDAR-Inertial Odometry and Mapping)
  • MSCKF_VIO (Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight)
  • horizon_highway_slam
  • livox_mapping
  • livox_horizon_loam


You can get support from Livox with the following methods:

  • Send email to [email protected] with a clear description of your problem and your setup
  • Report issues on GitHub