Lidar with Velocity: Motion Distortion Correction of Point Clouds from Oscillating Scanning Lidars

Overview

A robust camera and Lidar fusion based velocity estimator to undistort the point cloud.

This repository is a barebones implementation of our paper Lidar with Velocity: Motion Distortion Correction of Point Clouds from Oscillating Scanning Lidars. It is a fusion-based method that handles the point distortion problem of oscillating-scan Lidars and can also provide accurate object velocities.

Here is a Wiki giving a brief introduction to the distortion of TOF Lidars and our proposed method. For more information, you can also check out the paper on arXiv.
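The core correction idea can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the repository's actual pipeline (which jointly estimates object velocity from camera and Lidar cues): given a velocity estimate for a moving object, each of its points is shifted back to a common reference time under a constant-velocity assumption.

```cpp
#include <array>
#include <cmath>
#include <vector>

// Minimal sketch of per-object motion distortion correction under a
// constant-velocity assumption. Names and structures are illustrative,
// not taken from the repository.
struct Point {
    double x, y, z;  // position in meters
    double t;        // capture timestamp within the scan, in seconds
};

// Shift every point of a moving object back to the reference time t_ref,
// assuming the object moves with constant velocity v (m/s) during the scan.
std::vector<Point> undistort(const std::vector<Point>& object_cloud,
                             const std::array<double, 3>& v,
                             double t_ref) {
    std::vector<Point> out;
    out.reserve(object_cloud.size());
    for (const Point& p : object_cloud) {
        const double dt = p.t - t_ref;  // time elapsed since t_ref for this point
        out.push_back({p.x - v[0] * dt, p.y - v[1] * dt, p.z - v[2] * dt, t_ref});
    }
    return out;
}
```

Points captured later in the scan have traveled further with the object, so subtracting v * dt collapses the smeared cloud back onto the object's pose at t_ref; the hard part, which the paper addresses, is estimating v robustly for an oscillating scan pattern.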

1. Prerequisites

Ubuntu and ROS. Tested on Ubuntu 18.04 with ROS Melodic.

Eigen 3.3.4

Ceres Solver 1.14.0

OpenCV 3.2.0

2. Build on ROS

Clone the repository and catkin_make:

cd ~/catkin_ws/src
git clone https://github.com/ISEE-Technology/lidar-with-velocity
cd ../
catkin_make
source ~/catkin_ws/devel/setup.bash

3. Run directly

First, download the dataset and extract it into your /catkin_ws/ path.

Replace "DATASET_PATH" in config/config.yaml with your extracted dataset path, for example (note the trailing "/"):

dataset_path: YOUR_CATKIN_WS_PATH/catkin_ws/data/

Then run the following commands:

roscore
rviz -d src/lidar-with-velocity/rviz_cfg/vis.rviz
rosrun lidar-with-velocity main_ros

An RViz window and a PCL Viewer window will show the results; press the space key in the PCL Viewer window to process the next frame.

Comments
  • The result using your dataset

Hi! I tested your code with your dataset, but the result is not as good as that in the paper. Is there anything wrong with how I use the code? As you can see, the distortion of the point cloud is not completely compensated.

    opened by Psyclonus2887 5
  • How about the distortion affect detection?

    Hi, nice work!

But I have some questions; looking forward to your reply.

First of all, a distorted point cloud may affect object detection, which I think is the main reason we must undistort the point cloud. But in your paper, I understand you do the undistortion after detection and MOT; are the cause and the result reversed here?

Second, since we already have the detection and tracking results, why do we still need undistortion afterwards? Maybe to use a boosting idea to refine the detection results? But I really think it is costly. Is there any evaluation of the time and computing resource cost?

Third, I think the reason you need the camera may be to improve detection AP or tracking performance. But if the point cloud is distorted, will the calibration precision be affected? Especially at long distance (maybe 50 or 100 meters away?), where we know the calibration precision is already low, the distortion may make it even lower!

Thanks again; the first question really confuses me and I really hope for your help.

    opened by mjjdick 2
  • A question about the results in the paper

Hello, I'm interested in your work! While reading your paper, I had a question that I hope you can answer. In your system's results (Fig. 9, especially the two pictures on the left), I noticed some noise at the tail of the green point cloud (the corrected point cloud). Is this noise caused by the motion distortion correction, or by inaccurate object detection/clustering results? Thanks!

    opened by Liuyaqi99 2
  • The maximum frames can be merged

Thank you for your excellent work! From the paper, it seems the experiments merge 3 consecutive frames of point clouds from the Livox Horizon. How many frames can be merged at most? Does that depend on the velocity and acceleration of the moving objects in the scene?

    opened by Psyclonus2887 1
  • What if there are objects turning left or right?

Thank you for your work! It seems the method only works when the object is moving along its axis. If the object is turning left or right, can this method still fix the distortion?

    opened by Psyclonus2887 0
Owner

ISEE Research Group @ SUSTech