3D-LiDAR Camera Calibration using Planar Point to Camera Plane Constraint

Overview

cam_lidar_calib: A ROS package to determine the extrinsic calibration between a camera and a 3D LiDAR

The package is used to calibrate a Velodyne VLP-32 LiDAR with a Basler camera. The methods used are fairly basic and should work for any kind of 3D LiDAR and camera. This software is an implementation of the work presented in the paper titled Extrinsic calibration of a 3d laser scanner and an omnidirectional camera. Here is a Dataset.

Contents

  1. Setup
  2. Calibration
  3. Cross Projection

Setup

It is necessary to have at least the following installed:

  • ROS
  • Ceres
  • OpenCV (This ships with ROS, but standalone versions can be used)
  • PCL
  • Drivers for camera and LiDAR

ROS is used because it makes it easy to log data in a nearly synchronized way and to visualize it. For non-linear optimization, the popular Ceres library is used.

Experimental Setup

The image above shows the sensors used, a Velodyne VLP-32 and a Basler acA2500. A checkerboard with known square dimensions is also needed. I used the checkerboard pattern available here. The default size is too small for our purpose, so I used Adobe Photoshop to generate a bigger image with each square of the checker pattern sized 0.075 m (7.5 cm). In my experience, the bigger the pattern the better, but the size must satisfy practical constraints like the field of view of the camera.

Calibration

First, the camera intrinsics need to be calibrated. I used the ROS camera calibrator for this, with the checkerboard mentioned in the previous section.

For extrinsic calibration, the checkerboard is moved in the mutual field of view of both the camera and the LiDAR. It is worth noting that rotating the checkerboard about off-normal axes adds more constraints to the optimization procedure than rotating it about the normal to the checkerboard plane or translating it in any direction. Each view with significant rotation with respect to the previous view adds a unique constraint. As mentioned in the paper linked above, for the given setup we need at least 3 non-coplanar views of the checkerboard to obtain a unique transformation matrix between the camera and the LiDAR.
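One way to see the non-coplanarity requirement is that translation is only fully observable when the stacked plane normals span all of 3D space. A minimal numpy sketch of such a check (the function name is illustrative and not part of the package):

```python
import numpy as np

def views_constrain_translation(normals, tol=1e-6):
    """Return True if the checkerboard plane normals span R^3.

    Translation is fully observable only when the stacked unit
    normals have rank 3, i.e. at least 3 non-coplanar views.
    """
    N = np.asarray(normals, dtype=float)      # shape (num_views, 3)
    return np.linalg.matrix_rank(N, tol=tol) == 3

# Two parallel views plus one tilted view: rank 2, not enough.
print(views_constrain_translation([[0, 0, 1], [0, 0, 1], [0, 1, 0]]))  # False
# Three mutually non-coplanar normals: rank 3, enough.
print(views_constrain_translation([[0, 0, 1], [0, 1, 0], [1, 0, 0]]))  # True
```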

I used functions available in OpenCV to find the checkerboard in the image and to determine the relative transformation between the camera and the checkerboard. This transformation matrix gives us the surface normal of the checkerboard plane.
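Concretely, if the board pose in the camera frame is a rotation R and translation t (as recovered by e.g. cv2.solvePnP followed by cv2.Rodrigues), the board's unit normal is the third column of R and the plane offset is its projection onto t. A hedged numpy sketch of that extraction (names are illustrative):

```python
import numpy as np

def board_plane_in_camera(R_c_b, t_c_b):
    """Plane of the checkerboard in the camera frame.

    R_c_b, t_c_b: board-to-camera rotation (3x3) and translation (3,).
    Returns (n, d) with n a unit normal and d = n . t, so every
    board point X_cam satisfies n . X_cam = d.
    """
    n = R_c_b[:, 2]            # board z-axis expressed in the camera frame
    d = float(n @ t_c_b)       # signed distance of the plane from the camera
    return n, d

# Board facing the camera, 2 m away along the optical axis:
n, d = board_plane_in_camera(np.eye(3), np.array([0.0, 0.0, 2.0]))
print(n, d)   # [0. 0. 1.] 2.0
```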

Next, I used PCL to cluster the points lying in the checkerboard plane in the LiDAR frame.
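The actual clustering is done with PCL in C++ (e.g. a RANSAC plane fit plus passthrough filtering); as an illustration of the inlier-selection step only, a numpy sketch that keeps lidar points within a distance threshold of a fitted plane:

```python
import numpy as np

def points_on_plane(points, n, d, thresh=0.02):
    """Select lidar points within `thresh` metres of the plane n.x = d.

    points: (N, 3) array in the lidar frame; n must be a unit vector.
    This mimics the inlier test applied after a RANSAC plane fit.
    """
    dist = np.abs(points @ n - d)      # point-to-plane distances
    return points[dist < thresh]

pts = np.array([[0.0, 0.0, 1.0], [0.1, 0.2, 1.01], [0.0, 0.0, 2.0]])
inliers = points_on_plane(pts, np.array([0.0, 0.0, 1.0]), 1.0)
print(len(inliers))  # 2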

For each view that adds a constraint, we have one vector normal to the checkerboard plane expressed in the camera frame and several points lying on the checkerboard in the lidar frame. This data is used to form an objective function, the details of which can be found in the aforementioned paper. The Ceres library is used to search for the relative transformation between the camera and the lidar which minimizes this objective function. A demo is shown in the video below. First, we collect data from multiple views, and once we have sufficient information (i.e., the solution is no longer ambiguous) we run the optimization. As mentioned earlier, we need at least 3 non-coplanar views to zero in on a unique solution.
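For each view i with camera-frame normal n_i and offset d_i, and each lidar point p_ij on the board, the point-to-plane residual is r_ij = n_i · (R p_ij + t) − d_i. A numpy sketch of the total cost the solver minimizes (the package itself formulates this as a Ceres cost functor in C++; this is only an evaluation of the objective, not the optimizer):

```python
import numpy as np

def calib_cost(R, t, views):
    """Sum of squared point-to-plane residuals over all views.

    R, t: candidate lidar-to-camera rotation (3x3) and translation (3,).
    views: list of (n, d, points) with (n, d) the board plane in the
    camera frame and points an (N, 3) array of lidar points on the board.
    """
    cost = 0.0
    for n, d, pts in views:
        residuals = (pts @ R.T + t) @ n - d   # n . (R p + t) - d per point
        cost += float(residuals @ residuals)
    return cost

# With the true transform (identity here), the cost vanishes.
view = (np.array([0.0, 0.0, 1.0]), 1.5,
        np.array([[0.3, 0.2, 1.5], [-0.1, 0.4, 1.5]]))
print(calib_cost(np.eye(3), np.zeros(3), [view]))  # 0.0
```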

Cross Projection

To check the veracity of the extrinsic calibration parameter estimation procedure, I use the estimated parameters to project the lidar points on the image and to color the point cloud. The results are shown in the video above.
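The cross projection amounts to transforming each lidar point into the camera frame with the estimated extrinsics and applying the pinhole intrinsics. A minimal numpy sketch (K, R, t below are illustrative values, not the actual calibration result):

```python
import numpy as np

def project_lidar_to_image(points, R, t, K):
    """Project (N, 3) lidar points into pixel coordinates.

    R, t: estimated extrinsics (lidar frame -> camera frame).
    K: 3x3 camera intrinsic matrix. Points behind the camera are dropped.
    Colouring the cloud is the inverse lookup: sample the image at (u, v).
    """
    cam = points @ R.T + t                 # lidar frame -> camera frame
    cam = cam[cam[:, 2] > 0]               # keep points in front of the camera
    uv = cam @ K.T                         # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]          # perspective divide -> (u, v)

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
uv = project_lidar_to_image(np.array([[0.0, 0.0, 2.0]]), np.eye(3), np.zeros(3), K)
print(uv)  # [[320. 240.]]
```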

Issues
  • Performance Constraints / Hardware Specs

    Do you have a hardware specs used for this pipeline?

    It looks like this process is entirely CPU based.

    Did you downscale the camera images? The camera you provided seems to output at a high resolution.

I ask because I'm testing this on an OS1-64 and a camera at 1920x1080, and it seems the application just freezes up on me. I'm also running the pipeline on a Jetson Xavier, but I could switch over to a better computer if need be.

    opened by osholsonel20 1
  • Failed to load camera_in_topic

When I run the program, I get this error:

    [ERROR] [1646291731.757894933]: Failed to load camera_in_topic
    [ERROR] [1646291731.763121282]: Failed to load lidar_in_topic
    [ERROR] [1646291731.768399153]: Failed to load dx
    [ERROR] [1646291731.771745885]: Failed to load dy
    [ERROR] [1646291731.773568218]: Failed to load checkerboard_rows
    [ERROR] [1646291731.779184635]: Failed to load checkerboard_cols
    [ERROR] [1646291731.784768730]: Failed to load min_points_on_plane
    [ERROR] [1646291731.787355903]: Failed to load num_views
    [ERROR] [1646291731.789130170]: Failed to load no_of_initializations
    [ERROR] [1646291731.790950839]: Failed to load initializations_file
    [ERROR] [1646291731.793802346]: Failed to load result_file
    [ERROR] [1646291731.795394204]: Failed to load result_rpy_file
    [ERROR] [1646291731.796952204]: Failed to load cam_config_file_path
    

    How to fix it?

    opened by Tengyun-Mo 1
  • Is all 6dof fully observable?

I agree that one view is not enough to constrain translation along the plane and rotation about the plane normal; but why does introducing an extra view constrain rotation about the plane normal?

Can you shed some light here? Thanks.

    opened by Doodle1106 0
  • No license

    Hi, I would like to use and modify your code, but you have not added a license, so as per GitHub (https://choosealicense.com/no-permission/#for-users), I am not able to.

    Would you be open to adding a license? BSD or MIT would be great.

    Thanks!

    opened by caitlin-woodside 0
  • Issues when I use this package

Hi! I used the package and the example data you provided. However, when I launched the launch file cam_lidar_calib_basler.launch, these errors occurred: Failed to load no_of_initializations and Failed to load initializations_file. Could you please tell me how to solve this problem? Thanks! Best regards

    opened by yulan0215 1
Owner

Subodh Mishra, PhD Student