
Overview

Camera-LiDAR Calibration

This package estimates the calibration parameters that transform the camera frame (parent) into the lidar frame (child). We aim to simplify the calibration process by optimising pose selection, removing the tedious trial and error of re-calibrating with different poses until a good calibration is found. We seek calibration parameters, estimated with uncertainty, that fit the entire scene rather than solely the target, which many existing works struggle with. Our approach overcomes the main limitations of existing target-based calibration methods, namely user error and overfitting to the target. For more details, please take a look at our paper.

We also provide a video tutorial for this package which you can follow alongside this readme.


Left: Our sensor setup at the Australian Centre for Field Robotics (ACFR). Right: Calibration results of this package with an Nvidia GMSL camera and both a Baraja Spectrum-Scan™ (top) and a Velodyne VLP-16 (bottom). The projection for the Baraja Spectrum-Scan™ includes some ground points (yellow) on the chessboard due to the difference in perspective between camera and lidar.

Note: In the paper, equation (2), which gives the condition number, has a typo. The correct equation for calculating the condition number is the one implemented in this repo.
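As background, the condition number of a matrix is conventionally defined as the ratio of its largest to smallest singular value. The sketch below illustrates that standard definition only; consult the repo source for the exact quantity used in the pose-selection criterion.

```python
import numpy as np

def condition_number(A):
    """Standard 2-norm condition number: ratio of largest to smallest
    singular value. Background illustration only -- see the repo source
    for the exact quantity used in this package."""
    s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
    return s[0] / s[-1]

A = np.array([[3.0, 0.0],
              [0.0, 1.0]])
print(condition_number(A))  # 3.0, matching np.linalg.cond(A)
```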

1. Getting started

1.1 Installation

This package has only been tested with ROS Melodic.

Local ROS

  1. Clone the repository into your catkin_ws/src/ folder:
git clone -c http.sslverify=false https://gitlab.acfr.usyd.edu.au/its/cam_lidar_calibration.git
  2. Install the ROS and Python dependencies:
sudo apt update && sudo apt-get install -y ros-melodic-pcl-conversions ros-melodic-pcl-ros ros-melodic-tf2-sensor-msgs
pip install pandas scipy
  3. Build the package and source the setup.bash or setup.zsh file:
catkin build cam_lidar_calibration
source ~/catkin_ws/devel/setup.bash

Docker

  1. Clone the repository into your catkin_ws/src/ folder:
git clone -c http.sslverify=false https://gitlab.acfr.usyd.edu.au/its/cam_lidar_calibration.git
  2. Run the docker image (which will be pulled from Docker Hub). If your computer has an Nvidia GPU, set the cuda flag --cuda on; if not, set --cuda off.
cd cam_lidar_calibration/docker
./run.sh --cuda on

Once you run this script, the docker container will start, immediately build the catkin workspace, and source the setup.bash file. When this is done, you can move on to the Quick start section.

If you'd like to build the image from scratch, a build.sh script is also provided.

Note: If using docker, ./run.sh mounts your local cam_lidar_calibration folder to /catkin_ws/src/cam_lidar_calibration inside the container. Running the calibration then creates csv files inside the container under root ownership, which is not ideal. As a workaround, run the following command outside the docker container; it changes the ownership of all files in your current folder to match your local $USER and $GROUP.

sudo chown $USER:$GROUP *

1.2 Quick start

You can verify that this repository runs successfully by running the package on our provided quick-start data. If you are using docker, these instructions should be run inside the container.

1. Run the calibration process

This first step takes the saved poses and computes the best sets of poses, i.e. those with the lowest VOQ scores.

roslaunch cam_lidar_calibration run_optimiser.launch import_samples:=true

After calibration, the output is saved in the same directory as the imported samples. For this quickstart example, the output is saved in cam_lidar_calibration/data/vlp/.

2. Obtain and assess calibration results

This step produces the estimated calibration parameters by taking a filtered mean of the best sets, and displays a gaussian-fitted histogram of the estimated parameters. Additionally, we assess the calibration by computing the reprojection error over all provided data samples, with a visualisation if specified.

To obtain and assess the calibration output, provide the absolute path of the csv file generated in the first step. The example below uses pre-computed calibration results; you can replace it with the results you generated in step 1 if you wish. You should see terminal output with the reprojection errors, along with a gaussian-fitted histogram and a visualisation.

roslaunch cam_lidar_calibration assess_results.launch csv:="$(rospack find cam_lidar_calibration)/data/vlp/calibration_quickstart.csv" visualise:=true

That's it! If this quick start worked successfully, you can begin using this tool for your own data. If not, please create an issue and we'll aim to resolve it promptly.

2. Calibration with your own data

To use this package with your own data, ensure that your bag file has the following topics:

  • Lidar: a 3D pointcloud of point type XYZIR, published as sensor_msgs::PointCloud2. This package relies on the ring value, so if your pointcloud does not provide it you will need to modify your lidar driver before using this package.
  • Monocular camera: an image published as sensor_msgs::Image and the corresponding meta-information topic (sensor_msgs::CameraInfo).

All data and output files will be saved in the cam_lidar_calibration/data/YYYY-MM-DD_HH-MM-SS/ folder.

2.1 Setup of Calibration Target

  1. Prepare a rectangular chessboard printout. The chessboard used in the paper is an A1 print (594 x 841mm) with 95mm squares and 7x5 inner vertices (not the same as the number of grid squares), downloaded from https://markhedleyjones.com/projects/calibration-checkerboard-collection
  2. Firmly attach the chessboard on a rigid, opaque, and rectangular board such that both their centres align (as best as possible) and their edges remain parallel to one another.
  3. Choose a suitable stand that can mount the target with little to no protruding elements from the board's edges.
  4. Rotate the chessboard such that it is in the shape of a diamond (at an angle of 45° with respect to the ground) and mount it on a stand.

In the image below, we show two chessboard rigs that we've used with this package.


Left: chessboard with 8x6 inner vertices and 65mm squares. Right: chessboard with 7x5 inner vertices and 95mm squares.
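To avoid confusion between squares and inner vertices: a grid of w x h squares has (w-1) x (h-1) inner vertices, which is the pattern size this package expects. A trivial sketch:

```python
def inner_vertices(squares_w, squares_h):
    """A w x h grid of chessboard squares has (w-1) x (h-1) inner
    vertices -- the pattern size used by this package, not the
    number of squares."""
    return squares_w - 1, squares_h - 1

# A chessboard printed with 9x7 squares gives a pattern size of 8x6.
print(inner_vertices(9, 7))  # (8, 6)
```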

2.2 Configuration files

The following explains the fields in /cfg/params.yaml

1. Specify the names of your lidar and camera topics. For example, in our case it is:

camera_topic: "/gmsl/A0/image_color"
camera_info: "/gmsl/A0/camera_info"
lidar_topic: "/velodyne/front/points"

2. (optional) Specify the default bounds of the pointcloud filtering. If you are unsure, feel free to skip this step.

3. Input the details about the chessboard target you prepared:

  • pattern_size: these are the inner vertices of the chessboard (not the number of squares; see our chessboards in Section 2.1)
  • square_length (mm): the length of a chessboard square.
  • board_dimension (mm): width and height of the backing board that the chessboard print is mounted on.
  • translation_error: the offset of the chessboard centre from the centre of the backing board (see illustration below).


Example: In this example, the offset is x=10mm, y=30mm, board dimensions are 910x650mm with square lengths of 65mm and pattern size of 8x6 (HxW).
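Filled in with the example values above, the chessboard section of /cfg/params.yaml would look roughly like this (the nesting shown is inferred from the package's default parameters; check your own params.yaml for the exact layout):

```yaml
chessboard:
  pattern_size:        # inner vertices (HxW), not the number of squares
    height: 8
    width: 6
  square_length: 65    # mm
  board_dimension:     # backing board, mm
    width: 910
    height: 650
  translation_error:   # chessboard centre offset from board centre, mm
    x: 10
    y: 30
```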

2.3 Capture poses and get the best sets of calibration parameters

1. Launch calibration package

Run the calibration package with the import_samples flag set to false. An rviz window and rqt dynamic reconfigure window should open. If you're using a docker container and RViz does not open, try setting the cuda flag to the opposite of what you used.

roslaunch cam_lidar_calibration run_optimiser.launch import_samples:=false

This process can be done online or offline. If offline, make sure to play the rosbag with the --pause flag so that rviz can get the /tf topics, e.g. rosbag play --pause mybag.bag. If you are running a docker container, you can play the bag file in a separate terminal window outside the container.

Make sure to change the image topic to your camera image topic in order to see the video feed. This can be done by editing line 62 of cam_lidar_calibration.rviz, or by opening rviz -> Panels -> Displays and changing the field that is currently /gmsl/A0/image_color.

2. Isolate the chessboard

Using the rqt_reconfigure window, modify the x, y and z axis limits such that only the chessboard is shown. If the chessboard is not fully isolated, it can affect the plane fitting of the chessboard and lead to high board dimension errors.


Left: unfiltered pointcloud. Right: isolated chessboard pointcloud after setting the values in rqt_reconfigure. Depending on your lidar, you may have to keep tweaking the rqt_reconfigure values for each chessboard pose.

3. Capture poses

Place the chessboard facing the sensor pair and click the capture button. Make sure that the chessboard is correctly outlined with a low board dimension error; if it isn't, discard and capture again (or move the board and capture again).

Board errors (in the terminal window): Try to get a board dimension error as close to zero as possible (errors less than 30mm are acceptable). If the board error is too high, then try again in a different position or see below for potential fixes.

  • High board errors can be caused by the chessboard being too close to or too far from the lidar, so we recommend moving it a bit closer or further.

  • Low resolution lidars may struggle to capture boards with low error if there are not enough points on the board. For example, for the VLP-16, we require at least 7 rings on the board for a decent capture.

  • If the chessboard is consistently under- or over-estimated with a similar board error each time, the lidar's internal distance estimation may not be properly calibrated. Lidars often have a range error of around +/-30mm, and it varies with range. We've provided a param in the run_optimiser.launch file that lets you apply an offset to this distance estimation. Try to set the offset that gives the lowest average error for your data (you might need to re-capture a couple of times to figure out this value). For our VLP-16 we had to set distance_offset_mm=-30. Once you've finished calibrating, this offset should be set permanently in the lidar driver.
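As a worked illustration of the board dimension error, one plausible reading of the terminal output is the summed absolute deviation of the four measured diamond edge lengths from the ground-truth board edges; the package source is authoritative, so treat this as a sketch of the idea only.

```python
def board_dim_error(measured_mm, width_mm, height_mm):
    """Illustrative only: summed absolute deviation of the four measured
    edge lengths from the ground-truth board edges. Check the package
    source for the exact metric it prints."""
    truth = [width_mm, height_mm, height_mm, width_mm]
    return sum(abs(m - t) for m, t in zip(measured_mm, truth))

# Measured edges of 602.17, 855.47, 859.90, 613.83 mm on a 610x850 mm board:
print(round(board_dim_error([602.17, 855.47, 859.90, 613.83], 610, 850), 2))  # 27.03
```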


In the left image above, we show the same pose at 3 different distance offsets. We add a distance offset by converting the cartesian (xyz) coordinates to polar form (r, theta) and adding the offset to the radius r. You can think of this as extending or reducing the radius of a circle: every shape in the new coordinate system is enlarged or shrunk accordingly. Increasing the distance offset increases the chessboard area (+100mm is the largest). The right image shows the same scene from a different perspective, to show that the increase affects both the height and the width of the chessboard.
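The conversion described above can be sketched as follows; this is an illustration of the idea (scale each point along its ray so its range changes by a constant offset), not the package's implementation.

```python
import numpy as np

def apply_distance_offset(xyz, offset_m):
    """Add a constant offset to each point's range (distance from the
    lidar origin) while keeping its direction -- the polar-coordinate
    trick described above. Illustration only."""
    xyz = np.asarray(xyz, dtype=float)
    r = np.linalg.norm(xyz, axis=1, keepdims=True)  # per-point range
    return xyz * (r + offset_m) / r                 # rescale along the ray

pts = np.array([[3.0, 4.0, 0.0]])          # range 5 m
print(apply_distance_offset(pts, -0.03))   # range shrinks to 4.97 m
```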

Number and variation of poses: We recommend capturing at least 20 poses (more is better) with at least a 1-2m spread in distance (from the lidar centre) between the closest and farthest poses. Some guidelines are listed below.

  • Spread the poses out in the calibration range, covering the width of the image field of view. For our specific VLP-16 and Baraja Spectrum Scan lidars, we had a range of 1.7m - 4m and 2.1m - 5m respectively.

  • Vary the yaw and pitch of the board as much as you can. This is illustrated in the image below.


For the bad poses (left), the board normals align such that their tips trace out a line, and the boards are all in the same position, giving a greater chance of overfitting the chessboard at that position. For the good poses (right), there is variation in both board orientation and position.

4. Get the best sets of calibration parameters

When all poses have been captured, click the optimise button. Note that if you do not click this button, the poses will not be properly saved.

The poses are saved (png, pcd, poses.csv) in the ($cam_lidar_calibration)/data/YYYY-MM-DD_HH-MM-SS/ folder for the reprojection assessment phase (and also if you wish to re-calibrate with the same data). The optimisation process will generate an output file calibration_YYYY-MM-DD_HH-MM-SS.csv in the same folder which stores the results of the best sets.

2.4 Estimating parameters and assessing reprojection error

After you obtain the calibration csv output file, paste its absolute path after csv:= in the command below, in double quotation marks. A histogram with a gaussian fit should appear. You can visualise a sample by setting the visualise flag; to visualise a different sample, change the sample number in the assess_results.launch file. The reprojection results are shown in the terminal window.

The final estimated calibration parameters can be found in the terminal window or taken from the histogram plots.

roslaunch cam_lidar_calibration assess_results.launch csv:="$(rospack find cam_lidar_calibration)/data/vlp/calibration_quickstart.csv" visualise:=true
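If you want to use the estimated parameters downstream, they can be assembled into a homogeneous transform. The sketch below assumes the rotation is given as roll/pitch/yaw in radians and the translation in metres; check the header of your own csv/terminal output for the actual order, units, and frame convention (camera as parent, lidar as child, per the overview).

```python
import numpy as np
from scipy.spatial.transform import Rotation

def extrinsic_matrix(roll, pitch, yaw, x, y, z):
    """Assemble a 4x4 homogeneous transform from roll/pitch/yaw (rad)
    and a translation (m). The parameter layout here is an assumption --
    verify it against your own calibration output before use."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler('xyz', [roll, pitch, yaw]).as_matrix()
    T[:3, 3] = [x, y, z]
    return T

# 90 degree yaw plus a 0.1 m offset along x, applied to a point in the child frame:
T = extrinsic_matrix(0.0, 0.0, np.pi / 2, 0.1, 0.0, 0.0)
point_child = np.array([1.0, 0.0, 0.0, 1.0])
print(T @ point_child)   # the same point expressed in the parent frame
```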


Output of our calibration pipeline: a histogram with a gaussian fit, and a visualisation of the calibration results with reprojection error.

More Information

The baseline calibration algorithm is from Verma, 2019. You can find their paper and their publicly available code.

Please cite our work if this package helps with your research.

@article{tsai2021optimising,
  title={Optimising the selection of samples for robust lidar camera calibration},
  author={Tsai, Darren and Worrall, Stewart and Shan, Mao and Lohr, Anton and Nebot, Eduardo},
  journal={arXiv preprint arXiv:2103.12287},
  year={2021}
}
Comments
  • Ouster lidar os1 64

    Can this package be used for calibration of ouster os2 64 and Ip camera with this package? if it is possible then kindly guide me to complete the procedure I am doing as the research intern at NUST Pakistan. Kindly help me to achieve this task.

    opened by islamtalha01 33
  • Failed to find "ring"

    Thanks for your work, I am very excited to try it!

    I was trying to start the launch file to do a live calibration, but when I started it, I received an error message related to the lidar. The algorithm was not able to find the "ring" information, even though it is available in the lidar driver (if I open Rviz, I can select the "ring" view).

    Do you have an idea why it is not able to fetch the information correctly?

    Thank you!

    opened by ogamache 8
  • Shift Errors

    Hello,

    I have been trying to calibrate a VLP32 LIDAR with a camera using different tools, and I came across your paper and got the best results so far, but there is one issue that has been noticeable with every calibration technique.

    The calibration is at its best in the middle of the image; however, the closer it comes to a corner, the more the shift increases, until it is very significant at the corners.

    I have taken already a lot of samples at the corners and went through all the recommended steps but I am not sure what I might be missing. I have attached two images with the best results I have achieved so far from the tool taking into consideration that I commented the distortion part in the code from the point cloud so I am calibrating distorted images with distorted point cloud. There is a significant shift as shown in the Z direction.

    https://ibb.co/K00Kb6N https://ibb.co/Y8XD170

    Another question: what is the procedure regarding distortion? Is it undistorted camera + undistorted LIDAR, or distorted + distorted? At which point is the distortion taken into consideration? I have tried undistorted camera + undistorted LIDAR, but the results had a relatively large error.

    Thanks in advance.

    opened by mserour 5
  • Error while calibrating os1 64

    Now I am able to extract the checkerboard, but I am getting an error due to which I am not able to calibrate. I have attached the error screenshots. Kindly reply as soon as possible.

    opened by islamtalha01 5
  • Several Approaches and could not get results

    Thanks for your great result and appreciate for your work again.

    I've tried several approaches: ROS Melodic in a local environment, and also Docker. Finally I tried to set up my own environment as this blog suggests. The blog says there is something to be modified; I tried the same, but could not get a result.

    https://programming.vip/docs/cam-for-joint-calibration-of-lidar-and-camera_-lidar_-calibration.html

    But the same result: no data shown in rviz (I modified the frame id as well). These are the changes I made and the progress so far.

    I am using Ouster OS-1 32 channel.

    1. Ring match problem: I found out that the ring data type is not the same as Velodyne's, so I modified point.h in the official ouster_ros package to use uint16_t. The "ring not match" warning is no longer visible.

    2. Topic name configuration: I've used the usb_cam package to publish the camera data on ROS, then ran image_proc with ROS_NAMESPACE=usb_cam rosrun image_proc image_proc. The RQT graph shows proper data flow for the extraction.

    3. Data not shown in rviz: two windows appear and I expected to adjust the thresholds to show only the chessboard, but I could not get any further.

    Can anyone give help for the result? Thanks in advance.

    opened by kimjeff-korea 3
  • Camera normal in z direction

    As commented in this line, why is the normal -1 instead of 1 in z?

    In my case, using 1 gives the correct results. And thanks for your user-friendly work!

    opened by Yibin122 3
  • Recording my own data

    Hi, I want to know how you recorded the bag file of your lidar stream and camera stream at the same time. Basically, what are the steps to make your own dataset for calibration?

    opened by islamtalha01 3
  • Error while trying to build the workspace: opencv/cv.hpp

    Hi everyone, when i try building my workspace i get this error: fatal error: opencv/cv.hpp: No such file or directory 10 | #include <opencv/cv.hpp> | ^~~~~~~~~~~~~~~

    I'm using Ubuntu 20.04 and ROS Noetic. Any idea how I can solve this problem, please?

    opencv version is 4.5

    opened by aristow1 2
  • The point cloud is not displayed during the calibration of the ouster radar

    When I use an Ouster OS1-64 lidar for calibration, even though the frame_id and lidar topics have been modified, and the data type of ring has been changed to uint16_t, the point cloud still cannot be displayed in the calibration interface.

    opened by wangdongxu12345 2
  • ImportError: No module named pandas

    When I command roslaunch cam_lidar_calibration assess_results.launch csv:="$(rospack find cam_lidar_calibration)/data/vlp/calibration_quickstart.csv" visualise:=true, the following error message is displayed.

    error code

    ` ... logging to /home/ecb/.ros/log/32102fc4-1219-11ed-8179-d8bbc1224b6b/roslaunch-ecb-GP76-20675.log Checking log directory for disk usage. This may take a while. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB.

    started roslaunch server http://ecb-GP76:37649/

    SUMMARY

    PARAMETERS

    • /D: [-0.0540096, -0.0...
    • /K: [1176.93, 0.0, 96...
    • /assess/csv: /home/ecb/catkin_...
    • /assess/visualise: True
    • /assess/visualise_pose_num: 16
    • /camera_info: /gmsl/A0/camera_info
    • /camera_topic: /gmsl/A0/image_color
    • /chessboard/board_dimension/height: 850
    • /chessboard/board_dimension/width: 610
    • /chessboard/pattern_size/height: 7
    • /chessboard/pattern_size/width: 5
    • /chessboard/square_length: 95
    • /chessboard/translation_error/x: 2
    • /chessboard/translation_error/y: 0
    • /distortion_model: fisheye
    • /feature_extraction/x_max: 10.0
    • /feature_extraction/x_min: -10.0
    • /feature_extraction/y_max: 8.0
    • /feature_extraction/y_min: -8.0
    • /feature_extraction/z_max: 5.0
    • /feature_extraction/z_min: -5.0
    • /height: 1208
    • /lidar_topic: /velodyne/front/p...
    • /rosdistro: melodic
    • /rosversion: 1.14.13
    • /use_sim_time: False
    • /visualise_results/csv: /home/ecb/catkin_...
    • /visualise_results/degree: False
    • /visualise_results/rot_binwidth_deg: 0.5
    • /visualise_results/trans_binwidth: 0.01
    • /width: 1920

    NODES / assess (cam_lidar_calibration/assess_node) visualise_results (cam_lidar_calibration/visualise_results.py)

    ROS_MASTER_URI=http://localhost:11311

    process[visualise_results-1]: started with pid [20777] process[assess-2]: started with pid [20778] [ INFO] [1659414159.112580773]: Importing samples from: /home/ecb/catkin_ws/src/cam_lidar_calibration/data/vlp/poses.csv [ INFO] [1659414159.114178533]: 40 samples imported Traceback (most recent call last): File "/home/ecb/catkin_ws/src/cam_lidar_calibration/scripts/visualise_results.py", line 13, in import pandas ImportError: No module named pandas [visualise_results-1] process has died [pid 20777, exit code 1, cmd /home/ecb/catkin_ws/src/cam_lidar_calibration/scripts/visualise_results.py __name:=visualise_results __log:=/home/ecb/.ros/log/32102fc4-1219-11ed-8179-d8bbc1224b6b/visualise_results-1.log]. log file: /home/ecb/.ros/log/32102fc4-1219-11ed-8179-d8bbc1224b6b/visualise_results-1*.log ^C[assess-2] killing on exit shutting down processing monitor... ... shutting down processing monitor complete done `

    Please let me know how to solve the problem.

    opened by masuda607 2
  • RANSAC PCL error

    This is the error that happened; it seems to be an issue with the threshold setting. I found that the value in this project was set to 0.004. Should I change the value? Thanks!

    opened by mendoza-G 2
  • Calibrate LiDAR and camera using files

    Hello!

    I am a beginner in ROS. There is a question that I want to ask for help. Through the README tutorial, we know how to use this tool to calibrate our own sensor by playing back rosbag. If I have a lot of image and point cloud files, which are .jpg and .pcd files, can I use these files to calibrate the LiDAR and camera using this GitHub tool?

    Thanks for reading patiently!

    I'll keep working on that. I will update if there is any progress. Thank you!

    opened by peter0617ku 2
  • RVIZ shuts down when I capture a sample for calibration

    Every time I try to capture the first sample to start the calibration, Rviz shuts down and I have to force quit it. On the terminal the capture seems to be successful and I get this:

    --- Sample 1 ---
    Measured board has: dimensions = 610x850 mm; area = 0.51850 m^2
    Distance = 1.88 m
    Board angles = 87.15,86.83,87.63,86.36 degrees
    Board area = 0.52148 m^2 (+0.00298 m^2)
    Board avg height = 857.69mm (+7.69mm)
    Board avg width = 608.00mm (-2.00mm)
    Board dim = 602.17,855.47,859.90,613.83 mm
    Board dim error = 27.03

    This is the error I get when I force quit rviz:

    [rviz-2] process has died [pid 8354, exit code -9, cmd /opt/ros/melodic/lib/rviz/rviz -d /home//catkin_ws/src/cam_lidar_calibration/rviz/cam_lidar_calibration.rviz __name:=rviz __log:=/home//.ros/log/9cd9ae78-55c8-11ed-880c-d04cc1051f2c/rviz-2.log]. log file: /home/***/.ros/log/9cd9ae78-55c8-11ed-880c-d04cc1051f2c/rviz-2.log

    any help please ?

    opened by aristow1 1
  • How can I use calibration .csv data?

    I got calibration result data in a csv file. I want to use it but I don't know how. Is there an equation I can use to get the transform matrix from camera to lidar?

    opened by jiyou384 2
  • Issue with rosbag play

    Everything works fine until I run rosbag play -l --pause mybag.bag, when I get the following error. Could you please take a look and reply if you know a way around it?

    opened by talhasu 6
  • Issue with calibration with my own data

    Hi, I read your paper "Optimising the selection of samples for robust lidar camera calibration" and found it very interesting, so I decided to test the available source code. Calibration with your data worked well. However, when I tried calibrating with my own data, using a Robosense RS-LiDAR-16 and an IDS UI-5280xCP-M mono camera, I ran into an issue: at the feature extraction step, the following error appears: Connection to node=/feature_extraction failed: param client failed (please see the screenshot below).

    I'm trying to understand the cause of this error, but I can't seem to find it. Note 1: I can't find "libfeature_extraction.so" in the lib folder. Note 2: I didn't use a recorded bag file; I used live data instead. Can you please enlighten me on how to solve this problem?

    I would appreciate any help. Regards,

    opened by sou159 4
Owner
Australian Centre for Field Robotics