LiDAR-Camera Calibration using 3D-3D Point correspondences

Overview

Ankit Dhall, Kunal Chelani, Vishnu Radhakrishnan, KM Krishna


ROS package to calibrate a camera and a LiDAR.

The package calibrates a LiDAR (configurable for Hesai and Velodyne hardware) with a camera (works for both monocular and stereo).

The package finds a rotation and translation that transform all the points in the LiDAR frame to the (monocular) camera frame. Please see Usage for a video tutorial. The lidar_camera_calibration/pointcloud_fusion directory provides a script to fuse point clouds obtained from two stereo cameras, both of which were extrinsically calibrated using a LiDAR and lidar_camera_calibration. We demonstrate the accuracy of the proposed pipeline by fusing, almost perfectly, point clouds from multiple cameras kept in various positions. See Fusion using lidar_camera_calibration for videos of the point cloud fusion results.

For more details please refer to our paper.

Citing lidar_camera_calibration

Please cite our work if lidar_camera_calibration and our approach help your research.

@article{2017arXiv170509785D,
  author        = {{Dhall}, A. and {Chelani}, K. and {Radhakrishnan}, V. and {Krishna}, K.~M.},
  title         = "{LiDAR-Camera Calibration using 3D-3D Point correspondences}",
  journal       = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint        = {1705.09785},
  primaryClass  = "cs.RO",
  keywords      = {Computer Science - Robotics, Computer Science - Computer Vision and Pattern Recognition},
  year          = 2017,
  month         = may
}

Contents

  1. Setup and Installation
  2. Future Improvements
  3. Getting Started
  4. Usage
  5. Results and point cloud fusion using lidar_camera_calibration

Setup and Installation

Please follow the installation instructions for your Ubuntu distribution on the Wiki.

Future Improvements

This is an open-source project and your contributions matter! If you would like to contribute and improve this project, consider submitting a pull request.

  • iterative process with weighted average over multiple runs
  • passing Workflows for Kinetic, Melodic and Noetic
  • Hesai and Velodyne LiDAR options (see Getting Started)
  • integrate LiDAR hardware from other manufacturers
  • automate process of marking line-segments
  • Github Workflow with functional test on dummy data
  • support for upcoming Linux Distros

Getting Started

There are a couple of configuration files that need to be specified in order to calibrate the camera and the LiDAR. The config files are available in the lidar_camera_calibration/conf directory. The find_transform.launch file is available in the lidar_camera_calibration/launch directory.

config_file.txt

1280 720
-2.5 2.5
-4.0 4.0
0.0 2.5
0.05
2
0
611.651245 0.0 642.388357 0.0
0.0 688.443726 365.971718 0.0
0.0 0.0 1.0 0.0
1.57 -1.57 0.0
0

The file contains specifications about the following:

image_width image_height
x- x+
y- y+
z- z+
cloud_intensity_threshold
number_of_markers
use_camera_info_topic?
fx 0 cx 0
0 fy cy 0
0 0 1 0

MAX_ITERS

initial_rot_x initial_rot_y initial_rot_z

lidar_type

x- and x+, y- and y+, and z- and z+ are used to remove unwanted points in the cloud and are specified in meters. The filtered point cloud makes it easier to mark the board edges. The filtered point cloud contains all points
(x, y, z) such that,
x in [x-, x+]
y in [y-, y+]
z in [z-, z+]

The cloud_intensity_threshold is used to filter out points whose intensity is lower than the specified value. The default value of 0.05 usually works well. However, while marking, if points seem to be missing or sparse on the cardboard edges, tweaking this value might help.
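The box and intensity filtering described above amounts to a simple pass-through test. A minimal, dependency-free sketch (the node itself operates on PCL point clouds; the Point struct and filterCloud function here are illustrative only):

```cpp
#include <vector>

struct Point { double x, y, z, intensity; };

// Pass-through filter: keep points inside [x-, x+] x [y-, y+] x [z-, z+]
// whose intensity is at least the threshold from config_file.txt.
std::vector<Point> filterCloud(const std::vector<Point>& cloud,
                               double xmin, double xmax,
                               double ymin, double ymax,
                               double zmin, double zmax,
                               double intensity_threshold)
{
    std::vector<Point> out;
    for (const Point& p : cloud) {
        if (p.x >= xmin && p.x <= xmax &&
            p.y >= ymin && p.y <= ymax &&
            p.z >= zmin && p.z <= zmax &&
            p.intensity >= intensity_threshold)
        {
            out.push_back(p);
        }
    }
    return out;
}
```

With the limits from the sample config_file.txt above, a point at (0, 0, 1) with intensity 0.5 survives, while a point 10 m away or one with intensity 0.01 is dropped.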

use_camera_info_topic? is a boolean flag that takes the value 1 or 0. The find_transform.launch node uses camera parameters to process the points and display them for marking. If you wish to read the parameters from the camera_info topic, set this to 1; otherwise, the camera parameters explicitly provided in config_file.txt are used. We strongly recommend setting it to 0 and using the calibration file, unless you are sure that the camera_info topic's values are consistent with the calibration file (or differ only marginally); otherwise, you will not get the result you want.

MAX_ITERS is the number of iterations you wish to run. The current pipeline assumes that the experimental setup is almost stationary: the boards, the camera, and the LiDAR are fixed. The node asks the user to mark the line segments (see the video tutorial in Usage on how to go about marking) for the first iteration only. Once the line segments for each board have been marked, the algorithm runs for MAX_ITERS iterations, collecting live data and producing n=MAX_ITERS sets of rotations and translations in the form of 4x4 matrices. Since the marking is done only initially, the quadrilaterals should be drawn large enough that if the boards move slightly in later iterations (say, due to a gentle breeze), the edge points still fall within their respective quadrilaterals. After running MAX_ITERS times, the node outputs an average translation vector (3x1) and an average rotation matrix (3x3). Averaging the translation vectors is trivial; the rotation matrices are converted to quaternions, averaged, and converted back to a 3x3 rotation matrix.
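The rotation-averaging step can be sketched as follows. Sign-aligned quaternion summation followed by renormalization is a standard approximation that behaves well when the per-iteration rotations are close to each other, as they are for a fixed rig; the node's actual implementation uses ROS/Eigen types and may differ in detail:

```cpp
#include <array>
#include <cmath>
#include <vector>

using Quat = std::array<double, 4>;  // (w, x, y, z), unit norm

// Average nearby unit quaternions: align signs (q and -q encode the
// same rotation), sum componentwise, then renormalize.
Quat averageQuaternions(const std::vector<Quat>& qs)
{
    Quat sum{0.0, 0.0, 0.0, 0.0};
    for (Quat q : qs) {
        double dot = q[0]*qs[0][0] + q[1]*qs[0][1]
                   + q[2]*qs[0][2] + q[3]*qs[0][3];
        if (dot < 0.0)                      // flip into the same hemisphere
            for (double& c : q) c = -c;
        for (int i = 0; i < 4; ++i) sum[i] += q[i];
    }
    double n = std::sqrt(sum[0]*sum[0] + sum[1]*sum[1]
                       + sum[2]*sum[2] + sum[3]*sum[3]);
    for (double& c : sum) c /= n;
    return sum;
}
```

The sign alignment matters: averaging q and -q naively would cancel to zero even though they represent the same rotation.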

initial_rot_x initial_rot_y initial_rot_z is used to specify the initial orientation of the LiDAR with respect to the camera, in radians. The default values are for the case when both the LiDAR and the camera point forward. The final transformation estimated by the package accounts for this initial rotation.
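The node composes these angles into a rotation matrix as Rz * Ry * Rx (Eigen AngleAxis order, as in the qlidarToCamera snippet quoted in the issues below). A dependency-free sketch with hand-rolled 3x3 matrices:

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Plain 3x3 matrix product.
Mat3 matmul(const Mat3& A, const Mat3& B)
{
    Mat3 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

// Initial LiDAR-to-camera rotation from the config_file.txt angles
// (radians), composed as Rz * Ry * Rx to match the node's AngleAxis order.
Mat3 initialRotation(double rx, double ry, double rz)
{
    Mat3 Rx{{{1, 0, 0},
             {0, std::cos(rx), -std::sin(rx)},
             {0, std::sin(rx),  std::cos(rx)}}};
    Mat3 Ry{{{ std::cos(ry), 0, std::sin(ry)},
             {0, 1, 0},
             {-std::sin(ry), 0, std::cos(ry)}}};
    Mat3 Rz{{{std::cos(rz), -std::sin(rz), 0},
             {std::sin(rz),  std::cos(rz), 0},
             {0, 0, 1}}};
    return matmul(Rz, matmul(Ry, Rx));
}
```

With all three angles at 0 this yields the identity, i.e. no initial rotation between the two frames.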

lidar_type is used to specify the LiDAR type: 0 for Velodyne; 1 for Hesai-Pandar40P. By default, the Hesai driver does not publish wall time as time stamps. To fix this, modify the lidarCallback function in /path/to/catkin_ws/src/HesaiLidar-ros/src/main.cc as follows:

void lidarCallback(boost::shared_ptr<PPointCloud> cld, double timestamp)
{
    pcl_conversions::toPCL(ros::Time(timestamp), cld->header.stamp);
    sensor_msgs::PointCloud2 output;
    pcl::toROSMsg(*cld, output);
    output.header.stamp = ros::Time::now(); // ADD: stamp with wall time
    lidarPublisher.publish(output);
}

marker_coordinates.txt

The ArUco markers are stuck on the board such that when it is hung from a corner, the ArUco marker is on the left side of the board.

After everything is set up, it should look something like this. Notice how the axes are aligned: the y-axis should point outwards, the x-axis along the breadth (s2), and the z-axis along the length (s1). The markers are also arranged so that the ArUco ids are in ascending order.

2
48.4
46.8
4.0
5.0
20.5
49.0
46.8
4.0
5.0
20.5

After sticking the ArUco marker on a planar cardboard, it will look like this.

The first line specifies 'N', the number of boards being used. It is followed by N*5 lines with the following information about the dimensions of each board:

length (s1)
breadth (s2)
border_width_along_length (b1)
border_width_along_breadth (b2)
edge_length_of_ArUco_marker (e)

All dimensions in marker_coordinates.txt are in centimeters.

lidar_camera_calibration.yaml

lidar_camera_calibration:
  camera_frame_topic: /frontNear/left/image_raw
  camera_info_topic: /frontNear/left/camera_info
  velodyne_topic: /velodyne_points

Contains the names of the camera and Velodyne topics that the node will subscribe to.

find_transform.launch

Parameters required for the aruco_mapping node need to be specified here. Ensure that the topics are mapped correctly for the node to function. Other required parameters are:

  • calibration_file (.ini format)
  • num_of_markers
  • marker_size (in meters)

For more information about the aruco_mapping package refer to their documentation.

Usage

A more detailed video tutorial on how to go about using the node can be found at https://youtu.be/SiPGPwNKE-Q.

Before launching the calibration node, ensure that the ArUco markers are visible in the camera frame and arranged in ascending order of their ArUco ids (ArUco ids and their co-ordinate frames can be found/viewed by running the original aruco_mapping package) from left to right as viewed by the camera.

Use the following command to start the calibration process once everything is setup.

roslaunch lidar_camera_calibration find_transform.launch

An initial [R|t] between the camera and the various ArUco markers will be estimated. Following this, a filtered point cloud (according to the specifications in the config_file.txt) will be displayed. The user needs to mark each edge of the rectangular board.

Each board has 4 line segments, and the boards need to be marked from the leftmost to the rightmost. Marking a line segment is quite straightforward: draw a quadrilateral around the line being marked. Click and press a key to confirm each corner of the quadrilateral. Once 4 points are clicked, each followed by a key press, the program moves on to the next line segment. Continue marking line segments until all boards are done. The line segments for each board are to be marked in clockwise order, starting from the top left.
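Deciding whether a projected edge point falls inside a marked quadrilateral reduces to a standard convex-polygon containment test (same-side cross products); a sketch, assuming the corners are given in a consistent winding order such as the clockwise marking order described above (insideQuad is illustrative, not the package's actual function):

```cpp
#include <array>

struct Pt { double x, y; };

// True if p lies inside the convex quadrilateral with corners q[0..3]
// given in a consistent winding order. The point is inside iff it lies
// on the same side of all four directed edges.
bool insideQuad(const std::array<Pt, 4>& q, const Pt& p)
{
    int pos = 0, neg = 0;
    for (int i = 0; i < 4; ++i) {
        const Pt& a = q[i];
        const Pt& b = q[(i + 1) % 4];
        // z-component of the 2D cross product (b - a) x (p - a)
        double cross = (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
        if (cross > 0) ++pos;
        if (cross < 0) ++neg;
    }
    return pos == 0 || neg == 0;  // never on both sides of the edges
}
```

Counting signs on both sides (rather than fixing one) makes the test work for clockwise and counter-clockwise corner orders alike, which is convenient since image coordinates have the y-axis pointing down.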

After marking all the line-segments, the rigid-body transformation between the camera and the LiDAR frame will be displayed.

Intermediate values are logged in conf/points.txt.

points.txt

This file contains num_of_sensors*(num_of_markers*points_per_board) 3D points; here, num_of_sensors is fixed at 2 and points_per_board = 4 (the four corner points).
So if num_of_markers = 2, then,

  • the first 4 points represent the first board,
  • next 4 points represent the second board,
    both of which are 3D co-ordinates in meters, viewed from the lidar origin.
  • the next 4 points represent the first board,
  • the next 4 points represent the second board,
    both of which are 3D co-ordinates in meters, viewed from the camera origin.
    The points are ordered according to their correspondences, i.e. the second point in the first 8 points has a correspondence with the second point in the last 8 points in this case.
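Given such ordered 3D-3D correspondences and a rotation estimate R (LiDAR to camera), one standard way to recover the translation is from the rotated centroids, t = c_camera - R * c_lidar; a sketch (illustrative, not the package's exact code):

```cpp
#include <array>
#include <vector>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>;

// Mean of a set of 3D points.
Vec3 centroid(const std::vector<Vec3>& pts)
{
    Vec3 c{0, 0, 0};
    for (const Vec3& p : pts)
        for (int i = 0; i < 3; ++i) c[i] += p[i];
    for (int i = 0; i < 3; ++i) c[i] /= pts.size();
    return c;
}

// Recover the translation once the LiDAR-to-camera rotation R is known:
// t = centroid(camera points) - R * centroid(lidar points).
Vec3 translationFromCorrespondences(const std::vector<Vec3>& lidar_pts,
                                    const std::vector<Vec3>& camera_pts,
                                    const Mat3& R)
{
    Vec3 cl = centroid(lidar_pts), cc = centroid(camera_pts);
    Vec3 t{0, 0, 0};
    for (int i = 0; i < 3; ++i) {
        double r = 0;
        for (int j = 0; j < 3; ++j) r += R[i][j] * cl[j];
        t[i] = cc[i] - r;
    }
    return t;
}
```

For example, with R = identity and camera points that are the LiDAR points shifted by (1, 1, 1), the recovered translation is (1, 1, 1).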

Fusion using lidar_camera_calibration

To verify the method in a more intuitive manner, lidar_camera_calibration was used to fuse point clouds obtained from two stereo cameras.

Manual measurement vs. lidar_camera_calibration

First, we compare the calibration parameters obtained from lidar_camera_calibration against values meticulously measured by hand with a tape measure. The setup looked something like this:

The fused point clouds obtained when using manual measurements versus when using lidar_camera_calibration are shown in the video. Notice the large translation error, even though the two cameras are kept on a planar surface. Hallucinations of markers, cupboards, and the carton box (in the background) can be seen as a result of the two point clouds not being aligned properly.

On the other hand, the rotation and translation estimated by lidar_camera_calibration fuse the two individual point clouds almost perfectly. There is a very minute translation error (~1-2 cm) and almost no rotation error. The fused point cloud is so well aligned that one might believe it is a single point cloud, but it actually consists of 2 clouds fused using the extrinsic transformation between their sources (the stereo cameras).

The resultant fused point clouds from both manual and lidar_camera_calibration methods can be seen in this video.

Calibrating cameras kept at ~80 degrees

We also wanted to explore the potential of this method, so we used it to calibrate cameras kept at ~80 degrees with almost no overlapping field-of-view. In principle, with a properly designed experimental setup, our method can calibrate cameras with zero overlapping field of view.

However, to visualize the fusion, we needed a part to be common in both point clouds. We chose a large checkerboard to be seen in both FOVs, since it can be used to see how well the point clouds have aligned and if the dimensions of the checkerboard squares are known, one can even estimate the translation errors.

The setup for such an experiment looked something like this:

There is very little translation error, about 3-4 cm. The ground planes also align properly at all distances, near and far from the camera, implying that the estimated rotations are correct.

The resultant fused point clouds after extrinsic calibration of stereo cameras kept at ~80 degrees using lidar_camera_calibration can be seen in this video.

We believe that better intrinsic calibration of the cameras can help drive the error down to about a centimeter or even less.

Issues
  • lidar input node

    Do you happen to have two nodes/code that publish the lidar and camera data to the topics:

    camera_frame_topic: /frontNear/left/image_raw velodyne_topic: /velodyne_points

    Thanks for all the help and the quick responses!

    opened by zachkendrick 16
  • aruco_mapping don't detect the marker

    @karnikram I thought I have known the usage of the package. Then I stuck in another problem. I change the aruco makers to my own, the aruco_mapping.launch could not detect the marker any more. Would please point out what happened? Is there any requires about the marker selection or any problems with the method I setup the marker boards ? The id of the two markers are 16 and 48 here 3 736

    opened by sangych 15
  • How to use average translation and YPR angles with ROS from LIDAR and Camera data fusion

    Hi, I run the package, and got results as shown below, screenshot from 2018-03-30 19-58-31

    I would like to visualize Lidar point clouds in Zed camera frame. So I do a static coordinate transformation with this command

    rosrun tf static_transform_publisher -0.09122 -0.04545 -0.0432 1.48563 -1.5026 0.13215 zed_left_camera velodyne 100

    the first three parameters are x y z for translation and the last three parameters are for yaw pitch roll angles.

    However result look non relevant as shown below

    screenshot from 2018-03-30 20-09-15 What I am doing wrong ? Is there another way to use transformation matrix(4X4) in ROS below are my files and rosbag zed_left_uurmi.ini config_file.txt

    marker_coordinates.txt Rosbag : https://drive.google.com/open?id=1HKqOUeIacSARwwUcCV6CU83fjTWr3OwI

    Update; My setup 29831154_1700629523308503_861922818_o

    opened by jediofgever 11
  • flipped camera points

    We have a testing setup as shown in the image and got some results from it. I evaluated the result using the matlab code provided. But seems the result of camera points are flipped. Our velodyne in the car is pointing downwards around 6 degrees, camera is pointing down around 15 degrees. I subscribed to the already rectified image topic. Not sure what could be the reason for this result? averaged transformation T = [ 0.907442 -0.377166 0.185188 -0.656001 -0.372771 -0.926022 -0.0593794 -0.212374 0.193884 -0.0151491 -0.980908 3.65862 0 0 0 1 ]; evaluation setup

    opened by ghost 10
  • Stuck after finding both markers

    I was able to successfully build the code and followed through some of the solutions that other questions answered. At this point, the program is able to find both AruCo markers on the image. But for some reason, it is just stuck there and does not go past it. It keeps printing the location of both markers to screen continuously. The output from the program is shown below. One thing that concerns me is Failed to find match for field 'intensity'. line. I see that intensity filter is being used for the point cloud. But I'm just running ROS Velodyne package VLP16_points.launch file. On Rviz, I am able to see the intensity field under /velodyne_points.

    Any help is much appreciated!

    NODES
      /
        aruco_mapping (aruco_mapping/aruco_mapping)
        find_transform (lidar_camera_calibration/find_transform)
    
    ROS_MASTER_URI=http://localhost:11311
    
    core service [/rosout] found
    process[aruco_mapping-1]: started with pid [24294]
    process[find_transform-2]: started with pid [24296]
    [ INFO] [1505776473.556181936]: Calibration file path: /home/shubangsridhar/shub_src/extrinsic_calibration/src/lidar_camera_calibration/conf/ueye-serialnum-4102778863.ini
    [ INFO] [1505776473.556269936]: Number of markers: 2
    [ INFO] [1505776473.556295936]: Marker Size: 0.226
    [ INFO] [1505776473.556308936]: Type of space: plane
    [ INFO] [1505776473.556347936]: ROI allowed: 0
    [ INFO] [1505776473.556365936]: ROI x-coor: 0
    [ INFO] [1505776473.556379936]: ROI y-coor: 0
    [ INFO] [1505776473.556398936]: ROI width: 32710
    [ INFO] [1505776473.556418936]: ROI height: 4273583
    [ INFO] [1505776473.559131936]: Calibration data loaded successfully
    Size of the frame: 1600 x 1200
    Limits: -4 to 4
    Limits: -4 to 4
    Limits: -4 to 0
    Number of markers: 2
    Intensity threshold (between 0.0 and 1.0): 0.02
    useCameraInfo: 1
    Projection matrix: 
    [805.45886, 0, 800.26105, 0;
      0, 805.26483, 603.10297, 0;
      0, 0, 1, 0]
    MAX_ITERS: 1
    [ INFO] [1505776473.575178936]: Reading CameraInfo from topic
    done1
    25=(310.637,608.945) (218.48,681.984) (128.23,605.684) (227.578,534.446) Txyz=-1.23591 0.010846 1.72566 Rxyz=-0.379636 1.37063 -1.90313 
    582=(899.297,610.282) (838.419,675.12) (771.255,613.907) (832.576,552.011) Txyz=0.0875873 0.0225216 2.00668 Rxyz=-0.804321 1.5226 -2.06702 
    [ INFO] [1505776483.425556935]: First marker with ID: 25 detected
    [ WARN] [1505776483.429582935]: TF to MSG: Quaternion Not Properly Normalized
    [ INFO] [1505776483.491359935]: Camera info received at 1.50578e+09
    [ INFO] [1505776483.491411935]: Velodyne scan received at 1.50578e+09
    [ INFO] [1505776483.491425936]: marker_6dof received at 1.50578e+09
    Failed to find match for field 'intensity'.
    25 -1.23591 0.010846 1.72566 -0.379636 1.37063 -1.90313 582 0.0875873 0.0225216 2.00668 -0.804321 1.5226 -2.06702 
    [ INFO] [1505776483.493985935]: iteration number: 0
    
    ---------Moving on to next marker--------
    582=(899.328,610.288) (838.491,675.034) (771.252,613.848) (832.693,552.059) Txyz=0.0877038 0.0224342 2.0062 Rxyz=-0.80573 1.51786 -2.07075 
    25=(310.461,608.854) (218.511,681.82) (128.524,605.7) (227.515,534.588) Txyz=-1.23841 0.0108164 1.72915 Rxyz=-0.381727 1.37277 -1.90387 
    582=(899.179,610.352) (838.482,675.026) (771.325,613.815) (832.659,552.14) Txyz=0.0877794 0.0225476 2.00906 Rxyz=-0.800976 1.51713 -2.06865 
    25=(310.415,608.931) (218.611,681.7) (128.44,605.721) (227.527,534.674) Txyz=-1.24015 0.0108746 1.73166 Rxyz=-0.374344 1.36729 -1.90307 
    582=(899.201,610.354) (838.438,674.998) (771.341,613.83) (832.609,551.993) Txyz=0.0877532 0.0224742 2.00929 Rxyz=-0.799631 1.52126 -2.06486 
    25=(310.468,608.963) (218.554,681.845) (128.369,605.734) (227.551,534.594) Txyz=-1.23849 0.0109205 1.72927 Rxyz=-0.377402 1.36981 -1.90236 
    582=(899.207,610.279) (838.416,675.121) (771.273,613.924) (832.494,551.981) Txyz=0.0875637 0.0225483 2.00808 Rxyz=-0.800433 1.52453 -2.06427 
    25=(310.473,608.933) (218.484,681.905) (128.375,605.759) (227.554,534.503) Txyz=-1.23726 0.0108846 1.72747 Rxyz=-0.381968 1.37321 -1.90287 
    582=(899.122,610.443) (838.41,675.137) (771.231,613.829) (832.629,552.245) Txyz=0.0875896 0.0227273 2.00819 Rxyz=-0.800913 1.51456 -2.0699 
    25=(310.554,608.924) (218.456,681.872) (128.316,605.737) (227.542,534.537) Txyz=-1.2372 0.0108513 1.72742 Rxyz=-0.379603 1.36957 -1.9054 
    582=(898.964,610.388) (838.418,675.058) (771.61,613.883) (832.442,552.419) Txyz=0.0879597 0.0229165 2.01592 Rxyz=-0.796683 1.51967 -2.06589 
    25=(310.473,608.964) (218.481,681.855) (128.292,605.76) (227.53,534.484) Txyz=-1.23754 0.0108705 1.72781 Rxyz=-0.379375 1.3714 -1.90231 
    582=(898.918,610.433) (838.426,675.075) (771.504,613.949) (832.482,552.386) Txyz=0.0878678 0.0229718 2.0153 Rxyz=-0.796319 1.52035 -2.06511 
    25=(310.593,608.907) (218.481,681.826) (128.269,605.749) (227.561,534.504) Txyz=-1.23702 0.010798 1.72721 Rxyz=-0.378176 1.36781 -1.90667 
    582=(898.723,610.276) (838.547,675.061) (771.739,614.017) (832.356,552.432) Txyz=0.0881584 0.0230551 2.0211 Rxyz=-0.788952 1.52711 -2.05927 
    25=(310.568,608.95) (218.42,681.895) (128.269,605.755) (227.593,534.474) Txyz=-1.23647 0.0108444 1.72637 Rxyz=-0.38093 1.37005 -1.90567
    
    screen shot 2017-09-18 at 7 30 18 pm
    opened by shubang93 10
  • Do you analyse scale also?

    Hi.

    When using VLP-16 and Logitech c930 and they are positioned close to each other, object appear with different sizes in the two sensors. Also when calculating the scales from the transformation matrix or transforming RGB pixels on top of the pointcloud, the scale difference is visible as seen on the Figure with two aruco cartboards.

    image

    Is it correct that scale information is not considered or the problem is probably somewhere else?

    opened by Tamme 9
  • How to get accurate inital ros_x, rot_y, rot_z?

    The precision of inital ros_x, rot_y, rot_z will impact final rotation matrix.

    qlidarToCamera = Eigen::AngleAxisd(config.initialRot[2], Eigen::Vector3d::UnitZ()) *
                       Eigen::AngleAxisd(config.initialRot[1], Eigen::Vector3d::UnitY()) *
                       Eigen::AngleAxisd(config.initialRot[0], Eigen::Vector3d::UnitX());
    lidarToCamera = qlidarToCamera.matrix();
    Eigen::Matrix3d final_rotation = rotation_avg * lidarToCamera;
    
    opened by oaix 8
  • Calibration is wrong

    Hello, the results of the provided calibration tool seem to be very off. I tried running it a couple of times but the end result is often as can be seen below:

    faulty result

    To reproduce this result, I am uploading an appropriate bag file and all the needed config files: test.bag.zip conf.zip find_transform.launch.zip zed_left_uurmi.ini.zip

    and the corresponding results I got: results3.txt

    I am thankful for any suggestion on how to improve the result.

    opened by emilaz 7
  • How could I do 2D-3D correspondence calibration?

    Hi, I have read your paper, and I wonder could I just do the 2D-3D correspondence calibration without ArUco markers? Is the code for 2D-3D in this repo? Thanks.

    opened by xiaoshuliu 6
  • add hesai 40p support

    1. Add parser for /pandar_points provided by Hesai Lidar Driver.
    2. Add option in config_file.txt to select between Velodyne lidar and Hesai lidar
    3. Updated README.md. Add a note about fixing synchronization between Hesai lidar driver and aruco mapping node.

    Fixes #42 #48

    opened by ruichen-v 5
  • No response - no windows - no errors

    Hello I trying to run the calibration but no window is popping up. Node launches but the last info message is Reading CameraInfo from configuration file and then nothing.

    Using ROS Indigo with OpenCV 3.4.1

    Any clues?

    Here is the output:

    Size of the frame: 1024 x 512 Limits: -0.56 to 0.5 Limits: 2.7 to 3.2 Limits: -0.99 to -0.43 Number of markers: 2 Intensity threshold (between 0.0 and 1.0): 0.0004 useCameraInfo: 0 Projection matrix: [551.29785, -0.27212, 501.52975, 0; 0, 551.16797, 248.17503, 0; 0, 0, 1, 0] MAX_ITERS: 10 [ INFO] [1528900343.889889061]: Reading CameraInfo from configuration file

    opened by atasoglou 5
  • Using transformation matrix

    After running and copying transformation matrix from terminal I'm trying to use that transformation matrix to extract color from image and add it to pcd output of velodyne lidar, I used open3d to read point cloud and extract each point which has x and y and z and transformed it with transformation matrix. Range of each point before and after transformation is : x: -6.706142902374268 -> 39.4197998046875 y: -65.62940216064453 -> 6.3055100440979 z: -0.8670438528060913 -> 9.03710079193 But pictures x and y is from 0 to page width pixel , 0 to page height pixel I'm new to this and i don't really know how to use x and y and z after transformation to get that pixel from image, I assumed i should ignore z as the image is 2d.

    opened by Gold3nFox 0
  • TF to MSG: Quaternion Not Properly Normalized ??

    [ERROR] [1647922884.020933792]: Not able to lookup transform [ WARN] [1647922884.020993380]: TF to MSG: Quaternion Not Properly Normalized Error: TF_NAN_INPUT: Ignoring transform for child_frame_id "camera_0" from authority "unknown_publisher" because of a nan value in the transform (-nan -nan -nan) (nan nan nan nan) at line 244 in /tmp/binarydeb/ros-melodic-tf2-0.6.5/src/buffer_core.cpp Error: TF_DENORMALIZED_QUATERNION: Ignoring transform for child_frame_id "camera_0" from authority "unknown_publisher" because of an invalid quaternion in the transform (nan nan nan nan)

    opened by Wuxinxiaoshifu 1
  • Query related to 2D-3D Correspondense in the listed paper

    I wanted to know why didn't RANSAC method work with the 2D-3D correspondence case? I read that it resulted in a huge back projection error. But I am curious about the details

    opened by tejalbarnwal 0
  • unable to fetch package

    I could not properly execute installation commands for Melodic so I retried it with Noetic but I received message as below. Is is ok to ignor these? I got this message when I executed sudo apt-get install -y ros-noetic-nodelet-core

    The following packages were automatically installed and are no longer required: libatomic1:i386 libbsd0:i386 libdrm-amdgpu1:i386 libdrm-intel1:i386 libdrm-nouveau2:i386 libdrm-radeon1:i386 libdrm2:i386 libedit2:i386 libelf1:i386 libexpat1:i386 libffi7:i386 libfprint-2-tod1 libgl1:i386 libgl1-mesa-dri:i386 libglapi-mesa:i386 libglvnd0:i386 libglx-mesa0:i386 libglx0:i386 libllvm11:i386 libpciaccess0:i386 libsensors5:i386 libstdc++6:i386 libuvc-dev libuvc0 libvulkan1:i386 libwayland-client0:i386 libx11-6:i386 libx11-xcb1:i386 libxau6:i386 libxcb-dri2-0:i386 libxcb-dri3-0:i386 libxcb-glx0:i386 libxcb-present0:i386 libxcb-randr0:i386 libxcb-sync1:i386 libxcb-xfixes0:i386 libxcb1:i386 libxdamage1:i386 libxdmcp6:i386 libxext6:i386 libxfixes3:i386 libxshmfence1:i386 libxxf86vm1:i386 linux-image-5.4.0-42-generic linux-modules-5.4.0-42-generic linux-modules-extra-5.4.0-42-generic mesa-vulkan-drivers:i386 Use 'sudo apt autoremove' to remove them. The following packages will be upgraded: ros-noetic-nodelet-core 1 upgraded, 0 newly installed, 0 to remove and 508 not upgraded. Need to get 2,504 B of archives. After this operation, 0 B of additional disk space will be used. Err:1 http://packages.ros.org/ros/ubuntu focal/main amd64 ros-noetic-nodelet-core amd64 1.10.1-1focal.20210423.234339 404 Not Found [IP: 64.50.233.100 80] E: Failed to fetch http://packages.ros.org/ros/ubuntu/pool/main/r/ros-noetic-nodelet-core/ros-noetic-nodelet-core_1.10.1-1focal.20210423.234339_amd64.deb 404 Not Found [IP: 64.50.233.100 80] Screenshot from 2021-10-12 15-12-33

    opened by konnoeric 2
  • error: pathspec 'melodic' did not match any file(s) known to git.

    I am at an installation step and encountering some error. Please help me. I received this error when I executed git checkout melodic. What is the reason causing this issue?

    error: pathspec 'melodic' did not match any file(s) known to git.

    OS : Ubuntu 18.04 ROS : Melodic

    Screenshot from 2021-10-12 13-37-42

    opened by konnoeric 0