image_projection

Overview

image_projection is a ROS package to create various projections from multiple calibrated cameras. Use cases involve:

  • Rectify a distorted image
  • Create a cylindrical projection (equirectangular, Mercator, ...) from 360° data captured by multiple cameras
  • Create a perspective (pinhole) projection from fisheye camera data
  • Convert non-linear fisheye data to an ideal fisheye lens ("tru-theta")
  • Define a virtual pan-tilt camera

Available projection targets are:

  • Pinhole / perspective projection
  • Mercator projection
  • Ideal fisheye projection

Projections can be created periodically or on-demand. All projection parameters including image dimensions and position/direction can be configured dynamically at runtime. All cameras have to be calibrated beforehand.

A demo launch configuration is available below.

Author/Maintainer: Martin Oehler
Affiliation: TU Darmstadt, SIM
License: MIT

Examples

Insta360 Air: low-cost 360° camera with two fisheye lenses pointing in opposite directions

[Images: source images, pinhole projection, Mercator projection]

Boston Dynamics Spot: 5 pinhole cameras, 2 at the front, 2 at the sides and 1 at the rear

[Images: source images, pinhole projection, Mercator projection]

Installation

If you haven't already installed it, install ROS. Please use Desktop-Full Install to run the demo. Noetic is officially supported, but Melodic and Kinetic should work as well.

Create a new catkin workspace. Skip this step if you use your existing workspace.

source /opt/ros/noetic/setup.bash
mkdir -p catkin_ws/src
cd catkin_ws/
catkin init

Go into the source directory of your workspace and check out image_projection

cd src/
git clone https://github.com/tu-darmstadt-ros-pkg/image_projection.git

Install dependencies via rosdep and wstool (.rosinstall)

cd image_projection
rosdep install --from-paths . --ignore-src -r -y
wstool init ../.. # not required if already initialized
wstool merge image_projection_https.rosinstall
wstool update

Build

catkin build image_projection_demo

and source the workspace

source ../../devel/setup.bash

Verify your installation by launching the demo (see below).

Getting started

Launching the demo

Make sure to open a new terminal after building the project for the first time.

Start by downloading the demo bag file with

roscd image_projection_demo/bags/
wget https://www.sim.informatik.tu-darmstadt.de/~oehler/insta360_air_robocup2018.bag

Launch the demo with

roslaunch image_projection_demo demo_insta360.launch

An rqt window with the source video and projection examples will open. The rqt_reconfigure GUI on the right side can be used to change projection parameters at runtime.

Setting up your own projection

To use image_projection for your robot, you will need to update camera topics, calibration and projection parameters. You can duplicate the launch and configuration files in the demo and adapt them to your robot.

The main launch file is image_projection_demo/launch/projections.launch. It loads the configuration, calibration and starts up the projection nodelets. Cameras are configured in image_projection_demo/config/camera360_cameras.yaml. This setup uses an extended camera calibration from kalibr. Projections are configured in image_projection_demo/config/projections.yaml.
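A projection entry in projections.yaml has roughly this shape (the keys correspond to the parameters documented under "Nodes (Nodelets)" below; the entry name, frame names and values here are placeholders):

```yaml
pinhole_front:
  update_rate: 10
  base_frame: base_link
  pose: [0, 0, 0, 0, 0, 0]          # x, y, z, roll, pitch, yaw
  projection_type: image_projection_plugins::PinholeProjection
  projection_parameters:
    image_width: 1280
    image_height: 720
    horizontal_fov: 90              # degree
    focal_length: 1.0
```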

Lastly, make sure that your cameras are properly calibrated. For calibration instructions, refer to the section "Camera calibration" below.

Camera calibration

The following calibration formats are supported:

  • sensor_msgs/CameraInfo is the ROS standard camera calibration for pinhole cameras. It can be created with camera_calibration. Normally, the calibration file is loaded by the camera driver and published on /[camera_name]/camera_info.

  • kalibr_image_geometry_msgs/ExtendedCameraInfo is an extended calibration format with additional support for fisheye cameras. The format is based on calibrations with kalibr. The extended calibration can be published with the node kalibr_extended_camera_info_publisher/camera_info_publisher_node on topic /[camera_name]/extended_camera_info. You can see an example in image_projection_demo/launch/calibration_publisher.launch. The node needs the calibration as a YAML file; you can see the format in image_projection_demo/config/insta360_3_omni_radtan_calibration/camera360_right.yaml. The information has to be filled in from the kalibr calibration file.

Insights

Method

image_projection projects multiple cameras onto 3D projection surfaces. The shape of the projection surface is parameterized and depends on the chosen projection: a pinhole projection uses a planar surface, while cylindrical projections, e.g. Mercator, use, as the name suggests, a cylindrical surface. The surface is generated inside the scene with a pixel grid of the chosen target resolution. Each surface pixel is projected into every camera to retrieve color information. If a projected pixel is valid (inside the image) in more than one camera, the best match is chosen based on a heuristic: pixels close to the respective image center are preferred.

Initially, a projection map is computed as described above. This step is rather costly and has to be repeated each time parameters are changed. The projection itself is comparatively cheap and implemented as a remap for each camera onto the same target image.
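The two steps, a costly map computation followed by a cheap per-frame remap, can be sketched in NumPy. This is illustrative only (the package itself is C++); a planar target surface is used, function names are made up, and nearest-neighbour sampling stands in for proper interpolation:

```python
import numpy as np

def build_plane_map(K, width, height, plane_z=1.0, plane_size=2.0):
    """Projection map for a planar (pinhole-style) target surface at z = plane_z:
    for every target pixel, the source-image coordinates to sample from."""
    xs = np.linspace(-plane_size / 2, plane_size / 2, width)
    ys = np.linspace(-plane_size / 2, plane_size / 2, height)
    X, Y = np.meshgrid(xs, ys)                    # pixel grid on the surface
    pts = np.stack([X, Y, np.full_like(X, plane_z)], axis=-1)
    uv = (pts / pts[..., 2:3]) @ K.T              # pinhole projection u = K (p / p_z)
    return uv[..., 0], uv[..., 1]                 # map_x, map_y

def remap(img, map_x, map_y):
    """Nearest-neighbour remap; target pixels falling outside the source stay 0."""
    h, w = img.shape[:2]
    xi = np.clip(np.rint(map_x).astype(int), 0, w - 1)
    yi = np.clip(np.rint(map_y).astype(int), 0, h - 1)
    valid = (map_x >= 0) & (map_x < w) & (map_y >= 0) & (map_y < h)
    out = np.zeros(map_x.shape + img.shape[2:], dtype=img.dtype)
    out[valid] = img[yi[valid], xi[valid]]
    return out
```

Choosing among several overlapping cameras (the image-center heuristic described above) would add a per-pixel distance comparison on top of this.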

Stitching Seams

Typically, projections with multiple camera sources show stitching seams. These seams are due to the fact that the camera centers do not coincide. The wider the baseline between two cameras, the more artifacts will be visible; conversely, stitching quality improves with small baselines. The effect can be mitigated by moving the projection surface to the typical distance of surrounding objects. Another common approach to eliminating stitching seams is blending, which might be integrated in the future. However, blending can lead to ghosting artifacts.

Nodes (Nodelets)

All nodes also come in nodelet variants (name in parentheses) and are implemented as lazy subscribers.

periodic_image_projection_node (image_projection/PeriodicImageProjectionNodelet)

This node/nodelet periodically publishes a projected image from the latest camera data. All projection parameters can be configured dynamically at runtime. No computation is performed if the output topic is not subscribed.

Subscribed Topics

  • /[camera_ns]/image_raw (sensor_msgs/Image)

    The source camera images. camera_ns refers to the configured cameras (see ~cameras parameter).

  • /[camera_ns]/camera_info (sensor_msgs/CameraInfo)

    Camera calibration. Make sure that only one of camera_info or extended_camera_info is published, never both.

  • /[camera_ns]/extended_camera_info (kalibr_image_geometry_msgs/ExtendedCameraInfo)

    Extended calibration info for calibrations with kalibr. Make sure that only one of camera_info or extended_camera_info is published, never both.

  • ~set_pose (geometry_msgs/Pose)

    Set the pose of the virtual sensor frame relative to ~base_frame. Can be used to implement a virtual pan-tilt sensor head by updating the sensor pose based on joystick feedback.
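A virtual pan-tilt head boils down to publishing a pose whose orientation encodes the current pan (yaw) and tilt (pitch). A sketch of the quaternion math; in a ROS node the result would be filled into a geometry_msgs/Pose and published on ~set_pose (the topic is from this README, the function name is illustrative):

```python
import math

def pan_tilt_quaternion(pan, tilt):
    """Quaternion (x, y, z, w) for yaw=pan, pitch=tilt, roll=0 (ZYX order),
    matching the x/y/z/w field layout of geometry_msgs/Quaternion."""
    cy, sy = math.cos(pan / 2.0), math.sin(pan / 2.0)
    cp, sp = math.cos(tilt / 2.0), math.sin(tilt / 2.0)
    return (-sy * sp, cy * sp, sy * cp, cy * cp)

# In a node, e.g. driven by joystick axes:
#   pose = geometry_msgs.msg.Pose()
#   (pose.orientation.x, pose.orientation.y,
#    pose.orientation.z, pose.orientation.w) = pan_tilt_quaternion(pan, tilt)
#   set_pose_pub.publish(pose)   # publisher on the node's private ~set_pose topic
```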

  • /tf and /tf_static (tf2_msgs/TFMessage)

    tf2 is used to retrieve the transformations between each camera's optical frame and the projection base_frame.

Published Topics

  • ~projection (sensor_msgs/Image)

    Projected image.

  • ~camera_info (sensor_msgs/CameraInfo)

    In case of the image_projection_plugins::PinholeProjection plugin, the camera info of the projection is published on this topic if the parameters virtual_sensor_frame and virtual_sensor_optical_frame are set.

Services

  • ~project_pixel_to_ray (image_projection_msgs/ProjectPixelTo3DRay)

    Can be called with the x/y-coordinates of a pixel in the projection and will return the corresponding 3D ray into the scene in base_frame.
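For a pinhole-style projection, the returned ray is essentially the inverse pinhole model. A sketch of the geometry (not the service implementation; the intrinsics fx, fy, cx, cy are assumed inputs, and the service additionally expresses the result in base_frame):

```python
import math

def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Unit direction of the 3D ray through pixel (u, v) in the optical frame
    (z forward, ROS convention), given pinhole intrinsics fx, fy, cx, cy."""
    x = (u - cx) / fx
    y = (v - cy) / fy
    n = math.sqrt(x * x + y * y + 1.0)
    return (x / n, y / n, 1.0 / n)
```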

Parameters

  • ~update_rate (double, default: 1.0)

    Publishing frequency of the projected image.

  • ~base_frame (string, mandatory)

    Name of the tf frame in which the pose of the projection surface is defined.

  • ~virtual_sensor_frame (string, default: "")

    If set, this frame will be broadcast to tf at the pose of the virtual sensor frame.

  • ~virtual_sensor_optical_frame (string, default: "")

    If set, this frame will be broadcast to tf at the pose of the virtual sensor optical frame.

  • ~pose (list[double], default: [0, 0, 0, 0, 0, 0])

    Pose of the virtual sensor frame (also see ~virtual_sensor_frame) relative to ~base_frame. List with 6 entries in the order x, y, z, roll, pitch, yaw.

  • ~projection_type (string, mandatory)

    Name of a projection plugin (see "Projection plugins" below).

  • ~projection_parameters/image_width (int, mandatory)

    Image width of the output projection.

  • ~projection_parameters/image_height (int, mandatory)

    Image height of the output projection.

  • ~projection_parameters (dict)

    Place parameters of the projection plugin in this namespace. For info on available parameters, see "Projection plugins" below.

  • ~always_recompute_mapping (bool, default: false)

    Forces a map update in each cycle instead of only when changes occurred. This can be used to achieve a uniform latency of the projection.

  • ~cameras (list[string], mandatory)

    List of camera namespaces to be used for projections. The camera namespace is the base namespace of your image and camera info topics; e.g., if your camera publishes the topics /camera/image_raw and /camera/camera_info, the camera namespace is /camera.

  • [camera_ns]/image_topic (string, default: "image_raw")

    Configure the camera topic for each camera.

  • [camera_ns]/camera_info_topic (string, default: "camera_info")

    Configure the camera info for each camera.

  • [camera_ns]/extended_camera_info_topic (string, default: "extended_camera_info")

    Configure the extended camera info topic for each camera.

image_projection_server_node

Not yet implemented. This node creates projections on-demand as a service.

Projection plugins

Projections are implemented as plugins. The following plugins are available:

image_projection_plugins::PinholeProjection

Creates a virtual perspective (pinhole) camera by creating a 3D plane. The camera points along the x-axis in the camera frame and along the z-axis in the optical frame (ROS standard).

Parameters

  • ~projection_parameters/focal_length (double, default: 1.0)

    Focal length of the simulated pinhole camera in meters.

  • ~projection_parameters/horizontal_fov (double, default: 90.0)

    Horizontal field of view in degrees. The vertical field of view is calculated from the aspect ratio to enforce square pixels.
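The square-pixel relationship can be sketched as follows (my reading of the parameter description, not the plugin's actual code): the horizontal FOV and image width fix a focal length in pixels, which is then reused vertically.

```python
import math

def vertical_fov_deg(horizontal_fov_deg, image_width, image_height):
    """Vertical FOV implied by square pixels: the focal length in pixels is
    fixed by the horizontal FOV and image width, then reused vertically."""
    f_px = (image_width / 2.0) / math.tan(math.radians(horizontal_fov_deg) / 2.0)
    return 2.0 * math.degrees(math.atan((image_height / 2.0) / f_px))
```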

image_projection_plugins::MercatorProjection

The Mercator projection creates a virtual 3D cylinder projection surface.

Parameters

  • ~projection_parameters/cylinder_radius (double, default: 1.0)

    Radius of the cylinder in meters.

  • ~projection_parameters/vertical_fov (double, default: 1.0)

    Vertical field of view of the projection. This determines the height of the cylinder.

image_projection_plugins::IdealFisheyeProjection

Generates an ideal ("tru-theta") fisheye image by creating a 3D sphere. An ideal fisheye image has a linear relationship between a pixel's distance from the image center and the angle of the corresponding light ray. Most real fisheye lenses do not achieve this.

Parameters

  • ~projection_parameters/sphere_radius (double, default: 1.0)

    Radius of the sphere in meters.

  • ~projection_parameters/fov (double, default: 1.0)

    Horizontal and vertical field of view of the fisheye in degrees.

Planned features

  • GPU support (OpenGL/CUDA)
  • Image blending

Comments

  • Configuration for 6 cameras

    Configuration for 6 cameras

    Hi @Martin-Oehler, thank you for sharing this great development.

    I have tried the demo included in the repository and could run it correctly, but now I'm trying to configure it for my project and I don't know exactly how to do it. I also read your paper, but it does not include any setup example, for instance with the BD Spot.

    My project involves 6 Logitech C270 webcams to create a 360° panoramic view; to launch the cameras I use the usb_cam ROS package. All the cameras are correctly launched and named as follows:

    /camera/front/image_raw
    /camera/front/image_raw/compressed
    /camera/front/image_raw/compressed/parameter_descriptions
    /camera/front/image_raw/compressed/parameter_updates
    /camera/front/image_raw/compressedDepth
    /camera/front/image_raw/compressedDepth/parameter_descriptions
    /camera/front/image_raw/compressedDepth/parameter_updates
    /camera/front/image_raw/theora
    /camera/front/image_raw/theora/parameter_descriptions
    /camera/front/image_raw/theora/parameter_updates
    
    /camera/front_left/
    ...
    /camera/front_right/
    ...
    /camera/left
    ...
    /camera/rear
    ...
    /camera/rear_left/
    ...
    /camera/rear_right/
    ...
    /camera/right
    ...
    

    In the config folder I have modified the file 'camera360_cameras.yaml' with the full set of cameras:

    # Used cameras for the projection
    cameras:
      - '/front'
      - '/front_left'
      - '/front_right'
      - '/left'
      - '/rear_left'
      - '/rear_right'
      - '/right'
    
    # Topic setup
    /camera:
        front:
          image_topic: image_raw
          camera_info_topic: camera_info_not_used
          extended_camera_info_topic: extended_camera_info
        front_left:
          image_topic: image_raw
          camera_info_topic: camera_info_not_used
          extended_camera_info_topic: extended_camera_info
        front_right:
          image_topic: image_raw
          camera_info_topic: camera_info_not_used
          extended_camera_info_topic: extended_camera_info
        left:
          image_topic: image_raw
          camera_info_topic: camera_info_not_used
          extended_camera_info_topic: extended_camera_info
        rear:
          image_topic: image_raw
          camera_info_topic: camera_info_not_used
          extended_camera_info_topic: extended_camera_info
        rear_left:
          image_topic: image_raw
          camera_info_topic: camera_info_not_used
          extended_camera_info_topic: extended_camera_info
        rear_right:
          image_topic: image_raw
          camera_info_topic: camera_info_not_used
          extended_camera_info_topic: extended_camera_info
        right:
          image_topic: image_raw
          camera_info_topic: camera_info_not_used
          extended_camera_info_topic: extended_camera_info
    

    But I don't know if I have to modify something in 'projections.yaml', or if I also have to modify something in the 'insta360_3_omni_radtan_calibration' folder. Maybe in this folder I have to include the calibration files for the cameras?

    camera_360_front.yaml
    camera_360_front_left.yaml
    camera_360_front_right.yaml
    camera_360_left.yaml
    camera_360_rear.yaml
    camera_360_rear_left.yaml
    camera_360_rear_right.yaml
    camera_360_right.yaml
    

    Or do I have to modify the 'cameras.yaml' file?

    Also, I think that I should add my own *.urdf.xacro file, is that correct?

    Thank you in advance.

    Kind regards, Jorge

    opened by grafoteka 26
  • Invalid parameter

    Invalid parameter "dist_between_cameras"

    Hi, thanks a lot for the interesting project. I'm trying to run the demo example. Unfortunately, when I run roslaunch image_projection_demo demo_insta360.launch I get this error:

    Invalid parameter "dist_between_cameras"
    when instantiating macro: camera360 (/opt/ros/melodic/share/hector_sensors_description/urdf/camera360.urdf.xacro)
    in file: /home/tower/catkin_ds/src/image_projection/image_projection_demo/urdf/insta360.urdf.xacro
    RLException: while processing /home/tower/catkin_ds/src/image_projection/image_projection_demo/launch/rosbag.launch:
    while processing /home/tower/catkin_ds/src/image_projection/image_projection_demo/launch/tf.launch:
    Invalid <param> tag: Cannot load command parameter [robot_description]: command [['/opt/ros/melodic/lib/xacro/xacro', '/home/tower/catkin_ds/src/image_projection/image_projection_demo/urdf/insta360.urdf.xacro']] returned with code [2]. 
    
    Param xml is <param command="$(find xacro)/xacro $(find image_projection_demo)/urdf/insta360.urdf.xacro" name="robot_description"/>
    The traceback for the exception was written to the log file
    

    Could you please give me a hint where the problem could be? Best regards, Igor

    opened by barabun 6
  • Correct parameters for mercator projection

    Correct parameters for mercator projection

    I've been trying to get this package working with 4 pinhole cameras 90 degrees apart, each 0.5 m from the center (facing outwards).
    I've been running the cameras in a Unity simulation, and it seems that no matter what parameters I reconfigure the Mercator projection with, the image is never fully shown in rqt (always stretched or cut off at the edges). I can supply a bag file with the camera feed if anyone is interested in helping out.
    Thanks in advance.

    opened by Adidushi 5
  • How to get the video stream from Insta360?

    How to get the video stream from Insta360?

    Hi,

    I am interested in using this project in my research. However, I would like to know how to get the video stream of the Insta360 Air in real time and forward it to the topic /camera360_nodelet_loader/compressed/image_raw. Does this method support the Insta360 ONE X2?

    Thanks!

    opened by KennethYangle 3
  • The line between images

    The line between images

    Hello,

    Thanks for the package! I've got two Realsense D455 cameras. The result: [image: 2021-07-26 13-44-10 (2)]

    Questions:

    1. How can I get rid of the line between the images, to achieve a smooth transition (for example, pic. 1) from one image to another at different ranges, with your package?
    2. What other methods do you know to achieve a smooth transition?

    Pic. 1: [screenshot 2021-07-26 16-03-40]

    For calibration I've used kalibr. The cameras are located as follows: [images IMG_20210726_134843, IMG_20210726_161604]

    Projections yaml has the following structure:

    $(arg camera_name)_pinhole_front:
      update_rate: 30
      always_recompute_mapping: false
      base_frame: base_link
      pose: [0, 0, 0, 0, -0.5, 0]
      virtual_sensor_frame: pinhole_front_camera_frame
      virtual_sensor_optical_frame: pinhole_front_optical_frame
      projection_type: image_projection_plugins::PinholeProjection
      projection_parameters:
        image_width: 2000
        image_height: 1500
        horizontal_fov: 100
        focal_length: 1

    TF-tree: frames

    Thanks for your reply!

    opened by JockTokk 2
  • How to increase mapping speed

    How to increase mapping speed

    Hello!

    This project is so cool! I tested my Insta360 Air in real time and it works well. But when changing the pinhole view pose, I find it costs around 4000 ms on average to create a mapping. Is that normal? And how can I increase the speed?

    Thanks a lot!

    opened by ifanma 2
  • Pinhole for 2 monocular cameras IP

    Pinhole for 2 monocular cameras IP

    Hi, I need help solving an issue with the pinhole projection. I am using the image_projection package to create a pinhole projection from two calibrated cameras. I do not know much about them, only that they are monocular IP cameras.

    I set up my own projection in this way:

    1. I calibrated the two monocular IP cameras.
    2. I developed a driver (from http://wiki.ros.org/video_stream_opencv) to launch the two cameras.
    3. The calibration files (.yaml) were loaded by the driver, and it seems to work correctly. One example calibration file is below:

    image_width: 1280
    image_height: 720
    camera_name: cameraProbot_front
    camera_matrix:
      rows: 3
      cols: 3
      data: [783.72954, 0., 616.94934, 0., 776.53814, 418.06712, 0., 0., 1.]
    distortion_model: plumb_bob
    distortion_coefficients:
      rows: 1
      cols: 5
      data: [-0.430972, 0.308801, -0.011165, -0.000338, 0.000000]
    rectification_matrix:
      rows: 3
      cols: 3
      data: [1., 0., 0., 0., 1., 0., 0., 0., 1.]
    projection_matrix:
      rows: 3
      cols: 4
      data: [711.72192, 0., 619.78121, 0., 0., 719.64154, 414.22166, 0., 0., 0., 1., 0.]

    First question: if the cameras were fisheye, would they need to be calibrated with kalibr? Another doubt I have is about the xacro files: I am currently using a generic template (macro) to describe my robot, and I am not sure whether the choice of template for the .xacro could depend on the camera model used. Could it be an error source for the pinhole projection?

    I updated the topics and adapted my projections.launch. I also modified the projection parameters in projections.yaml (basically "horizontal_fov" and "focal_length"); when I run projections.launch no errors appear, but the pinhole projection is shown incorrectly.

    Can anybody give me a suggestion for this issue? If anybody needs further information, please let me know.

    Thanks in advance. Raúl

    opened by rmunoz6323 1
  • Top View Projection

    Top View Projection

    Hi @Martin-Oehler, thank you for sharing.

    I have a question about a top-view projection.

    My project has 3 pinhole cameras, already calibrated.

    I am trying to make a top-view projection using the PinholeProjection plugin, and these are my parameters:

    pinhole_front:
      update_rate: 10
      always_recompute_mapping: false
      base_frame: front_camera
      pose: [0, 0, 0, 0, 0, 0]
      virtual_sensor_frame: pinhole_front_camera_frame
      virtual_sensor_optical_frame: pinhole_front_optical_frame
      projection_type: image_projection_plugins::PinholeProjection
      projection_parameters:
        image_width: 1280
        image_height: 720
        horizontal_fov: 0 # degree
        focal_length: 1

    Low Right view pinhole projection

    pinhole_low_right:
      update_rate: 10
      always_recompute_mapping: false
      base_frame: low_right_camera
      pose: [0, 0, 0, 0, 0, 0]
      virtual_sensor_frame: pinhole_low_right_camera_frame
      virtual_sensor_optical_frame: pinhole_low_right_optical_frame
      projection_type: image_projection_plugins::PinholeProjection
      projection_parameters:
        image_width: 1280
        image_height: 720
        horizontal_fov: 0
        focal_length: 1

    Low Left view pinhole projection

    pinhole_low_left:
      update_rate: 10
      always_recompute_mapping: false
      base_frame: low_left_camera
      pose: [0, 0, 0, 0, 0, 0]
      virtual_sensor_frame: pinhole_low_left_camera_frame
      virtual_sensor_optical_frame: pinhole_low_left_optical_frame
      projection_type: image_projection_plugins::PinholeProjection
      projection_parameters:
        image_width: 1280
        image_height: 720
        horizontal_fov: 0
        focal_length: 1

    My question is: can the PinholeProjection plugin do a top-view projection? If yes, did I set the parameters correctly?

    Thank you in advance.

    Kind regards, Chin

    opened by rabayjai112 3
Owner

Technische Universität Darmstadt ROS Packages