HybVIO visual-inertial odometry and SLAM system

Overview

A visual-inertial odometry system with an optional SLAM module.

This is a research-oriented codebase, published to allow verification and reproduction of the results in the paper:

  • Otto Seiskari, Pekka Rantalankila, Juho Kannala, Jerry Ylilammi, Esa Rahtu, and Arno Solin (2022). HybVIO: Pushing the limits of real-time visual-inertial odometry. In IEEE Winter Conference on Applications of Computer Vision (WACV). [arXiv pre-print] | [video]

It can also serve as a baseline in VIO and VISLAM benchmarks. The code is not intended for production use and does not represent a particularly clean or simple way of implementing the methods described in the above paper. The code contains numerous feature flags and parameters (see codegen/parameter_definitions.c) that are not used in HybVIO but may (or may not) be relevant in other scenarios and use cases.

(Figure: HybVIO running on an EuRoC dataset)

Setup

Here are basic instructions for setting up the project; more detailed help for specific platforms (e.g., Linux) is included in the later sections.

  • Install CMake, glfw and ffmpeg, e.g., by brew install cmake glfw ffmpeg.
  • Clone this repository with the --recursive option (this will take a while)
  • Build dependencies by running cd 3rdparty/mobile-cv-suite; ./scripts/build.sh
  • Make sure you are using clang to compile the C++ sources (it's the default on Macs). If it is not the default, as on many Linux distros, you can control this with environment variables, e.g., CC=clang CXX=clang++ ./scripts/build.sh
  • (optional) In order to be able to use the SLAM module, run ./src/slam/download_orb_vocab.sh

Then, to build the main and test binaries, perform the standard CMake routine:

mkdir target
cd target
cmake -DBUILD_VISUALIZATIONS=ON -DUSE_SLAM=ON ..
# or if not using clang by default:
# CC=clang CXX=clang++ cmake ..
make -j6

Now the target folder should contain the binaries main and run-tests. After making changes to the code, running make alone is enough to rebuild. Tests can be run with the run-tests binary.

To compile faster, pass the -j argument to make, or use a program like ccache. To run faster, check CMakeLists.txt for some options.
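For example, ccache can be wrapped around the compiler at configure time. A sketch, assuming ccache is installed (the CMake flags mirror the routine above; nproc is Linux-specific, use sysctl -n hw.ncpu on macOS):

```shell
# Configure with ccache wrapping clang; rebuilds reuse cached object files.
CC="ccache clang" CXX="ccache clang++" cmake -DBUILD_VISUALIZATIONS=ON -DUSE_SLAM=ON ..
# Run as many parallel compile jobs as there are cores.
make -j"$(nproc)"
```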

Arch Linux

List of packages needed: clang, cmake, ffmpeg, glfw, gtk3
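As a one-shot install of the packages listed above (names as in the current Arch repos; --needed skips packages already installed — if pacman reports no glfw package on an older system, the split packages were named glfw-x11 and glfw-wayland):

```shell
# Install HybVIO build dependencies on Arch Linux.
sudo pacman -S --needed clang cmake ffmpeg glfw gtk3
```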

Debian

On Debian Stretch, the following had to be installed (some might be optional): clang, libc++-dev, libgtk2.0-dev, libgstreamer1.0-dev, libvtk6-dev, libavresample-dev.
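As a single command (package names as reported for Stretch; note that newer Debian releases may have dropped or renamed libvtk6-dev and libavresample-dev):

```shell
# Install HybVIO build dependencies on Debian Stretch.
sudo apt-get install clang libc++-dev libgtk2.0-dev \
    libgstreamer1.0-dev libvtk6-dev libavresample-dev
```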

Raspberry Pi/Raspbian

On Raspbian (Pi 4, 8 GiB), at least the following had to be installed: libglfw3-dev and libglfw3 (for accelerated arrays), plus libglew-dev and libxkbcommon-dev (for Pangolin, which still had problems). Also start from the Debian setup above.

Benchmarking and the main binary

To run benchmarks on EuRoC, TUM and SenseTime datasets and reproduce numbers published in https://arxiv.org/abs/2106.11857, please follow the instructions in https://github.com/AaltoML/vio_benchmark/tree/main/hybvio_runner.

If you want to test the software on individual datasets, e.g., to see the various real-time visualizations, you can use the main binary. For example, to run an EuRoC dataset, do the following:

  1. In vio_benchmark root folder, run python convert/euroc_to_benchmark.py to download and convert the EuRoC data
  2. Symlink that data here: mkdir -p data && cd data && ln -s /path/to/vio_benchmark/data/benchmark .

Then inside the target/ folder use, e.g.:

./main -i=../data/benchmark/euroc-v1-02-medium -p -useStereo

In general, to run the algorithm on recorded data, use ./main -i=path/to/datafolder, where datafolder/ must at the very least contain a data.{jsonl|csv} and data.{mp4|mov|avi}. Such recordings can be created with
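The required folder contents can be sanity-checked before launching. A sketch (check_datafolder is a hypothetical helper, not part of HybVIO; file names follow the README: data.{jsonl|csv} plus data.{mp4|mov|avi}, with data2.* additionally present in stereo recordings):

```shell
# Pre-flight check for a recording folder before passing it to ./main.
check_datafolder() {
  d="$1"
  if [ ! -f "$d/data.jsonl" ] && [ ! -f "$d/data.csv" ]; then
    echo "missing data.jsonl/data.csv" >&2; return 1
  fi
  if [ ! -f "$d/data.mp4" ] && [ ! -f "$d/data.mov" ] && [ ! -f "$d/data.avi" ]; then
    echo "missing data.{mp4|mov|avi}" >&2; return 1
  fi
  echo "ok"
}
```

For example, check_datafolder ../data/benchmark/euroc-v1-02-medium prints ok when the folder is runnable.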

Some common arguments to main are:

  • -p: show pose visualization.
  • -c: show video output.
  • -useSlam: Enable SLAM module.
  • -useStereo: Enable stereo.
  • -s: show 3d visualization. Requires -useSlam.
  • -gpu: Enable GPU acceleration.

You can get the full list of command line options with ./main -help.

Key controls for main

These keys can be used when any of the graphical windows is focused (see commandline/command_queue.cpp for the full list).

  • A to pause and toggle step mode, where a key press (e.g., SPACE) processes the next frame.
  • Q or Escape to quit
  • R to rotate camera window
  • The top-row number keys 1, 2, … toggle the methods drawn in the pose visualization.

When the command line is focused, Ctrl-C aborts the program.

Copyright

Licensed under GPLv3. For different (commercial) licensing options, contact us at https://www.spectacularai.com/

Comments
  • Parameters?

    Hi, and thank you for making this code available. I am building it on a Windows desktop with Visual Studio 2019, and after a day or so of tweaking I have it compiled.

    However, when I run main.exe -i=output p -useStereo the application does not start. ("output" is the dir containing the csv and video files data.csv, data.avi, data2.avi)

    I think this is because i am not passing in parameters correctly.

    I am confused as to how the parameter json is created. The system looks for: std::ifstream cmdParametersFile("../data/cmd.json");

    But how do I create this file and pass in the camera intrinsics etc?

    (I am using Zed2 data recorded with the zed capture application from your readme)

    Thank you!

    opened by antithing 9
  • OpenBLAS compile error on Raspberry

    Hi, I am trying to compile and run HybVIO on a Raspberry Pi 4 with 4 GB of RAM running Debian Bullseye with no desktop environment (Raspberry Pi OS Lite 32-bit).

    These are the steps I followed:

    1. sudo apt update && sudo apt upgrade && sudo apt autoremove
    2. sudo apt install clang libc++-dev libgtk2.0-dev libgstreamer1.0-dev libvtk6-dev libavresample-dev libglfw3-dev libglfw3 libglew-dev libxkbcommon-dev cmake git ffmpeg (I was unable to install the glfw package; is it supported on the Pi?)
    3. git clone https://github.com/SpectacularAI/HybVIO.git --recursive
    4. cd HybVIO/3rdparty/mobile-cv-suite
    5. ./scripts/build.sh

    And this is the received error:

    CMake Warning (dev) at CMakeLists.txt:135 (if):
      Policy CMP0054 is not set: Only interpret if() arguments as variables or
      keywords when unquoted.  Run "cmake --help-policy CMP0054" for policy
      details.  Use the cmake_policy command to set the policy and suppress this
      warning.
    
      Quoted variables like "HASWELL" will no longer be dereferenced when the
      policy is set to NEW.  Since the policy is not set the OLD behavior will be
      used.
    This warning is for project developers.  Use -Wno-dev to suppress it.
    
    -- Reading vars from /home/pi/HybVIO/3rdparty/mobile-cv-suite/OpenBLAS/kernel/arm/KERNEL...
    -- Reading vars from /home/pi/HybVIO/3rdparty/mobile-cv-suite/OpenBLAS/kernel/arm/KERNEL.HASWELL...
    CMake Error at cmake/utils.cmake:20 (file):
      file STRINGS file
      "/home/pi/HybVIO/3rdparty/mobile-cv-suite/OpenBLAS/kernel/arm/KERNEL.HASWELL"
      cannot be read.
    Call Stack (most recent call first):
      kernel/CMakeLists.txt:16 (ParseMakefileVars)
      kernel/CMakeLists.txt:863 (build_core)
    

    Is this related to a specific version of CMake that must be used to compile this project? Furthermore, is there a specific guide available on how to compile HybVIO on the Raspberry Pi, including parameter optimizations to make it run in real time?

    Thanks

    build-issue 
    opened by GuidoBartoli 7
  • Make error

    Hi, I'm also stuck here, the same as the closed issue #11 (on Ubuntu 18.04). I tried rm -rf target and using clang-10.0, but it still doesn't work. May I ask if there is any other way to solve this?

    build-issue 
    opened by FlyCole 5
  • Error when running `make`

    When running make -j6 here:

    mkdir target
    cd target
    cmake -DBUILD_VISUALIZATIONS=ON -DUSE_SLAM=ON ..
    # or if not using clang by default:
    # CC=clang CXX=clang++ cmake ..
    make -j6
    

    I'm having this error:

    /home/user/workspace/HybVIO/src/slam/orb_extractor.cpp: In member function ‘virtual void slam::{anonymous}::OrbExtractorImplementation::detectAndExtract(tracker::Image&, const tracker::Camera&, const std::vector<tracker::Feature>&, slam::KeyPointVector&, std::vector<int>&)’:
    /home/user/workspace/HybVIO/src/slam/orb_extractor.cpp:110:18: warning: missing initializer for member ‘slam::KeyPoint::bearing’ [-Wmissing-field-initializers]
                     });
                      ^
    /home/user/workspace/HybVIO/src/slam/orb_extractor.cpp:110:18: warning: missing initializer for member ‘slam::KeyPoint::descriptor’ [-Wmissing-field-initializers]
    /home/user/workspace/HybVIO/src/slam/orb_extractor.cpp:161:18: error: no matching function for call to ‘std::vector<slam::KeyPoint, Eigen::aligned_allocator<slam::KeyPoint> >::push_back(<brace-enclosed initializer list>)’
                     });
                      ^
    In file included from /usr/include/c++/7/vector:64:0,
                     from /home/user/workspace/HybVIO/src/slam/static_settings.hpp:4,
                     from /home/user/workspace/HybVIO/src/slam/orb_extractor.hpp:4,
                     from /home/user/workspace/HybVIO/src/slam/orb_extractor.cpp:40:
    /usr/include/c++/7/bits/stl_vector.h:939:7: note: candidate: void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = slam::KeyPoint; _Alloc = Eigen::aligned_allocator<slam::KeyPoint>; std::vector<_Tp, _Alloc>::value_type = slam::KeyPoint]
           push_back(const value_type& __x)
           ^~~~~~~~~
    /usr/include/c++/7/bits/stl_vector.h:939:7: note:   no known conversion for argument 1 from ‘<brace-enclosed initializer list>’ to ‘const value_type& {aka const slam::KeyPoint&}’
    /usr/include/c++/7/bits/stl_vector.h:953:7: note: candidate: void std::vector<_Tp, _Alloc>::push_back(std::vector<_Tp, _Alloc>::value_type&&) [with _Tp = slam::KeyPoint; _Alloc = Eigen::aligned_allocator<slam::KeyPoint>; std::vector<_Tp, _Alloc>::value_type = slam::KeyPoint]
           push_back(value_type&& __x)
           ^~~~~~~~~
    /usr/include/c++/7/bits/stl_vector.h:953:7: note:   no known conversion for argument 1 from ‘<brace-enclosed initializer list>’ to ‘std::vector<slam::KeyPoint, Eigen::aligned_allocator<slam::KeyPoint> >::value_type&& {aka slam::KeyPoint&&}’
    src/slam/CMakeFiles/slam.dir/build.make:283: recipe for target 'src/slam/CMakeFiles/slam.dir/orb_extractor.cpp.o' failed
    make[2]: *** [src/slam/CMakeFiles/slam.dir/orb_extractor.cpp.o] Error 1
    CMakeFiles/Makefile2:340: recipe for target 'src/slam/CMakeFiles/slam.dir/all' failed
    make[1]: *** [src/slam/CMakeFiles/slam.dir/all] Error 2
    Makefile:94: recipe for target 'all' failed
    make: *** [all] Error 2
    
    
    build-issue 
    opened by Ador2 5
  • Question about the parameters

    Dear sir, thanks for the excellent work! I have a question about the parameters "-maxVisualUpdates" and "-maxSuccessfulVisualUpdates"; their default values are both 20. When I set them both to 30, the mono VIO can still run in real time on EuRoC, but the result seems to be the same as with 20. Is that right? I would think that the more visual updates are done, the better the result should be. Thanks again!

    math-explainer 
    opened by DrZ-21 4
  • Problem with the dy/dx in the predict function?

    Dear Professor: Sorry for bothering you again. Are there any docs about the dy/dx and dy/dq in the prediction function? It seems different from the EKF-based algorithm.

    math-explainer 
    opened by Gatsby23 4
  • Recreating Online Arxiv Paper Results for TUM-VI

    Love the paper, thank you so much for putting it and the code out there!

    When I was trying to recreate the paper results, I noticed that my EuRoC results match but the TUM-VI ones do not. Looking at the paper, I found that Table B2 online-stereo-Normal SLAM has identical RMSE to postprocess (Table 4).

    I suspect that this is just a typo although I could be wrong here.

    Cheers and all the best!

    algo-performance 
    opened by ArmandB 4
  • cmake problem

    Hello, thank you for sharing such good work. I built on an Ubuntu 18.04 system; when I ran cmake on the project after finishing build.sh, I encountered the following problem: (screenshot: Selection_144)

    build-issue 
    opened by dean1314 4
  • question about a jacobian

    Hi, thanks for your beautiful work! I got a question about the jacobian at: https://github.com/SpectacularAI/HybVIO/blob/main/src/odometry/triangulation.cpp#L963

    In my understanding, it is about the following: P_{point in camera} = R^{cam}_{world} * P_{point in world}, with R = R(q). Then we want to get the Jacobian d(P_{point in camera}) / d(q). Why is this Jacobian related to the camera-to-IMU baseline parameters?

    I am trying to make equirectangular images + IMU available on your code, do you think this is a practicable idea?

    math-explainer 
    opened by fushi219 3
  • some question about rolling shutter setting parameter.

    Hello, thank you very much for opening up such great work. I'm looking at the details of the code recently, but some problems have been bothering me. How do you deal with the rolling shutter problem? I don't see the relevant parameter settings in your code, such as "rolling_shutter_skew" or "rolling_shutter_readout_time". Thank you again; I look forward to your reply.

    math-explainer 
    opened by chnhs 3
  • Problem with triangulation【More importantly about how to understand PIVO】?

    Dear Professor: Recently I have read the paper 《HybVIO: Pushing the Limits of Real-time Visual-Inertial Odometry》 and its corresponding code. Thank you for your wonderful work contributing to the robotics community. However, some things are troubling me. I have noticed that the visual landmark estimation part (https://github.com/SpectacularAI/HybVIO/blob/main/src/odometry/triangulation.cpp#L203) is different from the triangulation part in the original MSCKF. That is the beauty of PIVO, your previous paper, right? However, I don't understand why, in the landmark triangulation part, you can estimate the Jacobian of the landmark coordinates with respect to the camera poses in the trail. To my knowledge, in this part we would only estimate the landmark position and should calculate the Jacobian with respect to the landmark coordinates. More precisely, I don't understand how to derive the analytical formula of the Jacobian dE^T E in the triangulation part. I have read your paper 《PIVO: Probabilistic Inertial-Visual Odometry for Occlusion-Robust Navigation》 again and again, but the paper is concise and I am not clever enough to understand why the Jacobian is calculated like this. Could you please give me some docs or clues about it? I'm really looking forward to your help, thank you very much. Yours, Qi Wu

    math-explainer 
    opened by Gatsby23 2
  • KERNEL.HASWELL file missing?

    Platform

    Macbook Air M2

    OS

    Ubuntu 20.04 Docker

    Problem

    I was trying to build dependencies by running

    CC=clang-12 CXX=clang++-12 WITH_OPENGL=OFF BUILD_VISUALIZATIONS=OFF ./scripts/build.sh
    

    and I got the following error:

    -- Reading vars from /HybVIO/3rdparty/mobile-cv-suite/OpenBLAS/kernel/arm64/KERNEL.HASWELL...
    CMake Error at cmake/utils.cmake:20 (file):
      file STRINGS file
      "/HybVIO/3rdparty/mobile-cv-suite/OpenBLAS/kernel/arm64/KERNEL.HASWELL"
      cannot be read.
    Call Stack (most recent call first):
      kernel/CMakeLists.txt:16 (ParseMakefileVars)
      kernel/CMakeLists.txt:863 (build_core)
    

    I have looked in the 3rdparty/mobile-cv-suite/OpenBLAS/kernel/arm64/ directory and there was no KERNEL.HASWELL in the directory.

    build-issue 
    opened by bot-lin 2
  • Low overlap stereo cameras

    Hello, I have a pair of stereo cameras from a VR headset that point at different angles. I've been able to get HybVIO to track stereo features only with -useRectification, but even then it doesn't seem possible for HybVIO to track features on the non-overlapping areas of the images.

    1. Is there a way to avoid having to prerectify the images in this case?
    2. Is it possible to track non-overlapping features in HybVIO?

    Below is a stereo frame example obtained by running this command:

    ./main -i=../data/benchmark/ody-easy -p -c -s -windowResolution=640 -useSlam -useStereo -displayStereoMatching -useRectification

    Over this HybVIO-formatted dataset. The original EuRoC-formatted dataset can be found here.

    algo-performance 
    opened by mateosss 2