OpenVSLAM: A Versatile Visual SLAM Framework


NOTE: This is a community fork of xdspacelab/openvslam. It was created to continue active development of OpenVSLAM.


Overview

[PrePrint] [YouTube]

OpenVSLAM is a monocular, stereo, and RGBD visual SLAM system. The notable features are:

  • It is compatible with various types of camera models and can be easily customized to support additional ones.
  • Created maps can be stored and loaded, so OpenVSLAM can localize new images against prebuilt maps.
  • The system is fully modular: its functions are encapsulated in separate components with easy-to-understand APIs.
  • We provide code snippets to help you understand the core functionalities of the system.

OpenVSLAM is based on an indirect SLAM algorithm with sparse features, such as ORB-SLAM, ProSLAM, and UcoSLAM. One of the noteworthy features of OpenVSLAM is that the system can handle various types of camera models, such as perspective, fisheye, and equirectangular. If needed, users can implement extra camera models (e.g. dual fisheye, catadioptric) with ease. For example, visual SLAM using equirectangular camera models (e.g. the RICOH THETA series, insta360 series, etc.) is shown above.
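Camera models are selected through the YAML configuration file. As a hedged sketch only (field names mirror the perspective example shown further down this page; the values are placeholders and the exact model string should be checked against the documentation), a THETA-style equirectangular configuration might look like:

```yaml
# Hypothetical configuration sketch for an equirectangular camera.
# Field names mirror the perspective (ZED2) example later on this page;
# the values here are placeholders, not tested settings.
Camera:
  name: "RICOH THETA S"
  setup: "monocular"
  model: "equirectangular"

  fps: 30.0
  cols: 1920
  rows: 960

  color_order: "RGB"
```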

Some code snippets that illustrate the core functionalities of the system are provided. You can employ these snippets in your own programs. Please see the *.cc files in the ./example directory, or check Simple Tutorial and Example.

We provide documentation for installation and a tutorial. Please contact us via GitHub issues if you have any questions or find any bugs in the software.

Motivation

Visual SLAM is regarded as a next-generation technology for supporting industries such as automotive, robotics, and xR. We released OpenVSLAM as an open-source project with the aim of collaborating with people around the world to accelerate development in this field. In return, we hope this project will bring safe and reliable technologies for a better society.

Installation

Please see Installation chapter in the documentation.

The instructions for Docker users are also provided.

Tutorial

Please see Simple Tutorial chapter in the documentation.

A sample ORB vocabulary file can be downloaded from here. Sample datasets are also provided here.

If you would like to run visual SLAM with standard benchmarking datasets (e.g. KITTI Odometry dataset), please see SLAM with standard datasets section in the documentation.

Community

If you want to join our Spectrum community, please use the following link:

Join the community on Spectrum

Currently working on

  • IMU integration
  • Python bindings
  • Implementation of extra camera models
  • Refactoring

Feedback, feature requests, and contributions are welcome!

License

2-clause BSD license (see LICENSE)

The following files are derived from third-party libraries.

Please use g2o as a dynamic link library, because g2o's csparse_extension module is licensed under LGPLv3+.

Contributors

Citation

OpenVSLAM won first place in the ACM Multimedia 2019 Open Source Software Competition.

If OpenVSLAM helps your research, please cite the OpenVSLAM paper. Here is a BibTeX entry:

@inproceedings{openvslam2019,
  author = {Sumikura, Shinya and Shibuya, Mikiya and Sakurada, Ken},
  title = {{OpenVSLAM: A Versatile Visual SLAM Framework}},
  booktitle = {Proceedings of the 27th ACM International Conference on Multimedia},
  series = {MM '19},
  year = {2019},
  isbn = {978-1-4503-6889-6},
  location = {Nice, France},
  pages = {2292--2295},
  numpages = {4},
  url = {http://doi.acm.org/10.1145/3343031.3350539},
  doi = {10.1145/3343031.3350539},
  acmid = {3350539},
  publisher = {ACM},
  address = {New York, NY, USA}
}

The preprint can be found here.

Reference

  • Raúl Mur-Artal, J. M. M. Montiel, and Juan D. Tardós. 2015. ORB-SLAM: a Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics 31, 5 (2015), 1147–1163.
  • Raúl Mur-Artal and Juan D. Tardós. 2017. ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras. IEEE Transactions on Robotics 33, 5 (2017), 1255–1262.
  • Dominik Schlegel, Mirco Colosi, and Giorgio Grisetti. 2018. ProSLAM: Graph SLAM from a Programmer’s Perspective. In Proceedings of IEEE International Conference on Robotics and Automation (ICRA). 1–9.
  • Rafael Muñoz-Salinas and Rafael Medina Carnicer. 2019. UcoSLAM: Simultaneous Localization and Mapping by Fusion of KeyPoints and Squared Planar Markers. arXiv:1902.03729.
  • Mapillary AB. 2019. OpenSfM. https://github.com/mapillary/OpenSfM.
  • Giorgio Grisetti, Rainer Kümmerle, Cyrill Stachniss, and Wolfram Burgard. 2010. A Tutorial on Graph-Based SLAM. IEEE Intelligent Transportation Systems Magazine 2, 4 (2010), 31–43.
  • Rainer Kümmerle, Giorgio Grisetti, Hauke Strasdat, Kurt Konolige, and Wolfram Burgard. 2011. g2o: A general framework for graph optimization. In Proceedings of IEEE International Conference on Robotics and Automation (ICRA). 3607–3613.
Comments
  • Continue publishing under 2-clause BSD license

    Check out the work required to continue publishing under the BSD license.

    https://github.com/OpenVSLAM-Community/openvslam/pull/35#issuecomment-786322269

    I think it would be good to clarify that the similarity between OpenVSLAM and ORB_SLAM2 is based on the ORB_SLAM2 paper.

    Tasks:

    • [ ] (moved) #239
    • Clarify the scope of contribution for each paper and treat it appropriately in README.md (for moral issue)
      • [x] ORB_SLAM/ORB_SLAM2
      • [x] ProSLAM
      • [x] UcoSLAM
    compliance 
    opened by ymd-stella 36
  • Process died, occurred in commit f42bfcf

    Recently I pulled the latest code (openvslam commit: f42bfcfabbaf3e4fd0021ac3e283bb89a4312db9, openvslam_ros commit: a1e575aa4b811639a545644b438f5bf83cd17684).

    The updated code compiles successfully and installs as a dynamic link library. When I run the run_slam executable in ROS2, it works fine temporarily; however, runtime errors occur at random times without any error messages.

    Running under gdb shows the following backtrace just before the process dies:

    #0  0x00007ffff7b24b1d in openvslam::data::landmark::erase_observation(std::shared_ptr<openvslam::data::keyframe> const&) () at /usr/local/lib/libopenvslam.so
    #1  0x00007ffff7bc437e in openvslam::optimize::local_bundle_adjuster::optimize(std::shared_ptr<openvslam::data::keyframe> const&, bool*) const () at /usr/local/lib/libopenvslam.so
    #2  0x00007ffff7ae3dde in openvslam::mapping_module::mapping_with_new_keyframe() ()
        at /usr/local/lib/libopenvslam.so
    #3  0x00007ffff7ae4904 in openvslam::mapping_module::run() () at /usr/local/lib/libopenvslam.so
    #4  0x00007ffff6b62de4 in  () at /lib/x86_64-linux-gnu/libstdc++.so.6
    #5  0x00007ffff7243609 in start_thread (arg=<optimized out>) at pthread_create.c:477
    #6  0x00007ffff6850293 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
    

    I've experimented on an [Intel NUC Core i7 10th gen with 16GB RAM] and a [Ryzen 7 5800H with 32GB RAM], and both show the same issue. Has anyone solved this problem?

    bug in progress needs reproducer 
    opened by Sang-Beom-Woo 23
  • Clarify the functional correspondence between the ORB_SLAM2 code and OpenVSLAM code and evaluate the similarities

    Continued from https://github.com/OpenVSLAM-Community/openvslam/issues/37.

    Extracted similar modules between ORB_SLAM2 and OpenVSLAM.

    • data modules
      • Frame - data::frame
      • KeyFrame - data::keyframe
      • MapPoint - data::landmark
      • Map - data::map_database
      • KeyFrameDatabase - data::bow_database
    • visualization modules
      • Viewer, MapDrawer, FrameDrawer - pangolin_viewer, publish::*
    • other modules
      • System - system, io::trajectory_io
      • Initializer - solve::*, initialize::*
      • Tracking - tracking_module, module::frame_tracker, module::keyframe_inserter, module::local_map_updater, module::relocalizer, module::initializer
      • LocalMapping - mapping_module, module::local_map_cleaner, module::two_view_triangulator
      • LoopClosing - global_optimization_module, module::loop_bundle_adjuster, module::loop_detector
      • Optimizer - optimize::*
      • ORBextractor - feature::*
      • ORBmatcher - match::*
      • PnPsolver - solve::pnp_solver
      • Sim3Solver - solve::sim3_solver

    Based on these correspondences, similarity should be assessed by making comparisons at an appropriately abstract level. It is important to note that creative expression may be constrained by the underlying ideas. (Please refer to the merger doctrine.)

    In summary, wherever different expressions are possible under the efficiency constraints, the similar code should be removed and rewritten from scratch.

    compliance 
    opened by ymd-stella 17
  • Documentation needs to be updated

    Hi, thanks for maintaining OpenVSLAM.

    1. Build instructions don't build the executables

    I tried a fresh install using the Installation section of the OpenVSLAM-Community documentation; however, it does not build executables like run_video_slam, etc. This makes it hard to simply follow the tutorial here.

    Shouldn't we update the build instructions to

    cd /path/to/openvslam
    mkdir build && cd build
    cmake \
        -DBUILD_WITH_MARCH_NATIVE=ON \
        -DUSE_PANGOLIN_VIEWER=OFF \
        -DUSE_SOCKET_PUBLISHER=ON \
        -DUSE_STACK_TRACE_LOGGER=ON \
        -DBOW_FRAMEWORK=DBoW2 \
        -DBUILD_TESTS=ON \
        -DBUILD_EXAMPLES=ON \
        ..
    make -j4
    

    to also build the examples.

    2. ROS example instructions

    When I followed the build instructions for ROS, it failed with

    Base path: /home/jy/openvslam_ws
    Source space: /home/jy/openvslam_ws/src
    Build space: /home/jy/openvslam_ws/build
    Devel space: /home/jy/openvslam_ws/devel
    Install space: /home/jy/openvslam_ws/install
    ####
    #### Running command: "cmake /home/jy/openvslam_ws/src -DUSE_PANGOLIN_VIEWER=ON -DUSE_SOCKET_PUBLISHER=OFF -DCATKIN_DEVEL_PREFIX=/home/jy/openvslam_ws/devel -DCMAKE_INSTALL_PREFIX=/home/jy/openvslam_ws/install -G Unix Makefiles" in "/home/jy/openvslam_ws/build"
    ####
    -- Using CATKIN_DEVEL_PREFIX: /home/jy/openvslam_ws/devel
    -- Using CMAKE_PREFIX_PATH: /opt/ros/melodic
    -- This workspace overlays: /opt/ros/melodic
    -- Found PythonInterp: /usr/bin/python2 (found suitable version "2.7.17", minimum required is "2") 
    -- Using PYTHON_EXECUTABLE: /usr/bin/python2
    -- Using Debian Python package layout
    -- Using empy: /usr/bin/empy
    -- Using CATKIN_ENABLE_TESTING: ON
    -- Call enable_testing()
    -- Using CATKIN_TEST_RESULTS_DIR: /home/jy/openvslam_ws/build/test_results
    -- Forcing gtest/gmock from source, though one was otherwise available.
    -- Found gtest sources under '/usr/src/googletest': gtests will be built
    -- Found gmock sources under '/usr/src/googletest': gmock will be built
    -- Found PythonInterp: /usr/bin/python2 (found version "2.7.17") 
    -- Using Python nosetests: /usr/bin/nosetests-2.7
    -- catkin 0.7.29
    -- BUILD_SHARED_LIBS is on
    -- BUILD_SHARED_LIBS is on
    -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    -- ~~  traversing 2 packages in topological order:
    -- ~~  - cv_bridge
    -- ~~  - openvslam_ros
    -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    -- +++ processing catkin package: 'cv_bridge'
    -- ==> add_subdirectory(cv_bridge)
    -- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython2.7.so (found version "2.7.17") 
    -- Boost version: 1.65.1
    -- Found the following Boost libraries:
    --   python
    -- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython2.7.so (found suitable version "2.7.17", minimum required is "2.7") 
    -- +++ processing catkin package: 'openvslam_ros'
    -- ==> add_subdirectory(openvslam_ros)
    -- Build type: Release
    -- Address/Memory sanitizer: DISABLED
    -- Stack trace logger: DISABLED
    -- Google Perftools: DISABLED
    -- Architecture-aware optimization (-march=native): DISABLED
    -- C++11 support: OK (-std=c++17)
    CMake Error at openvslam_ros/CMakeLists.txt:124 (find_package):
      By not providing "Findopenvslam.cmake" in CMAKE_MODULE_PATH this project
      has asked CMake to find a package configuration file provided by
      "openvslam", but CMake did not find one.
    
      Could not find a package configuration file provided by "openvslam" with
      any of the following names:
    
        openvslamConfig.cmake
        openvslam-config.cmake
    
      Add the installation prefix of "openvslam" to CMAKE_PREFIX_PATH or set
      "openvslam_DIR" to a directory containing one of the above files.  If
      "openvslam" provides a separate development package or SDK, be sure it has
      been installed.
    
    
    -- Configuring incomplete, errors occurred!
    See also "/home/jy/openvslam_ws/build/CMakeFiles/CMakeOutput.log".
    See also "/home/jy/openvslam_ws/build/CMakeFiles/CMakeError.log".
    Invoking "cmake" failed
    
    

    I think it should be updated so that beginners and practitioners can use it seamlessly. But since the ROS package was refactored out of OpenVSLAM, I am not sure how to do it.

    documentation 
    opened by surfii3z 12
  • load_map_db() overrides .yaml features

    Describe the bug

    When I load a map database with SLAM.load_map_database(map_db_path);, I get the following:

    load a orb_params "default ORB feature extraction setting" from JSON
    

    This seems to happen as part of load_map_database(), as the SLAM system then proceeds to parse keyframes, landmarks, etc. prior to the line [info] startup SLAM system. The parameters appear to be parsed correctly from the .yaml, as the expected values are printed during system(). My current thinking is that the ORB settings used when creating the db end up being stored as part of the file.

    To Reproduce

    Steps to reproduce the behavior:

    1. Edit your config file to not have default name and/or values for ORB feature extraction
    2. Observe how your desired values are successfully parsed during the ctor for system(const std::shared_ptr& cfg, const std::string& vocab_file_path)
    3. call system::load_map_database(map_db_path)
    4. Observe the message load a orb_params "default ORB feature extraction setting" from JSON, where "default ORB feature extraction setting" was the name of the Feature config used to build the file specified by map_db_path

    Partial debug log is as follows:

    Feature:
      name: Altopack Interior Test
      scale_factor: 1.2
      num_levels: 8
      ini_fast_threshold: 15
      min_fast_threshold: 10
    Mapping:
      baseline_dist_thr_ratio: 0.02
      redundant_obs_ratio_thr: 0.9
    Initializer:
      num_min_triangulated_pts: 100
    PangolinViewer:
      keyframe_size: 1.2
      keyframe_line_width: 1
      graph_line_width: 2
      point_size: 2
      camera_size: 0.8
      camera_line_width: 3
      viewpoint_x: 0.0
      viewpoint_y: -300
      viewpoint_z: -0.1
      viewpoint_f: 2800
    
    [2022-07-27 16:05:53.921] [info] loading ORB vocabulary: /orb_vocab.fbow
    [2022-07-27 16:05:53.939] [debug] CONSTRUCT: data::camera_database
    [2022-07-27 16:05:53.939] [debug] CONSTRUCT: data::map_database
    [2022-07-27 16:05:53.939] [debug] CONSTRUCT: data::bow_database
    [2022-07-27 16:05:53.939] [debug] CONSTRUCT: publish::frame_publisher
    [2022-07-27 16:05:53.939] [debug] CONSTRUCT: publish::map_publisher
    [2022-07-27 16:05:53.939] [debug] CONSTRUCT: module::initializer
    [2022-07-27 16:05:53.939] [debug] CONSTRUCT: module::relocalizer
    [2022-07-27 16:05:53.939] [debug] CONSTRUCT: tracking_module
    [2022-07-27 16:05:53.939] [debug] CONSTRUCT: mapping_module
    [2022-07-27 16:05:53.939] [debug] load mapping parameters
    [2022-07-27 16:05:53.939] [debug] load monocular mappping parameters
    [2022-07-27 16:05:53.939] [debug] Use baseline_dist_thr_ratio: 0.02
    [2022-07-27 16:05:53.940] [debug] CONSTRUCT: loop_detector
    [2022-07-27 16:05:53.940] [debug] CONSTRUCT: global_optimization_module
    [2022-07-27 16:05:53.940] [debug] load depthmap factor
    [2022-07-27 16:05:53.940] [debug] CONSTRUCT: data::orb_params_database
    [2022-07-27 16:05:53.947] [info] clear map database
    [2022-07-27 16:05:53.947] [info] clear BoW database
    [2022-07-27 16:05:53.947] [info] load the MessagePack file of database from /maps/altopack2.msg
    [2022-07-27 16:05:55.363] [info] decoding 1 camera(s) to load
    [2022-07-27 16:05:55.363] [info] skip the tracking camera "ZED2"
    [2022-07-27 16:05:55.363] [info] decoding 1 orb_params to load
    [2022-07-27 16:05:55.363] [info] load a orb_params "default ORB feature extraction setting" from JSON
    [2022-07-27 16:05:55.502] [info] decoding 104 keyframes to load
    [2022-07-27 16:05:56.416] [info] decoding 40378 landmarks to load
    [2022-07-27 16:05:56.440] [info] registering essential graph
    [2022-07-27 16:05:56.646] [info] registering keyframe-landmark association
    [2022-07-27 16:05:56.931] [info] updating covisibility graph
    [2022-07-27 16:05:57.244] [info] updating landmark geometry
    [2022-07-27 16:05:57.679] [info] startup SLAM system
    

    config file:

    # ZED2
    
    #==============#
    # Camera Model #
    #==============#
    
    Camera:
      name: "ZED2"
      setup: "stereo"
      model: "perspective"
    
      fx: 526.2252197265625
      fy: 526.2252197265625
      cx: 634.2841186523438
      cy: 348.4753112792969
    
      k1: 0.0
      k2: 0.0
      p1: 0.0
      p2: 0.0
      k3: 0.0
    
      fps: 15.0
      cols: 1280
      rows: 720
      focal_x_baseline: 63.147 # fx * 0.12
      depth_threshold: 40
    
      color_order: "Gray"
    
    #=====================#
    # Tracking Parameters #
    #=====================#
    
    Preprocessing:
      max_num_keypoints: 2000
      ini_max_num_keypoints: 4000
    
    #================#
    # ORB Parameters #
    #================#
    
    Feature:
      name: "Altopack Interior Test"
      scale_factor: 1.2
      num_levels: 8
      ini_fast_threshold: 15
      min_fast_threshold: 10
    
    #====================#
    # Mapping Parameters #
    #====================#
    
    Mapping:
      baseline_dist_thr_ratio: 0.02
      redundant_obs_ratio_thr: 0.9
    
    #========================#
    # Initializer Parameters #
    #========================#
    
    Initializer:
      num_min_triangulated_pts: 100
    
    #===========================#
    # PangolinViewer Parameters #
    #===========================#
    
    PangolinViewer:
      keyframe_size: 1.2
      keyframe_line_width: 1
      graph_line_width: 2
      point_size: 2
      camera_size: 0.8
      camera_line_width: 3
      viewpoint_x: 0.0
      viewpoint_y: -300
      viewpoint_z: -0.1
      viewpoint_f: 2800
    

    Expected behavior

    .yaml parameters are not overwritten during the map loading process.

    Environment

    • Commit id: 584cff0c84c68d0cd7546d5158a82123fb04d4de
    • Install procedure: docker
    invalid 
    opened by jamestkpoon 6
  • Fix stereo rectifier node loader

    This PR fixes the stereo rectification configuration loading, which is reading the configuration from the wrong YAML node (one level up from where it should).

    opened by glpuga 6
  • Add update_pose() API call

    Adds a new system.h/system.cc API call for updating the camera position from a known pose. This is part of what is required for https://github.com/OpenVSLAM-Community/openvslam_ros/issues/11 to add support for an /initialpose intentionally set by the developer.

    opened by AlexeyMerzlyakov 6
  • Timeline for IMU

    Thanks to all who contributed to this project. Also thanks for the community fork.

    Is there a timeline for integrating IMU functionality? Or a WIP branch? A number of cameras are now equipped with an IMU (Intel RealSense, Stereolabs ZED).

    Much thanks Weiwu

    enhancement 
    opened by oscarpang 6
  • System loses tracking when a previous map is loaded

    Describe the bug

    You have two videos from the same building. The only difference between the videos is that the start and end are shot in different rooms of the building; the middle part of both videos covers the exact same rooms.

    In one run, a map is created and saved from one of the videos.

    Then, in another run with the other video, the map is loaded into the system. The system initializes as normal, but right after initialization it loses feature tracking. When the video reaches the middle part, the system recognizes features from the map and tracking is regained. From this point on, the tracking and mapping modules work perfectly, so even when the video reaches the different end part, it keeps tracking features and creating new keyframes.

    The value of fixing this bug

    Imagine a robot that often has to move around the same building: if it could use a previously recorded map, its navigation would be much more stable. E.g. if the light in one of the usual corridors is out, the SLAM system would normally lose tracking when the robot went through this corridor. However, if the SLAM system had access to a previous map, it could recover its location perfectly on the other side of the dark corridor.

    To Reproduce

    In the first run, with one of the videos, use SLAM.save_map_database("map_of_stella.msg") to save the map.

    In the second run, with the other video, use SLAM.load_map_database("map_of_stella.msg"); to load the map. (IMPORTANT: do NOT disable the mapping module with SLAM.disable_mapping_module();) You will then see the following error messages: "[info] tracking lost: frame xx".

    If you keep letting it run, it will continue with "[info] tracking lost within 5 sec after initialization" and the whole system will reset. This reset deletes the map you just loaded. You can deactivate the reset, but that does not solve the problem.

    Suggested solution

    I noticed a part of the problem is in the tracking_module.cc with the following lines:

      // pass all of the keyframes to the mapping module
        assert(!is_stopped_keyframe_insertion_);
        const auto keyfrms = map_db_->get_all_keyframes();
        for (const auto& keyfrm : keyfrms) {
            mapper_->queue_keyframe(keyfrm);
        }
    

    This code passes all the keyframes from the previous map to the mapping module in an unspecified order.

    Therefore, I suggest the following code, which makes sure the keyframes are processed in the correct order:

        // pass all of the keyframes to the mapping module
        assert(!is_stopped_keyframe_insertion_);
        auto keyfrms = map_db_->get_all_keyframes();
    
        std::sort(keyfrms.begin(), keyfrms.end(),
                  [&](std::shared_ptr<stella_vslam::data::keyframe>& keyfrm_1,
                      std::shared_ptr<stella_vslam::data::keyframe>& keyfrm_2) { return *keyfrm_1 > *keyfrm_2; });
    

    However, this solution is not enough to solve the problem.

    Environment

    • Hardware: PC
    • CPU: AMD Ryzen 7 5800X 8-Core Processor
    • OS: Ubuntu 22.04
    • In my case it is not necessary to process the video in real time

    bug 
    opened by youknowimcomingwhenyouhearmehumming 5
  • "Tracking" status verification

    Hi. This is regarding a "tracking" status verification, which can be useful for the openvslam_ros package in order to decide when to publish the pose info. Please refer to this discussion for further details.

    opened by mirellameelo 5
  • Remove link to spectrum chat?

    Hi,

    I glanced through that Spectrum and no one has answered anything, as far as I could scroll. Maybe we should just put that to rest and remove links to it, now that the official maintainers have given up on the project entirely.

    opened by SteveMacenski 5
  • Disconnection of socket_viewer when loading large map

    Describe the bug

    When loading a large map, stella_vslam disconnects from socket_viewer. This problem was avoided by setting a limit on the data size to be sent, but that workaround did not work properly in SLAM mode.

    To Reproduce

    Load a large map in Localization mode.

    Expected behavior

    • Map is displayed in socket_viewer in Localization mode when a large map is loaded
    • Map is rendered successfully even in SLAM mode

    Environment

    • SocketViewer is running on Windows10
    • Install procedure: docker
    bug 
    opened by ymd-stella 0
  • Disable viewer at runtime

    What issue is the feature request related to?

    It would be useful to be able to disable the viewer at runtime.

    Describe the solution you'd like

    Add a command-line argument so that the viewer starts only when it is set to true.

    enhancement good first issue 
    opened by ymd-stella 0
  • Unused variables

    Describe the bug

    The following variables are unused and can be removed.

    frm_obs in lambda thread_right https://github.com/stella-cv/stella_vslam/blob/a404f57c1999d8e29475b490e5fe26c38469df53/src/stella_vslam/system.cc#L326

    variable ini_extractor_left_ https://github.com/stella-cv/stella_vslam/blob/a404f57c1999d8e29475b490e5fe26c38469df53/src/stella_vslam/system.h#L242

    variable bow_db_ https://github.com/stella-cv/stella_vslam/blob/a404f57c1999d8e29475b490e5fe26c38469df53/src/stella_vslam/mapping_module.h#L225

    https://github.com/stella-cv/stella_vslam/blob/a404f57c1999d8e29475b490e5fe26c38469df53/src/stella_vslam/module/initializer.h#L64

    variable bow_vocab_ https://github.com/stella-cv/stella_vslam/blob/a404f57c1999d8e29475b490e5fe26c38469df53/src/stella_vslam/module/loop_detector.h#L126

    variable fix_scale_in_Sim3_estimation_ https://github.com/stella-cv/stella_vslam/blob/a404f57c1999d8e29475b490e5fe26c38469df53/src/stella_vslam/module/loop_detector.h#L137

    variable this https://github.com/stella-cv/stella_vslam/blob/a404f57c1999d8e29475b490e5fe26c38469df53/src/stella_vslam/module/local_map_updater.cc#L101

    variable frame_hash_ https://github.com/stella-cv/stella_vslam/blob/a404f57c1999d8e29475b490e5fe26c38469df53/src/socket_publisher/data_serializer.h#L51

    To Reproduce

    The unused variables can be found by searching for them in VS Code or any other editor.

    Expected behavior

    Not Applicable

    Screenshots or videos

    Not Applicable

    Environment

    • Hardware: PC
    • CPU: i7-12700
    • OS: Ubuntu 22.04
    • Commit id: a404f57c1999d8e29475b490e5fe26c38469df53
    bug good first issue 
    opened by mitul93 3
  • Manage previously built maps, merge maps

    Update

    I couldn't reopen the original issue, so I made this new one. I have tried both disabling the reset and setting the appropriate timestamps, but it still fails. I have created a link with two videos, their logs, and the map built from the first video that is loaded for the second video, so you can see the exact error yourself. I have implemented the code lines from the "suggested solution" in your code. The link is: https://mab.to/ogkuUgu5z.

    Let me know if you need more from me.

    Thank you so much for responding to these issues - it really makes it much more fun to work with this repository compared to other SLAM repositories :)

    Describe the bug

    You have two videos from the same building. The only difference between the videos is that the start and end are shot in different rooms of the building; the middle part of both videos covers the exact same rooms.

    In one run, a map is created and saved from one of the videos.

    Then, in another run with the other video, the map is loaded into the system. The system initializes as normal, but right after initialization it loses feature tracking. When the video reaches the middle part, the system recognizes features from the map and tracking is regained. From this point on, the tracking and mapping modules work perfectly, so even when the video reaches the different end part, it keeps tracking features and creating new keyframes.

    The value of fixing this bug

    Imagine a robot that often has to move around the same building: if it could use a previously recorded map, its navigation would be much more stable. E.g. if the light in one of the usual corridors is out, the SLAM system would normally lose tracking when the robot went through this corridor. However, if the SLAM system had access to a previous map, it could recover its location perfectly on the other side of the dark corridor.

    To Reproduce

    In the first run, with one of the videos, use SLAM.save_map_database("map_of_stella.msg") to save the map.

    In the second run, with the other video, use SLAM.load_map_database("map_of_stella.msg"); to load the map. (IMPORTANT: do NOT disable the mapping module with SLAM.disable_mapping_module();) You will then see the following error messages: "[info] tracking lost: frame xx".

    If you keep letting it run, it will continue with "[info] tracking lost within 5 sec after initialization" and the whole system will reset. This reset deletes the map you just loaded. You can deactivate the reset, but that does not solve the problem.

    Suggested solution

    I noticed a part of the problem is in the tracking_module.cc with the following lines:

      // pass all of the keyframes to the mapping module
        assert(!is_stopped_keyframe_insertion_);
        const auto keyfrms = map_db_->get_all_keyframes();
        for (const auto& keyfrm : keyfrms) {
            mapper_->queue_keyframe(keyfrm);
        }
    

    This code passes all the keyframes from the previous map to the mapping module in an unspecified order.

    Therefore, I suggest the following code, which makes sure the keyframes are processed in the correct order:

        // pass all of the keyframes to the mapping module
        assert(!is_stopped_keyframe_insertion_);
        auto keyfrms = map_db_->get_all_keyframes();
    
        std::sort(keyfrms.begin(), keyfrms.end(),
                  [&](std::shared_ptr<stella_vslam::data::keyframe>& keyfrm_1,
                      std::shared_ptr<stella_vslam::data::keyframe>& keyfrm_2) { return *keyfrm_1 > *keyfrm_2; });
    

    However, this solution is not enough to solve the problem.

    Environment

    • Hardware: PC
    • CPU: AMD Ryzen 7 5800X 8-Core Processor
    • OS: Ubuntu 22.04
    • In my case it is not necessary to process the video in real time

    enhancement 
    opened by youknowimcomingwhenyouhearmehumming 8
  • Late initialization causes beginning of the trajectory to be missing

    Describe the bug

    Sometimes the system cannot initialize from the beginning of a video if there are only a few features in the images in the first part of the video. The system then initializes later - say, at frame 50. Consequently, the trajectory between frames 0 and 50 is missing.

    Suggested solution

    This missing start of the trajectory can often be recovered by exploiting the fact that the feature criteria for initialization are stricter than those for subsequent tracking. Recovery then works as follows: at some point after initialization, the video is rewound to frame 49 and processed backwards through frames 48, 47, 46, ..., 0.

    This works, but there is a twist: the system is not able to handle frames with timestamps from the past. In my case, I use the function feed_monocular_frame(img, timestamp, mask). When I feed the earlier frames (49, 48, 47, ..., 0), I cannot use their correct timestamps; instead I have to supply timestamps that keep moving forward in time.

    The problem of using timestamps that are in the past occurs in at least two places in the code:

    • in file tracking_module.cc:

          if (curr_frm_.timestamp_ < last_reloc_frm_timestamp_ + 1.0) {
              return false;
          }

    • in file keyframe_inserter.cc:

          bool min_interval_elapsed = false;
          if (min_interval_ > 0.0) {
              min_interval_elapsed = last_inserted_keyfrm && last_inserted_keyfrm->timestamp_ + min_interval_ <= curr_frm.timestamp_;
          }
          else {
              min_interval_elapsed = true;
          }

    Environment

    • Hardware: [PC]
    • CPU: [AMD Ryzen 7 5800X 8-Core Processor]
    • OS: [Ubuntu 22.04]
    • In my case it is not necessary to process the video in real time
    enhancement 
    opened by youknowimcomingwhenyouhearmehumming 0