[3DV 2021] DSP-SLAM: Object Oriented SLAM with Deep Shape Priors

Overview

DSP-SLAM

Project Page | Video | Paper

This repository contains code for DSP-SLAM, an object-oriented SLAM system that builds a rich and accurate joint map of dense 3D models for foreground objects, and sparse landmark points to represent the background. DSP-SLAM takes as input the 3D point cloud reconstructed by a feature-based SLAM system and equips it with the ability to enhance its sparse map with dense reconstructions of detected objects. Objects are detected via semantic instance segmentation, and their shape and pose are estimated using category-specific deep shape embeddings as priors, via a novel second order optimization. Our object-aware bundle adjustment builds a pose-graph to jointly optimize camera poses, object locations and feature points. DSP-SLAM can operate at 10 frames per second on 3 different input modalities: monocular, stereo, or stereo+LiDAR.

More information and the paper can be found at our project page.

Figure of DSP-SLAM

Publication

DSP-SLAM: Object Oriented SLAM with Deep Shape Priors, Jingwen Wang, Martin Rünz, Lourdes Agapito, 3DV '21

If you find our work useful, please consider citing our paper:

@inproceedings{wang2021dspslam,
  author={Jingwen Wang and Martin Rünz and Lourdes Agapito},
  booktitle={2021 IEEE International Conference on 3D Vision (3DV)},
  title={DSP-SLAM: Object Oriented SLAM with Deep Shape Priors},
  year={2021}
}

1. Prerequisites

We have conducted most of our experiments and testing on Ubuntu 18.04 and 20.04, but it should also be possible to compile DSP-SLAM on other versions. You will also need a powerful GPU to run DSP-SLAM; we have tested with an RTX 2080 and an RTX 3080.

TL;DR

We provide two build scripts which will install all the dependencies and build DSP-SLAM for you. Jump to here for more details. If you want a more flexible installation, please read through this section carefully and refer to those two scripts as guidance.

C++17

We use a number of C++17 features, so please make sure your C++ compiler supports C++17. For g++, we have tested with g++-7, g++-8 and g++-9.

OpenCV

We use OpenCV for image-related operations. Please make sure you have at least version 3.2. We have tested with OpenCV 3.4.1.

Eigen3

We use Eigen3 for matrix operations. Please make sure your Eigen3 version is at least 3.4.0. There are known compilation errors with lower versions.

Pangolin

Pangolin is used to visualize the reconstruction results. Download and installation instructions can be found at: https://github.com/stevenlovegrove/Pangolin.

DBoW2 and g2o (included in Thirdparty folder)

We use modified versions of the DBoW2 library to perform place recognition and of the g2o library to perform non-linear optimization. Both modified libraries (BSD-licensed) are included in the Thirdparty folder.

pybind11 (included in project root directory)

As our shape reconstruction is implemented in Python, we need to enable communication between C++ and Python using pybind11. It is added as a submodule of this project; just make sure you pass the --recursive option when cloning the repository (or run git submodule update --init --recursive afterwards).

Python Dependencies

Our prior-based object reconstruction is implemented in Python with PyTorch, which also requires MaskRCNN and PointPillars for 2D and 3D detection.

  • Python3 (tested with 3.7 and 3.8) and PyTorch (tested with 1.10) with CUDA (tested with 11.3 and 10.2)
  • mmdetection and mmdetection3d
  • Others: addict, plyfile, opencv-python, open3d

Compiling and installing mmdetection3d requires nvcc, so you need to make sure the CUDA version installed via conda matches the CUDA toolkit installed under your /usr/local/cuda-*. For example, if you have CUDA 10.2 installed under /usr/local/cuda and would like to install PyTorch 1.10, you need to install the prebuilt PyTorch with CUDA 10.2:

conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
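
After installation, you can sanity-check the match from inside the environment. Below is a minimal Python sketch (an illustration, not part of the repository): torch.version.cuda reports the CUDA version PyTorch was built with, and it should agree with the nvcc under /usr/local/cuda.

import subprocess
import torch

print("PyTorch built with CUDA:", torch.version.cuda)  # e.g. "10.2"
print("GPU available:", torch.cuda.is_available())
# nvcc should report the same major.minor toolkit version:
print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)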

You can check the supported CUDA versions for precompiled packages on the PyTorch website. We have provided two example environment files, with CUDA 10.2/11.3 and PyTorch 1.10, for your reference. If you have CUDA 10.2 or CUDA 11.3 installed under /usr/local, you can use one of them to set up your Python environment:

conda env create -f environment.yml
conda activate dsp-slam

Then you will still need to install mmdetection and mmdetection3d manually. More detailed instructions can be found here.
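
Once everything is installed, a quick import check can confirm the environment is complete. This is a minimal sketch (not part of the repository); mmdet and mmdet3d are the import names of mmdetection and mmdetection3d.

import torch, mmdet, mmdet3d           # PyTorch and the 2D/3D detector packages
import addict, plyfile, cv2, open3d    # remaining Python dependencies
print("torch", torch.__version__, "| mmdet", mmdet.__version__,
      "| mmdet3d", mmdet3d.__version__)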

2. Building DSP-SLAM

Clone the repository:

git clone --recursive https://github.com/JingwenWang95/DSP-SLAM.git

Building script

For your convenience, we provide two build scripts, build_cuda102.sh and build_cuda113.sh, which show step by step how DSP-SLAM is built and which dependencies are required. These scripts will install everything for you, including CUDA (the version is specified in the script name), and assume you have a CUDA driver (supporting at least CUDA 10.2) and Anaconda installed on your computer. Select whichever suits your setup; e.g. if your GPU is an RTX 30-series card, which doesn't support CUDA 10, use the CUDA 11.3 script.

You can simply run:

./build_cuda***.sh --install-cuda --build-dependencies --create-conda-env

and it will set up all the dependencies and build DSP-SLAM for you. If you want a more flexible installation (your own CUDA and PyTorch, or building DSP-SLAM with your own versions of OpenCV, Eigen3, etc.), those scripts also provide useful guidance.

CMake options:

When building DSP-SLAM, the following CMake options are mandatory: PYTHON_LIBRARIES, PYTHON_INCLUDE_DIRS, PYTHON_EXECUTABLE. These must correspond to the Python environment where your dependencies (PyTorch, mmdetection, mmdetection3d) are installed. Make sure they are specified correctly!
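
One way to look up suitable values is to query the target interpreter itself. The snippet below is a minimal sketch (run it with the Python of your dsp-slam environment); the actual libpython file, e.g. libpython3.7m.so, lives inside the printed library directory.

import sys
import sysconfig

print("PYTHON_EXECUTABLE  :", sys.executable)
print("PYTHON_INCLUDE_DIRS:", sysconfig.get_paths()["include"])
# PYTHON_LIBRARIES should point at the libpython shared library in here:
print("library directory  :", sysconfig.get_config_var("LIBDIR"))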

Once you have set up the dependencies, you can build DSP-SLAM:

# (assume you are under DSP-SLAM project directory)
mkdir build
cd build
cmake -DPYTHON_LIBRARIES={YOUR_PYTHON_LIBRARY_PATH} \
      -DPYTHON_INCLUDE_DIRS={YOUR_PYTHON_INCLUDE_PATH} \
      -DPYTHON_EXECUTABLE={YOUR_PYTHON_EXECUTABLE_PATH} \
      ..
make -j8

After successfully building DSP-SLAM, you will find libDSP-SLAM.so in the lib folder and the executables dsp_slam and dsp_slam_mono under the project root directory.

3. Running DSP-SLAM

Dataset

You can download the example sequences and pre-trained network model weights (DeepSDF, MaskRCNN, PointPillars) from here. It contains example sequences from the KITTI, Freiburg Cars and Redwood Chairs datasets.

Run dsp_slam and dsp_slam_mono

After obtaining the two executables, you need to supply four parameters to run the program: (1) path to the vocabulary, (2) path to the .yaml config file, (3) path to the sequence data directory, and (4) path to save the map. Before running DSP-SLAM, make sure you run conda activate dsp-slam to activate the correct Python environment. Here are some example usages:

For a KITTI sequence, for example, you can run:

./dsp_slam Vocabulary/ORBvoc.bin configs/KITTI04-12.yaml data/kitti/07 map/kitti/07

For Freiburg Cars:

./dsp_slam_mono Vocabulary/ORBvoc.bin configs/freiburg_001.yaml data/freiburg/001 map/freiburg/001

For Redwood Chairs:

./dsp_slam_mono Vocabulary/ORBvoc.bin configs/redwood_09374.yaml data/redwood/09374 map/redwood/09374

Save and visualize map

If you supply a valid path to DSP-SLAM as the fourth argument, after running the program you should get three text files under that directory: Cameras.txt, MapObjects.txt and MapPoints.txt. MapObjects.txt stores each reconstructed object as a shape code and a 7-DoF pose. Before you can visualize the map, you need to extract meshes from the shape codes by running:

python extract_map_objects.py --config configs/config_kitti.json --map_dir map/07 --voxels_dim 64

This will create a new directory under map/07 and store all the meshes and object poses there. You can then visualize the reconstructed joint map by running:

python visualize_map.py --config configs/config_kitti.json --map_dir map/07

The reconstructed map will then be shown in an Open3D window:

Tips

Try python script of single-shot reconstruction first

We provide a Python script, reconstruct_frame.py, which performs 3D object reconstruction from a single frame of a KITTI sequence. Running it does not require any of the C++ components. Here is an example usage:

python reconstruct_frame.py --config configs/config_kitti.json --sequence_dir data/kitti/07 --frame_id 100

If it runs smoothly, you will see an Open3D window pop up. The figure below shows an example result:

Run DSP-SLAM with offline detector

If you can successfully build DSP-SLAM but get errors from the Python side when running the program, you can try supplying pre-stored labels and running DSP-SLAM with an offline detector. We have provided 2D and 3D labels for the KITTI sequence in the data. To run DSP-SLAM in offline mode, change the field detect_online in the .json config file to false and specify the corresponding label paths.
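
As a minimal sketch, the relevant part of the config might look as follows. Only the detect_online field is confirmed by this README; the label-path field names below are hypothetical, so check the provided config_kitti.json for the real ones.

{
    "detect_online": false,
    "path_label_2d": "data/kitti/07/labels_2d",
    "path_label_3d": "data/kitti/07/labels_3d"
}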

Label format

If you want to create your own labels with your own detectors, follow the same format as the labels we provide for the KITTI-07 sequence (a sketch of writing such files follows the list below).

  • 3D labels contain 3D detection boxes under the KITTI convention. Each .lbl file consists of a numpy array of size Nx7, where N is the number of detected objects. Each row of the array is a 3D detection box: [x, y, z, w, l, h, ry]. More information about the KITTI coordinate system can be found in mmdetection3d or on the KITTI website.
  • 2D labels contain MaskRCNN detection boxes and segmentation masks. Each .lbl file consists of a dictionary with two keys: pred_boxes and pred_masks. Boxes and masks are stored as numpy arrays of size Nx4 and NxHxW, respectively.
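
Below is a hedged Python sketch of writing labels in the described format. It assumes the .lbl files are plain pickled objects; verify this against the repo's own label-loading code before relying on it. File names and numbers are illustrative only.

import pickle
import numpy as np

# 3D label: N x 7 array, one row per detected box [x, y, z, w, l, h, ry]
boxes_3d = np.array([[5.1, 1.6, 12.3, 1.8, 4.2, 1.5, 0.12]], dtype=np.float32)
with open("000100_3d.lbl", "wb") as f:
    pickle.dump(boxes_3d, f)

# 2D label: dict with N x 4 boxes and N x H x W boolean masks
labels_2d = {
    "pred_boxes": np.array([[100.0, 150.0, 400.0, 350.0]], dtype=np.float32),
    "pred_masks": np.zeros((1, 370, 1226), dtype=bool),  # KITTI-sized masks
}
with open("000100_2d.lbl", "wb") as f:
    pickle.dump(labels_2d, f)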

Run DSP-SLAM with mono sequence

If you have problems installing mmdetection3d but can run mmdetection smoothly, you can start with the mono sequences, as they only require a 2D detector.

4. License

DSP-SLAM includes the third-party open-source software ORB-SLAM2, which itself includes third-party open-source software. Each of these components has its own license. DSP-SLAM also adapts part of its code from DeepSDF, which is under the MIT license: https://github.com/facebookresearch/DeepSDF.

DSP-SLAM is released under GPLv3 license in line with ORB-SLAM2. For a list of all code/library dependencies (and associated licenses), please see Dependencies.md.

5. Acknowledgements

Research presented here has been supported by the UCL Centre for Doctoral Training in Foundational AI under UKRI grant number EP/S021566/1. We thank Wonbong Jang and Adam Sherwood for fruitful discussions. We are also grateful to Binbin Xu and Xin Kong for their patient code testing!

Comments
  • Why can't I reconstruct objects?

    After setting up the environment and compiling the project, I ran "./dsp_slam Vocabulary/ORBvoc.bin configs/KITTI04-12.yaml data/kitti/07 map/kitti/07" and got some files, then ran "python extract_map_objects.py --config configs/config_kitti.json --map_dir map/07 --voxels_dim 64" and got some npy/ply files. But when I run "python visualize_map.py --config configs/config_kitti.json --map_dir map/07", I get lots of black points in Open3D. What should I do to generate the picture shown in the readme under "Save and visualize map"? The same happens with freiburg/001. By the way, how can I use ./dsp_slam to run freiburg/001? Thanks so much!

    opened by mundanePeo 9
  • How to prepare datasets for other objects?

    Hi,

    I have two questions about extending this framework to other objects: (1) How do I prepare the data and generate the weight and label files for other objects (trucks, signs, cyclists, pedestrians)? (2) What would be the most efficient way to run multiple detections in this case? The DSP-SLAM source is built around networks trained on cars; how efficient would it be to load weights based on the detected object class so that it works with multiple object classes?

    opened by Tariq-Abuhashim 7
  • run dsp_slam_mono but nothing happens

    Hello, I have configured the environment and compiled the DSP-SLAM project successfully, then I run dsp_slam_mono as follows. The program just hangs, without any output, error message, or exit. Could you suggest any probable causes?

    $ ./dsp_slam_mono Vocabulary/ORBvoc.bin configs/redwood_09374.yaml /media/lj/TOSHIBA/dataset/RedwoodOS/09374 map/redwood/09374
    
    opened by GetOverMassif 6
  • When running dsp_slam: "No module named 'skimage'"

    Amazing work!

    I run dsp_slam but meet this problem. I installed skimage via "sudo apt-get install python-skimage", but the error still occurs.

    (dsp-slam) ubuntu-slam@ubuntuslam-B560M-AORUS-ELITE:~/DSP-SLAM$ ./dsp_slam Vocabulary/ORBvoc.bin configs/KITTI04-12.yaml /home/ubuntu-slam/dataset/KITTI/06 map/kitti/06

    DSP-SLAM: Object Oriented SLAM with Deep Shape Priors. This program comes with ABSOLUTELY NO WARRANTY; This is free software, and you are welcome to redistribute it under certain conditions. See LICENSE.txt.

    Input sensor was set to: Stereo

    Loading ORB Vocabulary. This could take a while... Vocabulary loaded!

    terminate called after throwing an instance of 'pybind11::error_already_set'
      what(): ModuleNotFoundError: No module named 'skimage'

    At:
      /home/ubuntu-slam/DSP-SLAM/reconstruct/utils.py(23): <module>
      <frozen importlib._bootstrap>(219): _call_with_frames_removed
      <frozen importlib._bootstrap_external>(728): exec_module
      <frozen importlib._bootstrap>(677): _load_unlocked
      <frozen importlib._bootstrap>(967): _find_and_load_unlocked
      <frozen importlib._bootstrap>(983): _find_and_load

    Aborted (core dumped)

    opened by HJMGARMIN 6
  • mmdet3d error

    Hello, I ran into a problem with mmdetection3d. I don't know how to solve it and have been confused for hours.

    Traceback (most recent call last):
      File "setup.py", line 248, in <module>
        zip_safe=False)
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup
        return distutils.core.setup(**attrs)
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/dist.py", line 966, in run_commands
        self.run_command(cmd)
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/setuptools/command/develop.py", line 34, in run
        self.install_for_development()
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/setuptools/command/develop.py", line 114, in install_for_development
        self.run_command('build_ext')
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run
        _build_ext.run(self)
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
        _build_ext.build_ext.run(self)
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/command/build_ext.py", line 340, in run
        self.build_extensions()
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 735, in build_extensions
        build_ext.build_extensions(self)
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
        _build_ext.build_ext.build_extensions(self)
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
        self._build_extensions_serial()
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
        self.build_extension(ext)
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 202, in build_extension
        _build_ext.build_extension(self, ext)
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension
        depends=ext.depends)
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/ccompiler.py", line 574, in compile
        self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 483, in unix_wrap_single_compile
        cflags = unix_cuda_flags(cflags)
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 450, in unix_cuda_flags
        cflags + _get_cuda_arch_flags(cflags))
      File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1606, in _get_cuda_arch_flags
        arch_list[-1] += '+PTX'
    IndexError: list index out of range

    opened by HJMGARMIN 5
  • Stereo-only input support?

    Hi, thanks for your nice work and for sharing the code! I was quite happy when I saw that "stereo streams" are supported in the paper. I could successfully reproduce the results (qualitatively).

    However, it seems that the released code does not support stereo-only input, right? The KITTI example uses stereo and LiDAR data at the same time. Am I missing anything, or is stereo-only actually supported? Thank you!

    opened by qinyq 3
  • Cannot run ./dsp_slam_mono on the KITTI dataset

    Amazing work!
    I ran:

    ./dsp_slam_mono Vocabulary/ORBvoc.bin configs/KITTI00-02.yaml data/kitti/07 map/kitti/07
    

    and it crashed

    Start processing sequence ...
    Images in the sequence: 1101
    
    New Map created with 165 points
    New Keyframe
    /home/jzx/miniconda3/envs/dsp-slam/lib/python3.7/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  /opt/conda/conda-bld/pytorch_1634272168290/work/aten/src/ATen/native/TensorShape.cpp:2157.)
      return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
    3D detector takes 0.880130 seconds
    /home/jzx/miniconda3/envs/dsp-slam/lib/python3.7/site-packages/mmdet/datasets/utils.py:69: UserWarning: "ImageToTensor" pipeline is replaced by "DefaultFormatBundle" for batch inference. It is recommended to manually replace it in the test data pipeline in your config file.
      'data pipeline in your config file.', UserWarning)
    2D detctor takes 0.838410 seconds
    terminate called after throwing an instance of 'pybind11::error_already_set'
      what():  KeyError: 'background_rays'
    
    At:
      /home/jzx/DSP-SLAM/reconstruct/utils.py(84): __missing__
      /home/jzx/miniconda3/envs/dsp-slam/lib/python3.7/site-packages/addict/addict.py(67): __getattr__
    
    [1]    17664 abort (core dumped)  ./dsp_slam_mono Vocabulary/ORBvoc.bin configs/KITTI00-02.yaml data/kitti/07 
    
    opened by jzx-gooner 3
  • There is no response after I enter the command

    Hi, I don't know why there is no response after typing the command. Here's the command: ./dsp_slam_mono Vocabulary/ORBvoc.bin configs/freiburg_001.yaml freiburg_static_cars_52/car001 map/freiburg/001. The dataset I downloaded is the resulting data, which is 4.2 GB. Could you help me, please?

    opened by QingWind6 3
  • munmap_chunk(): invalid pointer

    Hi, thank you for this nice project. I successfully built dsp_slam with CUDA 11.1 and PyTorch 1.8.2 LTS, referring to build_cuda113.sh, but a "munmap_chunk(): invalid pointer" error occurred when I tried to run both dsp_slam and dsp_slam_mono. Do you have any ideas about this?

    opened by Taekbum 3
  • The experiment directory does not include specifications file "specs.json"

    Hi, I can't wait to follow this great work. After successfully building DSP-SLAM, I ran into a weird problem.

    DSP-SLAM: Object Oriented SLAM with Deep Shape Priors. This program comes with ABSOLUTELY NO WARRANTY; This is free software, and you are welcome to redistribute it under certain conditions. See LICENSE.txt.
    Input sensor was set to: Stereo
    Loading ORB Vocabulary. This could take a while... Vocabulary loaded!
    terminate called after throwing an instance of 'pybind11::error_already_set'
      what(): Exception: The experiment directory does not include specifications file "specs.json"

    Please tell me what's going wrong, thanks in advance.

    opened by Leonard-Yao 3
  • Segfault

    Hello, this is great work. When I run the program, a segfault occurs. I modified the paths in config_kitti.json according to the config file in your program, and everything compiles without any errors. May I ask what could be the reason?

    opened by xinzhichao 3
  • Aborted (core dumped)

    terminate called after throwing an instance of 'pybind11::error_already_set'
      what(): AttributeError: module 'skimage.measure' has no attribute 'marching_cubes_lewiner'

    At:
      /home/shu/DSP-SLAM/reconstruct/utils.py(130): convert_sdf_voxels_to_mesh
      /home/shu/DSP-SLAM/reconstruct/optimizer.py(218): extract_mesh_from_code

    Aborted (core dumped)

    Hello, the above is the error I get when running the code. The interface pops up for about 5 s first, then the program ends, sometimes even shutting down. Could you please give me some guidance?

    opened by shuzhangshu 2
  • Other categories of pre-trained DeepSDF models

    Hi, this is nice work and the reconstruction works very well. It seems that the pretrained DeepSDF model only supports cars, right? Other vehicle types, e.g. truck/bus, are not included and thus cannot be reconstructed. How do I make it support truck/bus reconstruction? Do I need to train the SDF model for these classes, e.g. following the DeepSDF training procedure? Thanks in advance.

    opened by qinyq 0
  • undefined reference to `TIFF****@LIBTIFF_4.0'

    If you meet the following problem:

    ../Thirdparty/Pangolin/build/libpango_image.so: undefined reference to `TIFFOpen@LIBTIFF_4.0'
    ../Thirdparty/Pangolin/build/libpango_image.so: undefined reference to `TIFFGetField@LIBTIFF_4.0'
    ../Thirdparty/Pangolin/build/libpango_image.so: undefined reference to `TIFFScanlineSize@LIBTIFF_4.0'
    ../Thirdparty/Pangolin/build/libpango_image.so: undefined reference to `TIFFReadScanline@LIBTIFF_4.0'
    ../Thirdparty/Pangolin/build/libpango_image.so: undefined reference to `TIFFClose@LIBTIFF_4.0'

    add "-ltiff" to target_link_libraries in DSP-SLAM/CMakeLists.txt. It may be helpful.

    opened by Hello-Water 1
  • How should I repeat your experiments?

    Thanks for your great project! Could you tell me what I should do to repeat your experiments? Sincerely awaiting your reply!

    opened by mundanePeo 0