Square Root Bundle Adjustment for Large-Scale Reconstruction

Overview

RootBA: Square Root Bundle Adjustment

Project Page | Paper | Poster | Video | Code

teaser image

Table of Contents

  • Citation
  • Dependencies
  • Building
  • Running Unit Tests
  • BAL Problems
  • Testing Bundle Adjustment
  • Batch Evaluation
  • Repository Layout
  • Code Layout
  • License

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{demmel2021rootba,
 author = {Nikolaus Demmel and Christiane Sommer and Daniel Cremers and Vladyslav Usenko},
 title = {Square Root Bundle Adjustment for Large-Scale Reconstruction},
 booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
 year = {2021}
}

Note: The initial public release in this repository corresponds to the code version evaluated in the CVPR'21 paper, after refactoring and cleanup. Except for minor numerical differences, the results should be reproducible on comparable hardware. As the code evolves, runtime differences might become larger.

Dependencies

The following describes the needed dependencies in general, followed by concrete instructions to install them on Linux or macOS.

Toolchain

  • C++17 compiler
  • CMake 3.13 or newer

Included as submodule or copy

See the external folder and the scripts/build-external.sh script.

The following libraries are submodules:

Some external libraries have their source copied directly as part of this repository, see the external/download_copied_sources.sh script:

Externally supplied

The following dependencies are expected to be supplied externally, e.g. from a system-wide install:

  • TBB

    Note: You can control the location where TBB is found by setting the environment variable TBB_ROOT, e.g. export TBB_ROOT=/opt/intel/tbb.

  • glog

  • BLAS and LAPACK routines are needed by SuiteSparse, and optionally used by Eigen and Ceres directly for some operations.

    On UNIX OSes other than macOS we recommend ATLAS, which includes BLAS and LAPACK routines. It is also possible to use OpenBLAS, but one needs to be careful to turn off its internal threading, as it conflicts with the multithreading in RootBA and Ceres. For example, export OPENBLAS_NUM_THREADS=1.

    macOS ships with an optimized LAPACK and BLAS implementation as part of the Accelerate framework. The Ceres build system will automatically detect and use it.

Python

Python dependencies are needed for scripts and tools to generate config files, run experiments, plot results, etc. For generating result tables and plots you additionally need latexmk and a LaTeX distribution.

Developer Tools

These additional dependencies are useful if you plan to work on the code:

  • ccache helps to speed up re-compilation by caching the compilation results for unchanged translation units.
  • ninja is an alternative cmake generator that parallelizes builds better than standard make.
  • clang-format version >= 10 is used for formatting C++ code.
  • clang-tidy version >= 12 is used to style-check C++ code.
  • yapf is used for formatting Python code.

There are scripts to help apply formatting and style checks to all source code files:

  • scripts/clang-format-all.sh
  • scripts/clang-tidy-all.sh
  • scripts/yapf-all.sh

Installing dependencies on Linux

Ubuntu 20.04 and newer are supported.

Note: Ubuntu 18.04 should also work, but you need to additionally install GCC 9 from the Toolchain test builds PPA.

Toolchain and libraries

# for RootBA and Ceres
sudo apt install \
    libgoogle-glog-dev \
    libgflags-dev \
    libtbb-dev \
    libatlas-base-dev \
    libsuitesparse-dev
# for Pangolin GUI
sudo apt install \
    libglew-dev \
    ffmpeg \
    libavcodec-dev \
    libavutil-dev \
    libavformat-dev \
    libswscale-dev \
    libavdevice-dev \
    libjpeg-dev \
    libpng-dev \
    libtiff5-dev \
    libopenexr-dev

To get a recent version of cmake you can easily install it from pip.

sudo apt install python3-pip
python3 -m pip install --user -U cmake

# put this in your .bashrc to ensure cmake from pip is found
export PATH="$HOME/.local/bin:$PATH"

Python (optional)

Other python dependencies (for tools and scripts) can also be installed via pip.

python3 -m pip install --user -U py_ubjson matplotlib numpy munch scipy pylatex toml

For generating result tables and plots you additionally need latexmk and a LaTeX distribution.

sudo apt install texlive-latex-extra latexmk

Developer tools (optional)

For developer tools, you can install ninja and ccache from apt:

sudo apt install ccache ninja-build

You can install yapf from pip:

python3 -m pip install --user -U yapf

You can install clang-format from apt:

Note: on Ubuntu 18.04 you need to install clang-format version 10 or newer from the llvm website.

sudo apt install clang-format

For clang-tidy you need at least version 12, so even on Ubuntu 20.04 you need to get it from the llvm website.

Installing dependencies on macOS

We support macOS 10.15 "Catalina" and newer.

Note: We have not yet tested this codebase on M1 macs.

Toolchain and libraries

Install Homebrew, then use it to install dependencies:

brew install cmake glog gflags tbb suitesparse
brew install glew ffmpeg libjpeg libpng libtiff

Python (optional)

Python dependencies (for tools and scripts) can be installed via pip after installing python 3 from homebrew.

brew install python
python3 -m pip install --user -U py_ubjson matplotlib numpy munch scipy pylatex toml

For generating result tables and plots you additionally need latexmk and a LaTeX distribution.

brew install --cask mactex

Developer tools (optional)

Developer tools can be installed with homebrew.

brew install ccache ninja clang-format clang-tidy yapf

Building

Build dependencies

./scripts/build-external.sh [BUILD_TYPE]

You can optionally pass the cmake BUILD_TYPE used to compile the third party libraries as the first argument. If you don't pass anything, the default is Release. This build script will use ccache and ninja automatically if they are found on PATH.

Note: The build-external.sh script will init, synchronize and update all submodules, so usually you don't have to worry about submodules. For example, you don't have to run git submodule update --recursive manually when the submodules were updated upstream, as long as you run the build-external.sh script. There is one small caveat, should you ever want to update a submodule yourself (e.g. update Eigen to a new version): you need to commit that change before running this script, otherwise the script will revert the submodule back to the committed version.
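For illustration, a hypothetical submodule update could look like the following sketch. The submodule path external/eigen and the tag name are assumptions; adapt them to the submodule you actually want to bump.

# hypothetical example: bump the Eigen submodule to a newer tag and commit it first
cd external/eigen
git fetch --tags
git checkout 3.4.0
cd ../..
git add external/eigen
git commit -m "Update Eigen submodule"
# only now rebuild the external dependencies
./scripts/build-external.sh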

Build RootBA option a)

Use the build script.

./scripts/build-rootba.sh [BUILD_TYPE]

You can optionally pass the cmake BUILD_TYPE used to compile RootBA as the first argument. If you don't pass anything, the default is Release. The cmake build folder is build, inside the project root. This build script will use ccache and ninja automatically if they are found on PATH.

Build RootBA option b)

Manually build with the standard cmake workflow.

mkdir build && cd build
cmake ..
make -j8

The cmake project will automatically use ccache if it is found on PATH (unless you override by manually specifying CMAKE_C_COMPILER_LAUNCHER/CMAKE_CXX_COMPILER_LAUNCHER). To use ninja instead of make, you can use:

cmake .. -G Ninja
ninja
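If you want to control the compiler launcher explicitly instead of relying on the automatic ccache detection, you can pass the launcher variables yourself. A sketch (the ccache path is an assumption; leave the values empty to disable the launcher):

cmake .. -G Ninja \
    -DCMAKE_C_COMPILER_LAUNCHER=/usr/bin/ccache \
    -DCMAKE_CXX_COMPILER_LAUNCHER=/usr/bin/ccache
ninja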

CMake Options

You can set the following options when calling cmake. For setting option OPTION to a value of VALUE, add the command line argument -DOPTION=VALUE to the cmake call above (see the example after the list).

  • ROOTBA_DEVELOPER_MODE: Presets for convenience during development. If enabled, the binaries are not placed in cmake's default location inside the build folder, but instead in the bin folder inside the source folder. Turn off if you prefer to work directly in multiple build folders at the same time. Default: ON
  • ROOTBA_ENABLE_TESTING: Build unit tests. Default: ON
  • ROOTBA_INSTANTIATIONS_DOUBLE: Instantiate templates with Scalar = double. If disabled, running with config option use_double = true will cause a runtime error. But disabling it may reduce compile times and memory consumption during compilation significantly. While developing, we recommend leaving only one of ROOTBA_INSTANTIATIONS_DOUBLE or ROOTBA_INSTANTIATIONS_FLOAT enabled, not both. Default: ON
  • ROOTBA_INSTANTIATIONS_FLOAT: Instantiate templates with Scalar = float. If disabled, running with config option use_double = false will cause a runtime error. But disabling it may reduce compile times and memory consumption during compilation significantly. While developing, we recommend leaving only one of ROOTBA_INSTANTIATIONS_DOUBLE or ROOTBA_INSTANTIATIONS_FLOAT enabled, not both. Default: ON
  • ROOTBA_INSTANTIATIONS_STATIC_LMB: Instantiate statically sized specializations for small landmark block sizes. If disabled, all sizes use the dynamically sized implementation, which, depending on the problem, might have slightly higher runtime (maybe around 10%). But disabling it might reduce compile times and memory consumption during compilation significantly. We recommend turning this off during development. Default: ON
  • BUILD_SHARED_LIBS: Build all rootba modules as shared libraries (see the cmake documentation). Default: ON
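For example, a development build that compiles only the float instantiations and skips the statically sized landmark block specializations could be configured like this (using only the options listed above):

cmake .. \
    -DROOTBA_INSTANTIATIONS_DOUBLE=OFF \
    -DROOTBA_INSTANTIATIONS_STATIC_LMB=OFF \
    -DCMAKE_BUILD_TYPE=Release

With this configuration you then need to run with use_double = false (e.g. --no-use-double), since the double instantiations are not compiled.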

Running Unit Tests

Unit tests are implemented with the GoogleTest framework and can be run with CMake's ctest command after compilation.

cd build
ctest
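Standard ctest flags work as usual, e.g. for parallel execution and more verbose output on failures:

ctest -j4 --output-on-failure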

BAL Problems

In the "Bundle Adjustment in the Large" (BAL) problem formulation, cameras are represented as world-to-camera poses and landmarks as 3D points in the world frame. Each camera has its own set of independent intrinsics, using the "Snavely projection" function with one focal length f and two radial distortion parameters k1 and k2. This is implemented in the BalProblem class. Besides the BAL format, we also implement a reader for "bundle" files, but the internal representation is the same.

Note: In our code we follow the convention that the positive z-axis points forward in camera viewing direction. Both BAL and bundle files specify the projection function assuming the negative z-axis pointing in viewing direction. We convert to our convention when reading the datasets.
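For reference, the Snavely projection maps a world point X to pixel coordinates roughly as follows. This is paraphrased from the BAL problem description (not taken from this repository) and written in BAL's original convention with the negative z-axis pointing in viewing direction:

P = R\,X + t, \qquad p = -(P_x,\, P_y)^\top / P_z, \qquad r(p) = 1 + k_1 \lVert p \rVert^2 + k_2 \lVert p \rVert^4, \qquad p' = f \cdot r(p) \cdot p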

For testing and development, two example datasets from BAL are included in the data/rootba/bal folder:

data/rootba/bal/ladybug/problem-49-7776-pre.txt
data/rootba/bal/final/problem-93-61203-pre.txt

We moreover include a download-bal-problems.sh script to conveniently download the BAL datasets. See the batch evaluation tutorial below for more details.

Additionally, we provide a mirror of BAL and some additional publicly available datasets: https://gitlab.vision.in.tum.de/rootba/rootba_data

Please refer to the README files in the corresponding folders of that repository for further details on the data source, licensing and any preprocessing we applied. Large files in that repository are stored with Git LFS. Beware that the full download including LFS objects is around 15GB.

The tutorial examples below assume that the data is found in a rootba_data folder parallel to the source folder, so if you decide to clone the data git repository, you can use:

cd ..
git clone https://gitlab.vision.in.tum.de/rootba/rootba_data.git
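If you do not want to download the full ~15GB of LFS objects right away, standard Git LFS features let you defer them. A sketch (the include pattern is an assumption about the repository layout; adjust it as needed):

# clone without downloading LFS objects, then fetch only selected files
GIT_LFS_SKIP_SMUDGE=1 git clone https://gitlab.vision.in.tum.de/rootba/rootba_data.git
cd rootba_data
git lfs pull --include="rootba/bal/ladybug/*"   # hypothetical path pattern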

Testing Bundle Adjustment

Visualization of BAL Problems

With a simple GUI application you can visualize the BAL problems, including 3D camera poses and landmark positions, as well as feature detections and landmark reprojections.

./bin/bal_gui --input data/rootba/bal/final/problem-93-61203-pre.txt

Plots

Running Bundle Adjustment

The main executable to run bundle adjustment is bal. This implements bundle adjustment in all evaluated variants and can be configured from the command line and/or a rootba_config.toml file.

There are also three additional variants, bal_qr, bal_sc and bal_ceres, which override the solver_type option accordingly. They can be useful during development, since they only link the corresponding modules and thus might have faster compile times.

For example, you can run the square root solver with default parameters on one of the included test datasets with:

./bin/bal --input data/rootba/bal/ladybug/problem-49-7776-pre.txt

This generates a ba_log.json file with per-iteration log information that can be evaluated and visualized.

Config Options

Options can be configured in a rootba_config.toml configuration file or from the command line, where the command line takes precedence.

The --help command line argument provides comprehensive documentation of available options and you can generate a config file with default values with:

./bin/bal --dump-config --config /dev/null > rootba_config.toml
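For example, a possible workflow (using only flags that appear elsewhere in this README) is to dump a config, edit it, and pass it back with --config; command line options such as --no-use-double still override values from the file:

./bin/bal --dump-config --config /dev/null > my_config.toml
# edit my_config.toml, then:
./bin/bal --config my_config.toml --no-use-double --input data/rootba/bal/ladybug/problem-49-7776-pre.txt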

For further details and a discussion of the options corresponding to the evaluated solver variants from the CVPR'21 paper see Configuration.md.

Visualization of Results

The different variants of bundle adjustment all log their progress to a ba_log.json or ba_log.ubjson file. Some basic information can be displayed with the plot-logs.py script:

./scripts/plot-logs.py ba_log.json

You can also pass multiple files, or folders, which are searched for ba_log.json and ba_log.ubjson files. In the plots, the name of the containing folder is used as a label for each ba_log.json file.

Let's run a small example and compare solver performance:

mkdir -p ../rootba_testing/qr32/
mkdir -p ../rootba_testing/sc64/
./bin/bal -C ../rootba_testing/qr32/ --no-use-double --input ../../rootba/data/rootba/bal/ladybug/problem-49-7776-pre.txt
./bin/bal -C ../rootba_testing/sc64/ --solver-type SCHUR_COMPLEMENT --input ../../rootba/data/rootba/bal/ladybug/problem-49-7776-pre.txt
./scripts/plot-logs.py ../rootba_testing/

On this small example problem both solvers converge to the same cost and are similarly fast:

Plots

Batch Evaluation

For scripts to run systematic experiments and do more sophisticated analysis of the generated log files, please follow the Batch Evaluation Tutorial.

This also includes instructions to reproduce the results presented in the CVPR'21 paper.

PDF Preview

Repository Layout

The following gives a brief overview over the layout of top-level folders in this repository.

  • bin: default destination for compiled binaries
  • build: default cmake build folder
  • ci: utilities for CI such as scripts and docker files
  • cmake: cmake utilities and find modules; note in particular SetupDependencies.cmake, which sets up cmake targets for third-party libraries
  • data: sample datasets for testing
  • docs: documentation beyond the main README, including resources such as images
  • examples: example config files
  • external: third-party libraries included as submodules or copies; also build and install folders generated by the build-external.sh scripts
  • python: Python module for plotting and generating result tables from batch experiments
  • scripts: various utility scripts for building, developing, running experiments and plotting results
  • src: the implementation, including headers, source files, and unit tests
  • test: additional tests

Code Layout

The main modules in the src folder are as follows.

Corresponding header and source files are found in the same folder with extensions .hpp and .cpp. If there are corresponding unit tests, they are found in the same folder with a .test.cpp file extension.

  • app: executables
  • rootba: libraries
    • bal: data structures for the optimization state; options; common utilities and logging
    • ceres: everything related to our implementation with Ceres
    • cg: custom CG implementation including data structures for preconditioners
    • cli: common utils for command line parsing and automatically registering options with the command line
    • options: generic options framework
    • pangolin: everything related to the GUI implementation
    • qr: custom QR solver main implementation details
    • sc: custom SC solver main implementation details
    • solver: custom Levenberg-Marquardt solver loop and interface to the QR and SC implementations
    • util: generic utilities

License

The code of the RootBA project is licensed under a BSD 3-Clause License.

Parts of the code are derived from Ceres Solver. Please also consider licenses of used third-party libraries. See ACKNOWLEDGEMENTS.

Comments
  • CANNOT add more cli parameters

    Hi, I've been trying to add another cli parameter to set a ground truth file for system benchmarking, but it doesn't work.

      VISITABLE_META(std::string, input, help("input dataset file to load"));
      VISITABLE_META(std::string, ground_truth, help("input ground_truth file to load"));
      VISITABLE_META(DatasetType, input_type,
                     init(DatasetType::AUTO).help("type of dataset to load"));
    

    It's mentioned that I have to add it to the docstring, so I also added it to Configuration.md. Still, it doesn't work. After some debugging, parsing returned false in the following code, but I don't understand why it failed.

      // parse arguments
      if (!parse(argc, argv, cli)) {
        auto executable_name = std::filesystem::path(argv[0]).filename();
        auto fmt = doc_formatting{}.doc_column(22);
        auto filter = param_filter{}.has_doc(tri::either);
        if (!application_summary.empty()) {
          std::cout << application_summary << "\n\n";
        }
    
        std::cout<<"ffffff\n";
        std::cout << "SYNOPSIS:\n"
                  << usage_lines(cli, executable_name) << "\n\n"
                  << "OPTIONS:\n"
                  << documentation(cli, fmt, filter) << '\n';
        return false;
      }
    
    

    So could you give me some hints about how to make it work?

    opened by BayRanger 7
  • Why invert the coordinates to add perturbation

    Hi Nikolaus,

    In the code file bal_problem.cpp, there is an inverse transformation when adding the perturbation, I think in order to add the noise in camera-to-world coordinates. My question is: is it necessary to do this transformation? What is the motivation for this step?

    Bests

    Reference code

    if (rotation_sigma > 0 || translation_sigma > 0) {
        for (auto& cam : cameras_) {
          // perturb camera center in world coordinates
          if (translation_sigma > 0) {
            SE3 T_w_c = cam.T_c_w.inverse();
            T_w_c.translation() += perturbation<Scalar, 3>(translation_sigma, eng);
            cam.T_c_w = T_w_c.inverse();
          }
          // local rotation perturbation in camera frame
          if (rotation_sigma > 0) {
            cam.T_c_w.so3() =
                SO3::exp(perturbation<Scalar, 3>(rotation_sigma, eng)) *
                cam.T_c_w.so3();
          }
        }
      }
    
    opened by BayRanger 2
  • QR decomposition via Givens rotation vs Householder transform?

    Hi,

    Thanks for your excellent work.

    In your work, you use the Householder transform to perform the QR decomposition, while in the MSCKF-related literature, Givens rotations are usually used.

    After some searching, I realized that they have the same level of accuracy (both are better than Gram-Schmidt orthogonalization). So why do you use the Householder transform instead of Givens rotations? Is the Householder transform really faster than Givens rotations?

    Best, Deshun

    opened by hitdshu 2
  • Some questions about the implementation and comparison to other solvers

    Hi,

    Thanks for your excellent work. I have some questions about the current implementation and the performance.

    1. rootba/ceres use the Jacobian squared sum to scale the problem data before solving. But g2o and srrg2_solver do not seem to scale the data at all. Is this scaling necessary, and to what extent?

    2. It is understood that a trust-region (dogleg) type algorithm is more efficient than the LM algorithm, as it only needs to factorize the big sparse matrix once per iteration. However, almost all solvers use LM in their default settings, and for ceres the dogleg method is actually slower than LM in my experiments. Could you please give some reasons for this?

    3. In a recent paper, https://github.com/srrg-sapienza/srrg2_solver, the authors test several solvers, and one can conclude from their experiments that ceres is very inefficient. With a single thread, rootba might have similar performance to ceres and hence might be inefficient as well. It would be great if you could provide some clues.

    Thanks again for your excellent work, like rootba/basalt, etc.

    Have a nice day, Deshun

    opened by hitdshu 2
  • Question

    Hi Nikolaus,

    Thanks for open-sourcing the code.

    Would you please explain how the rootba method differs from the DENSE_QR solver in Ceres? I presume that one also avoids the computation of the Hessian, similar to rootba, but is not scalable to large problems.

    opened by melhashash 2
  • Modifying reprojection error

    Hello, I would like to use your bundle adjustment solution with a different projection error, based on other research in my field of study. The parameters of the camera pose do not change, only the projection mapping. Is there a way to add this kind of modification? Regards, Zachi

    opened by ShtainZ 1
  • Could you help me figure out why my experiments show rootba is slower than ceres and schur complement

    I am reading your paper "Square Root Bundle Adjustment for Large-Scale Reconstruction" (CVPR 2021). Your idea of using QR decomposition instead of the traditional Schur complement is awesome. I have run your source code; the result image is shown at the end of the issue. From the picture, we can see that QR-32 (single precision QR in rootba) is slower than ceres and the Schur complement solver. I was puzzled by this. Could you help me figure it out?

    #!/usr/bin/env bash
    
    MY_EXAM_DATA_FOLDER="./rootba_testing_data_thread16"
    declare -a my_exames=("qr32" "qr64" "sc64" "sc32" "ceres")
    for i in "${my_exames[@]}"
    do
        mkdir -p $MY_EXAM_DATA_FOLDER/$i
    done
    
    DATA_ROOT_PATH=/home/shaoping/readcode/rootba/data
    ./bin/bal -C $MY_EXAM_DATA_FOLDER/qr32/ --num-threads 0 --no-debug --no-use-double --use-householder-marginalization --input "$DATA_ROOT_PATH/rootba/bal/ladybug/problem-49-7776-pre.txt"
    ./bin/bal -C $MY_EXAM_DATA_FOLDER/qr64/ --num-threads 0 --no-debug --use-double --use-householder-marginalization --input "$DATA_ROOT_PATH/rootba/bal/ladybug/problem-49-7776-pre.txt"
    ./bin/bal -C $MY_EXAM_DATA_FOLDER/sc64/ --num-threads 0 --no-debug --solver-type SCHUR_COMPLEMENT  --use-double  --input "$DATA_ROOT_PATH/rootba/bal/ladybug/problem-49-7776-pre.txt"
    ./bin/bal -C $MY_EXAM_DATA_FOLDER/sc32/ --num-threads 0 --no-debug --solver-type SCHUR_COMPLEMENT --no-use-double  --input "$DATA_ROOT_PATH/rootba/bal/ladybug/problem-49-7776-pre.txt"
    ./bin/bal -C $MY_EXAM_DATA_FOLDER/ceres/ --num-threads 0 --no-debug --solver-type CERES --use-double  --input "$DATA_ROOT_PATH/rootba/bal/ladybug/problem-49-7776-pre.txt"
    
    ./scripts/plot-logs.py $MY_EXAM_DATA_FOLDER
    

    (result image)

    opened by varyshare 8