nanoflann: a C++11 header-only library for Nearest Neighbor (NN) search with KD-trees

Overview

1. About

nanoflann is a C++11 header-only library for building KD-Trees of datasets with different topologies: R^2, R^3 (point clouds), SO(2) and SO(3) (2D and 3D rotation groups). No support for approximate NN is provided. nanoflann does not require compiling or installing. You just need to #include <nanoflann.hpp> in your code.

This library is a fork of the flann library (git) by Marius Muja and David G. Lowe, and was born as a child project of MRPT. Following the original license terms, nanoflann is distributed under the BSD license. For bugs, please use the issues button, or fork and open a pull request.

Cite as:

@misc{blanco2014nanoflann,
  title        = {nanoflann: a {C}++ header-only fork of {FLANN}, a library for Nearest Neighbor ({NN}) with KD-trees},
  author       = {Blanco, Jose Luis and Rai, Pranjal Kumar},
  howpublished = {\url{https://github.com/jlblancoc/nanoflann}},
  year         = {2014}
}

1.1. Obtaining the code

  • Easiest way: clone this Git repository and take the include/nanoflann.hpp file for use where you need it.
  • macOS users can install nanoflann with Homebrew with:
    $ brew install brewsci/science/nanoflann
    or
    $ brew tap brewsci/science
    $ brew install nanoflann
  • Linux users can install it with Linuxbrew with: brew install homebrew/science/nanoflann
  • See the list of stable releases, and check out the CHANGELOG.

Although nanoflann itself doesn't have to be compiled, you can build some examples and tests with:

sudo apt-get install build-essential cmake libgtest-dev libeigen3-dev
mkdir build && cd build && cmake ..
make && make test

1.2. C++ API reference

  • Browse the Doxygen documentation.

  • Important note: If L2 norms are used, notice that the search radius and all passed and returned distances are actually squared distances (see the code example in 1.3 below).

1.3. Code examples
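
A minimal sketch of typical usage, assuming the v1.4-era API. The PointCloud struct below is an illustrative user-side dataset adaptor, not a class shipped with the library; it exposes the three members that nanoflann expects from a dataset:

#include <nanoflann.hpp>

#include <cstdlib>
#include <vector>

// Illustrative dataset adaptor: nanoflann accesses the user's data through
// these three members instead of copying it into an internal matrix.
struct PointCloud
{
    struct Point { float x, y, z; };
    std::vector<Point> pts;

    // Returns the number of data points:
    inline size_t kdtree_get_point_count() const { return pts.size(); }

    // Returns the dim'th component of the idx'th point:
    inline float kdtree_get_pt(const size_t idx, const size_t dim) const
    {
        if (dim == 0) return pts[idx].x;
        if (dim == 1) return pts[idx].y;
        return pts[idx].z;
    }

    // Optional bounding box: returning false makes nanoflann compute it.
    template <class BBOX>
    bool kdtree_get_bbox(BBOX& /*bb*/) const { return false; }
};

int main()
{
    // A random cloud of 1000 points in the unit cube:
    PointCloud cloud;
    for (int i = 0; i < 1000; i++)
        cloud.pts.push_back({rand() / float(RAND_MAX), rand() / float(RAND_MAX),
                             rand() / float(RAND_MAX)});

    using kd_tree_t = nanoflann::KDTreeSingleIndexAdaptor<
        nanoflann::L2_Simple_Adaptor<float, PointCloud>, PointCloud,
        3 /* dimensionality */, size_t /* index type */>;

    // The constructor builds the index (see the SkipInitialBuildIndex flag
    // in v1.4.3+ to defer this):
    kd_tree_t index(3, cloud, nanoflann::KDTreeSingleIndexAdaptorParams(10 /* leaf_max_size */));

    // 1-NN query. Note: with an L2 metric the returned distance is squared.
    const float query[3] = {0.5f, 0.5f, 0.5f};
    size_t ret_index    = 0;
    float  out_dist_sqr = 0;
    nanoflann::KNNResultSet<float> resultSet(1);
    resultSet.init(&ret_index, &out_dist_sqr);
    index.findNeighbors(resultSet, &query[0], nanoflann::SearchParams());
}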

1.4. Why a fork?

  • Execution time efficiency:

    • The power of the original flann library comes from the possibility of choosing between different ANN algorithms. The cost of this flexibility is the declaration of pure virtual methods which (in some circumstances) impose run-time penalties. In nanoflann all those virtual methods have been replaced by a combination of the Curiously Recurring Template Pattern (CRTP) and inlined methods, which are much faster.
    • For radiusSearch(), there is no need to make one call to determine the number of points within the radius and then a second call to get the data: by using STL containers for the output, the container is resized automatically (see the radius-search sketch below).
    • Users can (optionally) set the problem dimensionality at compile-time via a template argument, thus allowing the compiler to fully unroll loops.
    • nanoflann allows users to provide a precomputed bounding box of the data, if available, to avoid recomputation.
    • Indices of data points have been converted from int to size_t, which removes a limit when handling very large data sets.
  • Memory efficiency: Instead of making a copy of the entire dataset into a custom flann-like matrix before building a KD-tree index, nanoflann allows direct access to your data via an adaptor interface which must be implemented in your class.

Refer to the examples below or to the C++ API of nanoflann::KDTreeSingleIndexAdaptor<> for more info.
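
For instance, continuing the sketch from section 1.3 (again assuming the v1.4-era API), the radius-search sketch referenced above is a single call; note that for L2 metrics the radius argument is itself a squared distance:

// Continuing the 1.3 sketch: all neighbors within a radius of 0.2 of `query`.
// The results vector is resized by the call; no separate count pass is needed.
std::vector<std::pair<size_t, float>> matches;
const float radius = 0.2f;
const size_t nMatches = index.radiusSearch(
    &query[0], radius * radius /* squared for L2! */, matches,
    nanoflann::SearchParams());
// matches[i].first = point index, matches[i].second = squared distance.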

1.5. What can nanoflann do?

  • Building KD-trees with a single index (no randomized KD-trees, no approximate searches).
  • Fast, thread-safe querying for closest neighbors on KD-trees. The entry points are KDTreeSingleIndexAdaptor<>::knnSearch(), radiusSearch() and findNeighbors().
  • Working with 2D and 3D point clouds or N-dimensional data sets.
  • Working directly with Eigen::Matrix<> classes (matrices and vectors-of-vectors); see the Eigen sketch after this list.
  • Working with dynamic point clouds without the need to rebuild the entire kd-tree index.
  • Working with the distance metrics:
    • R^N: Euclidean spaces:
      • L1 (Manhattan)
      • L2 (squared Euclidean norm, favoring SSE2 optimization).
      • L2_Simple (squared Euclidean norm, for low-dimensionality data sets like point clouds).
    • SO(2): 2D rotational group
      • metric_SO2: Absolute angular difference.
    • SO(3): 3D rotational group (better support to be provided in future releases)
      • metric_SO3: Inner product between quaternions.
  • Save and load the built indices to disk.
  • GUI-based support for benchmarking multiple kd-tree libraries, namely nanoflann, flann, fastann and libkdtree.
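
As an illustration of the Eigen support mentioned above, a minimal sketch assuming the v1.4-era KDTreeEigenMatrixAdaptor (each matrix row is a point, each column a dimension):

#include <nanoflann.hpp>

#include <Eigen/Dense>
#include <functional>  // std::cref

int main()
{
    using matrix_t  = Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic>;
    using kd_tree_t = nanoflann::KDTreeEigenMatrixAdaptor<matrix_t>;

    const matrix_t mat = matrix_t::Random(1000, 3);  // 1000 random 3D points

    // The adaptor stores a std::cref() to the matrix and builds the index:
    kd_tree_t index(3 /* dims */, std::cref(mat), 10 /* leaf_max_size */);

    const double query[3] = {0.0, 0.0, 0.0};
    Eigen::Index nearest_idx      = 0;  // row of the nearest point
    double       nearest_dist_sqr = 0;  // squared distance (L2 metric)
    index.query(&query[0], 1, &nearest_idx, &nearest_dist_sqr);
}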

1.6. What can't nanoflann do?

  • Use other distance metrics apart from L1, L2, SO2 and SO3.
  • Support for SE(3) groups.
  • Only the C++ interface exists: there is no support for C, MATLAB or Python.
  • There is no automatic algorithm configuration (as described in Muja & Lowe's original paper).

1.7. Use in your project via CMake

You can directly drop the nanoflann.hpp file in your project. Alternatively, the CMake standard method is also available:

  • Build and "install" nanoflann. Set CMAKE_INSTALL_PREFIX to a proper path and then execute make install (Linux, OSX) or build the INSTALL target (Visual Studio).
  • Then, add something like this to the CMake script of your project:
# Find nanoflannConfig.cmake:
find_package(nanoflann)

add_executable(my_project test.cpp)

# Make sure the include path is used:
target_link_libraries(my_project nanoflann::nanoflann)

2. Any help choosing the KD-tree parameters?

2.1. KDTreeSingleIndexAdaptorParams::leaf_max_size

A KD-tree is... well, a tree :-). And as such it has a root node, a set of intermediary nodes and finally, "leaf" nodes which are those without children.

Points (or, properly, point indices) are only stored in leaf nodes. Each leaf contains a list of which points fall within its range.

While building the tree, nodes are recursively divided until the number of points inside is equal to or below some threshold: that is leaf_max_size. While doing queries, the "tree algorithm" ends by selecting leaf nodes, then performing a linear search (one-by-one) for the closest point to the query among all those in the leaf.

So, leaf_max_size must be set as a tradeoff:

  • Large values mean that the tree will be built faster (since the tree will be smaller), but each query will be slower (since the linear search in the leaf is to be done over more points).
  • Small values will build the tree much slower (there will be many tree nodes), but queries will be faster... up to some point, since the "tree-part" of the search (logarithmic complexity) still has a significant cost.

What number to select really depends on the application and even on the size of the processor cache memory, so ideally you should do some benchmarking to maximize efficiency.

But as a rule of thumb to help choose a good value, here are two benchmarks. Each graph represents the tree build (horizontal) and query (vertical) times for different leaf_max_size values between 1 and 10K (as 95% uncertainty ellipses, deformed due to the logarithmic scale).

  • A 100K point cloud, uniformly distributed (each point has (x,y,z) float coordinates):

[figure: perf5_1e5pts_time_vs_maxleaf]

  • A ~150K point cloud from a real dataset (scan_071_points.dat from the Freiburg Campus 360 dataset, each point has (x,y,z) float coordinates):

[figure: perf5_1e5pts_time_vs_maxleaf_real_dataset]

So, it seems that a leaf_max_size between 10 and 50 would be optimal in applications where the cost of queries dominates (e.g. ICP). At present, its default value is 10.
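
For reference, leaf_max_size is the value passed through KDTreeSingleIndexAdaptorParams in the index constructor, e.g. reusing the sketch from section 1.3:

// 10 is the current default; values in the 10-50 range are a good starting
// point when query time dominates, but benchmark on your own data.
kd_tree_t index(3, cloud, nanoflann::KDTreeSingleIndexAdaptorParams(10 /* leaf_max_size */));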

2.2. KDTreeSingleIndexAdaptorParams::checks

This parameter is really ignored in nanoflann, but was kept for backward compatibility with the original FLANN interface. Just ignore it.


3. Performance

3.1. nanoflann: faster and less memory usage

Refer to the "Why a fork?" section above for the main optimization ideas behind nanoflann.

Notice that there are no explicit SSE2/SSE3 optimizations in nanoflann, but the intensive use of inlining and templates in practice lets the compiler automatically generate SSE-optimized code.

3.2. Benchmark: original flann vs nanoflann

The most time-consuming part of many point cloud algorithms (like ICP) is querying a KD-Tree for nearest neighbors. This operation is therefore the most time critical.

nanoflann provides a ~50% time saving with respect to the original flann implementation (times in this chart are in microseconds for each query):

[figure: perf3_query]

Although most of the gain comes from the queries (due to the large number of them in any typical operation with point clouds), there is also some time saved while building the KD-tree index, due to the templatized code and the avoidance of duplicating the data in an auxiliary matrix (times in the next chart are in milliseconds):

[figure: perf4_time_saved]

These performance tests are only representative of our testing. If you want to repeat them, read the instructions in perf-tests.


4. Other KD-tree projects

  • FLANN - Marius Muja and David G. Lowe (University of British Columbia).
  • FASTANN - James Philbin (VGG, University of Oxford).
  • ANN - David M. Mount and Sunil Arya (University of Maryland).
  • libkdtree++ - Martin F. Krafft & others.

Note: The project logo is courtesy of CedarSeed.

Comments
  • 1.4.0 Installs pkgconfig and cmake files into wrong locations

    Before they were installed into:

    lib/cmake/nanoflann/nanoflannConfig.cmake
    lib/cmake/nanoflann/nanoflannConfigVersion.cmake
    lib/cmake/nanoflann/nanoflannTargets.cmake
    libdata/pkgconfig/nanoflann.pc

    Now they are installed into:

    share/cmake/nanoflann/nanoflannConfig.cmake
    share/cmake/nanoflann/nanoflannConfigVersion.cmake
    share/cmake/nanoflann/nanoflannTargets.cmake
    share/pkgconfig/nanoflann.pc

    pkg-config doesn't find nanoflann. The cmake files that nanoflann installs into share/cmake are the only application-specific cmake files there, so this appears to be a wrong location too.
    opened by yurivict 13
  • KDTreeSingleIndexDynamicAdaptor is much slower than KDTreeSingleIndexAdaptor

    Hi, I'm using nanoflann to build a class called KdTreeFLANN as a replacement for pcl::KdTreeFLANN. It is almost the same as the approach in https://github.com/laboshinl/loam_velodyne. It is based on nanoflann::KDTreeSingleIndexAdaptor, so it can't add/remove points dynamically. Then I noticed nanoflann::KDTreeSingleIndexDynamicAdaptor, which supports adding/removing points, so I changed my class to add points dynamically via nanoflann::KDTreeSingleIndexDynamicAdaptor. However, I found that it consumes about 2.5x the time compared with the version before the change, on the same point cloud data. Is this normal, or am I missing something or using anything wrongly?

    opened by getupgetup 13
  • flann submodule

    Hi!

    It seems like flann was recently added as a submodule to this repo. It's not used in the nanoflann.hpp header, but I found some uses of flann headers in the benchmark folders. As a user of just the nanoflann header, included via submodule in my own repo, it's quite annoying that with every clone, the flann submodule is fetched through nanoflann, even more so since it's completely unnecessary for the nanoflann library. I know that repositories can be cloned without --recursive, so that transitive submodules are not fetched, but unfortunately this is very often not an option, as other repos actually do need the submodules that they include.

    I would like to propose moving the benchmark (or whichever part uses flann) to a different repo (it can be linked in the README.md here), so that users including the nanoflann repo in their own repos via submodule aren't fetching the flann code too. After all, the main reason to use nanoflann is not to have to use flann. For me, this is currently a reason not to upgrade nanoflann.

    opened by patrikhuber 12
  • Fails to compile with apple clang

    It seems clang doesn't like variadic args after default values.

    In file included from /Users/ajx/Repos/kmeans/build/nanoflann/examples/pointcloud_custom_metric.cpp:32:
    /Users/ajx/Repos/kmeans/build/nanoflann/include/nanoflann.hpp:1344:70: error: missing default argument on parameter 'args'
            const KDTreeSingleIndexAdaptorParams& params = {}, Args&&... args)
                                                                         ^
    /Users/ajx/Repos/kmeans/build/nanoflann/examples/pointcloud_custom_metric.cpp:102:18: note: in instantiation of function template specialization
          'nanoflann::KDTreeSingleIndexAdaptor<My_Custom_Metric_Adaptor<double, PointCloud<double>, double, unsigned int>, PointCloud<double>, 3,
          unsigned int>::KDTreeSingleIndexAdaptor<const double &>' requested here
        my_kd_tree_t index(3 /*dim*/, cloud, {10 /* max leaf */}, myMetricParam);
    

    I'm using

    clang++ --version                                                                                                                          (base) 
    Apple clang version 12.0.0 (clang-1200.0.32.29)
    Target: x86_64-apple-darwin19.6.0
    Thread model: posix
    InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
    
    bug 
    opened by alecjacobson 6
  • Segmentation fault with duplicate points

    I recently encountered issues when trying to incorporate this library into an SPH code, and in trying to debug (the reason for the debugging is also dubious: I realised I had to search for the square of the radius I was interested in), I created some duplicate points. When I try to perform a radius search on one of those points, the library gives a segmentation fault. While it is perhaps bad to have duplicate points in the space, it is possible in SPH simulations for points to overlap (normally when the simulation goes berserk), and I don't feel this is robust handling of what is likely a common scenario. Surely the library should return the duplicate as any other neighbour, with a distance of 0.0. I am using the KDTreeVectorOfVectorsAdaptor for a std::vector<Eigen::Vector2d> data structure, should that have any influence.

    opened by jammy4536 6
  • bug report: memory corruption

    In the constructor of KDTreeSingleIndexAdaptor, you expect the inputData lifetime to be longer than that of the adaptor itself, but that is not always true, and a user of the code can easily break it.

        KDTreeSingleIndexAdaptor(
            const Dimension dimensionality, const DatasetAdaptor& inputData,
            const KDTreeSingleIndexAdaptorParams& params = {}, Args&&... args)
            : dataset(inputData),
              index_params(params),
              distance(inputData, std::forward<Args>(args)...)
        {
    

    For example, calling this would cause undefined behaviour:
    ModelKDTreeIndex index(2, std::vector {}, nanoflann::KDTreeSingleIndexAdaptorParams(10));

    opened by AndreyMlashkin 5
  • Storage container within RadiusResultSet

    Would it be possible to replace the m_indices_dists container within RadiusResultSet of type std::vector<std::pair<IndexType, DistanceType>> with something that interoperates better with C? Currently, any attempt to use nanoflann from a language other than C++ will probably have to make a copy of the results.

    The KNNResultSet is much easier to interoperate with since it simply uses two arrays as storage, i.e.

        IndexType*    indices;
        DistanceType* dists;
    

    One can easily pass a contiguous array from C, Python, or Fortran.

    I understand the motivation for using std::vector: in the radius search we generally don't know the number of points that will be found in a query, hence the need for a dynamic container. One simple solution would be to replace std::pair with a struct:

    struct { 
    IndexType idx; 
    DistanceType dist;
    };
    

    Then we can recover a C-interoperable array of structs via m_indices_dists.data().

    enhancement 
    opened by ivan-pi 5
  • Several fixes for IndexType which are not of integral types

    Hi,

    I have attempted to use nanoflann for data structures where access to the elements is not provided via indices of integral types, or is more efficiently provided by other accessors. While doing so I found certain issues, most of which arose from mixing up the type of the values stored in KDTreeBaseClass::vind and the type with which vind is accessed.

    In many places this worked fine, as the values stored in vind were typically of integral types as well, since they are used to denote indices into an array or a similar data structure. However, with these fixes nanoflann can be used with data structures that provide access to their elements using other types of accessors (such as pointers, for example).

    Here is a quick summary of the changes:

    • The metric adaptors (L1_Adaptor, L2_Adaptor, ...) did not have a template parameter for IndexType. They expected b_idx to be of type size_t, implicitly casting to size_t where possible.
    • In several places, IndexType was used to store the location in vind at which the IndexType value is stored, whereas the argument type of std::vector<IndexType>::operator[] has nothing to do with IndexType itself. E.g. KDTreeBaseClass::Node::node_type, KDTreeBaseClass::divideTree.
    • Since the argument of std::vector<IndexType>::operator[] is not a template parameter, uint64_t was used.
    • As the IndexType is not necessarily an index, I renamed it to AccessorType.

    I built and ran all tests as well as the examples (all build targets), each of which passed.

    I hope you find these changes useful and will merge them into your repository. If there is anything that I should change, don't hesitate to let me know.

    opened by dav1d-wright 5
  • Replace M_PI with a constexpr equivalent

    This pull request addresses issue #96.

    When working with MSVC, _USE_MATH_DEFINES must be defined before the first inclusion of <cmath> for M_PI to be available for use. This requirement is problematic for projects that use <cmath> as the first inclusion of <cmath> isn't always obvious or easy to track down and defining _USE_MATH_DEFINES globally leads to redefinition warnings. This commit addresses the issue by replacing all uses of M_PI with a constexpr value for pi defined in the nanoflann namespace.

    opened by cmorrison31 5
  • Unused private field

    Compiling with clang I get the following warning (translated to an error by -Werror):

    nanoflann.hpp:468:11: error: private field 'blocksize' is not used [-Werror,-Wunused-private-field]
                     size_t  blocksize;
                             ^
    
    opened by vidstige 5
  • implicit signed/unsigned conversion and truncations

    The code is littered with problematic conversions from unsigned to signed integers, and truncations, because size_t is 64 bits wide and int isn't.

    Some of them are issues due to the fact that the template parameter IndexType is not used where it ought to be; this is relatively easy to fix (await pull request).

    But a big problem here is that the dimension is of type int instead of type IndexType, so if you set IndexType to size_t and have an int dimension you will get buried in hundreds of warnings. A solution would be to change the template argument order and make the dim parameter of type IndexType, but this is a serious interface change and needs discussion.

    Any ideas how to fix this?

    opened by pizzard 5
  • Is there a way to completely remove a point?

    Hello, I wonder if there is a way to completely remove a point instead of just recording its index as "-1"; otherwise the memory usage will increase over time.

    opened by Liming-Cheng 0
  • Support for range-based search?

    Does nanoflann have any facilities for finding all points within a box? To give a 2D example, we may be looking to list all $(x,y)$ points such that $x \in [x_1, x_2]$ and $y \in [y_1, y_2]$.

    Motivation: If my understanding of $k$-d trees is correct, the natural way to search is precisely in a box. My application requires listing all points within a given region of a non-trivial shape. I can determine the bounding box of this region, so one approach is to list all points within the box, then test each of these points for inclusion within the region. With radiusSearch, I can implement the same approach, but use a bounding disk instead of a bounding box. However, if the underlying mechanism operates with a bounding box anyway, I am wondering if using boxes would be faster. My shape tends to be long and narrow, and when it happens to be aligned with the axes, the area of its bounding box will be much smaller than that of its bounding disk. This translates into having to test fewer points.

    opened by szhorvat 1
  • Inconsistent output of neighbor search

    Hi, thanks for the great work. I encountered a potential bug within the lib. I want to filter my point cloud based on how many neighbors a point has. For this I first used the PCL KdTreeFLANN implementation (see the flann_lib_ == 0 branch). For comparison I wanted to try your implementation (see the flann_lib_ == 1 branch). To get one thing out of the way first: I think I am right in squaring the radius for your library, in contrast to the PCL implementation?

    if(flann_lib_ == 0){
        // init. kd search tree
        KdTreePtr kd_tree_(new pcl::KdTreeFLANN<pcl::PointXYZI>());
        kd_tree_->setInputCloud(input_cloud);
        // Go over all the points and check which doesn't have enough neighbors
        // perform filtering
        for (pcl::PointCloud<pcl::PointXYZI>::iterator it = input_cloud->begin();
            it != input_cloud->end(); ++it) {
          float x_i = it->x;
          float y_i = it->y;
          double intes = it->intensity;
          float range_i = sqrt(pow(x_i, 2) + pow(y_i, 2));
          float search_radius_dynamic =
              radius_multiplier_ * azimuth_angle_ * 3.14159265359 / 180 * range_i;
              
          if (search_radius_dynamic < min_search_radius_) {
            search_radius_dynamic = min_search_radius_;
          }
    
          std::vector<int> pointIdxRadiusSearch;
          std::vector<float> pointRadiusSquaredDistance;
          if (intensity_ && intes > th_intensity_){
            filtered_cloud.push_back(*it);
          }else{
            int neighbors =
              kd_tree_->radiusSearch(*it, search_radius_dynamic, pointIdxRadiusSearch,
                                    pointRadiusSquaredDistance,min_neighbors_);
            if (neighbors >= min_neighbors_) { filtered_cloud.push_back(*it);}
          }
        }
      }
      else if(flann_lib_ == 1) {
        // init. kd search tree
        nanoflann::KdTreeFLANN<pcl::PointXYZI> kd_tree_flann_;
        kd_tree_flann_.setInputCloud(input_cloud);
    
        for (pcl::PointCloud<pcl::PointXYZI>::iterator it = input_cloud->begin();
            it != input_cloud->end(); ++it) {
          float x_i = it->x;
          float y_i = it->y;
          double intes = it->intensity;
          float range_i = sqrt(pow(x_i, 2) + pow(y_i, 2));
          float search_radius_dynamic =
              radius_multiplier_ * azimuth_angle_ * 3.14159265359 / 180 * range_i;
              
          if (search_radius_dynamic < min_search_radius_) {
            search_radius_dynamic = min_search_radius_;
          }
    
          std::vector<int> pointIdxRadiusSearch;
          std::vector<float> pointRadiusSquaredDistance;
          if (intensity_ && intes > th_intensity_){
            filtered_cloud.push_back(*it);
          }else{
            int neighbors =
              kd_tree_flann_.radiusSearch(*it, pow(search_radius_dynamic,2), pointIdxRadiusSearch,
                                    pointRadiusSquaredDistance);
            if (neighbors >= min_neighbors_) { filtered_cloud.push_back(*it);}
          }
        }
      }
    

    For both implementations I made a short video of a visualization of the output point cloud via Rviz. As you might have noticed, I have an intensity filter which bypasses the radius search; that's why I colored the point cloud based on intensity. Yellow points are passed through by the intensity filter, and red points are the points which don't have enough intensity and are filtered by the radius search. The PCL implementation shows a consistent point cloud output: https://www.youtube.com/watch?v=a8EuP8oOWDg

    With the nanoflann implementation the red points are "flickering": in some point cloud outputs they are included, in some not. The flickering is not even consistent, and there are some strange "sharp" edges which look as if a dimension filter were applied. The function returns 0 found neighbors in the flickering cases. https://youtu.be/AOaBHOvWsnI

    I actually don't have an idea what could cause this behavior; maybe you do. I first saw this on the commit from April, then updated the code to the current commit, and it happens in both cases.

    Greets, Sven

    opened by wienans 1
  • Parallel KD-Tree construction

    Hey thanks for the great project! I am wondering if there is any easy way (or forked project) to enable parallelism for KD-Tree construction with nanoflann? Or do you have plans to integrate this feature in the future?

    opened by B1ueber2y 1
  • Can you explain the addPoint function?

    inline bool addPoint(DistanceType dist, IndexType index)
    {
        CountType i;
        for (i = count; i > 0; --i)
        {
#ifdef NANOFLANN_FIRST_MATCH
            // If defined and two points have the same distance, the one with
            // the lowest index will be returned first.
            if ((dists[i - 1] > dist) ||
                ((dist == dists[i - 1]) && (indices[i - 1] > index)))
            {
#else
            if (dists[i - 1] > dist)
            {
#endif
                if (i < capacity)
                {
                    dists[i]   = dists[i - 1];
                    indices[i] = indices[i - 1];
                }
            }
            else
                break;
        }
        if (i < capacity)
        {
            dists[i]   = dist;
            indices[i] = index;
        }
        if (count < capacity) count++;

        // tell caller that the search shall continue
        return false;
        // return true;
    }


    If I make it return false, the time cost is reduced but the results are poor; if I make it return true, the result quality goes up but the time cost increases.

    Can you explain why?

    opened by leechangyo 0
Releases(v1.4.3)
  • v1.4.3(Jul 23, 2022)

    nanoflann 1.4.3: Released Jul 24, 2022

    • Added flag SkipInitialBuildIndex to allow not wasting time building a tree when it will be loaded from a file later on (PR #171).
    • Mark all constructors explicit, to avoid unintended creation of temporary objects (Issue #179).
    • BUGFIX: avoid potential index out of bounds in KDTreeSingleIndexDynamicAdaptor (PR #173)
    Source code(tar.gz)
    Source code(zip)
  • v1.4.2(Jan 11, 2022)

  • v1.4.1(Jan 6, 2022)

    nanoflann 1.4.1: Released Jan 6, 2022

    • Fix incorrect install directory for cmake target & config files.
    • Do not install example binaries with make install.
    • Provide working examples for cmake and pkgconfig under examples/example_* directories.
    Source code(tar.gz)
    Source code(zip)
  • v1.4.0(Jan 2, 2022)

    nanoflann 1.4.0: Released Jan 2, 2022

    • nanoflann::KDTreeSingleIndexAdaptor() ctor now forwards additional parameters to the metric class, enabling custom dynamic metrics.
    • Add and apply a .clang-format file (the same one as used in the MOLAorg/MOLA projects).
    • Examples: clean-up and code modernization.
    • CMake variables are now prefixed with NANOFLANN_ for easier integration of nanoflann as a Git submodule.
    • Fixes for IndexType which are not of integral types PR #154
    • save/load API upgraded from C FILE* to C++ file streams (By Dominic Kempf, Heidelberg University, PR).
    Source code(tar.gz)
    Source code(zip)
  • v1.3.2(Nov 5, 2020)

  • v1.3.1(Oct 11, 2019)

    nanoflann 1.3.1: Released Oct 11, 2019

    • Fixed bug in KDTreeSingleIndexDynamicAdaptor. See: https://github.com/jlblancoc/nanoflann/commit/a066148517d16c173954dcde13c1527481b9fad3
    • Fix build in XCode.
    • Simplify CMakeLists for Eigen example (requires Eigen3Config.cmake now)
    • Avoid setting cmake global executable build path
    Source code(tar.gz)
    Source code(zip)
  • v1.3.0(Aug 28, 2018)

    Changelog:

    • Instructions for make install for Linux and Windows (Closes #87).
    • Fix all (?) MSVC conversion warnings (Closes: #95).
    • Avoid need for _USE_MATH_DEFINES in MSVC (Closes: #96)
    • Eigen::Matrix datasets: now uses std::cref() to store a reference to the matrix.
    • GSOC2017 contributions by Pranjal Kumar Rai:
      • Support for dynamic datasets.
      • Support for non-Euclidean spaces: SO(2), SO(3)
    Source code(tar.gz)
    Source code(zip)
  • v1.2.3(Dec 20, 2016)

    nanoflann 1.2.3: Released Dec 20, 2016

    • Fixed: the split plane now correctly chooses the dimension with the largest span. Should lead to more optimal trees.
    Source code(tar.gz)
    Source code(zip)
  • v1.2.2(Nov 10, 2016)

  • v1.2.1(Jun 2, 2016)

    nanoflann 1.2.1: Released Jun 1, 2016

    • Fix potential compiler warnings if IndexType is signed.
    • New unit tests comparing the results to those of brute force search.
    Source code(tar.gz)
    Source code(zip)
  • v1.2.0(May 16, 2016)

    Changes:

    • Fixed potential crashes (and a minor performance optimization): many class constructors took const ref arguments but stored const values.
    Source code(tar.gz)
    Source code(zip)
  • v1.1.9(Oct 2, 2015)

    nanoflann 1.1.9: Released Oct 2, 2015

    Changes:

    • Added KDTreeSingleIndexAdaptor::radiusSearchCustomCallback() (Based on a suggestion by Yannick Morin-Rivest)
    • Better documentation in class headers.
    • Cleanup of unused code.
    • Parameter KDTreeSingleIndexAdaptorParams::dim has been removed since it was redundant.
    Source code(tar.gz)
    Source code(zip)
  • v1.1.8(May 1, 2014)

    nanoflann 1.1.8: Released May 2, 2014

    • Created hidden constructors in the nanoflann class, to disallow unintentional copies which would corrupt the internal pointers.
    • Fixed crash if trying to build an index of an empty dataset.
    Source code(tar.gz)
    Source code(zip)