libnabo 1.1.2

from http://github.com/ethz-asl/libnabo by Stéphane Magnenat (http://stephane.magnenat.net), ASL-ETHZ, Switzerland (http://www.asl.ethz.ch)

libnabo is a fast K Nearest Neighbour library for low-dimensional spaces. It provides a clean, legacy-free, scalar-type–agnostic API thanks to C++ templates. Its current CPU implementation is strongly inspired by ANN, but with more compact data types. On average, libnabo is 5% to 20% faster than ANN.

libnabo depends on Eigen, a modern C++ matrix and linear-algebra library. libnabo works with either version 2 or 3 of Eigen.

Compilation

libnabo uses CMake as its build system. The exact compilation process depends on the system you are using (Linux, Mac OS X or Windows). You will find a nice introductory tutorial in this YouTube video: http://www.youtube.com/watch?v=CLvZTyji_Uw.

Prerequisites

If your operating system does not provide it, you must get Eigen. Eigen only needs to be downloaded and extracted.

Compilation options

libnabo provides the following compilation options, available through CMake:

  • SHARED_LIBS (boolean, default: false): if true, build a shared library, otherwise build a static library

You can set them with the command-line tool ccmake, with the graphical tool cmake-gui, or by passing them directly to cmake (for example, -DSHARED_LIBS=true). Please read the CMake documentation for more information.

Quick compilation and installation under Unix

Under Unix, assuming that Eigen is installed system-wide, you can compile (with optimisation and debug information) and install libnabo in /usr/local with the following commands run in the top-level directory of libnabo's sources:

SRC_DIR=`pwd`
BUILD_DIR=${SRC_DIR}/build
mkdir -p ${BUILD_DIR} && cd ${BUILD_DIR}
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ${SRC_DIR}
# if Eigen is not available system-wide, run at that point:
# cmake-gui .
# cmake-gui lets you point CMake to the location of Eigen
make
sudo make install

These lines will compile libnabo in a build sub-directory, keeping your source tree clean. Note that you could compile libnabo anywhere you have write access, such as in /tmp/libnabo; this out-of-source build is a nice feature of CMake. If Eigen is not installed system-wide, you might have to tell CMake where to find it (using ccmake or cmake-gui).

You can generate the documentation by typing:

make doc

Usage

libnabo is easy to use. For example, assuming that you are working with floats and that you have a point set M and a query point q, you can find the K nearest neighbours of q in M:

// This example is in the public domain
#include "nabo/nabo.h"

int main()
{
    using namespace Nabo;
    using namespace Eigen;

    // 100 points in 3D
    MatrixXf M = MatrixXf::Random(3, 100);
    // 1 query point
    VectorXf q = VectorXf::Random(3);

    // create a kd-tree for M; note that M must stay valid during the lifetime of the kd-tree
    NNSearchF* nns = NNSearchF::createKDTreeLinearHeap(M);

    // look for the 5 nearest neighbours of the single-point query
    const int K = 5;
    VectorXi indices(K);
    VectorXf dists2(K);
    nns->knn(q, indices, dists2, K);

    // cleanup kd-tree
    delete nns;
    return 0;
}

In this example, M is an Eigen matrix (column major, float) and q is an Eigen vector (float); "Eigen" here refers to the library, not to the mathematical concept. The results indices and dists2 are Eigen vectors of indices and squared distances referring to the columns of M.
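
To make this mapping concrete, here is a minimal sketch (an illustration only, not part of libnabo's distribution) that reads the nearest neighbour back from M:

// Minimal sketch: each entry of indices is a column index into M,
// and the corresponding entry of dists2 is a squared Euclidean distance.
#include "nabo/nabo.h"
#include <iostream>

int main()
{
    using namespace Nabo;
    using namespace Eigen;

    MatrixXf M = MatrixXf::Random(3, 100);
    VectorXf q = VectorXf::Random(3);
    NNSearchF* nns = NNSearchF::createKDTreeLinearHeap(M);

    VectorXi indices(1);
    VectorXf dists2(1);
    nns->knn(q, indices, dists2, 1);

    // The nearest neighbour of q is the column of M selected by indices(0).
    std::cout << "nearest neighbour:\n" << M.col(indices(0))
              << "\nsquared distance: " << dists2(0) << std::endl;

    delete nns;
    return 0;
}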

Here is a slightly more complex example:

// This example is in the public domain
#include "nabo/nabo.h"
#include <iostream>

int main()
{
    using namespace Nabo;
    using namespace Eigen;
    using namespace std;

    // 100 points in 3D
    MatrixXf M = MatrixXf::Random(3, 100);
    // 50 query points
    MatrixXf q = MatrixXf::Random(3, 50);

    // Create a kd-tree for M; note that M must stay valid during the lifetime of the kd-tree.
    NNSearchF* nns = NNSearchF::createKDTreeLinearHeap(M);

    // The output of the query is a matrix of indices to columns of M and
    // a matrix of squared distances corresponding to these indices.
    // These matrices must have the correct size when passed to the search function.
    MatrixXi indices;
    MatrixXf dists2;

    // Look for the 5 nearest neighbours of each query point.
    // We do not want approximations but we want to sort by the distance.
    indices.resize(5, q.cols());
    dists2.resize(5, q.cols());
    nns->knn(q, indices, dists2, 5, 0, NNSearchF::SORT_RESULTS);

    // Look for the 3 nearest neighbours of each query point, using the data themselves as the query.
    // We do not want approximations but we want to sort by the distance,
    // and we want to allow self-match.
    indices.resize(3, M.cols());
    dists2.resize(3, M.cols());
    nns->knn(M, indices, dists2, 3, 0, NNSearchF::SORT_RESULTS | NNSearchF::ALLOW_SELF_MATCH);

    // Make sure that the closest returned points correspond to the points from M.
    for (int i = 0; i < M.cols(); ++i)
    {
        // The query is the data itself and we allow self-match.
        if (indices.coeff(0, i) != i)
            cerr << "Oops, something went wrong: " << indices.coeff(0, i) << " " << i << endl;
    }

    // Now look for the 2 nearest neighbours of each query point.
    // We allow a 10% approximation but do not want to allow self-match.
    // We do not care about sorting either.
    indices.resize(2, M.cols());
    dists2.resize(2, M.cols());
    nns->knn(M, indices, dists2, 2, 0.1, 0);
    for (int i = 0; i < M.cols(); ++i)
    {
        // The query is the data itself but we forbid self-match.
        if (indices.coeff(0, i) == i)
            cerr << "Oops, something went wrong: " << indices.coeff(0, i) << " " << i << endl;
    }

    // Cleanup the kd-tree.
    delete nns;
    return 0;
}

Note that the matrix-based query interface is more efficient than the vector-based one, because some sanity checks can be done only once. Therefore, if you have several points to query, we strongly suggest passing them as a matrix instead of calling knn() multiple times.
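
As an illustration of this difference, here is a minimal sketch (not part of libnabo's distribution) showing the two query styles side by side; the batched call performs the sanity checks once for all query points:

// Minimal sketch contrasting per-point queries with a single batched query.
#include "nabo/nabo.h"

int main()
{
    using namespace Nabo;
    using namespace Eigen;

    MatrixXf M = MatrixXf::Random(3, 1000);
    MatrixXf Q = MatrixXf::Random(3, 500);
    const int K = 5;

    NNSearchF* nns = NNSearchF::createKDTreeLinearHeap(M);

    // Slower style: one vector-based call per query point.
    VectorXi idx(K);
    VectorXf d2(K);
    for (int i = 0; i < Q.cols(); ++i)
    {
        const VectorXf qi = Q.col(i);
        nns->knn(qi, idx, d2, K);
    }

    // Preferred style: a single matrix-based call for all query points.
    MatrixXi indices(K, Q.cols());
    MatrixXf dists2(K, Q.cols());
    nns->knn(Q, indices, dists2, K, 0, 0);

    delete nns;
    return 0;
}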

Construction parameters

The following additional construction parameters are available for the KDTREE_ algorithms (a usage sketch follows the list):

  • bucketSize (unsigned): bucket size, defaults to 8
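
Here is a minimal sketch of passing this parameter; it assumes the Parameters(key, value) convenience constructor declared in nabo.h and that bucketSize expects an unsigned value:

// Minimal sketch: create a kd-tree with a custom bucket size (16 instead of the default 8).
#include "nabo/nabo.h"

int main()
{
    using namespace Nabo;
    using namespace Eigen;

    MatrixXf M = MatrixXf::Random(3, 100);

    // Assumption: Parameters is a string-keyed map with a (key, value) constructor,
    // and bucketSize is read as an unsigned integer.
    Parameters additionalParameters("bucketSize", unsigned(16));
    NNSearchF* nns = NNSearchF::createKDTreeLinearHeap(M, 3, 0, additionalParameters);

    VectorXf q = VectorXf::Random(3);
    VectorXi indices(5);
    VectorXf dists2(5);
    nns->knn(q, indices, dists2, 5);

    delete nns;
    return 0;
}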

Unit testing

The distribution of libnabo integrates a unit test module, based on CTest. Just type:

make test

...in the build directory to run the tests. The tests consist of validation and benchmarking tests; their outputs are available in the Testing directory. If ANN is detected when compiling libnabo, make test will also run comparative benchmarks.

Citing libnabo

If you use libnabo in an academic context, please cite this paper, which evaluates its performance in the context of ICP:

@article{elsebergcomparison,
title={Comparison of nearest-neighbor-search strategies and implementations for efficient shape registration},
author={Elseberg, J. and Magnenat, S. and Siegwart, R. and N{\"u}chter, A.},
journal={Journal of Software Engineering for Robotics (JOSER)},
pages={2--12},
volume={3},
number={1},
year={2012},
issn={2035-3928}
}

Bug reporting

Please use GitHub's issue tracker to report bugs.

License

libnabo is released under a permissive BSD license.

FAQ

ANN

libnabo differs from ANN on the following points:

API

  • templated on the scalar type
  • self-match option as an execution-time (instead of compile-time) parameter
  • range search instead of radius search
  • uses the Eigen library for vectors and matrices
  • reentrant

Limitations

  • only Euclidean distance
  • only KD-tree, no BD-tree
  • only the ANN_KD_SL_MIDPT splitting rule

Implementation

  • optional O(log(n)) tree heap instead of O(n) vector heap (see the sketch after this list)
  • compact memory representation: a single memory allocation for all nodes, and a 5-fold smaller memory footprint compared to ANN
  • implicit reference to the left child (always the next node in the array)
  • bounds are not stored in the nodes (that is, following ANN's article rather than ANN's source code)
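
For large k, the tree heap can pay off; here is a minimal sketch, assuming the createKDTreeTreeHeap factory declared alongside createKDTreeLinearHeap in nabo.h:

// Minimal sketch: select the O(log(n)) tree heap, which may be faster for large k.
#include "nabo/nabo.h"

int main()
{
    using namespace Nabo;
    using namespace Eigen;

    MatrixXf M = MatrixXf::Random(3, 10000);
    VectorXf q = VectorXf::Random(3);

    // Assumption: createKDTreeTreeHeap mirrors createKDTreeLinearHeap.
    NNSearchF* nns = NNSearchF::createKDTreeTreeHeap(M);

    const int K = 100;
    VectorXi indices(K);
    VectorXf dists2(K);
    nns->knn(q, indices, dists2, K);

    delete nns;
    return 0;
}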

Performances

  • about 5% to 20% faster than ANN (both compiled with -O3 -DNDEBUG), probably due to the smaller memory footprint
  • clearly memory-bound: neither OpenMP nor std::thread improves performance
