C++ tensors with broadcasting and lazy computing

Overview

xtensor

Multi-dimensional arrays with broadcasting and lazy computing.

Introduction

xtensor is a C++ library meant for numerical analysis with multi-dimensional array expressions.

xtensor provides

  • an extensible expression system enabling lazy broadcasting.
  • an API following the idioms of the C++ standard library.
  • tools to manipulate array expressions and build upon xtensor.

Containers of xtensor are inspired by NumPy, the Python array programming library. Adaptors for existing data structures can easily be written and plugged into our expression system.

In fact, xtensor can be used to process NumPy data structures in place using Python's buffer protocol. Similarly, it can operate on Julia and R arrays. For more details on the NumPy, Julia and R bindings, check out the xtensor-python, xtensor-julia and xtensor-r projects respectively.

xtensor requires a modern C++ compiler supporting C++14. The following C++ compilers are supported:

  • On Windows platforms, Visual C++ 2015 Update 2, or more recent
  • On Unix platforms, gcc 4.9 or a recent version of Clang

Installation

Package managers

If you are using Conan to manage your dependencies, merely add xtensor/[email protected]/public-conan to your requires, where x.y.z is the release version you want to use. Please file issues in conan-xtensor if you experience problems with the packages. Sample conanfile.txt:

[requires]
xtensor/[email protected]/public-conan

[generators]
cmake

We also provide a package for the conda package manager:

conda install -c conda-forge xtensor

Install from sources

xtensor is a header-only library.

You can directly install it from the sources:

cmake -D CMAKE_INSTALL_PREFIX=your_install_prefix
make install

Trying it online

You can play with xtensor interactively in a Jupyter notebook right now! Just click on the binder link below:

Binder

The C++ support in Jupyter is powered by the xeus-cling C++ kernel. Together with xeus-cling, xtensor enables a similar workflow to that of NumPy with the IPython Jupyter kernel.

Documentation

For more information on using xtensor, check out the reference documentation

http://xtensor.readthedocs.io/

Dependencies

xtensor depends on the xtl library and has an optional dependency on the xsimd library:

xtensor    xtl        xsimd (optional)
master     ^0.7.0     ^7.4.8
0.23.1     ^0.7.0     ^7.4.8
0.23.0     ^0.7.0     ^7.4.8
0.22.0     ^0.6.23    ^7.4.8
0.21.10    ^0.6.21    ^7.4.8
0.21.9     ^0.6.21    ^7.4.8
0.21.8     ^0.6.20    ^7.4.8
0.21.7     ^0.6.18    ^7.4.8
0.21.6     ^0.6.18    ^7.4.8
0.21.5     ^0.6.12    ^7.4.6
0.21.4     ^0.6.12    ^7.4.6
0.21.3     ^0.6.9     ^7.4.4
0.21.2     ^0.6.9     ^7.4.4
0.21.1     ^0.6.9     ^7.4.2
0.21.0     ^0.6.9     ^7.4.2

The dependency on xsimd is required if you want to enable SIMD acceleration in xtensor. This can be done by defining the macro XTENSOR_USE_XSIMD before including any header of xtensor.
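
For instance, a translation unit opting into SIMD acceleration might begin like this (a minimal sketch, assuming xtensor and xsimd are both installed and on the include path):

```cpp
// Sketch: the macro must be defined before any xtensor header is included,
// so that xtensor's assignment loops pick up xsimd-backed batch types.
#define XTENSOR_USE_XSIMD
#include "xtensor/xarray.hpp"
```

Defining the macro project-wide (e.g. via the compiler command line) avoids inconsistent definitions across translation units.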

Usage

Basic usage

Initialize a 2-D array and compute the sum of one of its rows and a 1-D array.

#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"
#include "xtensor/xview.hpp"

xt::xarray<double> arr1
  {{1.0, 2.0, 3.0},
   {2.0, 5.0, 7.0},
   {2.0, 5.0, 7.0}};

xt::xarray<double> arr2
  {5.0, 6.0, 7.0};

xt::xarray<double> res = xt::view(arr1, 1) + arr2;

std::cout << res;

Outputs:

{7, 11, 14}

Initialize a 1-D array and reshape it in place.

#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"

xt::xarray<int> arr
  {1, 2, 3, 4, 5, 6, 7, 8, 9};

arr.reshape({3, 3});

std::cout << arr;

Outputs:

{{1, 2, 3},
 {4, 5, 6},
 {7, 8, 9}}

Index Access

#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"

xt::xarray<double> arr1
  {{1.0, 2.0, 3.0},
   {2.0, 5.0, 7.0},
   {2.0, 5.0, 7.0}};

std::cout << arr1(0, 0) << std::endl;

xt::xarray<int> arr2
  {1, 2, 3, 4, 5, 6, 7, 8, 9};

std::cout << arr2(0);

Outputs:

1
1

The NumPy to xtensor cheat sheet

If you are familiar with NumPy APIs, and you are interested in xtensor, you can check out the NumPy to xtensor cheat sheet provided in the documentation.

Lazy broadcasting with xtensor

xtensor can operate on arrays of different shapes and dimensionalities in an element-wise fashion. Broadcasting rules of xtensor are similar to those of NumPy and libdynd.

Broadcasting rules

In an operation involving two arrays of different dimensions, the array with the fewer dimensions is broadcast across the leading dimensions of the other.

For example, if A has shape (2, 3), and B has shape (4, 2, 3), the result of a broadcasted operation with A and B has shape (4, 2, 3).

   (2, 3) # A
(4, 2, 3) # B
---------
(4, 2, 3) # Result

The same rule holds for scalars, which are handled as 0-D expressions. If A is a scalar, the equation becomes:

       () # A
(4, 2, 3) # B
---------
(4, 2, 3) # Result

If the matched-up dimensions of two input arrays are different, and one of them has size 1, it is broadcast to match the size of the other. Say B has shape (4, 2, 1) in the previous example; the broadcasting then happens as follows:

   (2, 3) # A
(4, 2, 1) # B
---------
(4, 2, 3) # Result
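
The rules above can be sketched as a small shape-combination function (a plain C++ illustration, not xtensor's actual implementation):

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <utility>
#include <vector>

std::vector<std::size_t> broadcast_shapes(std::vector<std::size_t> a,
                                          std::vector<std::size_t> b)
{
    // Ensure a is the longer shape; the shorter one is aligned on the
    // trailing (rightmost) dimensions, as in the diagrams above.
    if (a.size() < b.size())
    {
        std::swap(a, b);
    }
    std::vector<std::size_t> result = a;
    std::size_t offset = a.size() - b.size();
    for (std::size_t i = 0; i < b.size(); ++i)
    {
        std::size_t x = a[offset + i];
        std::size_t y = b[i];
        if (x == y || y == 1)
        {
            result[offset + i] = x;  // equal extents, or b broadcast along i
        }
        else if (x == 1)
        {
            result[offset + i] = y;  // a broadcast along this dimension
        }
        else
        {
            throw std::runtime_error("shapes are not broadcastable");
        }
    }
    return result;
}
```

A scalar corresponds to an empty shape, which broadcasts against anything.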

Universal functions, laziness and vectorization

With xtensor, if x, y and z are arrays of broadcastable shapes, the return type of an expression such as x + y * sin(z) is not an array. It is an xexpression object offering the same interface as an N-dimensional array, which does not hold the result. Values are computed only upon access, or when the expression is assigned to an xarray object. This makes it possible to operate symbolically on very large arrays and compute the result only for the indices of interest.

We provide utilities to vectorize any scalar function (taking multiple scalar arguments) into a function that operates on xexpressions, applying the lazy broadcasting rules described above. These functions are called xfunctions. They are xtensor's counterpart to NumPy's universal functions.

In xtensor, arithmetic operations (+, -, *, /) and all special functions are xfunctions.
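
The idea can be illustrated with a toy expression object in plain C++ (a sketch only: `dense_1d`, `lazy_sum` and `make_lazy_sum` are hypothetical names, and xtensor's real xfunctions are far more general):

```cpp
#include <cstddef>
#include <vector>

// A concrete 1-D "array" that owns its values.
struct dense_1d
{
    std::vector<double> data;

    double operator()(std::size_t i) const
    {
        return data[i];
    }
};

// The "sum" of two operands is a lightweight object holding references;
// elements are computed on access, never stored.
template <class L, class R>
struct lazy_sum
{
    const L& lhs;
    const R& rhs;

    double operator()(std::size_t i) const
    {
        // evaluated only when element i is requested
        return lhs(i) + rhs(i);
    }
};

template <class L, class R>
lazy_sum<L, R> make_lazy_sum(const L& lhs, const R& rhs)
{
    return {lhs, rhs};
}
```

Assigning such an expression to a concrete container would trigger the full evaluation loop; calling `expr(i)` computes a single element on demand.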

Iterating over xexpressions and broadcasting iterators

All xexpressions offer two sets of functions to retrieve iterator pairs (and their const counterparts).

  • begin() and end() provide instances of xiterators which can be used to iterate over all the elements of the expression. The order in which elements are listed is row-major, in that the index of the last dimension is incremented first.
  • begin(shape) and end(shape) are similar but take a broadcasting shape as an argument. Elements are iterated upon in a row-major way, but certain dimensions are repeated to match the provided shape as per the rules described above. For an expression e, e.begin(e.shape()) and e.begin() are equivalent.
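
The row-major stepping described above can be sketched as a plain C++ index-increment helper (an illustration, not xtensor's xiterator implementation):

```cpp
#include <cstddef>
#include <vector>

// Advance a multi-index one step in row-major order: the last dimension is
// incremented first, carrying into earlier dimensions when it wraps.
// Returns false once the index has wrapped past the last element.
bool next_row_major_index(std::vector<std::size_t>& index,
                          const std::vector<std::size_t>& shape)
{
    for (std::size_t d = shape.size(); d-- > 0;)
    {
        if (++index[d] < shape[d])
        {
            return true;  // no carry needed
        }
        index[d] = 0;     // carry into the previous dimension
    }
    return false;
}
```

A broadcasting iterator follows the same order over the broadcast shape, but maps repeated dimensions back to the same underlying elements.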

Runtime vs compile-time dimensionality

Two container classes implementing multi-dimensional arrays are provided: xarray and xtensor.

  • xarray can be reshaped dynamically to any number of dimensions. It is the container that is the most similar to NumPy arrays.
  • xtensor has a dimension set at compilation time, which enables many optimizations. For example, shapes and strides of xtensor instances are allocated on the stack instead of the heap.

xarray and xtensor containers are both xexpressions and can be involved and mixed in universal functions, assigned to each other, etc.

Besides, two access operators are provided:

  • The variadic template operator(), which can take multiple integral arguments or none.
  • And operator[], which takes a single multi-index argument whose size can be determined at runtime. operator[] also supports access with braced initializers.
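
How a variadic operator() can map indices to a flat storage offset may be sketched as follows (a simplified illustration with a hypothetical `tensor_index` type; xtensor's real containers also handle layout, bounds and broadcasting concerns):

```cpp
#include <array>
#include <cstddef>

// Maps an N-dimensional index to a flat offset using precomputed strides,
// via a C++14-compatible parameter-pack expansion.
template <std::size_t N>
struct tensor_index
{
    std::array<std::size_t, N> strides;

    template <class... Idx>
    std::size_t operator()(Idx... idx) const
    {
        static_assert(sizeof...(Idx) == N, "wrong number of indices");
        std::size_t offset = 0;
        std::size_t d = 0;
        // Accumulate idx[d] * strides[d]; braced-init-list expansion
        // guarantees left-to-right evaluation.
        std::size_t unused[] = {
            (offset += static_cast<std::size_t>(idx) * strides[d++])...};
        (void)unused;
        return offset;
    }
};
```

For a row-major shape (2, 3), the strides are {3, 1}, so index (1, 2) lands at offset 1*3 + 2*1 = 5.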

Performance

xtensor operations make use of SIMD acceleration depending on which instruction sets are available on the platform at hand (SSE, AVX, AVX512, Neon).

xsimd

The xsimd project underlies the detection of the available instruction sets, and provides generic high-level wrappers and memory allocators for client libraries such as xtensor.

Continuous benchmarking

xtensor operations are continuously benchmarked and are significantly improved with each new version. Current performance on statically dimensioned tensors matches that of the Eigen library. Dynamically dimensioned tensors, for which the shape is heap-allocated, come at a small additional cost.

Stack allocation for shapes and strides

More generally, the library implements a promote_shape mechanism at build time to determine the optimal sequence type to hold the shape of an expression. The shape type of a broadcasting expression whose members all have a dimensionality determined at compile time is a stack-allocated sequence type. If at least one node of a broadcasting expression has a dynamic dimension (for example an xarray), this bubbles up to the entire broadcasting expression, which will have a heap-allocated shape. The same holds for views, broadcast expressions, etc.

Therefore, when building an application with xtensor, we recommend using statically-dimensioned containers whenever possible to improve the overall performance of the application.
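
The promote_shape idea can be sketched with a small metafunction (a hypothetical helper, not xtensor's actual machinery): when every operand's dimensionality is known at compile time, the combined shape can live in a stack-allocated std::array; otherwise it falls back to a heap-allocated std::vector.

```cpp
#include <array>
#include <cstddef>
#include <type_traits>
#include <vector>

// N == 0 stands for "dimensionality unknown until runtime" in this sketch.
template <std::size_t N1, std::size_t N2>
struct promoted_shape
{
    static constexpr bool both_static = (N1 != 0) && (N2 != 0);
    static constexpr std::size_t dim = (N1 > N2) ? N1 : N2;

    // Stack-allocated shape when both operands are static, heap otherwise.
    using type = typename std::conditional<
        both_static,
        std::array<std::size_t, dim>,
        std::vector<std::size_t>>::type;
};
```

A single dynamic operand thus "poisons" the whole expression into using a heap-allocated shape, which is why statically dimensioned containers are recommended where possible.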

Language bindings

xtensor-python

The xtensor-python project provides the implementation of two xtensor containers, pyarray and pytensor which effectively wrap NumPy arrays, allowing inplace modification, including reshapes.

Utilities to automatically generate NumPy-style universal functions from scalar functions, exposed to Python, are also provided.

xtensor-julia

The xtensor-julia project provides the implementation of two xtensor containers, jlarray and jltensor which effectively wrap Julia arrays, allowing inplace modification, including reshapes.

Like in the Python case, utilities to generate NumPy-style universal functions are provided.

xtensor-r

The xtensor-r project provides the implementation of two xtensor containers, rarray and rtensor which effectively wrap R arrays, allowing inplace modification, including reshapes.

Like for the Python and Julia bindings, utilities to generate NumPy-style universal functions are provided.

Library bindings

xtensor-blas

The xtensor-blas project provides bindings to BLAS libraries, enabling linear-algebra operations on xtensor expressions.

xtensor-io

The xtensor-io project enables the loading of a variety of file formats into xtensor expressions, such as image files, sound files, HDF5 files, as well as NumPy npy and npz files.

Building and running the tests

Building the tests requires the GTest testing framework and cmake.

gtest and cmake are available as packages for most Linux distributions. They can also be installed with the conda package manager (even on Windows):

conda install -c conda-forge gtest cmake

Once gtest and cmake are installed, you can build and run the tests:

mkdir build
cd build
cmake -DBUILD_TESTS=ON ../
make xtest

You can also use CMake to download the source of gtest, build it, and use the generated libraries:

mkdir build
cd build
cmake -DBUILD_TESTS=ON -DDOWNLOAD_GTEST=ON ../
make xtest

Building the HTML documentation

xtensor's documentation is built with three tools: doxygen, sphinx and breathe.

While doxygen must be installed separately, you can install breathe by typing

pip install breathe sphinx_rtd_theme

Breathe can also be installed with conda:

conda install -c conda-forge breathe

Finally, go to the docs subdirectory and build the documentation with the following command:

make html

License

We use a shared copyright model that enables all contributors to maintain the copyright on their contributions.

This software is licensed under the BSD-3-Clause license. See the LICENSE file for details.

Comments
  • WIP: tiny_array

    WIP: tiny_array

    Preliminary version of the tiny_array class for code discussion.

    Design goals:

    • provide an efficient and convenient implementation for shape objects to be used by xtensor and xarray
    • provide a unified API for statically and dynamically allocated tiny_arrays and array views
    • support other common uses of small arrays, e.g. structured element types of xtensor such as RGB values
    • provide a rich set of arithmetic and algebraic functions (e.g. I expect big simplifications in shape broadcasting calculations)

    Possible points for discussion:

    • Should tiny_array use xexpressions or explicit function implementations? At present, I implemented the second option, because my benchmark experiments showed that xexpressions are 15x slower. The code for my xexpression-based tiny_array variant as used in the benchmarks is here: https://paper.dropbox.com/doc/tiny_array_experiment.hpp-hWOZ8nFciecCAEc0Alzre?_tk=share_copylink. Maybe there is a better implementation I didn't see...
    • Should tiny_array expressions use eager or lazy evaluation? The experience of myself and my colleagues here in the lab suggests that eager evaluation is preferable, and that's what I implemented ATM.
    • Should the index type be signed or unsigned? I vote for a signed type. An important use case is an array that represents a filter kernel. The kernel's origin is usually in the array center, and the kernel indices run from -r to r. This can only be achieved with signed index types.
    opened by ukoethe 44
  • Prerequisites for tiny_array/shape PR

    Prerequisites for tiny_array/shape PR

    This PR adds two files and corresponding tests:

    • xconcepts.hpp: concept checking macro XTENSOR_REQUIRE, some traits classes
    • xmathutil.hpp: namespace xt::cmath, additional algebraic functions

    It also extends xexception.hpp with two new assertion macros and moves numeric_constants from xmath.hpp to xmathutil.hpp.

    The most controversial aspect of the PR is probably the norm() function. It actually returns the norm, whereas std::norm() computes the squared norm. IMHO, this decision of the C++ standard makes no sense at all. Nonetheless, xtensor reproduces this behavior in its xexpressions, and one can argue that consistency with the C++ standard is more important than meeting the user's intuitions about a function's effect. What's your opinion? If you want me to rename my norm(), what's a sensible name?

    opened by ukoethe 43
  • `reshape_view()` with column major xarrays has changed

    `reshape_view()` with column major xarrays has changed

    Update: ignore all of this and skip to the next comment

    I am on the current master branch of everything required for xtensor-r.

    After installing all of those master versions, all of my rray tests broke if they involved broadcasting with 3D structures (this is normally where I see differences between row and col major ideas).

    I think something has changed, and my guess is that it is in xtensor (as opposed to xtensor-r).

    Here is an example, do the cout statements look right to you? The R results are definitely different now.

    // [[Rcpp::depends(xtensor)]]
    // [[Rcpp::plugins(cpp14)]]
    
    #include <xtensor/xarray.hpp>
    #include <xtensor-r/rarray.hpp>
    #include <xtensor/xio.hpp>
    #include <Rcpp.h>
    
    // [[Rcpp::export]]
    SEXP test_add_rarray() {
      xt::rarray<int> x =
        {{{1, 5}, {3, 7}},
         {{2, 6}, {4, 8}}};
    
      xt::rarray<int> y =
        {{1, 2}};
    
      y = xt::transpose(y);
    
      xt::rarray<int> res = x + y;
    
      Rcpp::Rcout << "x " << std::endl << x << std::endl;
      Rcpp::Rcout << "y " << std::endl << y << std::endl;
      Rcpp::Rcout << "res " << std::endl << res << std::endl;
    
      return res;
    }
    
    // [[Rcpp::export]]
    SEXP test_x() {
      xt::rarray<int> x =
        {{{1, 5}, {3, 7}},
         {{2, 6}, {4, 8}}};
    
      return x;
    }
    
    // [[Rcpp::export]]
    SEXP test_y() {
      xt::rarray<int> y =
        {{1, 2}};
    
      y = xt::transpose(y);
    
      return y;
    }
    
    Rcpp::sourceCpp("~/Desktop/test.cpp")
    
    # just to see what they look like on the R side
    test_x()
    #> , , 1
    #> 
    #>      [,1] [,2]
    #> [1,]    1    3
    #> [2,]    2    4
    #> 
    #> , , 2
    #> 
    #>      [,1] [,2]
    #> [1,]    5    7
    #> [2,]    6    8
    
    test_y()
    #>      [,1]
    #> [1,]    1
    #> [2,]    2
    
    # this doesn't look right!
    test_add_rarray()
    #> x 
    #> {{{1, 5},
    #>   {3, 7}},
    #>  {{2, 6},
    #>   {4, 8}}}
    #> y 
    #> {{1},
    #>  {2}}
    #> res 
    #> {{{ 2,  6},
    #>   { 5,  9}},
    #>  {{ 3,  7},
    #>   { 6, 10}}}
    #> , , 1
    #> 
    #>      [,1] [,2]
    #> [1,]    2    5
    #> [2,]    3    6
    #> 
    #> , , 2
    #> 
    #>      [,1] [,2]
    #> [1,]    6    9
    #> [2,]    7   10
    
    # I expect
    y_bcast <- array(rep(1:2, 4), c(2, 2, 2))
    test_x() + array(rep(1:2, 4), c(2, 2, 2))
    #> , , 1
    #> 
    #>      [,1] [,2]
    #> [1,]    2    4
    #> [2,]    4    6
    #> 
    #> , , 2
    #> 
    #>      [,1] [,2]
    #> [1,]    6    8
    #> [2,]    8   10
    

    Created on 2019-05-13 by the reprex package (v0.2.1.9000)

    opened by DavisVaughan 39
  • Apparent segfault on basic operation.

    Apparent segfault on basic operation.

    Not sure if it is a bug or feature. This compiles but crashes at runtime on my machine

    xt::xtensor<float, 2> m1 = { { 1, 2 }, { 3, 4 } };
    xt::xtensor<float, 2> m2 = xt::zeros({1, 1});
    xt::xtensor<float, 2> m3 = m1 - m2;

    Bug 
    opened by chavid 31
  • Assignment of xt::zeros needs major speed-up

    Assignment of xt::zeros needs major speed-up

    I benchmarked resetting an xarray to zero and found that array = xt::zeros<float>(shape) is 27x slower than std::fill(), whereas I expected to see no difference. Here are the numbers for gcc-7: (Edit: I added results for dynamic view, which are even worse - 54x slower.) (Edit2: in releases 0.10 to 0.14, array = xt::zeros<float>(shape) was "only" 16x slower.)

    ----------------------------------------------------------------------
    Benchmark                               Time           CPU Iterations
    ----------------------------------------------------------------------
    init_memset<float>                  70685 ns      70658 ns       9878
    xarray_init_std_fill<float>         67681 ns      67683 ns      10334
    xarray_init_zeros<float>          1868781 ns    1866331 ns        372
    dynamic_view_init_zeros<float>    3712534 ns    3712598 ns        188
    

    Code: (BTW, why is straightforward array = 0 not supported?)

        constexpr int SIZE = 1 << 20;
    
        template <class V>
        void init_memset(benchmark::State& state)
        {
            std::vector<V> array(SIZE);
    
            for (auto _ : state)
            {
                std::memset(array.data(), 0, SIZE*sizeof(V));
                benchmark::DoNotOptimize(array.data());
            }
        }
        BENCHMARK_TEMPLATE(init_memset, float);
    
        template <class V>
        void xarray_init_std_fill(benchmark::State& state)
        {
            auto array = xt::xarray<V>::from_shape({SIZE});
    
            for (auto _ : state)
            {
                std::fill(array.begin(), array.end(), V());
                benchmark::DoNotOptimize(array.raw_data());
            }
        }
        BENCHMARK_TEMPLATE(xarray_init_std_fill, float);
    
        template <class V>
        void xarray_init_zeros(benchmark::State& state)
        {
            auto array = xt::xarray<V>::from_shape({SIZE});
    
            for (auto _ : state)
            {
                array = xt::zeros<V>({SIZE});
                benchmark::DoNotOptimize(array.raw_data());
            }
        }
        BENCHMARK_TEMPLATE(xarray_init_zeros, float);
    
        template <class V>
        void dynamic_view_init_zeros(benchmark::State& state)
        {
            auto array = xt::xarray<V>::from_shape({SIZE});
        auto view  = xt::dynamic_view(array, xt::slice_vector{xt::all()});
    
            for (auto _ : state)
            {
                view = xt::zeros<V>({SIZE});
                benchmark::DoNotOptimize(array.raw_data());
            }
        }
        BENCHMARK_TEMPLATE(dynamic_view_init_zeros, float);
    
    opened by ukoethe 29
  • Consider following the upstream guidelines for googletest integration

    Consider following the upstream guidelines for googletest integration

    xtensor currently relies on the detection of a compiled version of googletest via find_package(GTest).

    The upstream google guidelines do not recommend this setup. Instead, client projects should either vendor a copy of googletest or fetch it via ExternalProject_Add, and then build the googletest libraries alongside the project.

    This would allow Linux users like myself to be able to run the tests, since most Linux distributions only package the source for googletest as a result of these guidelines.

    Thanks.

    Enhancement 
    opened by ghisvail 28
  • Segfault (error reading variable)

    Segfault (error reading variable)

    Am I doing the template wrong?

    My header

    #include <xtensor/xarray.hpp>
    #include <xtensor/xio.hpp>
    
    template<typename E1>
    auto logsumexp1(const E1& e1) {
        auto amax = xt::amax(e1)();
        return amax + xt::log(xt::sum(xt::exp(e1 - amax)));
    }
    
    template<typename E1>
    auto f(const E1& e1, double C) {
        return logsumexp1(e1 + C);
    }
    

    my cpp

    int main( int argc, char* argv[] ) {
        const double C = 0;
        xt::xtensor<double, 1> ev{-0.042808, -0.042504, -0.043407, -0.047227};
        auto a = logsumexp1(ev);
        std::cout << a << std::endl;
        auto b = f(ev, C);
        std::cout << b << std::endl;
        return 0;
    }
    
    ➜  build ../bin/test 
     1.34231 
    [1]    5517 segmentation fault (core dumped)  ../bin/test
    

    gdb

    Program received signal SIGSEGV, Segmentation fault.
    0x00000000004049f0 in xt::detail::plus::operator()<double, double> (this=0x7fffffffd728, [email protected]: -0.042807999999999999, [email protected]: <error reading variable>)
        at include/xtensor/xoperation.hpp:72
    72	        BINARY_OPERATOR_FUNCTOR(plus, +);
    (gdb) where
    #0  0x00000000004049f0 in xt::detail::plus::operator()<double, double> (this=0x7fffffffd728, [email protected]: -0.042807999999999999, [email protected]: <error reading variable>)
        at include/xtensor/xoperation.hpp:72
    #1  0x000000000041896d in xt::xfunction_stepper<xt::detail::plus, xt::xtensor_container<xt::uvector<double, std::allocator<double> >, 1ul, (xt::layout_type)1, xt::xtensor_expression_tag> const&, xt::xscalar<double const&> >::deref_impl<0ul, 1ul> (this=0x7fffffffd308) at local/include/xtensor/xfunction.hpp:1148
    #2  0x0000000000412ec1 in xt::xfunction_stepper<xt::detail::plus, xt::xtensor_container<xt::uvector<double, std::allocator<double> >, 1ul, (xt::layout_type)1, xt::xtensor_expression_tag> const&, xt::xscalar<double const&> >::operator* (this=0x7fffffffd308) at include/xtensor/xfunction.hpp:1141
    #3  0x0000000000422102 in xt::xfunction_stepper<xt::detail::minus, xt::xfunction<xt::detail::plus, xt::xtensor_container<xt::uvector<double, std::allocator<double> >, 1ul, (xt::layout_type)1, xt::xtensor_expression_tag> const&, xt::xscalar<double const&> > const&, xt::xscalar<double const&> >::deref_impl<0ul, 1ul> (this=0x7fffffffd2f8) at include/xtensor/xfunction.hpp:1148
    #4  0x0000000000421a47 in xt::xfunction_stepper<xt::detail::minus, xt::xfunction<xt::detail::plus, xt::xtensor_container<xt::uvector<double, std::allocator<double> >, 1ul, (xt::layout_type)1, xt::xtensor_expression_tag> const&, xt::xscalar<double const&> > const&, xt::xscalar<double const&> >::operator* (this=0x7fffffffd2f8) at include/xtensor/xfunction.hpp:1141
    #5  0x00000000004214d1 in xt::xfunction_stepper<xt::math::exp_fun, xt::xfunction<xt::detail::minus, xt::xfunction<xt::detail::plus, xt::xtensor_container<xt::uvector<double, std::allocator<double> >, 1ul, (xt::layout_type)1, xt::xtensor_expression_tag> const&, xt::xscalar<double const&> > const&, xt::xscalar<double const&> > >::deref_impl<0ul> (this=0x7fffffffd2f0)
        at include/xtensor/xfunction.hpp:1148
    #6  0x0000000000420b27 in xt::xfunction_stepper<xt::math::exp_fun, xt::xfunction<xt::detail::minus, xt::xfunction<xt::detail::plus, xt::xtensor_container<xt::uvector<double, std::allocator<double> >, 1ul, (xt::layout_type)1, xt::xtensor_expression_tag> const&, xt::xscalar<double const&> > const&, xt::xscalar<double const&> > >::operator* (this=0x7fffffffd2f0)
        at include/xtensor/xfunction.hpp:1141
    
    
    
    opened by colinfang 27
  • WIP xdynview

    WIP xdynview

    This PR implements a dynamic view type, which is represented by a std::vector of variants.

    (This is more because I want to see if it builds on the other platforms).

    opened by wolfv 24
  • Pretty Printing

    Pretty Printing

    This PR adds pretty printing, much like NumPy -- aligning floating point numbers on the . etc.

    E.g.

    	xt::xarray<double> rn = xt::random::rand<double>({3, 3}, 0, 1000);
    	xt::xarray<double> brn = xt::random::rand<double>({3, 3}, 0, 10000);
    	xt::xarray<double> z({3, 3}, 0);
    	z(1, 1) = 0.1;
    	rn(1, 1) = 0;
    	// rn(1, 2) = 10e10;
    	rn(1, 2) = -10e5;
    	xt::xarray<bool> rb = xt::random::randint<int>({3, 3}, 0, 2);
    	xt::xarray<uint> ri = xt::random::randint<int>({3, 3}, 0, 50);
    	ri(1, 2) = 10e7;
    	pretty_print(rn);
    	pretty_print(brn);
    	pretty_print(rb);
    	pretty_print(ri);
    	pretty_print(z);
    
    {{  1.35477004e+02,   8.35008590e+02,   9.68867771e+02},
     {  2.21034043e+02,   0.00000000e+00,  -1.00000000e+06},
     {  1.88381976e+02,   9.92881302e+02,   9.96461326e+02}}
    {{ 9676.9494  ,  7258.3896  ,  9811.0969  },
     { 1098.6175  ,  7981.0586  ,  2970.2945  },
     {   47.834844,  1124.6452  ,  6397.6336  }}
    {{ true,  true,  true},
     { true,  true,  true},
     {false, false,  true}}
    {{       10,        46,        34},
     {       33,        19, 100000000},
     {       37,        37,        23}}
    {{ 0. ,  0. ,  0. },
     { 0. ,  0.1,  0. },
     { 0. ,  0. ,  0. }}
    
    opened by wolfv 22
  • [FEATURE PROPOSAL] half float class

    [FEATURE PROPOSAL] half float class

    Hello. I'd like to propose integrating one of my repositories, to go along with NumPy's float16. It follows IEEE 754 and can be used like this:

    #include <iostream>
    #include <xtensor/xarray.hpp>
    #include <xtensor/xio.hpp>
    #include "half.hpp"
    
    int main(int, char**) {
        using half_float::half;
        xt::xarray<half> test = xt::arange<half>((half)0, (half)10, (half)1);
        std::cout << test << std::endl;
    }
    

    It could be integrated pretty easily, I think. It also has fp16-instruction usage for speed and convenient conversion from other types.

    Feature Request 
    opened by 0xBYTESHIFT 20
  • transpose(E && e) matches too greedily

    transpose(E && e) matches too greedily

    I'm making good progress deriving my own array classes from xview_semantic (thanks for suggesting this approach, @SylvainCorlay!), but encountered a problem that I cannot solve on my own. I declare the array view and transpose function like this:

    namespace xvigra
    {
        template <index_t N, class T>
        class view_nd
        : public xt::xiterable<view_nd<N, T>>
        , public xt::xview_semantic<view_nd<N, T>>
        {
            ...
        };
    
        template <index_t N, class T>
        auto transpose(view_nd<N, T> const & array);
    }
    

    Unfortunately, I cannot call my transpose function:

    view_nd<2, float> v = ...;
    auto t = transpose(v);  // calls xt::transpose()
    

    Since my class is derived from xt::xview_semantic, namespace xt participates in name lookup, and xt::transpose(), whose argument is a universal reference, is a better match than xvigra::transpose(). The straightforward idea to add a concept check for xt::is_xexpression to xtensor's function does not work, because view_nd fulfills the concept as well. What can I do to get my function called?

    opened by ukoethe 20
  • Tentative to add clang-format

    Tentative to add clang-format

    @tdegeus @JohanMabille This is an attempt to apply clang-format. The important point is to define what we want; we can fix tests/checks if broken afterwards.

    See Clang-format options (CTRL-F is your friend) for options descriptions.

    Currently non-uniform rules

    Unfortunately, many formatting aspects were not uniform in the code base, so I picked one. Please do comment somewhere in the PR code if you prefer otherwise.

    Access modifiers

    @JohanMabille has opinions on this. With clang-format > 13, we should have all the control we need. I used

    EmptyLineAfterAccessModifier: Always
    EmptyLineBeforeAccessModifier: Always
    

    Clash with other pre-commit hooks

    @tdegeus I think that the comment formatting (for box drawing) clashes with other hooks.

    Line length

    I used 110 as it seems to maintain the layout reasonably.

    SpaceBeforeParen

    Not sure what you prefer for SpaceBeforeParen (if(cond) vs if (cond)...)

    Function parameters line continuation

    One change I made that is maybe not the most widely used in the code base concerns function parameter line breaks (AlignAfterOpenBracket), but I strongly believe this is the more readable option.

    Often one can see the Align mode

    auto long_var_name = long_func_name(long_param_name_1,
                                        ...,
                                        long_param_name_2);
    

    The problems are:

    • it creates different levels of indentation throughout the file,
    • it requires reformatting all lines if the variable or function name is changed,
    • it may break the parameters even more because they are already far to the right, decreasing readability or forcing the use of temporary variables.

    Instead, I suggest following the Black style, with the BlockIndent option.

    auto long_var_name = long_func_name(
        long_param_name_1,
        ...,
        long_param_name_2
    );
    

    (same logic applies to function declaration).

    Experimental options

    InsertBraces

    Always use braces with if and for (an industry-recommended best practice) to reduce errors.

    QualifierAlignment

    East-const (the best :sweat_smile:) and west-const debate.

    Where to get clang-format

    I know of two pre-commits hooks:

    • https://github.com/pre-commit/mirrors-clang-format uses clang-format packaged as a wheel and is standalone
    • https://github.com/pocc/pre-commit-hooks uses an externally provided clang-format, which is available in conda-forge. I like this option because there can be some differences between versions of clang-format (for a fixed config), so this makes sure my editor and pre-commit agree.
    opened by AntoinePrv 3
  • Add xt::quantile

    Add xt::quantile

    Checklist

    • [x] The title and commit message(s) are descriptive.
    • [ ] Small commits made to fix your PR have been squashed to avoid history pollution.
    • [ ] Tests have been added for new features or bug fixes.
    • [x] API of new functions and classes are documented.

    Description

    This adds xt::quantile for computing an array of quantiles.

    This is currently work in progress and missing

    • [x] Non axis overload
    • [x] Add enum method overload (as in NumPy)
    • [x] Fixing function with xt::xarray
    • [ ] Adding extended tests
    • [x] Adding proper documentation

    The implementation can be improved in many ways

    • [ ] Using fancy indexing / keep with dynamic index list #2589
    • [ ] Using xt::swapaxes #2613
    • [ ] Reproducing bugs around not using necessary calls to xt::eval
    • [ ] Making sure probas is always xtensor-capable.
    • [ ] Fixing -Wconversion issues https://github.com/xtensor-stack/xtl/pull/261
    opened by AntoinePrv 0
  • Add `xt::swapaxes` and `xt::moveaxis`

    Add `xt::swapaxes` and `xt::moveaxis`

    Relevant NumPy doc

    • https://numpy.org/doc/stable/reference/generated/numpy.swapaxes.html
    • https://numpy.org/doc/stable/reference/generated/numpy.moveaxis.html
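    To illustrate the difference between the two at the shape level (a sketch mirroring the NumPy semantics linked above, not the actual xtensor implementation):

    ```cpp
    #include <cstddef>
    #include <utility>
    #include <vector>

    using shape_t = std::vector<std::size_t>;

    // swapaxes: only the two named axes trade places.
    shape_t swapaxes_shape(shape_t s, std::size_t a, std::size_t b)
    {
        std::swap(s[a], s[b]);
        return s;
    }

    // moveaxis: the source axis is removed and reinserted at the
    // destination; the remaining axes keep their relative order.
    shape_t moveaxis_shape(shape_t s, std::size_t src, std::size_t dst)
    {
        std::size_t dim = s[src];
        s.erase(s.begin() + src);
        s.insert(s.begin() + dst, dim);
        return s;
    }
    ```

    On a (3, 4, 5) shape, swapping axes 0 and 2 gives (5, 4, 3), while moving axis 0 to position 2 gives (4, 5, 3).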
    opened by AntoinePrv 0
  • Visual Studio 2019/2022 Errors

    Visual Studio 2019/2022 Errors

    I'm trying to compile the latest master with Visual Studio 2019/2022 x64 and I get a whole slew of template errors. I'm using /std:c++latest, so it should be running the newest version of C++ it supports.

    These errors do not show up if I use clang-cl instead of cl with the same installation.

    This seems to be similar to Issue 375.

    Some examples:

    C:\xtensor\include\xtensor\xstorage.hpp(1331): error C2903: 'rebind': symbol is neither a class template nor a function template
    C:\xtensor\include\xtensor\xcontainer.hpp(122): error C2955: 'xt::svector': use of class template requires template argument list
      C:\xtensor\include\xtensor\xtensor_forward.hpp(48): note: see declaration of 'xt::svector'
    

    I can post any information you need (including the full error list if you really want it).

    Perhaps VS2019 and VS2022 could get added to the CI as well?

    Let me know how I can help!

    opened by WhoBrokeTheBuild 0
  • Having scalar overload e.g. `xt::exp`

    Having scalar overload e.g. `xt::exp`

    There are cases where I have a double and want to avoid going through a 0-d xtensor object, to avoid additional overhead. It would be amazing if, in those cases, one could still use math functions like xt::exp, etc. It would allow me to get away with a single templated function.
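    One hedged way such a design could collapse to a single templated function is compile-time dispatch on the argument type (C++17 `if constexpr` for brevity; `vec_exp` is a purely illustrative stand-in for the lazy xt::exp, not xtensor API):

    ```cpp
    #include <cmath>
    #include <type_traits>
    #include <vector>

    // Stand-in for an element-wise tensor exp (illustrative only).
    std::vector<double> vec_exp(const std::vector<double>& v)
    {
        std::vector<double> out;
        out.reserve(v.size());
        for (double x : v)
        {
            out.push_back(std::exp(x));
        }
        return out;
    }

    // One entry point: scalars go straight to std::exp with no 0-d
    // wrapper; everything else takes the element-wise path.
    template <class T>
    auto generic_exp(T&& x)
    {
        if constexpr (std::is_arithmetic_v<std::decay_t<T>>)
        {
            return std::exp(x);
        }
        else
        {
            return vec_exp(std::forward<T>(x));
        }
    }
    ```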

    opened by tdegeus 0
  • How to "translate" the following function from Python using xtensor

    How to "translate" the following function from Python using xtensor

    I have the following function that I need to use in a program:

    def gen_choose(n, r):
        return np.prod(np.arange(n, n - r, -1)) / math.factorial(r)
    

    That code is in python, and I have tried translating it into C++ as follows:

    double gen_choose(unsigned int n, unsigned int r){
        auto x = xt::prod(xt::arange(n, n - r, -1)) / factorial(r);
        return x;
    };
    

    Now, I'm aware that I could use a template or overloading in order to have the function return different data types. My issue is that, no matter what data type I use (in this case double), I get the following error for the corresponding type:

    "no suitable conversion function from "xt::xfunction<xt::detail::divides, xt::xreducer<xt::xreducer_functors<xt::detail::multiplies, xt::const_value, xt::detail::multiplies>, xt::xgenerator<xt::detail::arange_generator<unsigned int, unsigned int, int>, unsigned int, std::array<size_t, 1Ui64>>, std::array<size_t, 1Ui64>, xt::reducer_options<std::_Vbase, std::tuple<xt::evaluation_strategy::lazy_type>>>, xt::xscalar<std::_Vbase>>" to "double" exists"

    I also know that the xt::prod() function returns an xreducer, but if I'm not mistaken, for my application it should return a double or int regardless of which n or r I input. So I would like some tips on how to implement this properly. Thank you.
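    For completeness, one hedged way to sidestep the conversion error: since gen_choose produces a single scalar, the lazy expression machinery isn't needed at all, and plain loops also avoid the unsigned wrap-around lurking in xt::arange(n, n - r, -1). A self-contained sketch (the factorial helper here is hypothetical, matching the name used above):

    ```cpp
    // Hypothetical helper matching the factorial() call in the question.
    double factorial(unsigned int r)
    {
        double f = 1.0;
        for (unsigned int i = 2; i <= r; ++i)
        {
            f *= i;
        }
        return f;
    }

    // n * (n-1) * ... * (n-r+1) / r!, computed directly as a scalar.
    double gen_choose(unsigned int n, unsigned int r)
    {
        double prod = 1.0;
        for (unsigned int k = 0; k < r; ++k)
        {
            prod *= static_cast<double>(n - k);
        }
        return prod / factorial(r);
    }
    ```

    For example, gen_choose(5, 2) yields 5 * 4 / 2! = 10.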

    opened by Batres3 1
Owner
Xtensor Stack
Data structures for data sciences