automatic differentiation made easier for C++

Overview

autodiff is a C++17 library that uses modern programming techniques to enable the automatic computation of derivatives in an efficient and easy way.

Demonstration

Consider the following function f(x, y, z):

double f(double x, double y, double z)
{
    return (x + y + z) * exp(x * y * z);
}

which we use to evaluate the variable u = f(x, y, z):

double x = 1.0;
double y = 2.0;
double z = 3.0;
double u = f(x, y, z);

How can we minimally transform this code so that not only u, but also its derivatives ∂u/∂x, ∂u/∂y, and ∂u/∂z, can be computed?

The next two sections present how this can be achieved using two automatic differentiation algorithms implemented in autodiff: forward mode and reverse mode.

Forward mode

In a forward mode automatic differentiation algorithm, the output variables and one or more of their derivatives are computed together. For example, the function evaluation f(x, y, z) can be transformed so that it produces not only the value of u, the output variable, but also one or more of its derivatives (∂u/∂x, ∂u/∂y, ∂u/∂z) with respect to the input variables (x, y, z).

Enabling forward automatic differentiation for the calculation of derivatives using autodiff is relatively simple. For our previous function f, we only need to replace the floating-point type double with autodiff::dual for both the input and output variables:

#include <autodiff/forward/dual.hpp>
using namespace autodiff;

dual f(const dual& x, const dual& y, const dual& z)
{
    return (x + y + z) * exp(x * y * z);
}

We can now compute the derivatives ∂u/∂x, ∂u/∂y, and ∂u/∂z as follows:

dual x = 1.0;
dual y = 2.0;
dual z = 3.0;
dual u = f(x, y, z);

double dudx = derivative(f, wrt(x), at(x, y, z));
double dudy = derivative(f, wrt(y), at(x, y, z));
double dudz = derivative(f, wrt(z), at(x, y, z));

The auxiliary function autodiff::wrt, short for with respect to, indicates with respect to which input variable (x, y, or z) the partial derivative of f is computed. The auxiliary function autodiff::at indicates at which values of its parameters the derivative of f is evaluated.

Reverse mode

In a reverse mode automatic differentiation algorithm, the output variable of a function is evaluated first. During this function evaluation, all mathematical operations between the input variables are "recorded" in an expression tree. By traversing this tree from top-level (output variable as the root node) to bottom-level (input variables as the leaf nodes), it is possible to compute the contribution of each branch on the derivatives of the output variable with respect to input variables.

Thus, a single pass in a reverse mode calculation computes all derivatives, in contrast with forward mode, which requires one pass for each input variable. Note, however, that it is possible to change the behavior of a forward pass so that many (even all) derivatives of an output variable are computed simultaneously (e.g., in a single forward pass, ∂u/∂x, ∂u/∂y, and ∂u/∂z are evaluated together with u, in contrast with three forward passes, each one computing the individual derivatives).

As before, we can use autodiff to enable reverse automatic differentiation for our function f by simply replacing the type double with autodiff::var as follows:

var f(var x, var y, var z)
{
    return (x + y + z) * exp(x * y * z);
}

The code below demonstrates how the derivatives ∂u/∂x, ∂u/∂y, and ∂u/∂z can be calculated:

var x = 1.0;
var y = 2.0;
var z = 3.0;
var u = f(x, y, z);

Derivatives dud = derivatives(u);

double dudx = dud(x);
double dudy = dud(y);
double dudz = dud(z);

The function autodiff::derivatives traverses the expression tree stored in variable u and computes all of its derivatives with respect to the input variables (x, y, z), which are then stored in the object dud. The derivative of u with respect to the input variable x (i.e., ∂u/∂x) can then be extracted from dud using dud(x). The operations dud(x), dud(y), and dud(z) involve no further computation, only the extraction of derivatives already computed by the call to autodiff::derivatives.

Check the documentation website for more details.

License

MIT License

Copyright (c) 2018–2020 Allan Leal

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Issues
  • autodiff of member functions - using automatic return type deduction

    I have written a C++ library that allows for the evaluation of thermodynamic equations of state written in terms of the residual Helmholtz energy (the energy only, not its derivatives): https://github.com/ianhbell/teqp . First derivatives are taken with complex-step differentiation, and higher (and cross) derivatives with multicomplex arithmetic. This works well, but multicomplex differentiation is not very fast, so I wanted to see whether automatic differentiation would be faster. The core function needs to accept arbitrary numerical types as template arguments.

    The code below is a simple test case showing how I would like to use autodiff, in analogy to what I did with the complex differentiation tools. The generic model implements the function alphar, and here, as a first test, I am trying to take the derivative with respect to the first argument, which is scalar.

    #include <iostream>
    #include <algorithm>
    #include <numeric>
    #include <valarray>
    
    #include "autodiff/forward.hpp"
    #include "autodiff/reverse.hpp"
    
    /* A (very) simple implementation of the van der Waals EOS*/
    class vdWEOSSimple {
    private:
        double a, b;
    public:
        vdWEOSSimple(double a, double b) : a(a), b(b) {};
    
        const double R = 1.380649e-23 * 6.02214076e23; ///< Exact value, given by k_B*N_A
    
        template<typename TType, typename RhoType>
        auto alphar(const TType &T, const RhoType& rho) const {
            auto rhotot = std::accumulate(std::begin(rho), std::end(rho), (RhoType::value_type)0.0);
            auto Psiminus = -log(1.0 - b * rhotot);
            auto Psiplus = rhotot;
            return Psiminus - a / (R * T) * Psiplus;
        }
    };
    
    void test_vdW_autodiff() {
        // Argon + Xenon
        std::valarray<double> Tc_K = { 150.687, 289.733 };
        std::valarray<double> pc_Pa = { 4863000.0, 5842000.0 };
        
        double T = 298.15;
        auto rho = 3.0;
        auto R = 1.380649e-23 * 6.02214076e23; ///< Exact value, given by k_B*N_A
        auto rhotot = rho;
        const std::valarray<double> rhovec = { rhotot / 2, rhotot / 2 };
        
        int i = 0;
        double ai = 27.0/64.0*pow(R*Tc_K[i], 2)/pc_Pa[i];
        double bi = 1.0/8.0*R*Tc_K[i]/pc_Pa[i];
        vdWEOSSimple vdW(ai, bi);
    
        autodiff::dual varT = T;
        auto u = vdW.alphar(varT, rhovec);
        auto dalphardT = derivative([&vdW, &rhovec](auto& T) {return vdW.alphar(T, rhovec); }, wrt(varT), at(varT));
    }
    
    int main() {
        test_vdW_autodiff();
        return EXIT_SUCCESS;
    }
    

    Visual Studio gives voluminous compilation errors (below). I suspect it has something to do with the fact that the function I am wrapping in the lambda is an instance method, though I admit I haven't the foggiest idea how to work around this issue, because the lambda-function approach is usually a success in other cases.

    Build started...
    1>------ Build started: Project: test_autodiff, Configuration: Debug x64 ------
    1>test_autodiff.cpp
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(768): error C2280: 'autodiff::forward::BinaryExpr<autodiff::forward::AddOp,double,autodiff::forward::UnaryExpr<autodiff::forward::NegOp,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>>>::BinaryExpr(void)': attempting to reference a deleted function
    1>        with
    1>        [
    1>            TType=autodiff::forward::dual
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(612): message : compiler has generated 'autodiff::forward::BinaryExpr<autodiff::forward::AddOp,double,autodiff::forward::UnaryExpr<autodiff::forward::NegOp,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>>>::BinaryExpr' here
    1>        with
    1>        [
    1>            TType=autodiff::forward::dual
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(612,1): message : 'autodiff::forward::BinaryExpr<autodiff::forward::AddOp,double,autodiff::forward::UnaryExpr<autodiff::forward::NegOp,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>>>::BinaryExpr(void)': function was implicitly deleted because a data member invokes a deleted or inaccessible function 'autodiff::forward::UnaryExpr<autodiff::forward::NegOp,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>>::UnaryExpr(void)'
    1>        with
    1>        [
    1>            TType=autodiff::forward::dual
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(605,1): message : 'autodiff::forward::UnaryExpr<autodiff::forward::NegOp,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>>::UnaryExpr(void)': function was implicitly deleted because a data member invokes a deleted or inaccessible function 'autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>::BinaryExpr(void)'
    1>        with
    1>        [
    1>            TType=autodiff::forward::dual
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(612,1): message : 'autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>::BinaryExpr(void)': function was implicitly deleted because a data member invokes a deleted or inaccessible function 'autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>::BinaryExpr(void)'
    1>        with
    1>        [
    1>            TType=autodiff::forward::dual
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(612,1): message : 'autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>::BinaryExpr(void)': function was implicitly deleted because a data member invokes a deleted or inaccessible function 'autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>::UnaryExpr(void)'
    1>        with
    1>        [
    1>            TType=autodiff::forward::dual
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(605,1): message : 'autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>::UnaryExpr(void)': function was implicitly deleted because a data member invokes a deleted or inaccessible function 'autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>::BinaryExpr(void)'
    1>        with
    1>        [
    1>            TType=autodiff::forward::dual
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(612,1): message : 'autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>::BinaryExpr(void)': function was implicitly deleted because 'autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>' has an uninitialized data member 'autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>::r' of reference type
    1>        with
    1>        [
    1>            TType=autodiff::forward::dual
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(611): message : see declaration of 'autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>::r'
    1>        with
    1>        [
    1>            TType=autodiff::forward::dual
    1>        ]
    1>C:\Users\ihb\Code\teqp\src\test_autodiff.cpp(46): message : see reference to function template instantiation 'auto autodiff::forward::derivative<test_vdW_autodiff::<lambda_18ca46ab0a682d1e64406ac8568ab8d2>,std::tuple<autodiff::forward::dual &>,std::tuple<autodiff::forward::dual &>>(const Function &,Wrt &&,Args &&)' being compiled
    1>        with
    1>        [
    1>            Function=test_vdW_autodiff::<lambda_18ca46ab0a682d1e64406ac8568ab8d2>,
    1>            Wrt=std::tuple<autodiff::forward::dual &>,
    1>            Args=std::tuple<autodiff::forward::dual &>
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(759,7): error C2280: 'autodiff::forward::BinaryExpr<autodiff::forward::AddOp,double,autodiff::forward::UnaryExpr<autodiff::forward::NegOp,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>>> &autodiff::forward::BinaryExpr<autodiff::forward::AddOp,double,autodiff::forward::UnaryExpr<autodiff::forward::NegOp,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>>>::operator =(const autodiff::forward::BinaryExpr<autodiff::forward::AddOp,double,autodiff::forward::UnaryExpr<autodiff::forward::NegOp,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>>> &)': attempting to reference a deleted function
    1>        with
    1>        [
    1>            TType=autodiff::forward::dual
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(612): message : compiler has generated 'autodiff::forward::BinaryExpr<autodiff::forward::AddOp,double,autodiff::forward::UnaryExpr<autodiff::forward::NegOp,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>>>::operator =' here
    1>        with
    1>        [
    1>            TType=autodiff::forward::dual
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(612,1): message : 'autodiff::forward::BinaryExpr<autodiff::forward::AddOp,double,autodiff::forward::UnaryExpr<autodiff::forward::NegOp,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>>> &autodiff::forward::BinaryExpr<autodiff::forward::AddOp,double,autodiff::forward::UnaryExpr<autodiff::forward::NegOp,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>>>::operator =(const autodiff::forward::BinaryExpr<autodiff::forward::AddOp,double,autodiff::forward::UnaryExpr<autodiff::forward::NegOp,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>>> &)': function was implicitly deleted because a data member invokes a deleted or inaccessible function 'autodiff::forward::UnaryExpr<autodiff::forward::NegOp,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>> 
&autodiff::forward::UnaryExpr<autodiff::forward::NegOp,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>>::operator =(const autodiff::forward::UnaryExpr<autodiff::forward::NegOp,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>> &)'
    1>        with
    1>        [
    1>            TType=autodiff::forward::dual
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(605,1): message : 'autodiff::forward::UnaryExpr<autodiff::forward::NegOp,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>> &autodiff::forward::UnaryExpr<autodiff::forward::NegOp,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>>::operator =(const autodiff::forward::UnaryExpr<autodiff::forward::NegOp,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>> &)': function was implicitly deleted because a data member invokes a deleted or inaccessible function 'autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>> &autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>::operator =(const autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const 
TType &>>>> &)'
    1>        with
    1>        [
    1>            TType=autodiff::forward::dual
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(612,1): message : 'autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>> &autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>>::operator =(const autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>> &)': function was implicitly deleted because a data member invokes a deleted or inaccessible function 'autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>> &autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>::operator =(const autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>> &)'
    1>        with
    1>        [
    1>            TType=autodiff::forward::dual
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(612,1): message : 'autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>> &autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>>::operator =(const autodiff::forward::BinaryExpr<autodiff::forward::MulOp,double,autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>> &)': function was implicitly deleted because a data member invokes a deleted or inaccessible function 'autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>> &autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>::operator =(const autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>> &)'
    1>        with
    1>        [
    1>            TType=autodiff::forward::dual
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(605,1): message : 'autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>> &autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>>::operator =(const autodiff::forward::UnaryExpr<autodiff::forward::InvOp,autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>> &)': function was implicitly deleted because a data member invokes a deleted or inaccessible function 'autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &> &autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>::operator =(const autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &> &)'
    1>        with
    1>        [
    1>            TType=autodiff::forward::dual
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(612,1): message : 'autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &> &autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>::operator =(const autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &> &)': function was implicitly deleted because 'autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>' has a data member 'autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>::r' of reference type
    1>        with
    1>        [
    1>            TType=autodiff::forward::dual
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(611): message : see declaration of 'autodiff::forward::BinaryExpr<autodiff::forward::NumberDualMulOp,double,const TType &>::r'
    1>        with
    1>        [
    1>            TType=autodiff::forward::dual
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(769): message : see reference to function template instantiation 'auto autodiff::forward::derivative<Function,_Ty,_Ty,Result>(const Function &,Wrt &&,Args &&,Result &)' being compiled
    1>        with
    1>        [
    1>            Function=test_vdW_autodiff::<lambda_18ca46ab0a682d1e64406ac8568ab8d2>,
    1>            _Ty=std::tuple<autodiff::forward::dual &>,
    1>            Wrt=std::tuple<autodiff::forward::dual &>,
    1>            Args=std::tuple<autodiff::forward::dual &>,
    1>            Result=Result
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(761,12): error C2672: 'derivative': no matching overloaded function found
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(761,1): error C2784: 'auto autodiff::forward::derivative(const autodiff::forward::Dual<T,G> &)': could not deduce template argument for 'const autodiff::forward::Dual<T,G> &' from 'Result'
    1>        with
    1>        [
    1>            Result=Result
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/forward.hpp(745): message : see declaration of 'autodiff::forward::derivative'
    1>C:\Users\ihb\Code\teqp\src\test_autodiff.cpp(46,112): error C3313: 'dalphardT': variable cannot have the type 'void'
    1>Done building project "test_autodiff.vcxproj" -- FAILED.
    ========== Build: 0 succeeded, 1 failed, 1 up-to-date, 0 skipped ==========
    
    
    opened by ianhbell
  • Can't use boost::multiprecision types

    I would like to use extended precision types in autodiff, but that doesn't appear to work. Evidently cpp_bin_float_50 is not a valid type. I had hoped that I could just drop in my favorite numerical type and it would just work. I'm trying to carry out calculations in emulated extended precision so I can measure the loss in precision in the obtained derivatives.

    // Imports from boost
    #include <boost/multiprecision/cpp_bin_float.hpp>
    
    // autodiff include
    #include <autodiff/forward/dual.hpp>
    using namespace autodiff;
    
    int main(){
    	constexpr int Nderiv = 3;
    	using my_float = boost::multiprecision::cpp_bin_float_50;
    	using my_dual = autodiff::HigherOrderDual<Nderiv, my_float>;
    	
    	double x = 8.1;
    	auto f = [](auto x) { return cos(x) * sin(x); };
    	
    	my_float fprime_exact = cos(2.0 * my_float(x));
    	my_dual xdual = x;
    	auto derivs = derivatives(f, wrt(xdual), at(xdual));
    	my_float fprime_ad = derivs[1];
    }
    

    yields

    1>C:\Users\ihb\Code\teqp\src\test_accuracy.cpp(21,19): error C2440: 'initializing': cannot convert from 'double' to 'autodiff::detail::Dual<autodiff::detail::Dual<autodiff::detail::Dual<boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>,boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>>,autodiff::detail::Dual<boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>,boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>>>,autodiff::detail::Dual<autodiff::detail::Dual<boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>,boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>>,autodiff::detail::Dual<boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>,boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>>>>'
    1>C:\Users\ihb\Code\teqp\src\test_accuracy.cpp(21,19): message : No constructor could take the source type, or constructor overload resolution was ambiguous
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/dual/dual.hpp(630,1): error C2440: 'static_cast': cannot convert from 'T' to 'autodiff::detail::DualValueTypeNotDefinedFor<T>'
    1>        with
    1>        [
    1>            T=double
    1>        ]
    1>        and
    1>        [
    1>            T=boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/dual/dual.hpp(630,1): message : No constructor could take the source type, or constructor overload resolution was ambiguous
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/utils/derivative.hpp(93): message : see reference to function template instantiation 'auto autodiff::detail::seed<1,autodiff::detail::Dual<autodiff::detail::Dual<boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>,boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>>,autodiff::detail::Dual<boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>,boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>>>,autodiff::detail::Dual<autodiff::detail::Dual<boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>,boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>>,autodiff::detail::Dual<boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>,boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>>>,T&>(autodiff::detail::Dual<autodiff::detail::Dual<autodiff::detail::Dual<boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>,boost::multiprecision::number<boost::multiprecision::b
ackends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>>,autodiff::detail::Dual<boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>,boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>>>,autodiff::detail::Dual<autodiff::detail::Dual<boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>,boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>>,autodiff::detail::Dual<boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>,boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>>>> &,U)' being compiled
    1>        with
    1>        [
    1>            T=double,
    1>            U=double &
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/common/meta.hpp(85): message : see reference to function template instantiation 'auto autodiff::detail::seed::<lambda_c8ccfd185779947cb8b9c71b4a30c525>::operator ()<autodiff::detail::Index<0>>(autodiff::detail::Index<0>) const' being compiled
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/common/meta.hpp(93): message : see reference to function template instantiation 'auto autodiff::detail::AuxFor<0,0,3,_Ty>(Function &&)' being compiled
    1>        with
    1>        [
    1>            _Ty=autodiff::detail::seed::<lambda_c8ccfd185779947cb8b9c71b4a30c525>,
    1>            Function=autodiff::detail::seed::<lambda_c8ccfd185779947cb8b9c71b4a30c525>
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/common/meta.hpp(99): message : see reference to function template instantiation 'auto autodiff::detail::For<0,3,_Ty>(Function &&)' being compiled
    1>        with
    1>        [
    1>            _Ty=autodiff::detail::seed::<lambda_c8ccfd185779947cb8b9c71b4a30c525>,
    1>            Function=autodiff::detail::seed::<lambda_c8ccfd185779947cb8b9c71b4a30c525>
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/utils/derivative.hpp(93): message : see reference to function template instantiation 'auto autodiff::detail::For<3,autodiff::detail::seed::<lambda_c8ccfd185779947cb8b9c71b4a30c525>>(Function &&)' being compiled
    1>        with
    1>        [
    1>            Function=autodiff::detail::seed::<lambda_c8ccfd185779947cb8b9c71b4a30c525>
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/utils/derivative.hpp(107): message : see reference to function template instantiation 'auto autodiff::detail::seed<my_dual,,double>(const autodiff::detail::Wrt<my_dual &> &,T &&)' being compiled
    1>        with
    1>        [
    1>            T=double
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/utils/derivative.hpp(159): message : see reference to function template instantiation 'auto autodiff::detail::seed<my_dual&>(const autodiff::detail::Wrt<my_dual &> &)' being compiled
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/utils/derivative.hpp(229): message : see reference to function template instantiation 'auto autodiff::detail::eval<Fun,my_dual&,my_dual&>(const Fun &,const autodiff::detail::At<my_dual &> &,const autodiff::detail::Wrt<my_dual &> &)' being compiled
    1>        with
    1>        [
    1>            Fun=main::<lambda_461779cd9a12aaf964208968150d882d>
    1>        ]
    1>C:\Users\ihb\Code\teqp\src\test_accuracy.cpp(22): message : see reference to function template instantiation 'auto autodiff::detail::derivatives<main::<lambda_461779cd9a12aaf964208968150d882d>,my_dual&,my_dual&>(const Fun &,const autodiff::detail::Wrt<my_dual &> &,const autodiff::detail::At<my_dual &> &)' being compiled
    1>        with
    1>        [
    1>            Fun=main::<lambda_461779cd9a12aaf964208968150d882d>
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/utils/derivative.hpp(221,25): error C2672: 'derivative': no matching overloaded function found
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/common/meta.hpp(85): message : see reference to function template instantiation 'auto autodiff::detail::derivatives::<lambda_c58d400ba0fa8486240bdf95f504e2fe>::operator ()<autodiff::detail::Index<0>>(autodiff::detail::Index<0>) const' being compiled
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/common/meta.hpp(93): message : see reference to function template instantiation 'auto autodiff::detail::AuxFor<0,0,1,_Ty>(Function &&)' being compiled
    1>        with
    1>        [
    1>            _Ty=autodiff::detail::derivatives::<lambda_c58d400ba0fa8486240bdf95f504e2fe>,
    1>            Function=autodiff::detail::derivatives::<lambda_c58d400ba0fa8486240bdf95f504e2fe>
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/common/meta.hpp(99): message : see reference to function template instantiation 'auto autodiff::detail::For<0,1,_Ty>(Function &&)' being compiled
    1>        with
    1>        [
    1>            _Ty=autodiff::detail::derivatives::<lambda_c58d400ba0fa8486240bdf95f504e2fe>,
    1>            Function=autodiff::detail::derivatives::<lambda_c58d400ba0fa8486240bdf95f504e2fe>
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/utils/derivative.hpp(222): message : see reference to function template instantiation 'auto autodiff::detail::For<1,autodiff::detail::derivatives::<lambda_c58d400ba0fa8486240bdf95f504e2fe>>(Function &&)' being compiled
    1>        with
    1>        [
    1>            Function=autodiff::detail::derivatives::<lambda_c58d400ba0fa8486240bdf95f504e2fe>
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/utils/derivative.hpp(229): message : see reference to function template instantiation 'auto autodiff::detail::derivatives<autodiff::detail::BinaryExpr<autodiff::detail::MulOp,autodiff::detail::UnaryExpr<autodiff::detail::CosOp,my_dual &>,autodiff::detail::UnaryExpr<autodiff::detail::SinOp,my_dual &>>>(const Result &)' being compiled
    1>        with
    1>        [
    1>            Result=autodiff::detail::BinaryExpr<autodiff::detail::MulOp,autodiff::detail::UnaryExpr<autodiff::detail::CosOp,my_dual &>,autodiff::detail::UnaryExpr<autodiff::detail::SinOp,my_dual &>>
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/utils/derivative.hpp(221,1): error C2784: 'auto autodiff::detail::derivative(const autodiff::detail::Dual<T,G> &)': could not deduce template argument for 'const autodiff::detail::Dual<T,G> &' from 'const Result'
    1>        with
    1>        [
    1>            Result=autodiff::detail::BinaryExpr<autodiff::detail::MulOp,autodiff::detail::UnaryExpr<autodiff::detail::CosOp,my_dual &>,autodiff::detail::UnaryExpr<autodiff::detail::SinOp,my_dual &>>
    1>        ]
    1>C:\Users\ihb\Code\teqp\externals\autodiff\autodiff/forward/dual/dual.hpp(600): message : see declaration of 'autodiff::detail::derivative'
    1>C:\Users\ihb\Code\teqp\src\test_accuracy.cpp(23,32): error C2440: 'initializing': cannot convert from '_Ty' to 'boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<50,boost::multiprecision::backends::digit_base_10,void,int,0,0>,boost::multiprecision::et_off>'
    1>        with
    1>        [
    1>            _Ty=T
    1>        ]
    1>C:\Users\ihb\Code\teqp\src\test_accuracy.cpp(23,32): message : No constructor could take the source type, or constructor overload resolution was ambiguous
    
    opened by ianhbell 21
  • Multi-level derivatives (and seeding)

    Multi-level derivatives (and seeding)

    I'm getting strange behavior when trying to calculate a gradient. I wonder if it's because my objective function itself calculates derivatives.

    To explain: I see the way one calculates involves seeding autodiff with the target variable:

    VectorXdual u(5);
    u << 1.0, 1.1, 1.2, 1.3, 1.4;
    
    dual w = energy(u);
    
    VectorXd dwdu(5);
    for (int i = 0; i < 5; i++)
    {
        seed(wrt(u[i]));
        dwdu[i] = energy(u).grad;
        unseed(wrt(u[i]));
    }
    

    So if I understand, seeding is a way to (globally?) track the variable of interest. But what happens if my target function also calculates derivatives?

    dual f(VectorXdual u)
    {
       // ....
    }
    
    dual energy(VectorXdual u)
    {
      double dfdx = derivative(f, wrt(x), at(u));
      // ...
    }
    

    I see in forward.hpp that derivative() also uses seeding. Do the two clash with one another? Can a nested derivative calculation like this work?

    question 
    opened by kewp 18
  • [autodiff/forward] Simplify higher order derivatives

    [autodiff/forward] Simplify higher order derivatives

    Add a new wrt function overload. As we discussed in #57, users want a simpler way to calculate higher-order derivatives. I remember that @allanleal thought about changing the dual implementation, but I'm not sure that it is necessary or easy to implement. Also, we would get regressions with such an approach.

    using dual4th = Dual<double, 4>;
    
    dual4th x = 1.0;
    dual4th y = derivative(sin, wrt(x), at(x));
    
    cout << y[0] << endl; // value of the 0th order derivative of sin(x), which is its actual value
    cout << y[1] << endl; // value of the 1st order derivative of sin(x)
    cout << y[2] << endl; // value of the 2nd order derivative of sin(x)
    cout << y[3] << endl; // value of the 3rd order derivative of sin(x)
    cout << y[4] << endl; // value of the 4th order derivative of sin(x)
    

    Here we calculate 4 derivatives, which means we call sin 4 times. What if the user only wants one in some cases?

    Considerations: We can change the implementation of Dual to avoid recursion, but we should not add overhead for the user, and that means we need wrt in the form we already have =) correct me if I'm wrong.

    P.S. FYI @ibell

    opened by supersega 14
  • Make grad and jac more general

    Make grad and jac more general

    Hi @allanleal, as we discussed before in #49, this PR makes the computation of the gradient vector and Jacobian matrix more general. I think it would be great if we added examples for the new functionality.

    opened by supersega 14
  • [Proposal] Make gradient vector and Jacobian matrix computation more general

    [Proposal] Make gradient vector and Jacobian matrix computation more general

    This is a proposal (not finally completed yet) for a more general computation of the gradient vector and Jacobian matrix. With this implementation we can compute the gradient of a function with the following signature:

    dual f(const VectorXdual& x, dual a, dual b, dual c)
    {
        ...
    }
    

    Now we are able to call gradient with respect to (a, b, c, x):

     VectorXd g2 = gradient(f, wrt(a, b, c, x), at(x, a, b, c), u);
    

    P.S. Btw, I know why tests fail, and how to resolve this

    opened by supersega 12
  • Derivatives w.r.t. `var` and references

    Derivatives w.r.t. `var` and references

    Can you tell me if the following code makes sense?

    var w = energy(m, D);
    VectorXvar u(2*m.nn);
            
    for (int i = 0; i < m.nn; i++)
    {
        u(2*i) = m.one[i];
        u(2*i+1) = m.two[i];
    }
            
    VectorXd dwdu = gradient(w,u);
    

    So I'm building up a VectorXvar from two lists I have (my degrees of freedom) and then differentiating the energy with respect to this new list. Will this work? I'm getting all zeros, and I wonder if it's because I'm not using the variables directly but rather building a new list.

    opened by kewp 11
  • AUTODIFF LIBRARY COMPILE ERROR WITH EIGEN LINEAR SYSTEMS SOLVER

    AUTODIFF LIBRARY COMPILE ERROR WITH EIGEN LINEAR SYSTEMS SOLVER

    Hi Allan:

    Thank you for all your valuable work - autodiff C++ is central to the analytics library I am developing to compute market risk sensitivities of derivative instruments.

    I am attempting to use autodiff C++ v0.5.11 in conjunction with Eigen 3.3.7 to solve a linear system of equations, but encounter a compilation error in VS2019 v16.7.2 on MS Windows that I cannot fix, and would appreciate your assistance.

    Here is a small program that reproduces the issue:

    #include <eigen3/Eigen/Core>
    #include <eigen3/Eigen/Dense>
    #include <autodiff/reverse.hpp>
    #include <autodiff/reverse/eigen.hpp>
    #include <autodiff/reverse/reverse.hpp>

    int main()
    {
        using namespace Eigen;
        using namespace autodiff;
        using namespace autodiff::reverse;

        // Solve Ax = b.
        // Solution: x = (3, 2, 1).
        Matrix<var, 3, 3> A;
        A << 1, 3, 3,  // row 1
             1, 3, 4,  // row 2
             1, 4, 3;  // row 3

        Matrix<var, 3, 1> b;
        b << 12, 13, 14;

        const auto x = A.colPivHouseholderQr().solve(b);

        return 0;
    }

    %Compiler message% error C2678: binary '<': no operator found which takes a left-hand operand of type 'autodiff::reverse::Variable' (or there is no acceptable conversion). ...

    I have tried other methods available in Eigen, for example “jacobiSvd,” but encounter similar compilation errors.

    I can see in <reverse.hpp> that you have overloaded the comparison operators, so I cannot see how to resolve the issue, and would value your assistance.

    Kind regards, George-Eric

    opened by George-Eric 10
  • Make autodiff forward friendly with custom numbers

    Make autodiff forward friendly with custom numbers

    Description

    In this PR we enable the use of autodiff dual with custom scalars. An example is added, and dual construction is improved. Resolves issue #91.

    Details

    • Re-implemented the constructor taking a value parameter to enable implicit construction:
      using dual_cmpx = forward::Dual<complex<double>, complex<double>>;
      // ...
      dual_cmpx x = 2.0;
      

      If we use a scalar that can be implicitly constructed from, e.g., double, the code above is not legal with this implementation:

      Dual(const ValueType<T>& val)
      : val(val), grad(0)
      {
      }
      

      After substitution we have something like this:

      Dual(const std::complex<double>& val)
      : val(val), grad(0)
      {
      }
      

      And this conversion is prevented by the standard (see this link).

    • Added a structure isNumber which can be specialized in user code. (I also thought about checking operators using decltype + void_t, but I'm not sure we need it.)
    • Changed the constants One and Zero to functions to avoid failures for non-literal types. E.g. posit is not a literal type, so we can't create a constexpr variable of it (see). But constexpr functions resolve this: in some cases they still yield constant expressions (e.g. for doubles), in others they fall back to runtime evaluation (e.g. for posit).
    opened by supersega 10
  • Bug with autodiff::Real and arrays

    Bug with autodiff::Real and arrays

    This code doesn't compile, though I believe it should be valid, because a should be promoted to Real first and then the division done. An explicit cast of a to Real doesn't help either:

    Eigen::ArrayXd a = Eigen::ArrayXd::LinSpaced(11, 0, 10);
    auto r = 2.8 * a / autodiff::Real<2, double>(3.7);
    

    complaining about:

    1>C:\Users\ihb\Code\teqp\include\teqp/derivs.hpp(75,26): error C2676: binary '/': 'const Eigen::CwiseBinaryOp<Eigen::internal::scalar_product_op<double,double>,const Eigen::CwiseNullaryOp<Eigen::internal::scalar_constant_op<double>,const Eigen::Array<double,-1,1,0,-1,1>>,const Derived>' does not define this operator or a conversion to a type acceptable to the predefined operator
    1>        with
    1>        [
    1>            Derived=Eigen::Array<double,-1,1,0,-1,1>
    1>        ]
    
    
    opened by ianhbell 9
  • Can't use autodiff::jacobian in functions of different independent variables

    Can't use autodiff::jacobian in functions of different independent variables

    Hi,

    I am currently working on elasticity problems, in which the function to find is a mapping x(X) from some domain X in R3 to some codomain x in R3. We approximate this mapping by the interpolation x(X) = x1 * phi1(X) + x2 * phi2(X) + x3 * phi3(X) + x4 * phi4(X), where the xis represent positions in the codomain x and the phis are basis functions (polynomials) acting on positions in the domain X.

    Working on elasticity problems requires differentiating with respect to X to compute the quantity F = dx/dX, the jacobian of the mapping x(X). We then use this quantity F to compute a strain quantity E = 1/2 (F.transpose() * F - I), and use E to compute an energy Psi(X, x) = mu*(E:E) + 1/2 lambda * tr(E)^2.

    Up until now, we have only needed derivatives with respect to the domain X. But now I also need to compute derivatives of this energy with respect to x, which will give me "elastic forces" (not exactly) at the nodes (omitting integration for simplicity) as (dPsi/dx1, dPsi/dx2, dPsi/dx3, dPsi/dx4).

    My problem is that when I use the function autodiff::jacobian(Fun f, Wrt wrt, At at) to compute F = dx/dX, the energy derivatives are zeroed out, which is an error. But when I instead implement the jacobian myself as a sum of outer products of the xis with the gradients of the phis, I get the correct energy derivatives (elastic forces). I would like to know why this is the case, as I would find it much cleaner to use the autodiff::jacobian function to compute jacobians instead of assembling them from outer products of autodiff::gradient results.

    Here is a reproducible example program for my particular scenario:

    #include <Eigen/Geometry>
    #include <autodiff/forward/dual.hpp>
    #include <autodiff/forward/dual/eigen.hpp>
    #include <iostream>
    
    autodiff::Vector4dual polynomial3d_1st_order(autodiff::Vector3dual X)
    {
        return autodiff::Vector4dual(1.0, X.x(), X.y(), X.z());
    }
    
    struct basis_function_op_t
    {
        autodiff::dual operator()(autodiff::Vector3dual X) const
        {
            autodiff::Vector4dual p1 = polynomial3d_1st_order(X);
            return p1.dot(a);
        }
    
        autodiff::Vector4dual a;
    };
    
    struct interpolation_op_t
    {
        interpolation_op_t(
            autodiff::Vector3dual const& x1,
            autodiff::Vector3dual const& x2,
            autodiff::Vector3dual const& x3,
            autodiff::Vector3dual const& x4,
            basis_function_op_t const& phi1,
            basis_function_op_t const& phi2,
            basis_function_op_t const& phi3,
            basis_function_op_t const& phi4)
            : phi1(phi1), phi2(phi2), phi3(phi3), phi4(phi4)
        {
        }
    
        autodiff::Vector3dual operator()(
            autodiff::Vector3dual X,
            autodiff::Vector3dual x1,
            autodiff::Vector3dual x2,
            autodiff::Vector3dual x3,
            autodiff::Vector3dual x4) const
        {
            return x1 * phi1(X) + x2 * phi2(X) + x3 * phi3(X) + x4 * phi4(X);
        }
        basis_function_op_t phi1, phi2, phi3, phi4;
    };
    
    struct deformation_gradient_op_t
    {
        deformation_gradient_op_t(interpolation_op_t const& interpolate) : interpolate_op(interpolate)
        {
        }
    
        autodiff::Matrix3dual operator()(
            autodiff::Vector3dual X,
            autodiff::Vector3dual x1,
            autodiff::Vector3dual x2,
            autodiff::Vector3dual x3,
            autodiff::Vector3dual x4,
            autodiff::Vector3dual& u) const
        {
            using autodiff::at;
            using autodiff::gradient;
            using autodiff::jacobian;
            using autodiff::wrt;
    
            autodiff::dual phi1, phi2, phi3, phi4;
            autodiff::Vector3dual gradphi1 = gradient(interpolate_op.phi1, wrt(X), at(X), phi1);
            autodiff::Vector3dual gradphi2 = gradient(interpolate_op.phi2, wrt(X), at(X), phi2);
            autodiff::Vector3dual gradphi3 = gradient(interpolate_op.phi3, wrt(X), at(X), phi3);
            autodiff::Vector3dual gradphi4 = gradient(interpolate_op.phi4, wrt(X), at(X), phi4);
    
            u                       = x1 * phi1 + x2 * phi2 + x3 * phi3 + x4 * phi4;
            // IMPORTANT:
            // Computing the jacobian manually as outer products xi * gradphi works! 
            // Final gradients dPsi/dxi are correctly computed later on.
            autodiff::Matrix3dual F = x1 * gradphi1.transpose() + x2 * gradphi2.transpose() +
                                      x3 * gradphi3.transpose() + x4 * gradphi4.transpose();
    
            // IMPORTANT: 
            // Here, if I use jacobian, then I cannot compute dPsi/dxi correctly later on, 
            // because the gradients are zeroed out.
            // autodiff::Matrix3dual F = jacobian(interpolate_op, wrt(X), at(X, x1, x2, x3, x4));
            return F;
        }
    
        interpolation_op_t interpolate_op;
    };
    
    struct strain_op_t
    {
        strain_op_t(deformation_gradient_op_t const& deformation_gradient)
            : deformation_gradient_op(deformation_gradient)
        {
        }
    
        autodiff::Matrix3dual operator()(
            autodiff::Vector3dual X,
            autodiff::Vector3dual x1,
            autodiff::Vector3dual x2,
            autodiff::Vector3dual x3,
            autodiff::Vector3dual x4,
            autodiff::Vector3dual& u,
            autodiff::Matrix3dual& F) const
        {
            autodiff::Matrix3dual I = autodiff::Matrix3dual::Identity();
            F                       = deformation_gradient_op(X, x1, x2, x3, x4, u);
            autodiff::Matrix3dual E = 0.5 * (F.transpose() * F - I);
            return E;
        }
    
        deformation_gradient_op_t deformation_gradient_op;
    };
    
    struct strain_energy_density_op_t
    {
        strain_energy_density_op_t(strain_op_t const& strain, double mu, double lambda)
            : strain_op(strain), mu(mu), lambda(lambda)
        {
        }
    
        autodiff::dual operator()(
            autodiff::Vector3dual X,
            autodiff::Vector3dual x1,
            autodiff::Vector3dual x2,
            autodiff::Vector3dual x3,
            autodiff::Vector3dual x4,
            autodiff::Vector3dual& u,
            autodiff::Matrix3dual& F,
            autodiff::Matrix3dual& E) const
        {
            E                    = strain_op(X, x1, x2, x3, x4, u, F);
            autodiff::dual trace = E.trace();
            auto tr2             = trace * trace;
            autodiff::dual EdotE = (E.array() * E.array()).sum(); // contraction
            return mu * EdotE + 0.5 * lambda * tr2;
        }
    
        strain_op_t strain_op;
        double mu, lambda;
    };
    
    int main()
    {
        autodiff::Vector3dual X1, X2, X3, X4;
        X1 << 0., 0., 0.;
        X2 << 1., 0., 0.;
        X3 << 0., 1., 0.;
        X4 << 0., 0., 1.;
    
        autodiff::Matrix3dual S;
        S = 2. * autodiff::Matrix3dual::Identity();
    
        autodiff::Vector3dual x1 = S * X1;
        autodiff::Vector3dual x2 = S * X2;
        autodiff::Vector3dual x3 = S * X3;
        autodiff::Vector3dual x4 = S * X4;
    
        autodiff::Matrix4dual A;
        A.col(0) = polynomial3d_1st_order(X1);
        A.col(1) = polynomial3d_1st_order(X2);
        A.col(2) = polynomial3d_1st_order(X3);
        A.col(3) = polynomial3d_1st_order(X4);
    
        autodiff::Matrix4dual Ainv;
        Ainv = A.inverse();
    
        basis_function_op_t phi1, phi2, phi3, phi4;
        phi1.a = Ainv.row(0);
        phi2.a = Ainv.row(1);
        phi3.a = Ainv.row(2);
        phi4.a = Ainv.row(3);
    
        double young_modulus = 1e6;
        double poisson_ratio = 0.35;
        double mu            = (young_modulus) / (2. * (1 + poisson_ratio));
        double lambda =
            (young_modulus * poisson_ratio) / ((1 + poisson_ratio) * (1 - 2 * poisson_ratio));
    
        interpolation_op_t interpolate_op(x1, x2, x3, x4, phi1, phi2, phi3, phi4);
        deformation_gradient_op_t deformation_gradient_op(interpolate_op);
        strain_op_t strain_op(deformation_gradient_op);
        strain_energy_density_op_t strain_energy_density_op(strain_op, mu, lambda);
    
        autodiff::Vector3dual X;
        X << 0., 1., 0.;
    
        autodiff::Vector3dual x;
        autodiff::Matrix3dual F, E;
        autodiff::dual Psi;
        autodiff::Vector3dual f1 = autodiff::gradient(
            strain_energy_density_op,
            autodiff::wrt(x1),
            autodiff::at(X, x1, x2, x3, x4, x, F, E),
            Psi);
        autodiff::Vector3dual f2 = autodiff::gradient(
            strain_energy_density_op,
            autodiff::wrt(x2),
            autodiff::at(X, x1, x2, x3, x4, x, F, E),
            Psi);
        autodiff::Vector3dual f3 = autodiff::gradient(
            strain_energy_density_op,
            autodiff::wrt(x3),
            autodiff::at(X, x1, x2, x3, x4, x, F, E),
            Psi);
        autodiff::Vector3dual f4 = autodiff::gradient(
            strain_energy_density_op,
            autodiff::wrt(x4),
            autodiff::at(X, x1, x2, x3, x4, x, F, E),
            Psi);
    
        std::cout << "x:\n" << x << "\n";
        std::cout << "F:\n" << F << "\n";
        std::cout << "E:\n" << E << "\n";
        std::cout << "Psi:\n" << Psi << "\n";
        std::cout << "f1:\n" << f1 << "\n";
        std::cout << "f2:\n" << f2 << "\n";
        std::cout << "f3:\n" << f3 << "\n";
        std::cout << "f4:\n" << f4 << "\n";
    
        return 0;
    }
    

    The output of the program when I manually compute the jacobian F is:

    x:
    0
    2
    0
    F:
    2 0 0
    0 2 0
    0 0 2
    E:
    1.5   0   0
      0 1.5   0
      0   0 1.5
    Psi:
    1.125e+07
    f1:
    -1e+07
    -1e+07
    -1e+07
    f2:
    1e+07
        0
        0
    f3:
        0
    1e+07
        0
    f4:
        0
        0
    1e+07
    

    while the output of the program when I use autodiff::jacobian is:

    x:
    0
    2
    0
    F:
    2 0 0
    0 2 0
    0 0 2
    E:
    1.5   0   0
      0 1.5   0
      0   0 1.5
    Psi:
    1.125e+07
    f1:
    0
    0
    0
    f2:
    0
    0
    0
    f3:
    0
    0
    0
    f4:
    0
    0
    0
    

    Notice how the resulting forces f1, f2, f3, f4 differ in both cases. I have marked the sections in my example program where I encounter my problem with the comment // IMPORTANT: [...]. Specifically, the problem arises in the call operator of the deformation_gradient_op_t class.

    Thank you for this amazing library and I hope to find an answer to my problem with your help.

    opened by Q-Minh 9
  • reverse jacobian wrt a matrix and MatrixXvar times MatrixXd

    reverse jacobian wrt a matrix and MatrixXvar times MatrixXd

    Thank you very much for this wonderful package! I have been looking for a while at the AD packages in C++, and this is the only well-maintained, Eigen-based, header-only library!

    I was trying the reverse mode and I have the following two questions:

    1. The reverse mode seems to only support gradient and hessian. I wonder if there is support for the Jacobian matrix w.r.t. a matrix. This is perhaps related to #224, but that is for the forward mode.
    2. I also found that it is possible to multiply a MatrixXvar by an Eigen::VectorXd (or a VectorXd by a MatrixXvar), but when trying to multiply a MatrixXvar by a MatrixXd, I always get a compilation error.
      MatrixXvar a(2, 2);
      VectorXvar b(2);
      Eigen::VectorXd c(2);
      Eigen::MatrixXd m(3, 2);
    
      a << 1, 2, 3, 4.;
      b << 1.2, 3.4;
      c << 1.2, 3.4;
      m << 1, 2, 3, 4, 5, 6;
    
      // acceptable MatrixXvar \times VectorXd
      std::cout << a * b << std::endl;
      std::cout << a * c << std::endl;
      std::cout << b.transpose() * a << std::endl;
      std::cout << c.transpose() * a << std::endl;
    
      // not acceptable MatrixXvar \times MatrixXd
      std::cout << m * a << std::endl;
      std::cout << a * m.transpose() << std::endl;
    

    For 1, I don't think it would be very complicated. I suppose I can loop over the input matrix and record the gradient vectors into a matrix. I wonder if this is the way to go?

    For 2, I was thinking that if I don't need the gradient of something (c or m here), I can probably just use a plain Eigen matrix/vector so that the computation graph will not record those, and perhaps the reverse mode algorithm will run faster. I am not sure if this is true, and if it is, I wonder whether my usage here (MatrixXvar times MatrixXd) is correct?

    Thank you!

    opened by fangzhou-xie 0
  • Fix some compiler warnings

    Fix some compiler warnings

    This PR corrects some compiler warnings:

    • Removes an unused variable from ForEachWrtVar in forward/utils/gradient.hpp
    • Fixes some -Wsign-compare warnings stemming from a type mismatch between the loop index and a size variable in several places.

    After this change, I can compile cleanly with -Wall -Werror.

    What are your thoughts on adding -DCMAKE_CXX_FLAGS="-Wall -Werror" to the cmake configure step in CI? It is nicer to include a header-only library in other projects if it compiles without warnings, and this would help guarantee that.

    opened by rabraker 0
  • Generalize higher derivatives

    Generalize higher derivatives

    I would like to generalize higher derivatives, so that I could take, for instance, the 7th derivative: three times w.r.t. x and four times w.r.t. y. I have been stymied in my attempts. I got the tuple of arguments to the wrt function to build, with something like:

    template<typename T, size_t ... Indices>
    auto _GetDupedTuple(const T& val, std::index_sequence<Indices...>) {
        return std::make_tuple((Indices, val)...);
    }
    
    template<int N, typename T>
    auto build_duplicated_tuple(const T& val) {
        return _GetDupedTuple(val, std::make_index_sequence<N>());
    }
    

    and so I could build the arguments like:

    constexpr int iX = 3, iY = 4;
    using adtype = autodiff::HigherOrderDual<iX + iY, double>;
    adtype x = 1.0, y = 3.14159;
    auto wrts = std::tuple_cat(build_duplicated_tuple<iX>(x), build_duplicated_tuple<iY>(y)); 
    

    But when I try to call, wrt is obviously (and reasonably) not happy with this approach. This doesn't work:

    auto f = [&](const adtype& x_, const adtype& y_) { return x_ + y_; };
    auto der = derivatives(f, wrt(wrts), at(x, y));
    

    And I tried all kinds of forwarding attempts (with std::apply for instance), but I could never get this to work by any means other than

    auto der = derivatives(f, wrt(x,x,x,y,y,y,y), at(x, y));
    

    which gets hard to do in a general way in the code aside from hardcoding many constexpr conditional branches for the number of times of derivatives of each variable, which I don't like.

    Thought: Is what is stored in the Wrt class not references? Am I close?

    opened by ianhbell 8
  • Multivariate Taylor series expansion

    Multivariate Taylor series expansion

    I see you have implemented Taylor series expansion around a point in 1D, very nice. What I would like is to be able to do the same thing, efficiently, in two dimensions. How difficult do you think it would be to implement efficiently in autodiff? More generally, it would be nice to be able to fill in the matrix of terms ∂^(n+m)f/∂(x_1)^n ∂(x_2)^m, for all orders up to given n and m, in the most efficient manner possible for a function of two input arguments.
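    For reference, the table of terms being asked for corresponds to a truncated bivariate Taylor expansion:

```latex
f(x_1 + h_1,\, x_2 + h_2) \;\approx\; \sum_{n=0}^{N} \sum_{m=0}^{M}
\frac{h_1^{\,n}\, h_2^{\,m}}{n!\, m!}\,
\frac{\partial^{\,n+m} f}{\partial x_1^{\,n}\, \partial x_2^{\,m}}(x_1, x_2)
```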

    Context: I'm doing higher order thermodynamic variable transformations; it gets very hairy and I'd like to see if I can offload some of the pain onto autodiff.

    opened by ianhbell 8
  • VectorXreal map

    VectorXreal map

    Hi,

    I have data stored in a std::vector, and would like to use the data with the autodiff::VectorXreal type. I wonder if there is a better way than copying it like this:

    std::vector<double> x{1, 2, 3}; 
    VectorXreal xr(3);
    for (unsigned int i = 0; i < x.size(); ++i) {
        xr(i) = x[i];
    }
    

    When using Eigen, I typically use a map:

    Eigen::Map<Eigen::Matrix<double,1,3> > x_map(x.data());
    

    Can we do something similar with the VectorXreal class?

    Looking forward.

    opened by kristianmeyerr 2
  • Added T cast operator to Real

    Added T cast operator to Real

    The cast operator is needed to make std::isfinite work on Real, as discussed in https://github.com/autodiff/autodiff/issues/191#. As a consequence, it is now possible to compute derivatives for eigenvalue/eigenvector computations.

    opened by mattarroz 1
Releases(v0.6.9)
  • v0.6.9(Aug 2, 2022)

    🛠️ Improvements

    This release introduces fixes for:

    • Minimum CMake version should be 3.16 instead of 3.22 (#223, #240)
    • Permitting both operator() and operator[] in the computation of gradient (#233)
    Source code(tar.gz)
    Source code(zip)
  • v0.6.8(May 11, 2022)

    🛠️ Improvements

    This release improves the compilation speed of autodiff's Python bindings. It also implements a tentative fix to allow packages depending on autodiff's pybindings to be compiled with different compilers and compiler versions.

    Source code(tar.gz)
    Source code(zip)
  • v0.6.7(Mar 18, 2022)

    🛠️ Improvements

    This release updates the list of conda packages that autodiff depends on. It also improves the cmake instructions for installing the Python bindings of autodiff using pip.

    Source code(tar.gz)
    Source code(zip)
  • v0.6.6(Mar 7, 2022)

    🐞 Bug Fixes

    This is a bug fix release that addresses the following issue:

    • incorrect derivative computation of asin, acos, and atan when using an autodiff::real number scaled by a value (#206)
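    For reference, the scaled-argument derivatives involved in this fix follow from the chain rule:

```latex
\frac{d}{dx}\,\operatorname{asin}(cx) = \frac{c}{\sqrt{1 - c^2 x^2}}, \qquad
\frac{d}{dx}\,\operatorname{acos}(cx) = \frac{-c}{\sqrt{1 - c^2 x^2}}, \qquad
\frac{d}{dx}\,\operatorname{atan}(cx) = \frac{c}{1 + c^2 x^2}
```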
    Source code(tar.gz)
    Source code(zip)
  • v0.6.5(Jan 20, 2022)

    🛠️ Improvements

    This release introduces the implementation of __float__ for the Python bindings of autodiff::real and autodiff::dual so that numbers of these types can be converted to float in python using float(number).

    Source code(tar.gz)
    Source code(zip)
  • v0.6.4(Aug 24, 2021)

    🐞 Bug Fixes

    • Fixes the evaluation of derivatives using var when there are pow(x, y) with x=0 and/or y=0 (PR: #180 provided by @c-renton)
    • Fixes memory leak when using var for higher-order derivative computations (PR: #177 provided by @jargonzombies)
    Source code(tar.gz)
    Source code(zip)
  • v0.6.3(Aug 5, 2021)

  • v0.6.2(Jul 29, 2021)

    This release enables the use of Eigen::IndexedView when using gradient, jacobian, and hessian functions. For example, assume you want to compute the Jacobian of a function f with respect to some selected variables in x given by x(indices), where indices is a container of int-like numbers. This can be done as follows:

    auto J = jacobian(f, wrt(x(indices)), at(x));
    

    Matrix J will have as many columns as there are entries in indices, rather than one column per entry of x.

    Source code(tar.gz)
    Source code(zip)
  • v0.6.1(Jul 13, 2021)

    In this release, implicit conversion (in Python) from int and float to Real<N, T> and Dual<T, G> has been enabled.

    This solves the issue in which a C++ function/method exported to python and expecting a real or dual number as an argument would not work if an int was passed instead.

    Source code(tar.gz)
    Source code(zip)
  • v0.6.0(Jul 5, 2021)

    This is a major version release with many improvements, a code redesign, reorganization, and new features.

    This release introduces a new number type called real (and also typedefs for its higher-order variants such as real2nd, real3rd, and real4th). This family of types (without limitation on their order) was specifically designed for faster computations of higher-order derivatives along a given direction. This is in contrast with type dual (and its higher-order variants), which is general enough to support not only directional derivatives but also cross derivatives. Because directional derivatives can be computed without explicitly computing every single cross derivative, realNth has an advantage over dualNth for this sort of computation.
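    The directional-derivative idea behind realNth can be summarized as follows: instead of the full tensor of mixed partial derivatives, only the univariate derivatives of f restricted to a line through x along a direction v are propagated:

```latex
g(t) = f(x + t\,v), \qquad
g^{(k)}(0) = \left.\frac{d^k}{dt^k}\, f(x + t\,v)\right|_{t=0},
\quad k = 0, 1, \dots, N
```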

    With the introduction of realNth, TaylorSeries has also been introduced (check the examples, and the function taylorseries for more convenient construction of Taylor series of multivariable functions along a given direction).

    autodiff also provides python bindings for these number types, in case your C++ application is also used from Python (using pybind11).

    Please check the examples to identify API changes, the new location of header files, and how to use the new features of the library.

    Source code(tar.gz)
    Source code(zip)
  • v0.5.13(Dec 4, 2020)

    This release introduces comparison operators that were missing when comparing a var object and an expression of var objects:

    var a = 10;
    var b = 20;
    
    a < a + b;  // This was not possible before.
    
    Source code(tar.gz)
    Source code(zip)
  • v0.5.12(Nov 18, 2020)

    This release just implements some minor fixes provided by the pull requests below:

    • #136 Fix some unused parameter warnings.
    • #135 Fix CI on Windows.

    Source code(tar.gz)
    Source code(zip)
  • v0.5.11(Sep 29, 2020)

    This release implements:

    • performance improvements in the reverse algorithm (#111)
    • support for std::atan2 (#116) and std::hypot (#123)
    • code changes to reduce compilation warnings (#125)
    • support for the Bazel build system (#126)
    Source code(tar.gz)
    Source code(zip)
  • v0.5.10(Feb 19, 2020)

  • v0.5.9(Sep 9, 2019)

  • v0.5.8(Jul 29, 2019)

  • v0.5.6(Jul 12, 2019)

    This release fixes issue #43, in which example files were using #include <eigen3/Eigen/Core> instead of #include <Eigen/Core>, the latter being the expected usage when the find_package(Eigen) command is used.

    Many thanks to @pariterre for the PR #45 .

    Source code(tar.gz)
    Source code(zip)
  • v0.5.5(Jul 3, 2019)

    This release fixes the issue reported in #41 (and potentially others) in which, as the expression tree is constructed at compile time, some expression nodes were accidentally stored by reference and then went out of scope.

    Source code(tar.gz)
    Source code(zip)
  • v0.5.4(Jul 2, 2019)

  • v0.5.3(Jun 27, 2019)

    This release fixes issue #38, in which casting VectorXdual and MatrixXdual to VectorXd and MatrixXd, respectively, was not supported.

    See examples below on how to cast from one type to another:

    Converting VectorXdual to VectorXd

    VectorXdual x(3);
    x << 0.5, 0.2, 0.3;
    VectorXd y = x.cast<double>();
    

    Converting MatrixXdual to MatrixXd

    MatrixXdual x(2,2);
    x << 0.5, 0.2, 0.3, 0.7;
    MatrixXd y = x.cast<double>();
    

    Special thanks to @ludkinm !

    Source code(tar.gz)
    Source code(zip)
  • v0.5.2(Jun 18, 2019)

  • v0.5.1(Jun 18, 2019)

    This is an important bug fix release that corrects an aliasing issue. As reported in #32, operations such as:

    dual a = 1;
    a = a - 2*a;
    

    did not work as expected, because a is used in the right-hand side expression. As this expression is evaluated, a changes, and so do the other occurrences of a in the expression.

    To prevent this, the default behavior is now to store the result of the expression in a temporary, and then assign the dual object itself to this temporary.

    In a future release, something similar to Eigen's .noalias() method could be envisioned, for when the user is sure there is no aliasing in the right-hand side expression.

    Source code(tar.gz)
    Source code(zip)
  • v0.5.0(Jun 17, 2019)

    This release introduces BREAKING CHANGES! These are easy to fix, though. Read below.

    From now on, the methods derivative, gradient, and jacobian require the use of the auxiliary functions wrt and the newly introduced at.

    Unfortunately, it was not possible to keep the previous version of these functions, since they conflicted with the newly introduced versions in a way quite difficult to solve.

    This release also permits the calculation of the gradient and Jacobian with respect to only some variables, instead of all of them.

    Examples of how these functions are used now:

    // f = f(x)
    double dudx = derivative(f, wrt(x), at(x));
    
    // f = f(x, y, z)
    double dudx = derivative(f, wrt(x), at(x, y, z));
    double dudy = derivative(f, wrt(y), at(x, y, z));
    double dudz = derivative(f, wrt(z), at(x, y, z));
    
    // f = f(x), scalar function, where x is an Eigen vector
    VectorXd g = gradient(f, wrt(x), at(x));
    
    // Computing the gradient with respect to only some variables
    VectorXd gpartial = gradient(f, wrt(x.tail(5)), at(x));
    
    // F = F(x), vector function, where x is an Eigen vector
    MatrixXd J = jacobian(f, wrt(x), at(x));
    
    // F = F(x, p), vector function with params, where x and p are Eigen vectors
    MatrixXd Jx = jacobian(f, wrt(x), at(x, p));
    MatrixXd Jp = jacobian(f, wrt(p), at(x, p));
    
    // Computing the Jacobian with respect to only some variables
    MatrixXd Jpartial = jacobian(f, wrt(x.tail(5)), at(x));
    

    This release also permits one to retrieve the evaluated value of the function during a call to the methods derivative, gradient, and jacobian:

    // f = f(x)
    dual u;
    double dudx = derivative(f, wrt(x), at(x), u);
    
    // f = f(x), scalar function, where x is an Eigen vector
    dual u;
    VectorXd g = gradient(f, wrt(x), at(x), u);
    
    // F = F(x), vector function, where x is an Eigen vector
    VectorXdual F;
    MatrixXd J = jacobian(f, wrt(x), at(x), F);
    
    Source code(tar.gz)
    Source code(zip)
  • v0.4.2(Mar 28, 2019)

  • v0.4.1(Mar 26, 2019)

    This release fixes a bug in the computation of Jacobian matrices when the input and output vectors in a vector-valued function have different dimensions (see issue #24).

    Source code(tar.gz)
    Source code(zip)
  • v0.4.0(Feb 20, 2019)

    This release contains changes that enable autodiff to be successfully compiled in Linux, macOS, and Windows.

    Compilers tested were GCC 7, Clang 9, and Visual Studio 2017. Compilers should support C++17.

    Source code(tar.gz)
    Source code(zip)
  • v0.3.0(Feb 5, 2019)

    This release improves the forward mode algorithm to compute derivatives of any order.
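    As a reminder of what "derivatives of any order" means in forward mode: each arithmetic operation propagates all derivative coefficients up to the requested order; for a product, for example, the general Leibniz rule applies:

```latex
(u\,v)^{(k)} = \sum_{j=0}^{k} \binom{k}{j}\, u^{(j)}\, v^{(k-j)}
```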

    It also introduces a proper website containing more detailed documentation of the autodiff library:

    https://autodiff.github.io

    Source code(tar.gz)
    Source code(zip)
  • v0.2.0(Jul 26, 2018)

  • v0.1.0(Jul 19, 2018)

a playground for working with fully static tensors and automatic differentiation

This is a playground for learning about how to apply template-meta-programming to get more efficient evaluation for tensor-based automatic differentiation.

Edward Kmett 16 Mar 18, 2021
Automatic differentiation with weighted finite-state transducers.

GTN: Automatic Differentiation with WFSTs Quickstart | Installation | Documentation What is GTN? GTN is a framework for automatic differentiation with

null 91 Jul 25, 2022
PyCppAD — Python bindings for CppAD Automatic Differentiation library

PyCppAD is an open source framework which provides bindings for the CppAD Automatic Differentiation(CppAD) C++ library in Python. PyCppAD also includes support for the CppADCodeGen (CppADCodeGen), C++ library, which exploits CppAD functionality to perform code generation.

SimpleRobotics 12 Jun 15, 2022
Code generation for automatic differentiation with GPU support.

Code generation for automatic differentiation with GPU support.

Eric Heiden 38 Jun 13, 2022
MissionImpossible - A concise C++17 implementation of automatic differentiation (operator overloading)

Mission : Impossible (AutoDiff) Table of contents What is it? News Compilation Meson CMake Examples Jacobian example Complex number example Hessian ac

pixor 18 Jun 1, 2022
Enoki: structured vectorization and differentiation on modern processor architectures

Enoki: structured vectorization and differentiation on modern processor architectures

Mitsuba Physically Based Renderer 1.1k Aug 8, 2022
compile time symbolic differentiation via C++ template expressions

SEMT - Compile-time symbolic differentiation via C++ templates The SEMT library provides an easy way to define arbitrary functions and obtain their de

null 14 Apr 8, 2022