NN++

A small and easy-to-use neural net implementation for C++. Just download and #include!

Overview

A short, self-contained, and easy-to-use neural net implementation for C++. It includes the neural net implementation and a Matrix class for basic linear algebra operations. This project is mostly for learning purposes, but preliminary testing results on the MNIST dataset show some promise.

Getting Started

These instructions will get you a copy of the net up and running on your local machine for development and testing purposes.

Prerequisites

Any compiler that can handle C++11.

Installing

  1. Download Matrix.hpp, Matrix.cpp, NeuralNet.hpp, and NeuralNet.cpp and place them in your project's working directory.
  2. Include the headers in your main driver program (e.g. main.cpp).
#include "Matrix.hpp"
#include "NeuralNet.hpp"

NOTE: It is not required to #include "Matrix.hpp", since it is already included by NeuralNet.hpp. However, it is probably better to do so for clarity and safety, in case you plan to use Matrix objects in your code (and you probably will if you use NeuralNet).

Example Code

The Matrix Class

First you need to know how to use the Matrix class. Matrix objects are basically 2D vectors with built-in linear algebra operations.

Matrix Initialization

Matrix A;        // Initializes a 0x0 matrix.
Matrix B(2,2);   // Initializes a 2x2 matrix with all zeros. Values are doubles.
Matrix C(2,1);   // Initializes a 2x1 matrix with all zeros.

Element Access

To access/modify a value in a matrix, use operator(), NOT operator[]:

B(0,0) = 1; B(0,1) = 2; B(1,0) = 3; B(1,1) = 4;   // [1    2]
                                                  // [3    4]

C(0,0) = 1; C(1,0) = 2;                           // [1]
                                                  // [2]

Matrix Term-by-Term Addition/Subtraction/Multiplication

// Commutative property is supported for addition
Matrix D = B+B;       // D = [2   4]
                             [6   8]
                             
Matrix E = B-B;       // E = [0   0]
                             [0   0]
                             
// Commutative property is supported for multiplication                             
Matrix F = B*B;       // F = [1   4]
                             [9  16]
                             
// Mismatching matrix dimensions in term-by-term operations
// is illegal and a MatrixDimensionsMismatch exception will be thrown.
Matrix G = B+C;       // Throws MatrixDimensionsMismatch()
Matrix H = B-C;       // Throws MatrixDimensionsMismatch()
Matrix I = B*C;       // Throws MatrixDimensionsMismatch()

Matrix and Scalars

// Commutative property is supported for addition
Matrix BplusTwo = B+2;  // (== 2+B)   BplusTwo = [3   4]
                                                 [5   6]

Matrix CminusTwo = C-2; //           CminusTwo = [-1]
                                                 [ 0]
                                                 
Matrix TwominusC = 2-C; //           TwominusC = [ 1]
                                                 [ 0]
                                                 
// Commutative property is supported for multiplication
Matrix BtimesThree = B*3; // (== 3*B) BtimesThree = [3    6]
                                                    [9   12]

Matrix Multiplication (Dot Product)

Matrix BB = B.dot(B);     // BB = [ 7  10]
                                  [15  22]
                                  
Matrix BC = B.dot(C);     // BC = [ 5]
                                  [11]

// Mismatching the number of columns in the left-hand-side matrix
// with the number of rows in the right-hand-side matrix is illegal.
// A MatrixInnderDimensionsMismatch exception will be thrown.
Matrix CB = C.dot(B);     // Throws MatrixInnderDimensionsMismatch()

Matrix Transpose

Matrix B_T = B.T();   // B_T = [1   3]
                               [2   4]
                                 
Matrix C_T = C.T();   // C_T = [1   2]

An Example of Populating a 4x3 Matrix

int m = 4;
int n = 3;

Matrix mtrx(m,n);

int count = 1;
for (int i = 0; i < mtrx.getNumOfRows(); ++i) {
    for (int j = 0; j < mtrx.getNumOfCols(); ++j) {
        mtrx(i,j) = count;
        ++count;
    }
}

This will result in mtrx ==

[ 1     2     3]
[ 4     5     6]
[ 7     8     9]
[10    11    12]

The NeuralNet Class

Neural Net Initialization (The Parameters)

When initialized, a net takes in five parameters:

  1. Number of input nodes.
  2. Number of nodes per hidden layer.
  3. Number of hidden layers.
  4. Number of output nodes.
  5. The learning rate.
NeuralNet NN(4, 3, 1, 10, 0.1);

This particular neural net has 4 input nodes, 1 hidden layer with 3 nodes, 10 output nodes, and a learning rate of 0.1.
New neural nets' weights are initialized with values drawn from a normal distribution centered at 0, with a standard deviation equal to 1/sqrt(number_of_inputs_to_nodes_in_next_layer). In other words, small negative and positive values whose spread shrinks as the previous layer grows.

A Training Cycle

Once the net is initialized, it is ready to do work.
ONE training cycle == one feed forward and one back propagation with weight adjustments.

To train one cycle, the input data must be parsed into a Matrix object with dimensions: 1xnumber_of_input_nodes (1x4 in our case), and the target output must be parsed into a Matrix object with dimensions: 1xnumber_of_output_nodes (1x10 in our case).

Matrix input(1,4);
input(0,0) =  0.3;
input(0,1) = -0.1;
input(0,2) =  0.2;
input(0,3) =  0.8;

Matrix targetOutput(1,10);
targetOutput(0,0) =  0.5;
targetOutput(0,1) = -0.3;
        .
        .
        .
targetOutput(0,9) =  0.23;  // Obviously, matrices should be populated using
                            // some parser and not manually like this.

Then, simply execute the training cycle on the data as follows:

NN.trainingCycle(input, targetOutput);

Repeat the process over all training instances.

Querying the Net

Once the training phase is complete, you can query the net as follows (technically speaking, you can query it right after initialization):

Parse the query into a Matrix just like a training instance:

Matrix query(1,4);
query(0,0) =  0.5;
query(0,1) = -0.2;
query(0,2) = -0.3;
query(0,3) =  0.4;

Query the net and catch the result:

Matrix prediction = NN.queryNet(query);   // Returns a 1x10 Matrix object with the net's prediction

AND THAT'S IT!

TODO

  1. Add array, std::vector, and std::initializer_list constructors to the Matrix class.
  2. Either improve on or replace the Matrix class for better/faster performance.
  3. Add multiple-epoch learning with early stopping.

Authors

  • Gil Dekel - Initial implementation - stagadish

See also the list of contributors who participated in this project.

License

This project is licensed under the MIT License - see the LICENSE.md file for details

Issues
  • Add some work on operator << and a (very) general training function

    Happy to do some more work to get this up to scratch (I'm new to performance-heavy C++).

    I started work on a training function that takes a std::vector<std::pair<Matrix, Matrix>> and a lambda (for defining your own end conditions), which should make training simple.

    I can add some documentation on how to use it if you like it.

    Thanks

    opened by Cypher1 6
  • Some issues regarding your library

    Hey,

    Today I got to know about your library through your post on G+. I am also trying to write a similar library/toolbox to be used in optimization.

    At first look at your library, I can say that it looks neat. However, it would be great if you considered using templates in your data structures. Or, if you would like to go for inheritance for some reason, then you can abstract some functionality into base classes.

    What I mean by the above comment is that, for example, you could use an activation function abstraction and keep a pointer inside your NeuralNet class to support different activation functions, such as:

    #include <iostream>
    #include <vector>
    
    template <typename T>
    class IFunction {
    public:
      virtual T operator()(T variable) const = 0;
      virtual ~IFunction() = default;
    };
    
    template <typename T>
    class PReLU : public IFunction<T> {
    private:
      T alpha_;
    public:
      PReLU(T alpha = T { 0 }) : alpha_ {alpha} { }
      T operator()(T variable) const override {
        T zero { 0 };
        return variable < zero ? alpha_*variable : variable;
      }
    };
    
    template <typename T>
    class Identity : public IFunction<T> {
    public:
      T operator()(T variable) const override {
        return variable;
      }
    };
    
    int main (int argc, char* argv[]) {
      Identity    <double>  identity_function {         };
      PReLU       <double>  relu_function     {         };
      PReLU       <double>  absolute_value    { -1.     };
    
      IFunction   <double>  *generic_func_ptr { nullptr };
    
      size_t numelems { 100ul };
      double xmin { -5. }, xmax { 5. }, dx {(xmax - xmin)/(numelems-1)};
      std::vector <double>  xvalues ( numelems );
      for (size_t idx = 0; idx < xvalues.size(); idx++)
        xvalues[idx] = xmin + idx*dx;
    
      // either directly call as if your instances were functions (actually, they
      // are functors right now, with their operator()'s overloaded)
      for (auto x : xvalues)
        std::cout << "x: " << x
                  << ", identity_function(x): " << identity_function(x)
                  << ", relu_function(x): " << relu_function(x)
                  << std::endl;
    
      // or, use polymorphism
      generic_func_ptr = &identity_function;
      for (auto x : xvalues)
        std::cout << "x: " << x
                  << ", generic_func_ptr->operator()(x): "
                  << generic_func_ptr->operator()(x)
                  << std::endl;
    
      // or, both
      generic_func_ptr = &relu_function;
      for (auto x : xvalues)
        std::cout << "x: " << x
                  << ", (*generic_func_ptr)(x): " << (*generic_func_ptr)(x)
                  << std::endl;
    
      // absolute value function
      for (auto x : xvalues)
        std::cout << "x: " << x
                  << ", absolute_value(x): " << absolute_value(x)
                  << std::endl;
    
      return 0;
    }
    

    I hope this helps :) Good luck with your code!...

    opened by aytekinar 3