MiniDNN

A header-only C++ library for deep neural networks

Overview

MiniDNN is a C++ library that implements a number of popular deep neural network (DNN) models. It has a mini codebase but is fully functional for constructing different types of feed-forward neural networks. MiniDNN is built on top of Eigen.

MiniDNN is a header-only library implemented purely in C++98, whose only dependency, Eigen, is also header-only. These features make it easy to embed MiniDNN into larger projects with a broad range of compiler support.

This project was largely inspired by the tiny-dnn library, a header-only C++14 implementation of deep learning models. What makes MiniDNN different is that MiniDNN is based on the high-performance Eigen library for numerical computing, and it has better compiler support.

MiniDNN is still quite experimental for now. Originally I wrote it with the aim of studying deep learning and practicing model implementation, but I also find it useful in my own statistical and machine learning research projects.

Features

  • Able to build feed-forward neural networks with a few lines of code
  • Header-only, highly portable
  • Fast on CPU
  • Modularized and extensible
  • Provides detailed documentation that is a resource for learning
  • Helps in understanding how DNNs work
  • A wonderful opportunity to learn and practice both the nice and the dirty parts of DNNs

Quick Start

The self-explanatory code below is a minimal example to fit a DNN model:

#include <MiniDNN.h>

using namespace MiniDNN;

typedef Eigen::MatrixXd Matrix;
typedef Eigen::VectorXd Vector;

int main()
{
    // Set random seed and generate some data
    std::srand(123);
    // Predictors -- each column is an observation
    Matrix x = Matrix::Random(400, 100);
    // Response variables -- each column is an observation
    Matrix y = Matrix::Random(2, 100);

    // Construct a network object
    Network net;

    // Create three layers
    // Layer 1 -- convolutional, input size 20x20x1, 3 output channels, filter size 5x5
    Layer* layer1 = new Convolutional<ReLU>(20, 20, 1, 3, 5, 5);
    // Layer 2 -- max pooling, input size 16x16x3, pooling window size 3x3
    Layer* layer2 = new MaxPooling<ReLU>(16, 16, 3, 3, 3);
    // Layer 3 -- fully connected, input size 5x5x3, output size 2
    Layer* layer3 = new FullyConnected<Identity>(5 * 5 * 3, 2);

    // Add layers to the network object
    net.add_layer(layer1);
    net.add_layer(layer2);
    net.add_layer(layer3);

    // Set output layer
    net.set_output(new RegressionMSE());

    // Create optimizer object
    RMSProp opt;
    opt.m_lrate = 0.001;

    // (Optional) set callback function object
    VerboseCallback callback;
    net.set_callback(callback);

    // Initialize parameters with N(0, 0.01^2) using random seed 123
    net.init(0, 0.01, 123);

    // Fit the model with a batch size of 100, running 10 epochs with random seed 123
    net.fit(opt, x, y, 100, 10, 123);

    // Obtain prediction -- each column is an observation
    Matrix pred = net.predict(x);

    // Layer objects will be freed by the network object,
    // so do not manually delete them

    return 0;
}

To compile and run this example, simply download the source code of MiniDNN and Eigen, and let the compiler know about their paths. For example:

g++ -O2 -I/path/to/eigen -I/path/to/MiniDNN/include example.cpp
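
The same API also handles classification. The sketch below is a hedged variation on the example above: it assumes the MultiClassEntropy output class and the Softmax activation accept one-hot targets stored column-wise, and that the Adam optimizer exposes m_lrate like RMSProp does (see the API reference for the exact target format and optimizer parameters):

#include <MiniDNN.h>
#include <cstdlib>

using namespace MiniDNN;

typedef Eigen::MatrixXd Matrix;

int main()
{
    std::srand(123);

    // 10-dimensional predictors, 100 observations -- each column is an observation
    Matrix x = Matrix::Random(10, 100);

    // One-hot targets with 3 classes -- classes are assigned at random purely for illustration
    Matrix y = Matrix::Zero(3, 100);
    for (int i = 0; i < y.cols(); i++)
        y(std::rand() % 3, i) = 1.0;

    // Two fully connected layers with a softmax output
    Network net;
    net.add_layer(new FullyConnected<ReLU>(10, 20));
    net.add_layer(new FullyConnected<Softmax>(20, 3));
    net.set_output(new MultiClassEntropy());

    // Adam optimizer (RMSProp from the example above also works)
    Adam opt;
    opt.m_lrate = 0.01;

    net.init(0, 0.01, 123);
    net.fit(opt, x, y, 32, 10, 123);

    // Each column of the prediction holds the class probabilities for one observation
    Matrix prob = net.predict(x);

    return 0;
}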

Documentation

The API reference page contains the documentation of MiniDNN generated by Doxygen, including all the class APIs.

License

MiniDNN is an open source project licensed under MPL2.

Comments
  • Saving and loading models

    Hi @yixuan, Thank you for sharing such a wonderful project with us.

    I wish to contribute to the project by providing model save and load functionalities in MiniDNN. So here is the plan:

    1. Use a standard JSON file format to describe the layers in the model.
    2. Weights of the layers can be saved as a binary file. I found this awesome project which basically facilitates saving Eigen matrices in hdf5. https://github.com/garrison/eigen3-hdf5

    Unfortunately, I am not familiar with template and header-only programming. What approach would you use?

    Asheesh

    opened by Asheeshkrsharma 14
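
    A minimal sketch of the "weights as a binary file" idea from the issue above, using a raw dump of an Eigen matrix instead of HDF5 (native byte order, no portability guarantees; how MiniDNN exposes its parameters is a separate question):

    #include <Eigen/Dense>
    #include <fstream>

    // Write an Eigen matrix as [rows][cols][data] in native binary format.
    void save_matrix(const char* path, const Eigen::MatrixXd& m)
    {
        std::ofstream out(path, std::ios::binary);
        Eigen::Index rows = m.rows(), cols = m.cols();
        out.write(reinterpret_cast<const char*>(&rows), sizeof(rows));
        out.write(reinterpret_cast<const char*>(&cols), sizeof(cols));
        out.write(reinterpret_cast<const char*>(m.data()), sizeof(double) * rows * cols);
    }

    // Read it back; the matrix is resized to the stored dimensions.
    Eigen::MatrixXd load_matrix(const char* path)
    {
        std::ifstream in(path, std::ios::binary);
        Eigen::Index rows = 0, cols = 0;
        in.read(reinterpret_cast<char*>(&rows), sizeof(rows));
        in.read(reinterpret_cast<char*>(&cols), sizeof(cols));
        Eigen::MatrixXd m(rows, cols);
        in.read(reinterpret_cast<char*>(m.data()), sizeof(double) * rows * cols);
        return m;
    }
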
  • sparsepp isn't c++98, therefore MiniDNN is also not c++98 compliant

    With -std=c++98 clang-8 complains:

    MiniDNN/include/Optimizer/../external/sparsepp/spp_utils.h:147:9: error: no member named 'tr1' in namespace 'std'
            SPP_HASH_CLASS<T> hasher;
            ^~~~~~~~~~~~~~
    MiniDNN/include/Optimizer/../external/sparsepp/spp_utils.h:88:36: note: expanded from macro 'SPP_HASH_CLASS'
           #define SPP_HASH_CLASS std::tr1::hash
                                  ~~~~~^
    MiniDNN/include/Optimizer/../external/sparsepp/spp_utils.h:147:24: error: 'T' does not refer to a value
            SPP_HASH_CLASS<T> hasher;
                           ^
    MiniDNN/include/Optimizer/../external/sparsepp/spp_utils.h:142:17: note: declared here
    template <class T>
                    ^
    MiniDNN/include/Optimizer/../external/sparsepp/spp_utils.h:147:27: error: use of undeclared identifier 'hasher'
            SPP_HASH_CLASS<T> hasher;
                              ^
    

    So you might want to update the README with the correct C++ standard level for MiniDNN.

    opened by yurivict 6
  • compile-time specification of scalar type

    Hi! Well done creating this library. It looks fantastic!

    This is a rather minor change. It adds a compilation flag for the Scalar type (-DMDNN_SCALAR). I thought that would be more convenient than changing/patching the config.h file. I've made a few more changes down the line so the types would be Scalar rather than double.

    In order to test this with float types (-DMDNN_SCALAR=float), I've changed the type definitions in the example.cpp as follows:

    ...
    using namespace MiniDNN;
    
    typedef Eigen::Matrix<float, Eigen::Dynamic, Eigen::Dynamic>  Matrix;
    typedef Eigen::Matrix<float, Eigen::Dynamic, 1> Vector;
    ...
    
    

    Thanks! Ben.

    opened by benman1 4
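
    A hedged sketch of what the proposed -DMDNN_SCALAR switch could look like in a config header (hypothetical code, not necessarily the actual patch):

    #include <Eigen/Core>

    // Pick the floating-point type at compile time, e.g. -DMDNN_SCALAR=float;
    // defaults to double when the flag is not given.
    #ifndef MDNN_SCALAR
    #define MDNN_SCALAR double
    #endif

    namespace MiniDNN
    {
        typedef MDNN_SCALAR Scalar;
        typedef Eigen::Matrix<Scalar, Eigen::Dynamic, Eigen::Dynamic> Matrix;
        typedef Eigen::Matrix<Scalar, Eigen::Dynamic, 1> Vector;
    }
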
  • c++11 instead

    Hello yixuan,

    I just downloaded your work and compiled it. I was able to compile and execute the code successfully.

    The only issue is that I have to use the compiler flag "--std=c++11":

    g++ -O2 --std=c++11 -I eigen -I MiniDNN/include example.cpp

    My system reports syntax errors without the flag (or with "--std=c++98").

    Nevertheless, great work!

    With regards, YenHao

    opened by YenHaoChen 4
  • Feature Request: Add Mish activation

    Mish is a novel activation function proposed in this paper. It has shown promising results so far and has been adopted in several packages.

    All benchmarks, analyses, and links to official package implementations can be found in this repository.

    It would be nice to have Mish as an option within the activation function group.

    This is the comparison of Mish with other conventional activation functions in an SEResNet-50 on CIFAR-10 (better accuracy and faster than GELU).

    opened by digantamisra98 3
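
    For reference, the Mish math itself is short; below is a sketch of the forward pass and its derivative on Eigen arrays, independent of MiniDNN's activation-class interface, which a real implementation would have to follow:

    #include <Eigen/Dense>

    typedef Eigen::ArrayXd Array;

    // Mish(x) = x * tanh(softplus(x)), with softplus(x) = log(1 + exp(x)).
    // The naive softplus below can overflow for large x; a production version
    // would guard against that.
    Array mish(const Array& x)
    {
        Array sp = (1.0 + x.exp()).log();
        return x * sp.tanh();
    }

    // d/dx Mish(x) = tanh(sp) + x * sigmoid(x) * (1 - tanh(sp)^2)
    Array mish_derivative(const Array& x)
    {
        Array sp = (1.0 + x.exp()).log();
        Array tsp = sp.tanh();
        Array sig = (1.0 + (-x).exp()).inverse();
        return tsp + x * sig * (1.0 - tsp.square());
    }
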
  • Replacing <windows.h> with <direct.h> in Utils/IO.h

    I was working on some code with your library (which is awesome) when I stumbled across this little mistake. Here is a fix. The _mkdir function does not live in <windows.h> but in <direct.h>. It was only affecting the Windows version.

    opened by debruss 2
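
    For context, _mkdir is declared in <direct.h> on Windows, while POSIX systems use mkdir from <sys/stat.h>; a hedged sketch of the kind of guard involved (a hypothetical helper, not the actual IO.h code):

    #include <string>
    #ifdef _WIN32
    #include <direct.h>     // _mkdir
    #else
    #include <sys/stat.h>   // mkdir
    #include <sys/types.h>
    #endif

    // Hypothetical helper: create a directory, returning true on success.
    bool create_directory(const std::string& dir)
    {
    #ifdef _WIN32
        return ::_mkdir(dir.c_str()) == 0;
    #else
        return ::mkdir(dir.c_str(), 0777) == 0;
    #endif
    }
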
  • simple XOR

    Hey, I was trying out your library and I was having issues. I was just starting out with XOR; the main C++ code is below. Hopefully you can tell me what I'm doing wrong. I noticed that m_Weight is 2x2 for the first layer, which is correct, but the "prev_layer_data" in the feedforward function is 2x4 because my input matrix is 2x4. Shouldn't it be splitting the inputs into each "vector" of inputs? It looks like it's trying to do all the inputs at the same time.

    #include <Eigen/Dense>
    #include <iostream>
    #include "MiniDNN.h"

    using namespace MiniDNN;

    typedef Eigen::MatrixXd Matrix;
    typedef Eigen::VectorXd Vector;

    int main()
    {
        std::srand(123);

    Matrix inputs(2, 4);
    inputs(0, 0) = 0; inputs(1, 0) = 0;
    inputs(0, 1) = 0; inputs(1, 1) = 1;
    inputs(0, 2) = 1; inputs(1, 2) = 0;
    inputs(0, 3) = 1; inputs(1, 3) = 1;
    Matrix outputs(1, 4);
    outputs(0, 0) = 0;
    outputs(0, 1) = 1;
    outputs(0, 2) = 1;
    outputs(0, 3) = 0;
    
    //std::cout << inputs << std::endl;
    //std::cout << outputs << std::endl;
    
    // Construct a network object
    Network net;
    
    // Create layers
    Layer* layer1 = new FullyConnected<Sigmoid>(2, 2);//2 input, 2 hidden
    Layer* layer2 = new FullyConnected<Sigmoid>(2, 1);//1 output
    
    // Add layers to the network object
    net.add_layer(layer1);
    net.add_layer(layer2);
    
    // Set output layer
    net.set_output(new RegressionMSE());
    
    // stochastic gradient descent
    //SGD opt;
    RMSProp opt;
    opt.m_lrate = 0.01;
    
    // Initialize parameters with N(0, 0.01^2) using random seed 123
    net.init(0, 0.01, 123);
    
    // Fit the model with a batch size of 4, running 10 epochs with random seed 123
    net.fit(opt, inputs, outputs, 4, 10, 123);
    
    Matrix pred = net.predict(inputs);
    std::cout << pred << std::endl;

    std::cin.get();

    }

    opened by katzb123 2
  • Some cleaning to make the code comply with C++98

    @giovastabile Related to #13, I did some cleaning of the code in the master branch, so that the program can be compiled in C++98 as the README claims.

    Some code in MiniDNNStream.h requires C++11 (e.g. #include <unsupported/Eigen/CXX11/Tensor>), but it is not needed in the main program. Hence I extracted the necessary functions from MiniDNNStream.h and put them into a new header IO.h. Right now the main program does not include MiniDNNStream.h, but I still keep this file since it may be useful in the future.

    I will keep this PR open for one week, and it will be merged to master if no serious issues are discovered.

    opened by yixuan 1
  • Compile error with GCC 8.3

    With recent GCC I get a compiler warning (which is interpreted as an error in my project due to -Werror):

    sparsepp.h:3881:27: error: ‘void* memcpy(void*, const void*, size_t)’ writing to an object of type ‘spp::sparsetable<std::pair<const double* const, Eigen::Array<double, -1, 1> >, spp::libc_allocator_with_realloc<std::pair<const double* const, Eigen::Array<double, -1, 1> > > >::group_type’ {aka ‘class spp::sparsegroup<std::pair<const double* const, Eigen::Array<double, -1, 1> >, spp::libc_allocator_with_realloc<std::pair<const double* const, Eigen::Array<double, -1, 1> > > >’} with no trivial copy-assignment; use copy-initialization instead [-Werror=class-memaccess]
        memcpy(first, _first_group, sizeof(*first) * (std::min)(sz, old_sz));

    Do you have any idea how to fix this? I can compile it by deactivating that specific warning, but there is definitely an issue in the code...

    opened by SebDyn 1
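
    If the goal is only to silence that diagnostic rather than patch sparsepp, the warning named in the log above can be disabled with GCC's per-warning switch, for example:

    g++ -O2 -Wno-class-memaccess -I/path/to/eigen -I/path/to/MiniDNN/include example.cpp
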
  • Doing inference/prediction only, with weights loaded from a Tensorflow NN

    Nice project! Is it possible to do the following:

    1. Train a CNN with, for example, Tensorflow + Python
    2. Save the model weights into a file
    3. Load this weights file from MiniDNN / C++ code, and do prediction/inference only with MiniDNN (no training done with MiniDNN)

    How would it be possible to load weights/coefficients of a model pre-trained with Tensorflow, into MiniDNN?

    opened by josephernest 1
  • Classification of the Spiral dataset

    Hi there, and thank you very much for this brilliant work with MiniDNN. I am slowly understanding more and more of the code.

    Right now I am testing MiniDNN to classify the Spiral dataset: https://cs231n.github.io/neural-networks-case-study/

    It doesn't seem to find the optimum.

    Any tips on my code, or on what optimizer to use? Has anyone of you tried test data for classification?

    I would be happy to put together some test data and make another tutorial example... with some help. :)

    Sincerely, Bernt

    // Code to read in the data and put it into a Matrix is skipped.

    Network net;
    Layer* layer1 = new FullyConnected<Sigmoid>(2, 20);
    Layer* layer2 = new FullyConnected<ReLU>(20, 20);
    Layer* layer3 = new FullyConnected<Softmax>(20, 3);
    net.add_layer(layer1);
    net.add_layer(layer2);
    net.add_layer(layer3);
    
    net.set_output(new MultiClassEntropy() );
    
    //Adam opt;
    //opt.m_lrate = 0.01;
    SGD opt;
        
    VerboseCallback callback;
    net.set_callback(callback);
    net.init(0, 0.01, 000);
    
    int nr_epochs = 3000;
    net.fit(opt, Xdata.transpose(), Ydata.transpose(), 60, nr_epochs , 000);
    
    Matrix pred = net.predict(Xtest.transpose() );
    Matrix P = pred.transpose();
    
    std::cout << P.rows()  << "  " << P.cols() << std::endl;
    for(int r = 0; r < P.rows() ; r++){
        std::cout << P(r,0) << " " << P(r,1) << " " << P(r,2) << "\n";
    }
    
    opened by bermat72 0
  • The quick-start example doesn't compile on Debian10 64 bit (run in WSL on Windows 10 from VisualStudioCode)

    on any g++ v8.3 compiler (g++, c89-gcc, c99-gcc) chosen and any C++ standard (CMAKE_CXX_STANDARD 98 / 11 / 14).

    Eigen3 v3.3.7-1 is installed from within Debian: apt install libeigen3-dev

    The project is really needed for a very old Linux (Debian6, which can't be upgraded on the targets), where it builds fine with gcc-4.4.5 but issues many "Parameter .. not used" warnings. Debian10 WSL in Windows 10 is only tried for development convenience, since the Eigen version is 3.3.7-1 in both Debian6 and Debian10.

    The build log:

    [main] Building folder: uzsearch uzsearch
    [build] Starting build
    [proc] Executing command: /usr/bin/cmake --build /home/pochta/myprojects/uzsearch/build --config Debug --target uzsearch -- -j 6
    [build] [ 50%] Building CXX object CMakeFiles/uzsearch.dir/main.cpp.o
    [build] In file included from /home/pochta/myprojects/GITROOT/MiniDNN/include/Callback/VerboseCallback.h:8,
    [build] from /home/pochta/myprojects/GITROOT/MiniDNN/include/MiniDNN.h:34,
    [build] from /home/pochta/myprojects/uzsearch/main.cpp:1:
    [build] /home/pochta/myprojects/GITROOT/MiniDNN/include/Callback/../Network.h: In instantiation of ‘bool MiniDNN::Network::fit(MiniDNN::Optimizer&, const Eigen::MatrixBase&, const Eigen::MatrixBase&, int, int, int) [with DerivedX = Eigen::Matrix<double, -1, 2>; DerivedY = Eigen::Matrix<double, -1, 2>]’:
    [build] /home/pochta/myprojects/uzsearch/main.cpp:48:36: required from here
    [build] /home/pochta/myprojects/GITROOT/MiniDNN/include/Callback/../Network.h:477:21: error: call of overloaded ‘pre_training_batch(MiniDNN::Network*, Eigen::Matrix<double, -1, 2>&, Eigen::Matrix<double, -1, 2>&)’ is ambiguous
    [build] m_callback->pre_training_batch(this, x_batches[i], y_batches[i]);
    [build] ^~~~~~~~~~
    [build] In file included from /home/pochta/myprojects/GITROOT/MiniDNN/include/MiniDNN.h:33,
    [build] from /home/pochta/myprojects/uzsearch/main.cpp:1:
    [build] /home/pochta/myprojects/GITROOT/MiniDNN/include/Callback.h:49:22: note: candidate: ‘virtual void MiniDNN::Callback::pre_training_batch(const MiniDNN::Network*, const Matrix&, const Matrix&)’
    [build] virtual void pre_training_batch(const Network* net, const Matrix& x,
    [build] ^~~~~~~~~~~~~~~~~~
    [build] /home/pochta/myprojects/GITROOT/MiniDNN/include/Callback.h:51:22: note: candidate: ‘virtual void MiniDNN::Callback::pre_training_batch(const MiniDNN::Network*, const Matrix&, const IntegerVector&)’
    [build] virtual void pre_training_batch(const Network* net, const Matrix& x,
    [build] ^~~~~~~~~~~~~~~~~~
    [build] In file included from /home/pochta/myprojects/GITROOT/MiniDNN/include/Callback/VerboseCallback.h:8,
    [build] from /home/pochta/myprojects/GITROOT/MiniDNN/include/MiniDNN.h:34,
    [build] from /home/pochta/myprojects/uzsearch/main.cpp:1:
    [build] /home/pochta/myprojects/GITROOT/MiniDNN/include/Callback/../Network.h:481:21: error: call of overloaded ‘post_training_batch(MiniDNN::Network*, Eigen::Matrix<double, -1, 2>&, Eigen::Matrix<double, -1, 2>&)’ is ambiguous
    [build] m_callback->post_training_batch(this, x_batches[i], y_batches[i]);
    [build] ^~~~~~~~~~
    [build] In file included from /home/pochta/myprojects/GITROOT/MiniDNN/include/MiniDNN.h:33,
    [build] from /home/pochta/myprojects/uzsearch/main.cpp:1:
    [build] /home/pochta/myprojects/GITROOT/MiniDNN/include/Callback.h:55:22: note: candidate: ‘virtual void MiniDNN::Callback::post_training_batch(const MiniDNN::Network*, const Matrix&, const Matrix&)’
    [build] virtual void post_training_batch(const Network* net, const Matrix& x,
    [build] ^~~~~~~~~~~~~~~~~~~
    [build] /home/pochta/myprojects/GITROOT/MiniDNN/include/Callback.h:57:22: note: candidate: ‘virtual void MiniDNN::Callback::post_training_batch(const MiniDNN::Network*, const Matrix&, const IntegerVector&)’
    [build] virtual void post_training_batch(const Network* net, const Matrix& x,
    [build] ^~~~~~~~~~~~~~~~~~~
    [build] make[3]: *** [CMakeFiles/uzsearch.dir/build.make:63: CMakeFiles/uzsearch.dir/main.cpp.o] Error 1
    [build] make[2]: *** [CMakeFiles/Makefile2:585: CMakeFiles/uzsearch.dir/all] Error 2
    [build] make[1]: *** [CMakeFiles/Makefile2:597: CMakeFiles/uzsearch.dir/rule] Error 2
    [build] make: *** [Makefile:359: uzsearch] Error 2
    [build] Build finished with exit code 2

    It also fails at runtime with only the instantiation Matrix x = Matrix::Random(400, 100); (all the later code is commented out), with the run log:

    pochta@Vano-Home:~/myprojects/uzsearch$ /home/pochta/myprojects/uzsearch/build/uzsearch
    uzsearch: /usr/include/eigen3/Eigen/src/Core/util/XprHelper.h:110: Eigen::internal::variable_if_dynamic<T, Value>::variable_if_dynamic(T) [with T = long int; int Value = 2]: Assertion `v == T(Value)' failed.
    Aborted (core dumped)

    opened by IvankoB 0
  • new compiler flag for storage order

    Hi! This is a bit along the same lines as the last PR, but with storage order. I noticed there were a lot of Matrix and Vector typedefs, so I pulled them all together into Config.h, and then decided it would be nice to be able to define the storage order, i.e. row- or column-major, at compile time.

    So there's a new compiler flag MDNN_ROWMAJOR that, if set to 1, makes matrices row-major.

    Test: (I'll attach the mostly unchanged example.cpp for convenience)

    > g++ -I ./include/ example.cpp -DMDNN_ROWMAJOR=1
    > ./a.out
    (base) ben@Ben-xubuntu:~/MiniDNN$ ./a.out 
    IsRowMajor?: 1
    [Epoch 0, batch 0] Loss = 0.328066
    [Epoch 1, batch 0] Loss = 0.327707
    [Epoch 2, batch 0] Loss = 0.327475
    [Epoch 3, batch 0] Loss = 0.327273
    [Epoch 4, batch 0] Loss = 0.327095
    [Epoch 5, batch 0] Loss = 0.32692
    [Epoch 6, batch 0] Loss = 0.326753
    [Epoch 7, batch 0] Loss = 0.326593
    [Epoch 8, batch 0] Loss = 0.326437
    [Epoch 9, batch 0] Loss = 0.326274
    
    > g++ -I ./include/ example.cpp
    > ./a.out 
    IsRowMajor?: 0
    [Epoch 0, batch 0] Loss = 0.32792
    [Epoch 1, batch 0] Loss = 0.326679
    [Epoch 2, batch 0] Loss = 0.325873
    [Epoch 3, batch 0] Loss = 0.325187
    [Epoch 4, batch 0] Loss = 0.324576
    [Epoch 5, batch 0] Loss = 0.324013
    [Epoch 6, batch 0] Loss = 0.323497
    [Epoch 7, batch 0] Loss = 0.323033
    [Epoch 8, batch 0] Loss = 0.322599
    [Epoch 9, batch 0] Loss = 0.322178
    
    opened by benman1 5
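
    A hedged sketch of how such a storage-order switch could look in a config header (hypothetical code; the actual PR may wire it up differently):

    #include <Eigen/Core>

    // -DMDNN_ROWMAJOR=1 selects row-major matrices; the default stays column-major.
    #if defined(MDNN_ROWMAJOR) && MDNN_ROWMAJOR
    #define MDNN_STORAGE Eigen::RowMajor
    #else
    #define MDNN_STORAGE Eigen::ColMajor
    #endif

    namespace MiniDNN
    {
        typedef Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, MDNN_STORAGE> Matrix;
    }
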
Owner
Yixuan Qiu