Genann

A simple neural network library in ANSI C.

Genann is a minimal, well-tested library for training and using feedforward artificial neural networks (ANN) in C. Its primary focus is on being simple, fast, reliable, and hackable. It achieves this by providing only the necessary functions and little extra.

Features

  • ANSI C with no dependencies.
  • Contained in a single source code and header file.
  • Simple.
  • Fast and thread-safe.
  • Easily extendible.
  • Implements backpropagation training.
  • Compatible with alternative training methods (classic optimization, genetic algorithms, etc.).
  • Includes examples and test suite.
  • Released under the zlib license - free for nearly any use.

Building

Genann is self-contained in two files: genann.c and genann.h. To use Genann, simply add those two files to your project.

Example Code

Four example programs are included with the source code.

Quick Example

We create an ANN that takes 2 inputs, has 1 hidden layer of 3 neurons, and provides 2 outputs. It has the following structure:

[Figure: example network structure]

We then train it on a set of labeled data using backpropagation and ask it to predict on a test data point:

#include "genann.h"

/* Not shown: loading your training and test data. */
double **training_data_input, **training_data_output, **test_data_input;
int i, j;

/* New network with 2 inputs,
 * 1 hidden layer of 3 neurons,
 * and 2 outputs. */
genann *ann = genann_init(2, 1, 3, 2);

/* Learn on the training set. */
for (i = 0; i < 300; ++i) {
    for (j = 0; j < 100; ++j)
        genann_train(ann, training_data_input[j], training_data_output[j], 0.1);
}

/* Run the network and see what it predicts. */
double const *prediction = genann_run(ann, test_data_input[0]);
printf("Output for the first test data point is: %f, %f\n", prediction[0], prediction[1]);

genann_free(ann);

This example shows API usage; it does not demonstrate good machine learning technique. In a real application you would likely want to train on the training data in a random order. You would also want to monitor the learning to prevent over-fitting.
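
As a rough illustration, here is a minimal sketch of training in a shuffled order. It assumes the same training_data_input and training_data_output arrays as above; the shuffling helper itself is illustrative and not part of Genann:

#include <stdlib.h>
#include "genann.h"

/* Train for one epoch, visiting the n samples in a random order. */
void train_epoch_shuffled(genann *ann, double **inputs,
        double **outputs, int n, double learning_rate) {
    int *order = malloc(n * sizeof(int));
    int i;
    for (i = 0; i < n; ++i) order[i] = i;

    /* Fisher-Yates shuffle of the sample indices. */
    for (i = n - 1; i > 0; --i) {
        int j = rand() % (i + 1);
        int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
    }

    for (i = 0; i < n; ++i)
        genann_train(ann, inputs[order[i]], outputs[order[i]], learning_rate);

    free(order);
}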

Usage

Creating and Freeing ANNs

genann *genann_init(int inputs, int hidden_layers, int hidden, int outputs);
genann *genann_copy(genann const *ann);
void genann_free(genann *ann);

Creating a new ANN is done with the genann_init() function. Its arguments are the number of inputs, the number of hidden layers, the number of neurons in each hidden layer, and the number of outputs. It returns a genann struct pointer.

Calling genann_copy() will create a deep-copy of an existing genann struct.

Call genann_free() when you're finished with an ANN returned by genann_init().
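
Putting these three calls together, a minimal sketch (the layer sizes here are arbitrary):

#include "genann.h"

int main(void) {
    /* 4 inputs, 2 hidden layers of 8 neurons each, 1 output. */
    genann *ann = genann_init(4, 2, 8, 1);
    if (!ann) return 1;

    /* Deep copy, e.g. to snapshot the current weights. */
    genann *snapshot = genann_copy(ann);

    /* ... train ann, compare against snapshot ... */

    genann_free(snapshot);
    genann_free(ann);
    return 0;
}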

Training ANNs

void genann_train(genann const *ann, double const *inputs,
        double const *desired_outputs, double learning_rate);

genann_train() will perform one update using standard backpropagation. It should be called with an array of inputs, an array of desired outputs, and a learning rate. See example1.c for an example of learning with backpropagation.

A primary design goal of Genann was to store all the network weights in one contiguous block of memory. This makes it easy and efficient to train the network weights using direct-search numeric optimization algorithms, such as Hill Climbing, the Genetic Algorithm, Simulated Annealing, etc. These methods can be used by searching on the ANN's weights directly. Every genann struct contains the members int total_weights; and double *weight;. weight points to an array of total_weights doubles containing all weights used by the ANN. See example2.c for an example of training using random hill climbing search, and the sketch below.
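
For instance, a minimal sketch of random hill climbing directly on the weight array. fitness() is a hypothetical user-supplied scoring function (higher is better); example2.c is the real reference:

#include <stdlib.h>
#include "genann.h"

/* Hypothetical: score the network on your task (higher is better). */
double fitness(genann const *ann);

void hill_climb(genann *ann, int iterations) {
    double best = fitness(ann);
    int it, i;
    for (it = 0; it < iterations; ++it) {
        genann *candidate = genann_copy(ann);
        double score;
        /* Perturb every weight by a small random amount. */
        for (i = 0; i < candidate->total_weights; ++i)
            candidate->weight[i] += ((double)rand() / RAND_MAX - 0.5) * 0.1;
        score = fitness(candidate);
        if (score > best) {
            /* Keep the improvement by copying the weights back. */
            for (i = 0; i < ann->total_weights; ++i)
                ann->weight[i] = candidate->weight[i];
            best = score;
        }
        genann_free(candidate);
    }
}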

Saving and Loading ANNs

genann *genann_read(FILE *in);
void genann_write(genann const *ann, FILE *out);

Genann provides the genann_read() and genann_write() functions for loading or saving an ANN in a text-based format.
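
For example, a small save-and-reload round trip (the file name is arbitrary and error handling is kept minimal):

#include <stdio.h>
#include "genann.h"

int save_and_reload(genann **ann) {
    FILE *f = fopen("ann.txt", "w");
    if (!f) return -1;
    genann_write(*ann, f);
    fclose(f);

    f = fopen("ann.txt", "r");
    if (!f) return -1;
    genann_free(*ann);
    *ann = genann_read(f);
    fclose(f);
    return *ann ? 0 : -1;
}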

Evaluating

double const *genann_run(genann const *ann, double const *inputs);

Call genann_run() on a trained ANN to run a feed-forward pass on a given set of inputs. genann_run() returns a pointer to the array of predicted outputs (of length ann->outputs).
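
For instance, a minimal sketch of copying the predictions into caller-owned storage (out must have room for ann->outputs doubles):

#include <string.h>
#include "genann.h"

void predict_into(genann const *ann, double const *inputs, double *out) {
    double const *prediction = genann_run(ann, inputs);
    /* Copy the ann->outputs predicted values out of the network. */
    memcpy(out, prediction, ann->outputs * sizeof(double));
}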

Hints

  • All functions start with genann_.
  • The code is simple. Dig in and change things.

Extra Resources

The comp.ai.neural-nets FAQ is an excellent resource for an introduction to artificial neural networks.

If you need an even smaller neural network library, check out the excellent single-hidden-layer library tinn.

If you're looking for a heavier, more opinionated neural network library in C, I recommend the FANN library. Another good library is Peter van Rossum's Lightweight Neural Network, which despite its name, is heavier and has more features than Genann.

Comments
  • Addition of a build system generator

    opened by elfring 12
  • Add Meson build support

    I think it would help to add a meson.build script for Meson users. I was going to wrap your library and have it added to the WrapDB, but there don't seem to be any archived releases of this library.

    Why?

    opened by troglodyte-coder 6
  • link issue using msvc

    Great work on this, just wanted to make note of a minor issue when using MSVC (compiler v19, linker v14) to compile and link with the test file for Windows. (gcc on Linux worked fine.)

    It seems the 'inline' definitions are causing unresolved external symbol errors (e.g. genann_act_threshold). I messed around with different optimization switches (/GL, /LTCG, /Ob{0|1|2}, etc.) but it doesn't seem to help. Making the declarations explicitly 'extern' fixes it, but I'm not sure if that will cause issues elsewhere, or make the 'inline' superfluous.

    Or maybe I'm missing something obvious?

    bug help wanted 
    opened by jeog 5
  • ANSI C compatibility fixes: declarations must always precede statements

    genann claims to be ANSI C compatible, but clang disagrees when invoked with -std=gnu89:

    genann.c:92:12: warning: ISO C90 forbids mixing declarations and code [-Wdeclaration-after-statement]
        size_t j = (size_t)((a-sigmoid_dom_min)*interval+0.5);
               ^
    genann.c:115:15: warning: ISO C90 forbids mixing declarations and code [-Wdeclaration-after-statement]
        const int hidden_weights = hidden_layers ? (inputs+1) * hidden + (hidden_layers-1) * (hidden+1) * hidden : 0;
                  ^
    genann.c:161:13: warning: ISO C90 forbids mixing declarations and code [-Wdeclaration-after-statement]
        genann *ann = genann_init(inputs, hidden_layers, hidden, outputs);
                ^
    genann.c:220:9: warning: ISO C90 forbids mixing declarations and code [-Wdeclaration-after-statement]
        int h, j, k;
            ^
    genann.c:282:9: warning: ISO C90 forbids mixing declarations and code [-Wdeclaration-after-statement]
        int h, j, k;
            ^
    genann.c:399:9: warning: ISO C90 forbids mixing declarations and code [-Wdeclaration-after-statement]
        int i;
            ^
    6 warnings generated.
    

    This patch fixes the above.

    opened by mateuszviste 4
  • Removed inline function specifiers

    This pull request fixes issue #28, which was unrelated to the build environment the user was building on. Instead, the inline function specifiers meant that a build of only one header and one source file would build correctly, even in the case of a library, but since the current master branch has no library build system, users were including the files genann.c and genann.h to their projects.

    This meant that projects were trying to link to an inline function definition that they did not have access to because it was defined in a different compilation unit. As stated in the commit message, I thought about moving the functions to the header, but I figured the best way to move forward for now is to simply remove the inline specifier while we profile builds both with and without them, and see what the results say.

    The alternative would have required moving a significant number of macros to the header file as well, which could have had a waterfall effect on other builds, since anyone linking to the library would have needed the header, which was now full of new macros. Another option would have been to remove the unused specifiers on the variables, but then there would be no point in having the macros in the first place.

    This option was the simplest one, and until we profile the builds, I think it's the one that makes the most sense.

    opened by jflopezfernandez 3
  • genann_run() always returns a constant value

    I trained a very simple stock price predictor. The inputs and outputs that go into genann_train() seem correct, but as soon as the training is completed, a call to genann_run() with any set of inputs will always produce the same constant value.

    const auto layers = 40;
    const auto neurons = 40;
    const auto N = 100;
    genann* ann = genann_init(5, layers, neurons, 1);
    
    ... 
    
    for (auto& p: inputs) {
        const double o = outputs[i_outputs] / maxValue[targetFilename];
        genann_train(ann, p.second.data(), &o, 0.01);
        i_outputs++;
    }
    
    ... 
    
    for (... test inputs ...) {
       const auto prediction_raw = *genann_run(ann, input);
       LOG(INFO) << "prediction_raw: " << prediction_raw;
    }
    

    The prediction_raw value will be the same, regardless of the input.

    opened by korovkin 3
  • There is no srand in the examples

    Hi, first of all thank you for this simple and interesting software! I get a problem like the one in issue 2: example1 doesn't work right (but the tests pass):

    $ ./example1
    GENANN example 1.
    Train a small ANN to the XOR function using backpropagation.
    Output for [0, 0] is 0.
    Output for [0, 1] is 1.
    Output for [1, 0] is 1.
    Output for [1, 1] is 1.
    

    This result is always the same and depends only on the training duration. I found that the srand function is never used in the examples or the library:

    $ ack-grep srand
    test.c
    261:    srand(100)
    

    In test.c, srand has a constant argument, so the neural network is always initialized with the same data. The C library function void srand(unsigned int seed) seeds the random number generator used by the function rand. I think that srand should be included in every example:

    #include <time.h>
    ...
    srand(time(NULL));
    
    opened by aleksei-udalov 3
  • Feature Extraction

    In example4.c there is an example (l. 96) where features are extracted column by column from a file in the genann_train function. Is there a way to extract selected features? Or does anyone have an idea or suggestions on how to implement that?

    opened by mbeddeveloper 2
  • Refactored project to incorporate a build system

    The project now uses Autotools as its build system, and running the usual './configure', 'make', 'make install' results in the building of a shared library that gets installed in /usr/local/lib by default, as well as the genann.h header, which gets installed in /usr/local/include.

    All of the usual configuration flexibility is present, including gmp and mpfr support simply as a demonstration of the system's auto-configuration ability. If the user specifies --with-gmp or --with-mpfr, the build system looks for the installation and, if found, automatically links the target to those libraries.

    The usual variables are configurable (CC, CFLAGS, CPPFLAGS, LDFLAGS, LIBS), and the configuration even allows specifying the activation function through there with ACTIVATION_FUNCTION=SIGMOID, although LINEAR and THRESHOLD are also options. Specifying these options defines the necessary preprocessor symbol.

    Running 'make check' builds the test, links it against the library, and runs it.

    Running 'make examples' builds all four examples.

    opened by jflopezfernandez 2
  • -nan Assertion Error

    When the activation_output and activation_hidden functions are set to sigmoid_cached, every once in a while the assertion error pops up. P.S. I'm quite new to neural networks and was using your code to learn.

    opened by Pakleni 2
  • A stack buffer overflow has been found.

    A stack buffer overflow has been found in genann.c:299:

    =================================================================
    ==12375==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffd0e981ce0 at pc 0x0000004081b4 bp 0x7ffd0e981b50 sp 0x7ffd0e981b40
    READ of size 8 at 0x7ffd0e981ce0 thread T0
        #0 0x4081b3 in genann_train /home/mfc_fuzz/genann/genann.c:299
        #1 0x40147c in main /home/mfc_fuzz/genann/example1.c:36
        #2 0x7f823226182f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2082f)
        #3 0x4018a8 in _start (/home/mfc_fuzz/genann/example1+0x4018a8)
    
    Address 0x7ffd0e981ce0 is located in stack of thread T0 at offset 160 in frame
        #0 0x40110f in main /home/mfc_fuzz/genann/example1.c:5
    
      This frame has 3 object(s):
        [64, 80) 'a'
        [128, 160) 'output' <== Memory access at offset 160 overflows this variable
        [192, 256) 'input'
    HINT: this may be a false positive if your program uses some custom stack unwind mechanism or swapcontext
          (longjmp and C++ exceptions *are* supported)
    SUMMARY: AddressSanitizer: stack-buffer-overflow /home/mfc_fuzz/genann/genann.c:299 genann_train
    Shadow bytes around the buggy address:
      0x100021d28340: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x100021d28350: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x100021d28360: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x100021d28370: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x100021d28380: 00 00 00 00 00 00 00 00 f1 f1 f1 f1 f1 f1 f1 f1
    =>0x100021d28390: 00 00 f4 f4 f2 f2 f2 f2 00 00 00 00[f2]f2 f2 f2
      0x100021d283a0: 00 00 00 00 00 00 00 00 f3 f3 f3 f3 f3 f3 f3 f3
      0x100021d283b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x100021d283c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x100021d283d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x100021d283e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    Shadow byte legend (one shadow byte represents 8 application bytes):
      Addressable:           00
      Partially addressable: 01 02 03 04 05 06 07 
      Heap left redzone:       fa
      Heap right redzone:      fb
      Freed heap region:       fd
      Stack left redzone:      f1
      Stack mid redzone:       f2
      Stack right redzone:     f3
      Stack partial redzone:   f4
      Stack after return:      f5
      Stack use after scope:   f8
      Global redzone:          f9
      Global init order:       f6
      Poisoned by user:        f7
      Container overflow:      fc
      Array cookie:            ac
      Intra object redzone:    bb
      ASan internal:           fe
    ==12375==ABORTING
    

    The program I ran was example1, but I have made some changes in that file. The example1 I wrote has been placed at : https://github.com/fCorleone/fuzz_programs/blob/master/genann/example1.c The input file has been put here: https://github.com/fCorleone/fuzz_programs/blob/master/genann/testcase

    opened by fCorleone 2
  • Add vcpkg installation instructions

    genann is available as a port in vcpkg, a C++ library manager that simplifies installation for genann and other project dependencies. Documenting the install process here will help users get started by providing a single set of commands to build genann, ready to be included in their projects.

    We also test whether our library ports build in various configurations (dynamic, static) on various platforms (OSX, Linux, Windows: x86, x64) to keep a wide coverage for users.

    I'm a maintainer for vcpkg, and here is what the port script looks like. We try to keep the library maintained as close as possible to the original library. :)

    opened by FrankXie05 0
  • Non-linear regression

    I'm not skilled with machine learning and I'm trying to study its practical applications. I'm trying to use an ANN for non-linear regression of the function sin(x) with analytical points, and I have found some problems.

    Creating an ANN with 10 hidden layers of 4 neurons, and running a for loop 1 million times over 100 points with x values evenly distributed between 0 and 2*pi, I obtain this:

    [Figure: the resulting regression output]

    with this setting applied once the ANN is created: ann->activation_output = genann_act_linear; and the output doesn't change when I try different ANN configurations.

    Can someone help me understand where I'm going wrong (also theoretically)? I've seen that the sigmoid activation function is not the best choice for this purpose compared to relu, but is that all?

    opened by ScratchyCode 1
  • example1: not enough training

    With a value of 300 in the training loop, I see this output:

    Output for [0, 0] is 0.
    Output for [0, 1] is 1.
    Output for [1, 0] is 1.
    Output for [1, 1] is 1.
    

    Changing the loop count to 350 gives:

    Output for [0, 0] is 0.
    Output for [0, 1] is 1.
    Output for [1, 0] is 1.
    Output for [1, 1] is 0.
    

    Was this done on purpose to show some kind of limitation of backpropagation?

    opened by chriscamacho 5
  • Implement relu

    Hello,

    I started to implement the relu function for the genann library on a fork under my name (https://github.com/kasey-/genann) before sending you a PR:

    double inline genann_act_relu(const struct genann *ann unused, double a) {
        return (a > 0.0) ? a : 0.0;
    }
    

    But I am a bit lost in the way you compute the backpropagation of the neural network. The derivative of relu is trivial: (a > 0.0) ? 1.0 : 0.0. But I cannot understand where I should plug it into your formula, as I do not understand how you compute your backpropagation. Did you implement only the derivative of the sigmoid?

    opened by kasey- 3
  • Issue with changing activation functions

    I was wondering how to change the default sigmoid activation function to something else. I've tried changing it to tanh and it's not working. I've also tried using the linear activation function on the given examples, and it's failing that as well.

    opened by rnagurla 13