A lightweight C++ machine learning library for embedded electronics and robotics.

Overview

Fido

MIT License · Join the chat at https://gitter.im/FidoProject/Fido

Fido is a lightweight, highly modular C++ machine learning library for embedded electronics and robotics. It is especially suited to robotic and embedded contexts: it is written in C++ with minimal use of the standard library, comes packaged with a robotic simulator, and provides an easy interface for writing robotic drivers.

Check out the project site and documentation for more information.

The library was adapted from a universal robot control system.

Authors

The Fido library was primarily developed by Michael Truell. Joshua Gruenstein helped develop Fido's robotic simulator. Most of his commits are to the schematics and paper of a separate research study that he and Michael performed together.

Beta Status

This library is in beta. It has been used in a couple of projects, but the API may still change in backward-incompatible ways. There are definitely bugs.

Contributing

Send us a pull request. If you are looking for things to do, check out the repo's open issues. If you find a bug or have any trouble with the library, please open an issue. We are happy to help you out.


Comments
  • Save network training state.

    I have seen from the tests that it is possible to save the network, but is it possible to save its weights after it has been trained? I.e., I would like to train the net and then store it to flash, so that the next time I boot up my embedded project it is already trained. Is this possible?

    Great project btw!
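    For what it's worth, the save-to-flash round trip doesn't depend on anything Fido-specific: flatten the trained weights, write them to a file (or flash partition) at the end of training, and read them back at boot before constructing the network. A minimal self-contained sketch of that pattern in plain C++ (`saveWeights`/`loadWeights` are illustrative helpers, not Fido's actual API; with Fido itself you would use the network's own store/load stream calls):

```cpp
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

// Hypothetical helpers: persist a flat weight vector and restore it later.
void saveWeights(const std::vector<double> &weights, const std::string &path) {
    std::ofstream out(path, std::ios::binary);
    std::size_t n = weights.size();
    out.write(reinterpret_cast<const char *>(&n), sizeof(n));
    out.write(reinterpret_cast<const char *>(weights.data()),
              static_cast<std::streamsize>(n * sizeof(double)));
}

std::vector<double> loadWeights(const std::string &path) {
    std::ifstream in(path, std::ios::binary);
    std::size_t n = 0;
    in.read(reinterpret_cast<char *>(&n), sizeof(n));
    std::vector<double> weights(n);
    in.read(reinterpret_cast<char *>(weights.data()),
            static_cast<std::streamsize>(n * sizeof(double)));
    return weights;
}
```

    On boot, the load step runs before the first control step, so the network starts pretrained instead of random.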

    opened by mekerhult 4
  • Simulator's loading of resources

    ==9371== Command: ./tests.o
    ==9371== 
    ==9371== Invalid read of size 4
    ==9371==    at 0x507CBB4: sf::priv::GlxContext::GlxContext(sf::priv::GlxContext*) (in /usr/lib/libsfml-window.so.2.1)
    ==9371==    by 0x5076B9C: sf::priv::GlContext::globalInit() (in /usr/lib/libsfml-window.so.2.1)
    ==9371==    by 0x5077482: sf::GlResource::GlResource() (in /usr/lib/libsfml-window.so.2.1)
    ==9371==    by 0x5079A55: sf::Window::Window() (in /usr/lib/libsfml-window.so.2.1)
    ==9371==    by 0x4E5C3C5: sf::RenderWindow::RenderWindow() (in /usr/lib/libsfml-graphics.so.2.1)
    ==9371==    by 0x467A30: Simlink::Simlink() (in /home/travis/build/FidoProject/Fido/tests/tests.o)
    ==9371==    by 0x45A872: ____C_A_T_C_H____T_E_S_T____8() (in /home/travis/build/FidoProject/Fido/tests/tests.o)
    ==9371==    by 0x4271CD: Catch::FreeFunctionTestCase::invoke() const (in /home/travis/build/FidoProject/Fido/tests/tests.o)
    ==9371==    by 0x4132F6: Catch::TestCase::invoke() const (in /home/travis/build/FidoProject/Fido/tests/tests.o)
    ==9371==    by 0x4261DA: Catch::RunContext::invokeActiveTestCase() (in /home/travis/build/FidoProject/Fido/tests/tests.o)
    ==9371==    by 0x425E69: Catch::RunContext::runCurrentTest(std::string&, std::string&) (in /home/travis/build/FidoProject/Fido/tests/tests.o)
    ==9371==    by 0x424810: Catch::RunContext::runTest(Catch::TestCase const&) (in /home/travis/build/FidoProject/Fido/tests/tests.o)
    ==9371==  Address 0xe0 is not stack'd, malloc'd or (recently) free'd
    ==9371== 
    
    bug 
    opened by hmwildermuth 3
  • Compilation of project fails on Mac OS & Ubuntu 16.04

    When I attempt to compile the project using sudo make install, I receive a fatal error: 'SFML/Graphics.hpp' file not found

    This error happened on both my Mac OS Version 10.13.3 and Ubuntu 16.04.
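    That error means the SFML development headers are not installed; the simulator and graphics code include SFML/Graphics.hpp. Assuming the standard package names, installing them looks like:

```shell
# Ubuntu 16.04: SFML development headers and libraries
sudo apt-get install libsfml-dev

# macOS with Homebrew
brew install sfml
```

    After installing, re-running sudo make install should get past the missing header.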

    opened by astroesteban 2
  • Asserts and change of bool initFromStream functions to voids.

    Hopefully cleaner and with fewer dependencies. @truell20 You said in the earlier pull request that my indentation is messed up; I believe you, but I've looked through it on my version and it seems fine. Can you point to where it's a problem?

    opened by Sydriax 2
  • Genetic Algorithm doesn't work?

    So I'm looking at usage of the genetic algorithm, and I've gone through all of the source code, and I can't seem to find where the network gets its output, which makes no sense, since it would need the output to pick the best network. Is this something we have to supply ourselves, or am I just missing something?

    opened by DancingRicardo 1
  • Example compilation on OS X

    How can we compile one of the examples?

    For instance, I am trying gcc Backpropagation.cpp -o backpropagation -std=c++11 /usr/local/lib/fido.a

    but it is not working. I get the error:

    Undefined symbols for architecture x86_64:
      "std::__1::__vector_base_common<true>::__throw_length_error() const", referenced from:
          std::__1::vector<double, std::__1::allocator<double> >::__vallocate(unsigned long) in Backpropagation-9a2547.o
          ... (further std::__1::vector instantiations in Backpropagation-9a2547.o, fido.a(SGDTrainer.o), and fido.a(NeuralNet.o)) ...
      "std::__1::locale::use_facet(std::__1::locale::id&) const", referenced from:
          std::__1::ctype<char> const& std::__1::use_facet<std::__1::ctype<char> >(std::__1::locale const&) in fido.a(SGDTrainer.o)
          ... (and in fido.a(Backpropagation.o), fido.a(NeuralNet.o), fido.a(Layer.o)) ...
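    The undefined std::__1:: symbols belong to the C++ standard library (libc++); they go missing because invoking gcc on a .cpp file does not link the C++ runtime. Using the C++ compiler driver instead should resolve it:

```shell
# Use the C++ driver so the C++ standard library is linked automatically,
# and keep the static archive after the source file so its symbols resolve.
clang++ -std=c++11 Backpropagation.cpp /usr/local/lib/fido.a -o backpropagation
```

    On Linux, g++ in place of clang++ behaves the same way.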

    opened by renatosc 1
  • Add a Gitter chat badge to README.md

    FidoProject/Fido now has a Chat Room on Gitter

    @joshuagruenstein has just created a chat room. You can visit it here: https://gitter.im/FidoProject/Fido.

    This pull-request adds this badge to your README.md:

    If my aim is a little off, please let me know.

    Happy chatting.

    PS: Click here if you would prefer not to receive automatic pull-requests from Gitter in future.

    opened by gitter-badger 1
  • API for dealing with TensorFlow models?

    Hi guys, I am trying to apply TensorFlow-trained models to embedded systems, say, Raspberry Pi, or even much less powerful boards with no OS.

    So, have you ever thought about developing something that can run TensorFlow models on very tiny devices?

    Cheers, Dorje

    opened by dorje 1
  • Fixed WireFitQlearn's memory leak issue

    I found some memory leak issues in WireFitQLearn using Valgrind. modelNet and network are created in the constructor but not freed in the destructor. Also WireFitQLearn::reset allocates memory for network but does not free the previously allocated network.
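    The general pattern behind the fix: whatever the constructor allocates with new, the destructor must delete, and reset must free the old allocation before replacing it. A self-contained sketch of that ownership pattern (Network/Learner are illustrative stand-ins, not Fido's actual classes):

```cpp
// Minimal stand-in for the real network type.
struct Network {
    double weight = 0.0;
};

class Learner {
public:
    Learner() : network(new Network) {}

    // Raw-pointer ownership: forbid copies to avoid double deletes.
    Learner(const Learner &) = delete;
    Learner &operator=(const Learner &) = delete;

    // Free what the constructor (or the last reset) allocated.
    ~Learner() { delete network; }

    // Delete the old network before allocating a replacement;
    // otherwise every reset leaks one Network.
    void reset() {
        delete network;
        network = new Network;
    }

    Network *get() const { return network; }

private:
    Network *network;
};
```

    Holding the pointer in a std::unique_ptr<Network> removes the manual deletes entirely.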

    opened by andraspatka 0
  • How to use the simulator

    Hey, I'm trying to run the reinforcement learning example from http://fidoproject.github.io/ . I see that it checks whether the robot is on the line. Where can I create a line/circle in the simulator and see whether it tracks the line? Thanks!

    opened by TZVIh 0
  • How to compile an example

    Hi, I want to compile ReinforcementLearning.cpp. I am using version 0.02 with Ubuntu 16.04.

    I use g++/gcc and I get an "undefined reference to..." error. Can you give an example of how to compile it? Thanks!

    opened by TZVIh 3
  • OCR Memory Usage

    I was trying to make an OCR neural network using the MNIST image dataset, but my process was killed every time I ran it by a kernel process called the OOM killer, which kills processes that use too much memory. I am not sure whether this is because of my code or something in the backpropagation code. Either way, any help would be appreciated.

    Also, just to note: when I run the program with the training sample size cut down to only 250 images it works, but at 500 and above it fails.
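    One workaround that doesn't require changing the library: train on the dataset in fixed-size chunks, so each training call only holds a bounded slice in its working set. A self-contained sketch of the slicing (the trainChunk callback stands in for a call like backprop.train on that slice):

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

using Batch = std::vector<std::vector<double>>;

// Split (input, correctOutput) into chunks of at most chunkSize samples
// and hand each chunk to the training callback in turn.
void trainInChunks(const Batch &input, const Batch &correctOutput,
                   std::size_t chunkSize,
                   const std::function<void(const Batch &, const Batch &)> &trainChunk) {
    for (std::size_t start = 0; start < input.size(); start += chunkSize) {
        const std::size_t end = std::min(start + chunkSize, input.size());
        Batch in(input.begin() + start, input.begin() + end);
        Batch out(correctOutput.begin() + start, correctOutput.begin() + end);
        trainChunk(in, out);
    }
}
```

    Whether chunked calls converge to the same weights as one big call depends on how the trainer iterates, but this caps peak memory and behaves like ordinary mini-batch training.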

    The C++ File:

    #include "ocr.h"
    
    int main(int argc, char const *argv[]) {
        std::string lbels = "train-labels.idx1-ubyte";
        std::string imges = "train-images.idx3-ubyte";
        std::string outputFilename = (argc > 1) ? argv[1] : "ocr.txt";
    
        int mgicNum;
        int sizeNum;
    
        std::cout << "Loading images from files..." << std::endl;
    
        auto inputArr = read_mnist_images(imges, mgicNum, sizeNum);
        auto outputArr = read_mnist_labels(lbels, mgicNum);
    
        net::NeuralNet neuralNetwork = net::NeuralNet(sizeNum, 10, 1, sizeNum, "sigmoid");
    
        std::vector< std::vector<double> > input;
        std::vector< std::vector<double> > correctOutput;
    
        std::cout << "Loading into vector...\n";
        for (int i = 0; i < mgicNum; i++) {
            std::vector<double> imgeArr;
            for (int j = 0; j < sizeNum; j++) {
                imgeArr.push_back(double(inputArr[i][j])/double(255));
            }
            //std::cout << imgeArr.size() << "; " << sizeNum << "\n";
            input.push_back(imgeArr);
            correctOutput.push_back(digits(outputArr[i]));
        }
    
        std::cout << "Done with loading.\n";
    
        std::cout << "Freeing memory..." << std::endl;
    
        // Each row of inputArr was allocated with new[], so free every row
        // before freeing the array of row pointers.
        for (int i = 0; i < mgicNum; i++) {
            delete [] inputArr[i];
        }
        delete [] inputArr;
        delete [] outputArr;
    
        std::cout << "Done with freeing memory." << std::endl;
    
        std::cout << "Supposed # of samples: " << mgicNum << std::endl;
        std::cout << "Actual # of samples: " << input.size() << std::endl;
        net::Backpropagation backprop = net::Backpropagation(0.01, 0.9, 0.1, 10);
        std::cout << "Inputs: " << neuralNetwork.numberOfInputs() << std::endl;
        std::cout << "Hidden: " << neuralNetwork.numberOfHiddenNeurons() << std::endl;
        std::cout << "Outputs: " << neuralNetwork.numberOfOutputs() << std::endl;
    
        std::cout << "Input array: " << input[0].size() << std::endl;
        std::cout << "Correct array: " << correctOutput[0].size() << std::endl;
    
        if (input.size() != correctOutput.size()) {
            throw std::runtime_error("Differing sizes between two of the same thing");
        }
    
        /* To decrease memory usage
    
        #define RESIZE_Value 500
    
        // Works at 100, 250
        // Killed at 500 and above
    
        std::cout << "Resizing arrays to " << RESIZE_Value << " each..." << std::endl;
    
        input.resize(RESIZE_Value);
        correctOutput.resize(RESIZE_Value);
    
        // */
    
        std::cout << "Beginning training..." << std::endl;
    
        backprop.train(&neuralNetwork, input, correctOutput);
    
        std::cout << "Done training. Storing..." << std::endl;
    
        std::ofstream myfile;
        myfile.open(outputFilename);
        neuralNetwork.store(&myfile);
        myfile.close();
    
        std::cout << "Done storing to output file '" << outputFilename << "'. Testing..." << std::endl;
    
        #define TEST_INDEX 23 // Random test index
    
        std::cout << "Test: " << findTop(neuralNetwork.getOutput(input[TEST_INDEX])) << std::endl;
        std::cout << "Correct answer: " << findTop(correctOutput[TEST_INDEX]) << std::endl;
    
        return 0;
    }
    

    The header file, which contains functions for loading the test images and picking the highest member of an array (the MNIST loaders were copied from elsewhere):

    #include "include/Fido.h"
    #ifndef OCR
    #define OCR
    
    typedef unsigned char uchar;
    
    uchar** read_mnist_images(std::string full_path, int& number_of_images, int& image_size) {
        auto reverseInt = [](int i) {
            unsigned char c1, c2, c3, c4;
            c1 = i & 255, c2 = (i >> 8) & 255, c3 = (i >> 16) & 255, c4 = (i >> 24) & 255;
            return ((int)c1 << 24) + ((int)c2 << 16) + ((int)c3 << 8) + c4;
        };
    
        // Open in binary mode: the MNIST files are raw bytes, not text.
        std::ifstream file(full_path, std::ios::binary);
    
        if(file.is_open()) {
            int magic_number = 0, n_rows = 0, n_cols = 0;
    
            file.read((char *)&magic_number, sizeof(magic_number));
            magic_number = reverseInt(magic_number);
    
            if(magic_number != 2051) throw std::runtime_error("Invalid MNIST image file!");
    
            file.read((char *)&number_of_images, sizeof(number_of_images)), number_of_images = reverseInt(number_of_images);
            file.read((char *)&n_rows, sizeof(n_rows)), n_rows = reverseInt(n_rows);
            file.read((char *)&n_cols, sizeof(n_cols)), n_cols = reverseInt(n_cols);
    
            image_size = n_rows * n_cols;
    
            uchar** _dataset = new uchar*[number_of_images];
            for(int i = 0; i < number_of_images; i++) {
                _dataset[i] = new uchar[image_size];
                file.read((char *)_dataset[i], image_size);
            }
            return _dataset;
        } else {
            throw std::runtime_error("Cannot open file `" + full_path + "`!");
        }
    }
    
    uchar* read_mnist_labels(std::string full_path, int& number_of_labels) {
        auto reverseInt = [](int i) {
            unsigned char c1, c2, c3, c4;
            c1 = i & 255, c2 = (i >> 8) & 255, c3 = (i >> 16) & 255, c4 = (i >> 24) & 255;
            return ((int)c1 << 24) + ((int)c2 << 16) + ((int)c3 << 8) + c4;
        };
    
        std::ifstream file(full_path, std::ios::binary);
    
        if(file.is_open()) {
            int magic_number = 0;
            file.read((char *)&magic_number, sizeof(magic_number));
            magic_number = reverseInt(magic_number);
    
            if(magic_number != 2049) throw std::runtime_error("Invalid MNIST label file!");
    
            file.read((char *)&number_of_labels, sizeof(number_of_labels)), number_of_labels = reverseInt(number_of_labels);
    
            uchar* _dataset = new uchar[number_of_labels];
            for(int i = 0; i < number_of_labels; i++) {
                file.read((char*)&_dataset[i], 1);
            }
            return _dataset;
        } else {
            throw std::runtime_error("Unable to open file `" + full_path + "`!");
        }
    }
    
    std::vector<double> digits(uchar j) {
        std::vector<double> v;
        for (size_t i = 0; i < 10; i++) {
            if (j == i) {
                v.push_back(1);
            } else {
                v.push_back(0);
            }
        }
        return v;
    }
    
    int findTop(std::vector<double> v) {
        int best = -1;
        double top = -1.0;
        for (size_t i = 0; i < 10; i++) {
            if (v[i] > top) {
                best = i;
                top = v[i];
            }
        }
        return best;
    }
    
    #endif
    
    opened by hmwildermuth 2
Releases (0.0.4)
  • v0.0.2 (May 10, 2016)

    Added support for the Adadelta training algorithm, a neural network pruning algorithm (Karnin 1990), and a robotic simulator (though currently undocumented). More tests were written for functionality across the library. We made class structure changes to make it easier to add new functionality to the Fido library.
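    For reference, Adadelta keeps decaying averages of squared gradients and squared updates, so no global learning rate has to be tuned. A minimal single-parameter sketch of the standard update rule (illustrative, not Fido's internal implementation):

```cpp
#include <cmath>

// One-parameter Adadelta optimizer (standard formulation).
// rho is the decay rate; eps keeps the square roots well-defined.
struct Adadelta {
    double rho = 0.95;
    double eps = 1e-6;
    double accumGrad = 0.0;   // decaying average of squared gradients
    double accumUpdate = 0.0; // decaying average of squared updates

    // Returns the update to add to the parameter for this gradient.
    double step(double grad) {
        accumGrad = rho * accumGrad + (1.0 - rho) * grad * grad;
        double update = -(std::sqrt(accumUpdate + eps) /
                          std::sqrt(accumGrad + eps)) * grad;
        accumUpdate = rho * accumUpdate + (1.0 - rho) * update * update;
        return update;
    }
};
```

    Applied per weight, the effective step size adapts automatically, which is convenient on embedded targets where hand-tuning a learning rate is awkward.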

    Source code(tar.gz)
    Source code(zip)
  • v0.0.1 (Apr 28, 2016)

    v0.0.1 - First release

    • Neural networks
      • Multilayer feed-forward neural networks
      • Sigmoid, linear, and hyperbolic tangent activation functions
    • Trainers
      • SGD backpropagation
      • Adadelta
    • Reinforcement Learning
      • Q-learning
      • Wire-fitted Q-learning
      • A universal robot control system
    • Genetic algorithms
    Source code(tar.gz)
    Source code(zip)
Owner
The Fido Project
Machine learning for embedded electronics and robotics, implemented in C++.