A lightweight C++ machine learning library for embedded electronics and robotics.

Overview

Fido

MIT License · Join the chat at https://gitter.im/FidoProject/Fido

Fido is a lightweight, highly modular C++ machine learning library for embedded electronics and robotics. Fido is especially suited to robotic and embedded contexts: it is written in C++ with minimal use of the standard library, comes packaged with a robotic simulator, and provides an easy interface for writing robotic drivers.

Check out the project site and documentation for more information.

The library was adapted from a universal robot control system.

Authors

The Fido library was primarily developed by Michael Truell. Joshua Gruenstein helped develop Fido's robotic simulator. Most of his commits are to the schematics and paper of a separate research study that he and Michael performed together.

Beta Status

This library is in beta. It has been used in a couple of projects, but the API may still change in backward-incompatible ways. There are definitely bugs.

Contributing

Send us a pull request. If you are looking for things to do, check out the repo's open issues. If you find a bug or have any trouble with the library, please open an issue. We are happy to help you out.


Comments
  • Save network training state.

    I have seen from the tests that it is possible to save the network, but is it possible to save its weights after it has been trained? I.e. I would like to train the net and then store it to flash, so that when I boot up my embedded project the next time it is already trained. Is this possible?

    Great project btw!

    opened by mekerhult 4
  • Simulator's loading of resources

    ==9371== Command: ./tests.o
    ==9371== 
    ==9371== Invalid read of size 4
    ==9371==    at 0x507CBB4: sf::priv::GlxContext::GlxContext(sf::priv::GlxContext*) (in /usr/lib/libsfml-window.so.2.1)
    ==9371==    by 0x5076B9C: sf::priv::GlContext::globalInit() (in /usr/lib/libsfml-window.so.2.1)
    ==9371==    by 0x5077482: sf::GlResource::GlResource() (in /usr/lib/libsfml-window.so.2.1)
    ==9371==    by 0x5079A55: sf::Window::Window() (in /usr/lib/libsfml-window.so.2.1)
    ==9371==    by 0x4E5C3C5: sf::RenderWindow::RenderWindow() (in /usr/lib/libsfml-graphics.so.2.1)
    ==9371==    by 0x467A30: Simlink::Simlink() (in /home/travis/build/FidoProject/Fido/tests/tests.o)
    ==9371==    by 0x45A872: ____C_A_T_C_H____T_E_S_T____8() (in /home/travis/build/FidoProject/Fido/tests/tests.o)
    ==9371==    by 0x4271CD: Catch::FreeFunctionTestCase::invoke() const (in /home/travis/build/FidoProject/Fido/tests/tests.o)
    ==9371==    by 0x4132F6: Catch::TestCase::invoke() const (in /home/travis/build/FidoProject/Fido/tests/tests.o)
    ==9371==    by 0x4261DA: Catch::RunContext::invokeActiveTestCase() (in /home/travis/build/FidoProject/Fido/tests/tests.o)
    ==9371==    by 0x425E69: Catch::RunContext::runCurrentTest(std::string&, std::string&) (in /home/travis/build/FidoProject/Fido/tests/tests.o)
    ==9371==    by 0x424810: Catch::RunContext::runTest(Catch::TestCase const&) (in /home/travis/build/FidoProject/Fido/tests/tests.o)
    ==9371==  Address 0xe0 is not stack'd, malloc'd or (recently) free'd
    ==9371== 
    
    bug 
    opened by hmwildermuth 3
  • Compilation of project fails on Mac OS & Ubuntu 16.04

    When I attempt to compile the project using sudo make install, I receive a fatal error: 'SFML/Graphics.hpp' file not found

    This error happened on both my Mac OS Version 10.13.3 and Ubuntu 16.04.

    opened by astroesteban 2
  • Asserts and change of bool initFromStream functions to voids.

    Hopefully cleaner and with fewer dependencies. @truell20 You said in the earlier pull request that my indentation is messed up; I believe you, but I've looked through it on my version and it seems fine. Can you point to where it's a problem?

    opened by Sydriax 2
  • Genetic algorithm doesn't work?

    So I'm looking at the usage of the genetic algorithm, and I've gone through all of the source code, and I can't seem to find where the network gets the output, which makes no sense, since it would need the output to pick the best network. Is this something we have to input ourselves, or am I just missing something?

    opened by DancingRicardo 1
  • Example compilation on OS X

    How can we compile one of the examples?

    For instance, I am trying gcc Backpropagation.cpp -o backpropagation -std=c++11 /usr/local/lib/fido.a

    but it is not working. I get the error:

    Undefined symbols for architecture x86_64: "std::__1::__vector_base_common<true>::__throw_length_error() const", referenced from: std::__1::vector<double, std::__1::allocator<double> >::__vallocate(unsigned long) in Backpropagation-9a2547.o std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >::__vallocate(unsigned long) in Backpropagation-9a2547.o std::__1::vector<std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >, std::__1::allocator<std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > > > >::__vallocate(unsigned long) in fido.a(SGDTrainer.o) std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >::__vallocate(unsigned long) in fido.a(SGDTrainer.o) std::__1::vector<double, std::__1::allocator<double> >::__vallocate(unsigned long) in fido.a(SGDTrainer.o) std::__1::vector<std::__1::vector<std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >, std::__1::allocator<std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > > > >, std::__1::allocator<std::__1::vector<std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >, std::__1::allocator<std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > > > > > >::__recommend(unsigned long) const in fido.a(SGDTrainer.o) std::__1::vector<net::Neuron, 
std::__1::allocator<net::Neuron> >::__recommend(unsigned long) const in fido.a(NeuralNet.o) ... "std::__1::locale::use_facet(std::__1::locale::id&) const", referenced from: std::__1::ctype<char> const& std::__1::use_facet<std::__1::ctype<char> >(std::__1::locale const&) in fido.a(SGDTrainer.o) std::__1::ctype<char> const& std::__1::use_facet<std::__1::ctype<char> >(std::__1::locale const&) in fido.a(Backpropagation.o) std::__1::ctype<char> const& std::__1::use_facet<std::__1::ctype<char> >(std::__1::locale const&) in fido.a(NeuralNet.o) std::__1::ctype<char> const& std::__1::use_facet<std::__1::ctype<char> >(std::__1::locale const&) in fido.a(Layer.o) ...

    opened by renatosc 1
  • Add a Gitter chat badge to README.md

    FidoProject/Fido now has a Chat Room on Gitter

    @joshuagruenstein has just created a chat room. You can visit it here: https://gitter.im/FidoProject/Fido.

    This pull-request adds this badge to your README.md:

    Gitter

    If my aim is a little off, please let me know.

    Happy chatting.

    PS: Click here if you would prefer not to receive automatic pull-requests from Gitter in future.

    opened by gitter-badger 1
  • API for dealing with TensorFlow models?

    Hi guys, I am trying to apply TensorFlow-trained models to embedded systems, say, Raspberry Pi, or even much less powerful boards with no OS.

    So, have you ever thought about developing something that can use TensorFlow models on very tiny devices?

    Cheers, Dorje

    opened by dorje 1
  • Fixed WireFitQLearn's memory leak issue

    I found some memory leak issues in WireFitQLearn using Valgrind. modelNet and network are created in the constructor but not freed in the destructor. Also WireFitQLearn::reset allocates memory for network but does not free the previously allocated network.

    opened by andraspatka 0
  • How to use the simulator

    Hey, I'm trying to run the reinforcement learning example from http://fidoproject.github.io/. I see that it checks whether the robot is on the line. Where can I create a line/circle in the simulator and see if it is tracking the line? Thanks!

    opened by TZVIh 0
  • How to compile an example

    Hi, I want to compile ReinforcementLearning.cpp. I am using version 0.0.2 with Ubuntu 16.04.

    I use g++/gcc and I get an "undefined reference to..." error. Can you give an example of how to compile it? Thanks!

    opened by TZVIh 3
  • OCR Memory Usage

    I was trying to make an OCR neural network using the MNIST image dataset, but every time I ran it my process was killed by a kernel mechanism called the OOM killer, which kills processes that use too much memory. I am not sure whether this is because of my code or something in the backpropagation code. Either way, any help would be appreciated.

    Also, just to note: when I run the program with the learning sample size cut down to only 250 images it works, but above 500 it fails.

    The C++ File:

    #include "ocr.h"
    
    int main(int argc, char const *argv[]) {
        std::string lbels = "train-labels.idx1-ubyte";
        std::string imges = "train-images.idx3-ubyte";
        std::string outputFilename = (argc > 1) ? argv[1] : "ocr.txt";
    
        int mgicNum;
        int sizeNum;
    
        std::cout << "Loading images from files..." << std::endl;
    
        auto inputArr = read_mnist_images(imges, mgicNum, sizeNum);
        auto outputArr = read_mnist_labels(lbels, mgicNum);
    
        net::NeuralNet neuralNetwork = net::NeuralNet(sizeNum, 10, 1, sizeNum, "sigmoid");
    
        std::vector< std::vector<double> > input;
        std::vector< std::vector<double> > correctOutput;
    
        std::cout << "Loading into vector...\n";
        for (size_t i = 0; i < mgicNum; i++) {
            std::vector<double> imgeArr;
            for (size_t j = 0; j < sizeNum; j++) {
                imgeArr.push_back(double(inputArr[i][j])/double(255));
            }
            //std::cout << imgeArr.size() << "; " << sizeNum << "\n";
            input.push_back(imgeArr);
            correctOutput.push_back(digits(outputArr[i]));
        }
    
        std::cout << "Done with loading.\n";
    
        std::cout << "Freeing memory..." << std::endl;
    
        // Free each image row before the array of row pointers itself;
        // a bare `delete [] inputArr` would leak every row allocated
        // in read_mnist_images.
        for (int i = 0; i < mgicNum; i++) {
            delete [] inputArr[i];
        }
        delete [] inputArr;
        delete [] outputArr;
    
        std::cout << "Done with freeing memory." << std::endl;
    
        std::cout << "Supposed # of samples: " << mgicNum << std::endl;
        std::cout << "Actual # of samples: " << input.size() << std::endl;
        net::Backpropagation backprop = net::Backpropagation(0.01, 0.9, 0.1, 10);
        std::cout << "Inputs: " << neuralNetwork.numberOfInputs() << std::endl;
        std::cout << "Hidden: " << neuralNetwork.numberOfHiddenNeurons() << std::endl;
        std::cout << "Outputs: " << neuralNetwork.numberOfOutputs() << std::endl;
    
        std::cout << "Input array: " << input[0].size() << std::endl;
        std::cout << "Correct array: " << correctOutput[0].size() << std::endl;
    
        if (input.size() != correctOutput.size()) {
            throw std::runtime_error("Differing sizes between two of the same thing");
        }
    
        /* To decrease memory usage
    
        #define RESIZE_Value 500
    
        // Works at 100, 250
        // Killed at 500 and above
    
        std::cout << "Resizing arrays to " << RESIZE_Value << " each..." << std::endl;
    
        input.resize(RESIZE_Value);
        correctOutput.resize(RESIZE_Value);
    
        // */
    
        std::cout << "Beginning training..." << std::endl;
    
        backprop.train(&neuralNetwork, input, correctOutput);
    
        std::cout << "Done training. Storing..." << std::endl;
    
        std::ofstream myfile;
        myfile.open(outputFilename);
        neuralNetwork.store(&myfile);
        myfile.close();
    
        std::cout << "Done storing to output file '" << outputFilename << "'. Testing..." << std::endl;
    
        #define TEST_INDEX 23 // Random test index
    
        std::cout << "Test: " << findTop(neuralNetwork.getOutput(input[TEST_INDEX])) << std::endl;
        std::cout << "Correct answer: " << findTop(correctOutput[TEST_INDEX]) << std::endl;
    
        return 0;
    }
    

    The header file which contains functions for loading test images and picking highest members of arrays: (The MNIST functions I copied from somewhere else)

    #include "include/Fido.h"
    #ifndef OCR
    #define OCR
    
    typedef unsigned char uchar;
    
    uchar** read_mnist_images(std::string full_path, int& number_of_images, int& image_size) {
        auto reverseInt = [](int i) {
            unsigned char c1, c2, c3, c4;
            c1 = i & 255, c2 = (i >> 8) & 255, c3 = (i >> 16) & 255, c4 = (i >> 24) & 255;
            return ((int)c1 << 24) + ((int)c2 << 16) + ((int)c3 << 8) + c4;
        };
    
        std::ifstream file(full_path, std::ios::binary);
    
        if(file.is_open()) {
            int magic_number = 0, n_rows = 0, n_cols = 0;
    
            file.read((char *)&magic_number, sizeof(magic_number));
            magic_number = reverseInt(magic_number);
    
            if(magic_number != 2051) throw std::runtime_error("Invalid MNIST image file!");
    
            file.read((char *)&number_of_images, sizeof(number_of_images)), number_of_images = reverseInt(number_of_images);
            file.read((char *)&n_rows, sizeof(n_rows)), n_rows = reverseInt(n_rows);
            file.read((char *)&n_cols, sizeof(n_cols)), n_cols = reverseInt(n_cols);
    
            image_size = n_rows * n_cols;
    
            uchar** _dataset = new uchar*[number_of_images];
            for(int i = 0; i < number_of_images; i++) {
                _dataset[i] = new uchar[image_size];
                file.read((char *)_dataset[i], image_size);
            }
            return _dataset;
        } else {
            throw std::runtime_error("Cannot open file `" + full_path + "`!");
        }
    }
    
    uchar* read_mnist_labels(std::string full_path, int& number_of_labels) {
        auto reverseInt = [](int i) {
            unsigned char c1, c2, c3, c4;
            c1 = i & 255, c2 = (i >> 8) & 255, c3 = (i >> 16) & 255, c4 = (i >> 24) & 255;
            return ((int)c1 << 24) + ((int)c2 << 16) + ((int)c3 << 8) + c4;
        };
    
        typedef unsigned char uchar;
    
        std::ifstream file(full_path, std::ios::binary);
    
        if(file.is_open()) {
            int magic_number = 0;
            file.read((char *)&magic_number, sizeof(magic_number));
            magic_number = reverseInt(magic_number);
    
            if(magic_number != 2049) throw std::runtime_error("Invalid MNIST label file!");
    
            file.read((char *)&number_of_labels, sizeof(number_of_labels)), number_of_labels = reverseInt(number_of_labels);
    
            uchar* _dataset = new uchar[number_of_labels];
            for(int i = 0; i < number_of_labels; i++) {
                file.read((char*)&_dataset[i], 1);
            }
            return _dataset;
        } else {
            throw std::runtime_error("Unable to open file `" + full_path + "`!");
        }
    }
    
    std::vector<double> digits(uchar j) {
        std::vector<double> v;
        for (size_t i = 0; i < 10; i++) {
            if (j == i) {
                v.push_back(1);
            } else {
                v.push_back(0);
            }
        }
        return v;
    }
    
    int findTop(std::vector<double> v) {
        int best = -1;
        double top = -1.0;
        for (size_t i = 0; i < 10; i++) {
            if (v[i] > top) {
                best = i;
                top = v[i];
            }
        }
        return best;
    }
    
    #endif
    
    opened by hmwildermuth 2
Releases (0.0.4)
  • v0.0.2 (May 10, 2016)

    Added support for the Adadelta training algorithm, a neural network pruning algorithm (Karnin 1990), and a robotic simulator (though currently undocumented). More tests were written for functionality across the library. We made class structure changes to make it easier to add new functionality to the Fido library.

    Source code(tar.gz)
    Source code(zip)
  • v0.0.1 (Apr 28, 2016)

    v0.0.1 - First release

    • Neural networks
      • Multilayer feed-forward neural networks
      • Sigmoid, linear, and hyperbolic tangent activation functions
    • Trainers
      • SGD backpropagation
      • Adadelta
    • Reinforcement Learning
      • Q-learning
      • Wire-fitted Q-learning
      • A universal robot control system
    • Genetic algorithms
    Source code(tar.gz)
    Source code(zip)
Owner
The Fido Project
Machine learning for embedded electronics and robotics, implemented in C++.