CubbyDNN - Deep learning framework using C++17 in a single header file

Overview


CubbyDNN is a C++17 implementation of deep learning. It is suitable for deep learning on platforms with limited computational resources, such as embedded systems and IoT devices. The code can be compiled with commonly available compilers such as g++, clang++, or Microsoft Visual Studio. CubbyDNN currently supports macOS (10.12.6 or later), Ubuntu (18.04 or later), Windows (Visual Studio 2017 or later), and Windows Subsystem for Linux (WSL). Other untested platforms that support C++17 should also be able to build CubbyDNN.

Key Features

  • Reasonably fast, without GPU
  • Portable & header-only
  • Easy to integrate with real applications
  • Simple implementation

Contact

You can contact me via e-mail (utilForever at gmail.com). I am always happy to answer questions or help with any issues you might have. Please also be sure to share any additional work or creations with me; I love seeing what other people are making.

License

CubbyDNN is licensed under the MIT License:

Copyright © 2018 Chris Ohk, Justin Kim and Daewoong Ahn.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Issues
  • Adding Universal number system library to project

    Attach a new library for the bfloat16 and posit number systems: https://github.com/stillwater-sc/universal

    Planning to make use of the BLAS implementation from this library for the CPU implementation. Posit numbers and bfloat16 will be converted to IEEE 754 form for the GPU implementation.

    As the new library assumes support for AVX2 instructions, we are going to assume support for AVX2 as well.

    planning feature 
    opened by jwkim98 7
  • What is the role of Tensor-related classes?

    There are many Tensor-related classes in this framework (Tensor/TensorData/TensorInfo/TensorObject/TensorShape). I would like to know the role of each class. If you like, you can write in Korean.

    question help wanted 
    opened by utilForever 4
  • How do you think about custom matrix&vector type?

    We can fix mtl::mat::dense2D as the default conversion type from TensorData, but if we are implementing any custom matrix rather than using MTL, it might be better to define a custom matrix type and custom functions that can convert it into MTL types.

    This is required since the data alignment of a custom implementation can differ from the alignment that MTL uses. Also, since the hpr-blas implementation of non-posit matrix multiplication requires operators such as * or +, our custom matrix and vector types should support them.

    What do you think? Should we fix MTL types as our default matrix type, or should we define our own and add some custom implementations?

    question 
    opened by jwkim98 3
  • Basic graph implementation

    Basic graph creation and some examples have been implemented. We still need to consider ways to initialize default weights and biases, and a way to stream data into the placeholder.

    opened by jwkim98 3
  • Build is not working, need dependencies

    Trying to build CubbyDNN, I ran into dependency issues:

    Selecting Windows SDK version 10.0.16299.0 to target Windows 10.0.17134. CMake Error at CMakeLists.txt:39 (add_subdirectory): The source directory

    C:/Users/tomtz/Documents/dev/clones/CubbyDNN/Libraries/googletest
    

    does not contain a CMakeLists.txt file.

    CMake Error at CMakeLists.txt:42 (add_subdirectory): The source directory

    C:/Users/tomtz/Documents/dev/clones/CubbyDNN/Libraries/googlebenchmark
    

    does not contain a CMakeLists.txt file.

    Configuring incomplete, errors occurred! See also "C:/Users/tomtz/Documents/dev/clones/CubbyDNN/build/CMakeFiles/CMakeOutput.log".

    question help wanted 
    opened by Ravenwater 2
  • Implement Half class

    opened by utilForever 1
  • Implement universal type wrapper

    Universal type wrapper

    • To minimize templates, we will pair the universal wrapper with its type enum
    • Classes and methods whose implementation doesn't differ by data type will use this general form
    • reinterpret_cast will be performed to turn them into the actual type; they will then be passed to template functions when a different implementation is required for different types

    Advantages

    • Minimize binary size by minimizing templates
    • Makes cleaner, more readable code with better consistency
    enhancement 
    opened by jwkim98 0
  • Implement String Class

    • [ ] Get the length of a string
    • [ ] Join the given strings in a string list into one string
    • [ ] Split elements based on sep
    • [ ] Check that a given string matches the regular expression pattern
    • [ ] Replace elements that match the regex
    • [ ] Return substrings
    feature 
    opened by circle-oo 0
  • Implement graph compile algorithm

    Role of the Graph compile process

    • Verify the Graph is valid and ready to run (check tensor shapes and variable/constant configurations)
    • Initialize Variables (several options are possible, such as random distributions, constants, or other user-customized algorithms)
    p0 feature 
    opened by jwkim98 0
  • Implement Constant/Variable/Placeholder

    • function Constant: Creates a constant tensor.
    • class Variable: A variable maintains state in the graph across calls to run(). You add a variable to the graph by constructing an instance of the class Variable.
    • function Placeholder: Inserts a placeholder for a tensor that will always be fed.
    p0 feature 
    opened by utilForever 0
  • Ver 0.2 - Planning

    • Implement code
      • Convert Tensor-related classes to template (#16)
      • Implement Half class (#17)
      • Implement Graph class (#18)
      • Implement Constant/Variable/Placeholder (#23)
    • CI and code coverage
      • Add code quality tool (#19)
      • Add code coverage tool (Codecov) (#20)
      • Apply lcov settings to generate code coverage report (#21)
    planning 
    opened by utilForever 0
  • Ver 0.3 - Planning

    • Fix computation code to work on Linux and macOS
    • Add computation code to support GPU parallelism using CUDA
    • Implement base code for making a CNN example
    • Create more nodes to improve availability
    planning 
    opened by utilForever 0
  • Implement Image class

    To put an image into a tensor, an implementation of an image class is needed.

    Basic Features

    • Support common image types (jpg, png, gif, bmp)
    • Load and save images
    • Resizable and croppable
    • Rotate and transform
    • Convertible to a tensor (or array)

    Additional features

    • Gaussian blur
    • Noise
    • Adjust hue, RGB, etc.
    feature 
    opened by Yoogeonhui 6
  • Apply lcov settings to generate code coverage report

    This revision sets up lcov to generate a code coverage report.

    CAUTION lcov --directory . --capture --output-file coverage.info doesn't work. I added the compiler flag and link flag, and when I built it, the .gcno files were created, but the .gcda files were not. It seems to be a problem caused by an updated version of gcov. (WARNING: no .gcda files found in .)

    The following is the list of commands used to print a report using lcov.

    mkdir build
    cd build
    cmake .. -DCMAKE_BUILD_TYPE=Debug
    make -j 8
    lcov -c -i -d Tests/UnitTests -o base.info
    bin/UnitTests
    lcov -c -d Tests/UnitTests -o test.info
    lcov -a base.info -a test.info -o coverage.info
    lcov -r coverage.info '/usr/*' -o coverage.info
    lcov -r coverage.info '*/Libraries/*' -o coverage.info
    lcov -r coverage.info '*/Tests/*' -o coverage.info
    lcov -l coverage.info
    genhtml coverage.info -o out
    
    p1 CI 
    opened by utilForever 0