InsNet Runs Instance-dependent Neural Networks with Padding-free Dynamic Batching.

Overview


InsNet (documentation) is a powerful neural network library aimed at building instance-dependent computation graphs. It is designed to support padding-free dynamic batching, thus allowing users to focus on building the model for a single instance. This design has at least four advantages:

  1. It can batch not only operators in a mini-batch but also operators in the same instance. For example, it can batch two parallel transformers from the same instance.
  2. It makes it easy to build NLP models with instance-dependent computation graphs, such as tree-LSTMs and hierarchical Transformers, and to execute them in batch.
  3. It relieves users of the intellectual burden of manual batching, since InsNet efficiently takes over all batching procedures. As such, users need not even know the concepts of tensor or padding, only those of matrix and vector (a vector being a one-column matrix).
  4. It significantly reduces memory usage, since no padding is needed and lazy execution can release tensors immediately once they are no longer needed.

To summarize, we believe that padding-free dynamic batching is a feature that NLP practitioners will dive into, yet one that is surprisingly absent from today's deep learning libraries.

Besides, InsNet has the following features:

  1. It is written in C++ 14 and is built as a static library.
  2. For GPU computation, we write almost all CUDA kernels by hand, allowing efficient parallel computation for matrices of unaligned shapes.
  3. Both lazy and eager execution are supported, with the former allowing automatic batching and the latter facilitating debugging.
  4. At present, it provides about thirty operators with both GPU and CPU implementations, supporting modern NLP models for sentence classification, sequence tagging, and language generation. It furthermore provides NLP modules such as attention, RNNs, and the Transformer, built with the aforementioned operators.

Studies using InsNet are listed below, and we look forward to enriching this list:

InsNet is released under the Apache 2.0 license, allowing you to use it in any project. But if you use InsNet for research, please cite the paper below and note that it describes an early version of InsNet, as the InsNet paper itself is not yet complete:

@article{wang2019n3ldg,
  title={N3LDG: A Lightweight Neural Network Library for Natural Language Processing},
  author={Wang, Qiansheng and Yu, Nan and Zhang, Meishan and Han, Zijia and Fu, Guohong},
  journal={Beijing Da Xue Xue Bao},
  volume={55},
  number={1},
  pages={113--119},
  year={2019},
  publisher={Acta Scientiarum Naturalium Universitatis Pekinenis}
}

Due to incorrect Git operations, the very early history of InsNet was erased, but you can still find it in another repo.

If you have any questions about InsNet, feel free to open an issue or send me an email: [email protected]

See the documentation for more details.


Releases (0.0.3-alpha)
  • 0.0.3-alpha (Jul 23, 2021)

  • v0.0.2-alpha(Jun 27, 2021)

  • v0.0.1-alpha(Jun 6, 2021)

    We have made major improvements over InsNet's early version, N3LDG:

    1. Convenient model building (e.g., Node *y = dropout(x, 0.1) vs N3LDG's vector<Node *> ys; ys.resize(n); for (Node *y : ys) { y->init(blabla); y->forward(blabla); }).
    2. Transformer support, including the decoder for inference time.
    3. More operators with more thoughtful dynamic batching.
    4. Preliminary documentation.
    5. Careful memory management (tensors' ref_count, the object pool, the improved memory pool, etc.).
Owner
Chauncey Wang