DyNet: The Dynamic Neural Network Toolkit

Overview

General

DyNet is a neural network library developed by Carnegie Mellon University and many others. It is written in C++ (with bindings in Python) and is designed to be efficient when run on either CPU or GPU, and to work well with networks that have dynamic structures that change for every training instance. For example, these kinds of networks are particularly important in natural language processing tasks, and DyNet has been used to build state-of-the-art systems for syntactic parsing, machine translation, morphological inflection, and many other application areas.
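To make the "dynamic structure" point concrete, here is a minimal sketch of the per-instance graph-building pattern in the Python API (not taken from the DyNet examples; it assumes DyNet 2.1+, where Parameters can be used directly as Expressions). The computation graph is rebuilt for every example, so its shape can follow the length of that example:

import random
import dynet as dy

VOCAB, DIM, CLASSES = 100, 16, 2
pc = dy.ParameterCollection()
E = pc.add_lookup_parameters((VOCAB, DIM))  # word embeddings
W = pc.add_parameters((DIM, 2 * DIM))       # composition matrix
O = pc.add_parameters((CLASSES, DIM))       # output layer
trainer = dy.SimpleSGDTrainer(pc)

# Toy data: (word-id sequence of arbitrary length, class label)
data = [([random.randrange(VOCAB) for _ in range(random.randint(2, 6))],
         random.randrange(CLASSES)) for _ in range(50)]

for words, label in data:
    dy.renew_cg()                           # a fresh graph for every instance
    h = dy.lookup(E, words[0])
    for w in words[1:]:                     # graph depth depends on the input
        h = dy.tanh(W * dy.concatenate([h, dy.lookup(E, w)]))
    loss = dy.pickneglogsoftmax(O * h, label)
    loss.forward()
    loss.backward()
    trainer.update()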

Read the documentation to get started, and feel free to contact the dynet-users group with any questions (if you want to receive email, make sure to select "all email" when you sign up). We greatly appreciate any bug reports and contributions, which can be made by filing an issue or making a pull request through the GitHub page.

You can also read more technical details in our technical report.

Getting started

You can find tutorials about using DyNet here (C++), here (Python), and here (EMNLP 2016 tutorial).

One aspect that sets DyNet apart from other toolkits is the auto-batching feature. See the documentation about batching.
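As a rough illustration of the usual pattern (a sketch, not the documentation's example, and again assuming DyNet 2.1+): you write the per-instance computation naively, collect the losses, and evaluate them in one call so DyNet can group matching operations into batched kernels. Auto-batching is switched on from the command line, e.g. by running the script with --dynet-autobatch 1.

import random
import dynet as dy

pc = dy.ParameterCollection()
W = pc.add_parameters((10, 64))
b = pc.add_parameters(10)
trainer = dy.AdamTrainer(pc)

# Toy mini-batch of (feature vector, label) pairs.
batch = [([random.random() for _ in range(64)], random.randrange(10))
         for _ in range(32)]

dy.renew_cg()
losses = []
for features, label in batch:
    x = dy.inputVector(features)            # per-instance code, written naively
    losses.append(dy.pickneglogsoftmax(W * x + b, label))

# A single evaluation over the summed loss lets the auto-batcher group the
# matching matrix-vector products into batched operations before running them.
total_loss = dy.esum(losses)
total_loss.forward()
total_loss.backward()
trainer.update()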

The examples folder contains a variety of examples in C++ and Python.

Installation

DyNet relies on a number of external programs/libraries including CMake and Eigen. CMake can be installed from standard repositories.

For example on Ubuntu Linux:

sudo apt-get install build-essential cmake

Or on macOS, first make sure the Apple Command Line Tools are installed, then install CMake with either Homebrew or MacPorts:

xcode-select --install
brew install cmake  # Using homebrew.
sudo port install cmake # Using macports.

On Windows, see documentation.

To compile DyNet you also need a specific version of the Eigen library. If you use any of the released versions of Eigen, you may get assertion failures or compile errors. You can get the right version easily using the following commands:

mkdir eigen
cd eigen
wget https://github.com/clab/dynet/releases/download/2.1/eigen-b2e267dc99d4.zip
unzip eigen-b2e267dc99d4.zip

C++ installation

You can install DyNet for C++ with the following commands:

# Clone the github repository
git clone https://github.com/clab/dynet.git
cd dynet
mkdir build
cd build
# Run CMake
# -DENABLE_BOOST=ON in combination with -DENABLE_CPP_EXAMPLES=ON also
# compiles the multiprocessing C++ examples
cmake .. -DEIGEN3_INCLUDE_DIR=/path/to/eigen -DENABLE_CPP_EXAMPLES=ON
# Compile using 2 processes
make -j 2
# Test with an example
./examples/train_xor

For more details, refer to the documentation.

Python installation

You can install DyNet for Python by using the following command:

pip install git+https://github.com/clab/dynet#egg=dynet

For more details, refer to the documentation.
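As a quick sanity check after installation, you can train the classic XOR problem from Python. This is only a rough sketch in the spirit of the xor example in the examples folder (the shipped example may differ in detail, and it again assumes DyNet 2.1+):

import dynet as dy

pc = dy.ParameterCollection()
W = pc.add_parameters((8, 2))
b = pc.add_parameters(8)
V = pc.add_parameters((1, 8))
a = pc.add_parameters(1)
trainer = dy.SimpleSGDTrainer(pc)

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
for epoch in range(200):
    epoch_loss = 0.0
    for (x1, x2), y in data:
        dy.renew_cg()                       # one small graph per instance
        h = dy.tanh(W * dy.inputVector([x1, x2]) + b)
        y_pred = dy.logistic(V * h + a)
        loss = dy.binary_log_loss(y_pred, dy.scalarInput(y))
        epoch_loss += loss.value()          # value() runs the forward pass
        loss.backward()
        trainer.update()
print("average loss on last epoch:", epoch_loss / len(data))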

Citing

If you use DyNet for research, please cite this report as follows:

@article{dynet,
  title={DyNet: The Dynamic Neural Network Toolkit},
  author={Graham Neubig and Chris Dyer and Yoav Goldberg and Austin Matthews and Waleed Ammar and Antonios Anastasopoulos and Miguel Ballesteros and David Chiang and Daniel Clothiaux and Trevor Cohn and Kevin Duh and Manaal Faruqui and Cynthia Gan and Dan Garrette and Yangfeng Ji and Lingpeng Kong and Adhiguna Kuncoro and Gaurav Kumar and Chaitanya Malaviya and Paul Michel and Yusuke Oda and Matthew Richardson and Naomi Saphra and Swabha Swayamdipta and Pengcheng Yin},
  journal={arXiv preprint arXiv:1701.03980},
  year={2017}
}

Contributing

We welcome any contribution to DyNet! You can find the contributing guidelines here.

Comments
  • Incorporate cuDNN, add conv2d CPU/GPU version (based on Eigen and cuDNN)

    #229 This is the CPU implementation based on Eigen SpatialConvolution. It is reported as the current fastest (available) CPU version of conv2d. For GPU support, I think implementing a new version using cublas kernels (by hand) is worthless, so I am currently incorporating cudnn into DyNet and will provide a cudnn-based (standard) implementation.

    opened by zhisbug 33
  • First attempt at Yarin Gal dropout for LSTM

    https://arxiv.org/pdf/1512.05287v5.pdf

    I'm not 100% sure it's correct, and it has some ugliness -- LSTMBuilder now keeps a pointer to ComputationGraph -- but Gal's dropout seems to be the preferred way to do dropout for LSTMs.

    Will appreciate another pair of eyes.

    opened by yoavg 29
  • Support installation through pip

    With this change, DyNet can be installed with the following command line:

    pip install git+https://github.com/clab/dynet#egg=dynet
    

    If Boost is installed in a non-standard location, it has to be set in the environment variable BOOST prior to installation.

    To try this out from my fork before merging the pull request, use:

    pip install git+https://github.com/danielhers/dynet#egg=dynet
    
    opened by danielhers 23
  • Auto-batching 'inf' gradient

    Hi,

    We successfully implemented a seq2seq model with auto-batching (on GPU) and it works great. We wanted to improve the speed by reducing the size of the softmax:

    Expression W = select_rows(p2c, candsInt);
    Expression x = W * v;
    Expression candidates = log_softmax(x);

    When not using auto-batching the code works and behaves as expected; however, when using auto-batching we get a runtime error: what(): Magnitude of gradient is bad: inf

    Thank you, Eli

    major bug fix needs confirmation 
    opened by elikip 22
  • Is there an alternative way to save a model besides Boost?

    Hi,

    Currently I am facing a problem creating a model loader in different languages (e.g. Java). Is there a better way to serialize the model (or parameters) in a more human-readable way? It would be great for DyNet to be more widely used in many ways. Any kind of suggestion will be appreciated!

    Thanks, YJ

    opened by iamyoungjo 21
  • Combine python/setup.py.in into setup.py

    Simplify the Python installation process by combining the generated setup.py into the top-level one, using environment variables to pass information from cmake. Should allow fixing #657 now that the Cython extensions are created by the main setup.py.

    opened by danielhers 20
  • GPU (backend cuda) build problem

    I am having a problem building with BACKEND=cuda. My system is OS X (10.11.6 El Capitan). CMake works fine, but once I run "make -j 4" it returns the following error:

    Undefined symbols for architecture x86_64:
    ...
    ld: symbol(s) not found for architecture x86_64
    clang: error: linker command failed with exit code 1 (use -v to see invocation)

    Since I use the GPU in TensorFlow without any issues, I doubt that my CUDA setup is the problem.

    I searched for similar issues here, but it seems I am the only person having this problem. There is no compilation issue if I don't use the CUDA backend.

    If I missed a significant step here, or if anyone is familiar with this error, please help. I have already wasted more than 6 hours because of this.

    make.log.zip

    moderate bug fix needs confirmation 
    opened by iamyoungjo 20
  • Batch manipulation operations

    It would be nice to have operations that allow you to do things like

    • concat_batch: concatenate multiple expressions into a single batched expression
    • pick_batch_elements: pick only a subset of the elements from a batched expression
    enhancement 
    opened by neubig 19
  • Installation issue

    Hello,

    I'm trying to install dynet on my local machine and I keep getting an error while importing dynet in python.

    import dynet as dy
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "dynet.py", line 17, in <module>
        from _dynet import *
    ImportError: dlopen(./_dynet.so, 2): Library not loaded: @rpath/libdynet.dylib
      Referenced from: /dynet-base/dynet/build/python/_dynet.so
      Reason: image not found

    I'm using:

    • MBP w/ MacOS Sierra
    • Eigen's default branch from bitbucket
    • The latest dynet (w/ Today's commit that fixed TravisCI)
    • boost 160
    • python 2.7.10
    • cmake 3.6.3
    • make 3.81 (built for i386-apple-darwin11.3.0)

    The make log also references that file:

    c++ -bundle -undefined dynamic_lookup -arch i386 -arch x86_64 -Wl,-F. build/temp.macosx-10.12-intel-2.7/dynet.o -L. -L/dynet-base/dynet/build/dynet/ -L/dynet-base/dynet/build/dynet/ -ldynet -o /dynet-base/dynet/build/python/_dynet.so
    ld: warning: ignoring file /dynet-base/dynet/build/dynet//libdynet.dylib, file was built for x86_64 which is not the architecture being linked (i386): /dynet-base/dynet/build/dynet//libdynet.dylib

    Could you advise?

    Thanks, Florin.

    moderate bug 
    opened by fmacicasan 19
  • Eliminate dependency on libdynet from _dynet.so

    Currently, the compiled Cython file, _dynet.so depends on libdynet. This has a couple of disadvantages such as:

    • Installation can be clumsy (e.g., setting LD_LIBRARY_PATH in Linux, DYLD_LIBRARY_PATH in macOS to load libdynet)
    • Not easy to deploy to servers.

    This change eliminates the dependency by creating a static library of dynet, making the installation and deployment easier. The idea is to link the static library rather than the shared/dynamic library when generating _dynet.so.

    The static and shared/dynamic libraries are generated from an object library (it's just a collection of object files) [1]. By creating an object library, we can avoid compiling object files for both libraries.

    [1] https://cmake.org/cmake/help/latest/command/add_library.html#object-libraries


    opened by tetsuok 18
  • Scala bindings for DyNet (via swig)

    We have created SWIG bindings so that we can use DyNet from Scala. They are pretty comprehensive, with lots of documentation and tests and examples, and we are actively using DyNet from Scala code.

    Other than a few lines in the top level CMakeLists, all of our changes are under the new swig directory (and are hidden behind a flag which is OFF by default).

    We wanted to contribute this back, as it seems like something that could be useful to a lot of people.

    Incorporating this would require some sort of plan around keeping the bindings in sync with the root C++ code. Presumably that's already required for the Python bindings, so maybe it's not terribly hard.

    Anyway, I know this isn't just a simple "LGTM" change, so let's discuss.

    opened by joelgrus 18
  • Unable to build wheel for dynet on Windows (pip install dynet)

    File "C:\Users\User\anaconda3\Scripts\cmake.exe_main_.py", line 4, in ModuleNotFoundError: No module named 'cmake' error: make not found, and MAKE is not set. [end of output]

    note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for dynet Failed to build dynet ERROR: Could not build wheels for dynet, which is required to install pyproject.toml-based projects

    Any advice appreciated.

    opened by AndrewBrooks56 0
  • Does DyNet help in rewiring analysis?

    Hi,

    Just wondering if DyNet helps in rewiring analysis. I have three different networks and would like to know the most rewarding nodes across all networks. Best, amare

    opened by adesalegn 0
  • Bump numpy from 1.14.2 to 1.22.0 in /examples/variational-autoencoder/basic-image-recon

    Bumps numpy from 1.14.2 to 1.22.0.

    Release notes

    Sourced from numpy's releases.

    v1.22.0

    NumPy 1.22.0 Release Notes

    NumPy 1.22.0 is a big release featuring the work of 153 contributors spread over 609 pull requests. There have been many improvements, highlights are:

    • Annotations of the main namespace are essentially complete. Upstream is a moving target, so there will likely be further improvements, but the major work is done. This is probably the most user visible enhancement in this release.
    • A preliminary version of the proposed Array-API is provided. This is a step in creating a standard collection of functions that can be used across applications such as CuPy and JAX.
    • NumPy now has a DLPack backend. DLPack provides a common interchange format for array (tensor) data.
    • New methods for quantile, percentile, and related functions. The new methods provide a complete set of the methods commonly found in the literature.
    • A new configurable allocator for use by downstream projects.

    These are in addition to the ongoing work to provide SIMD support for commonly used functions, improvements to F2PY, and better documentation.

    The Python versions supported in this release are 3.8-3.10, Python 3.7 has been dropped. Note that 32 bit wheels are only provided for Python 3.8 and 3.9 on Windows, all other wheels are 64 bits on account of Ubuntu, Fedora, and other Linux distributions dropping 32 bit support. All 64 bit wheels are also linked with 64 bit integer OpenBLAS, which should fix the occasional problems encountered by folks using truly huge arrays.

    Expired deprecations

    Deprecated numeric style dtype strings have been removed

    Using the strings "Bytes0", "Datetime64", "Str0", "Uint32", and "Uint64" as a dtype will now raise a TypeError.

    (gh-19539)

    Expired deprecations for loads, ndfromtxt, and mafromtxt in npyio

    numpy.loads was deprecated in v1.15, with the recommendation that users use pickle.loads instead. ndfromtxt and mafromtxt were both deprecated in v1.17 - users should use numpy.genfromtxt instead with the appropriate value for the usemask parameter.

    (gh-19615)

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Add support to release Linux aarch64 wheels

    Problem

    On aarch64, ‘pip install dyNET’ builds the wheels from source and gives the error below:

    [ 71%] Building CXX object dynet/CMakeFiles/dynet.dir/nodes-random.cc.o  
      /tmp/pip-install-u5zcjrap/dyNET/dynet/mem.cc:13:10: fatal error: mm_malloc.h: No such file or directory  
         13 | #include <mm_malloc.h>  
            |          ^~~~~~~~~~~~~  
      compilation terminated.  
      make[2]: *** [dynet/CMakeFiles/dynet.dir/build.make:297: dynet/CMakeFiles/dynet.dir/mem.cc.o] Error 1  
      make[2]: *** Waiting for unfinished jobs....  
      /tmp/pip-install-u5zcjrap/dyNET/dynet/lstm.cc: In member function ‘void dynet::SparseLSTMBuilder::set_sparsity(float)’:  
      /tmp/pip-install-u5zcjrap/dyNET/dynet/lstm.cc:686:19: warning: comparison of integer expressions of different signedness: ‘int’ and ‘unsigned int’ [-Wsign-compare]  
        686 |     for (int i=0;i<layers;i++){  
            |                  ~^~~~~~~  
      /tmp/pip-install-u5zcjrap/dyNET/dynet/expr.cc: In function ‘dynet::Expression dynet::strided_select(const dynet::Expression&, const std::vector<int>&, const std::vector<int>&, const std::vector<int>&)’:  
      /tmp/pip-install-u5zcjrap/dyNET/dynet/expr.cc:201:74: warning: comparison of integer expressions of different signedness: ‘const value_type’ {aka ‘const int’} and ‘unsigned int’ [-Wsign-compare]  
        201 |   for(unsigned d=0;d<range_to.size() && d<x.dim().nd;d++){ if(range_to[d]!=x.dim()[d]) inplaced = false; }  
      make[1]: *** [CMakeFiles/Makefile2:116: dynet/CMakeFiles/dynet.dir/all] Error 2  
      make: *** [Makefile:130: all] Error 2
      /tmp/pip-build-env-sbod7r54/overlay/lib/python3.8/site-packages/setuptools/dist.py:516: UserWarning: Normalizing 'v2.1.2' to '2.1.2'  
        warnings.warn(tmpl.format(**locals()))  
      /tmp/pip-build-env-sbod7r54/overlay/lib/python3.8/site-packages/setuptools/dist.py:757: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead  
        warnings.warn(  
      error: /usr/bin/make -j 48
      ----------------------------------------  
      ERROR: Failed building wheel for dyNET  
    Failed to build dyNET
    ERROR: Could not build wheels for dyNET which use PEP 517 and cannot be installed directly
    

    Resolution

    On aarch64, ‘pip install dyNET’ should download the wheels from pypi.

    I have modified the code to add support for Linux aarch64 wheels. The aarch64 wheels build successfully, but the test cases hang at one point, as shown below:

    python test.py 
    [dynet] random seed: 238741976 
    [dynet] allocating memory: 512MB 
    [dynet] memory allocation done. 
    ....Reading clusters from cluster_file.txt ... 
    Read 10 words in 5 clusters (0 singleton clusters) 
    ........................ 
    No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself. 
    Check the details on how to adjust your build configuration on: https://docs.travis-ci.com/user/common-build-problems/#build-times-out-because-no-output-was-received 
    The build has been terminated
    

    Commit Link - https://github.com/odidev/dynet/commit/9fc79c50d9f723cd899622290c881edb73dd494f

    Travis Link - https://app.travis-ci.com/github/odidev/dynet/builds/249269741

    @Team Please let me know your interest in releasing Linux aarch64 wheels. To start with, can I get some suggestions on why the test cases are getting stuck?

    opened by odidev 0
  • Bump pillow from 9.0.0 to 9.0.1 in /examples/variational-autoencoder/basic-image-recon

    Bumps pillow from 9.0.0 to 9.0.1.

    Release notes

    Sourced from pillow's releases.

    9.0.1

    https://pillow.readthedocs.io/en/stable/releasenotes/9.0.1.html

    Changes

    • In show_file, use os.remove to remove temporary images. CVE-2022-24303 #6010 [@radarhere, @hugovk]
    • Restrict builtins within lambdas for ImageMath.eval. CVE-2022-22817 #6009 [radarhere]
    Changelog

    Sourced from pillow's changelog.

    9.0.1 (2022-02-03)

    • In show_file, use os.remove to remove temporary images. CVE-2022-24303 #6010 [radarhere, hugovk]

    • Restrict builtins within lambdas for ImageMath.eval. CVE-2022-22817 #6009 [radarhere]

    Commits
    • 6deac9e 9.0.1 version bump
    • c04d812 Update CHANGES.rst [ci skip]
    • 4fabec3 Added release notes for 9.0.1
    • 02affaa Added delay after opening image with xdg-open
    • ca0b585 Updated formatting
    • 427221e In show_file, use os.remove to remove temporary images
    • c930be0 Restrict builtins within lambdas for ImageMath.eval
    • 75b69dd Dont need to pin for GHA
    • cd938a7 Autolink CWE numbers with sphinx-issues
    • 2e9c461 Add CVE IDs
    • See full diff in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
Releases (2.1.2)
  • 2.1.2(Oct 21, 2020)

  • 2.1.1(Oct 20, 2020)

  • 2.1(Sep 18, 2018)

    DyNet v. 2.1 incorporates the following changes:

    • Parameters are now implicitly cast to Expressions in Python. This changes the API slightly, as there is no need to call dy.parameter or .expr anymore. #1233
    • Python 3.7 support (pre-built binaries on PyPI) #1450 (thanks @danielhers )
    • Advanced Numpy-like slicing #1363 (thanks @msperber)
    • Argmax and straight-through estimators #1208
    • Updated API doc #1312 (thanks @zhechenyan)
    • Fix segmentation fault in RNNs https://github.com/clab/dynet/issues/1371
    • Many other small fixes and QoL improvements (see the full list of merged PRs since the last release for more details)

    Link to the 2.1 documentation: https://dynet.readthedocs.io/en/2.1/

    Source code(tar.gz)
    Source code(zip)
    eigen-b2e267dc99d4.zip(3.16 MB)
  • 2.0.3(Feb 16, 2018)

    DyNet v. 2.0.3 incorporates the following changes:

    • On-GPU random number generation (https://github.com/clab/dynet/issues/1059 https://github.com/clab/dynet/pull/1094 https://github.com/clab/dynet/pull/1154)
    • Memory savings through in-place operations (https://github.com/clab/dynet/pull/1103)
    • More efficient inputTensor that doesn't switch memory layout (https://github.com/clab/dynet/issues/1143)
    • More stable sigmoid (https://github.com/clab/dynet/pull/1200)
    • Fix bug in weight decay (https://github.com/clab/dynet/issues/1201)
    • Many other fixes, etc.

    Link to the documentation: Dynet v2.0.3

    Source code(tar.gz)
    Source code(zip)
  • 2.0.2(Dec 21, 2017)

    v 2.0.2 of DyNet includes the following improvements. Thanks to everyone who made them happen!

    Done:

    • Better organized examples: https://github.com/clab/dynet/issues/191
    • Full multi-device support: https://github.com/clab/dynet/issues/952
    • Broadcasting standard elementwise operations: https://github.com/clab/dynet/pull/776
    • Some refactoring: https://github.com/clab/dynet/issues/522
    • Better profiling: https://github.com/clab/dynet/pull/1088
    • Fix performance regression on autobatching: https://github.com/clab/dynet/issues/974
    • Pre-compiled pip binaries
    • A bunch of other small functionality additions and bug fixes

    Source code(tar.gz)
    Source code(zip)
  • 2.0.1(Sep 2, 2017)

    DyNet v2.0.1 made the following major improvements:

    • Simplified training interface: https://github.com/clab/dynet/pull/695
    • Support for multi-device computation (thanks @xunzhang!): https://github.com/clab/dynet/pull/704
    • A memory efficient version of LSTMBuilder (thanks @msperber): https://github.com/clab/dynet/pull/729
    • Scratch memory for better memory efficiency (thanks @zhisbug @Abasyoni!): https://github.com/clab/dynet/pull/692
    • Work towards pre-compiled pip files (thanks @danielhers!)

    Source code(tar.gz)
    Source code(zip)
  • v2.0(Jul 10, 2017)

    This release includes a number of new features that are breaking changes with respect to v1.1.

    • DyNet no longer requires Boost (thanks @xunzhang)! This means that models are no longer saved in Boost format, but instead in a format supported natively by DyNet.
    • Other changes to reading and writing include the ability to read/write only parts of models. There have been a number of changes to the reading/writing interface as well, and examples of how to use it can be found in the "examples". (https://github.com/clab/dynet/issues/84)
    • Renaming of "Model" as "ParameterCollection"
    • Removing the dynet::expr namespace in C++ (now expressions are in the dynet:: namespace)
    • Making VanillaLSTMBuilder the default LSTM interface https://github.com/clab/dynet/issues/474

    Other new features include

    • Autobatching (by @yoavgo and @neubig): https://github.com/clab/dynet/blob/master/examples/python/tutorials/Autobatching.ipynb
    • Scala bindings (thanks @joelgrus!) https://github.com/clab/dynet/pull/357
    • Dynamically increasing memory pools (thanks @yoavgo) https://github.com/clab/dynet/pull/364
    • Convolutions and cuDNN (thanks @zhisbug!): https://github.com/clab/dynet/issues/229 https://github.com/clab/dynet/issues/236
    • Better error handling: https://github.com/clab/dynet/pull/358 https://github.com/clab/dynet/pull/365
    • Better documentation (thanks @pmichel31415!)
    • Gal dropout (thanks @yoavgo and @pmichel31415!): https://github.com/clab/dynet/pull/261
    • Integration into pip (thanks @danielhers !)
    • A cool new logo! (http://dynet.readthedocs.io/en/latest/citing.html)
    • A huge number of other changes by other contributors. Thank you everyone!
    Source code(tar.gz)
    Source code(zip)
  • v1.1(Jun 28, 2017)

  • v1.0-rc1(Oct 12, 2016)

    This is the first release candidate for DyNet version 1.0. Compared to the previous cnn, it supports a number of new features:

    • Full GPU support
    • Simple support of mini-batching
    • Better integration with Python bindings
    • Better efficiency
    • Correct implementation of l2 regularization
    • More supported functions
    • And much more!
    Source code(tar.gz)
    Source code(zip)
Owner
Chris Dyer's lab @ LTI/CMU