Open standard for machine learning interoperability

Overview


Open Neural Network Exchange (ONNX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open source format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. Currently we focus on the capabilities needed for inferencing (scoring).

ONNX is widely supported and can be found in many frameworks, tools, and hardware. Enabling interoperability between different frameworks and streamlining the path from research to production helps increase the speed of innovation in the AI community. We invite the community to join us and further evolve ONNX.
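As a quick, minimal sketch of the computation graph model (using only the public onnx.helper and onnx.checker APIs; the tensor names and the file name tiny.onnx are arbitrary):

import onnx
from onnx import TensorProto, helper

# Typed value infos describe the graph's inputs and outputs.
X = helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 2])
Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 2])

# One node using the built-in Relu operator: Y = Relu(X).
node = helper.make_node("Relu", inputs=["X"], outputs=["Y"])

graph = helper.make_graph([node], "tiny-graph", [X], [Y])
model = helper.make_model(graph)

onnx.checker.check_model(model)  # validate against the ONNX spec
onnx.save(model, "tiny.onnx")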

Use ONNX

Learn about the ONNX spec

Programming utilities for working with ONNX Graphs

Contribute

ONNX is a community project. We encourage you to join the effort and contribute feedback, ideas, and code. You can participate in the SIGs and Working Groups to shape the future of ONNX.

Check out our contribution guide to get started.

If you think some operator should be added to ONNX specification, please read this document.

Discuss

We encourage you to open issues, or use Slack for more real-time discussion.

Follow Us

Stay up to date with the latest ONNX news. [Facebook] [Twitter]

Installation

Prerequisites

numpy >= 1.16.6
protobuf >= 3.12.2
six
typing-extensions >= 3.6.2.1

Official Python packages

ONNX release packages are published on PyPI.

pip install numpy protobuf==3.16.0
pip install onnx

Weekly packages are published on TestPyPI to enable experimentation and early testing.

Conda packages

A binary build of ONNX is available from conda-forge:

conda install -c conda-forge numpy protobuf==3.16.0 libprotobuf=3.16.0
conda install -c conda-forge onnx

You can also use the onnx-dev docker image for a Linux-based installation without having to worry about dependency versioning.

Build ONNX from Source

Before building from source, uninstall any existing version of ONNX with pip uninstall onnx.

Generally speaking, you need to install the protobuf C/C++ libraries and tools before proceeding. Then, depending on how you installed protobuf, set the environment variable CMAKE_ARGS to "-DONNX_USE_PROTOBUF_SHARED_LIBS=ON" or "-DONNX_USE_PROTOBUF_SHARED_LIBS=OFF". For example, you may need to run the following command:

Linux:

export CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"

Windows:

set CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"

Whether to use ON or OFF depends on the kind of protobuf library you have: shared libraries are files ending in *.dll/*.so/*.dylib, while static libraries are files ending in *.a/*.lib. The right choice therefore depends on how you obtained your protobuf library and how it was built. The default is OFF, so you don't need to run the commands above if you prefer a static protobuf library.

Windows

If you are building ONNX from source, it is recommended that you also build Protobuf locally as a static library. The version distributed with conda-forge is a DLL, but ONNX expects it to be a static library. Building protobuf locally also lets you control the version of protobuf. The tested and recommended version is 3.16.0.

The instructions in this README assume you are using Visual Studio. It is recommended that you run all the commands from a shell started from "x64 Native Tools Command Prompt for VS 2019" and keep the build system generator for cmake (e.g., cmake -G "Visual Studio 16 2019") consistent while building protobuf as well as ONNX.

You can get protobuf by running the following commands:

git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf
git checkout v3.16.0
cd cmake
cmake -G "Visual Studio 16 2019" -A x64 -DCMAKE_INSTALL_PREFIX=<protobug_install_dir> -Dprotobuf_MSVC_STATIC_RUNTIME=OFF -Dprotobuf_BUILD_SHARED_LIBS=OFF -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_BUILD_EXAMPLES=OFF .
msbuild protobuf.sln /m /p:Configuration=Release
msbuild INSTALL.vcxproj /p:Configuration=Release

Protobuf will then be built as a static library and installed to <protobuf_install_dir>. Please add the bin directory (which contains protoc.exe) to your PATH:

set PATH=<protobuf_install_dir>/bin;%PATH%

Please note: if your protobuf_install_dir contains spaces, do not add quotation marks around it.

Alternative: if you don't want to change your PATH, you can set ONNX_PROTOC_EXECUTABLE instead.

set CMAKE_ARGS=-DONNX_PROTOC_EXECUTABLE=<full_path_to_protoc.exe>

Then you can build ONNX as:

git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
# prefer lite proto
set CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e .

Linux

First, you need to install protobuf.

Ubuntu users: the quickest way to install protobuf is to run

apt-get install python3-pip python3-dev libprotobuf-dev protobuf-compiler

Then you can build ONNX as:

export CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"
git clone --recursive https://github.com/onnx/onnx.git
cd onnx
# prefer lite proto
export CMAKE_ARGS="${CMAKE_ARGS} -DONNX_USE_LITE_PROTO=ON"
pip install -e .

Otherwise, you may need to install it from source. You can use the following commands to do it:

Debian/Ubuntu:

git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf
git checkout v3.16.0
git submodule update --init --recursive
mkdir build_source && cd build_source
cmake ../cmake -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
make -j$(nproc)
make install

CentOS/RHEL/Fedora:

git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf
git checkout v3.16.0
git submodule update --init --recursive
mkdir build_source && cd build_source
cmake ../cmake  -DCMAKE_INSTALL_LIBDIR=lib64 -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
make -j$(nproc)
make install

Here "-DCMAKE_POSITION_INDEPENDENT_CODE=ON" is crucial. By default static libraries are built without "-fPIC" flag, they are not position independent code. But shared libraries must be position independent code. Python C/C++ extensions(like ONNX) are shared libraries. So if a static library was not built with "-fPIC", it can't be linked to such a shared library.

Once the build succeeds, update PATH to include the protobuf install paths.

Then you can build ONNX as:

git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
# prefer lite proto
export CMAKE_ARGS="${CMAKE_ARGS} -DONNX_USE_LITE_PROTO=ON"
pip install -e .
Mac

First, you need to install protobuf. You can build it from source with the following commands:
export NUM_CORES=`sysctl -n hw.ncpu`
brew update
brew install autoconf && brew install automake
wget https://github.com/protocolbuffers/protobuf/releases/download/v3.16.0/protobuf-cpp-3.16.0.tar.gz
tar -xvf protobuf-cpp-3.16.0.tar.gz
cd protobuf-3.16.0
mkdir build_source && cd build_source
cmake ../cmake -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
make -j${NUM_CORES}
make install

Once the build succeeds, update PATH to include the protobuf install paths.

Then you can build ONNX as:

git clone --recursive https://github.com/onnx/onnx.git
cd onnx
# prefer lite proto
export CMAKE_ARGS="${CMAKE_ARGS} -DONNX_USE_LITE_PROTO=ON"
pip install -e .

Verify Installation

After installation, run

python -c "import onnx"

to verify it works.
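For a slightly stronger check than a bare import, the following sketch prints the installed version and the highest ai.onnx opset the installation supports (onnx.__version__ and onnx.defs.onnx_opset_version() are both part of the public Python API):

import onnx

print(onnx.__version__)                # installed package version
print(onnx.defs.onnx_opset_version())  # highest supported ai.onnx opset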

Common Build Options

For the full list, refer to CMakeLists.txt.

Environment variables

  • USE_MSVC_STATIC_RUNTIME should be 1 or 0, not ON or OFF. When set to 1, ONNX links statically to the runtime library.
    Default: USE_MSVC_STATIC_RUNTIME=0

  • DEBUG should be 0 or 1. When set to 1, ONNX is built in debug mode. For debug versions of the dependencies, you need to open the CMakeLists file and append a letter d at the end of the package name lines. For example, NAMES protobuf-lite would become NAMES protobuf-lited.
    Default: DEBUG=0

CMake variables

  • ONNX_USE_PROTOBUF_SHARED_LIBS should be ON or OFF.
    Default: ONNX_USE_PROTOBUF_SHARED_LIBS=OFF, USE_MSVC_STATIC_RUNTIME=0. ONNX_USE_PROTOBUF_SHARED_LIBS determines how onnx links to the protobuf libraries.

    • When set to ON - onnx will dynamically link to protobuf shared libs, PROTOBUF_USE_DLLS will be defined as described here, Protobuf_USE_STATIC_LIBS will be set to OFF and USE_MSVC_STATIC_RUNTIME must be 0.
    • When set to OFF - onnx will link statically to protobuf, and Protobuf_USE_STATIC_LIBS will be set to ON (to force the use of the static libraries) and USE_MSVC_STATIC_RUNTIME can be 0 or 1.
  • ONNX_USE_LITE_PROTO should be ON or OFF. When set to ON onnx uses lite protobuf instead of full protobuf.
    Default: ONNX_USE_LITE_PROTO=OFF

  • ONNX_WERROR should be ON or OFF. When set to ON warnings are treated as errors.
    Default: ONNX_WERROR=OFF in local builds, ON in CI and release pipelines.

Common Errors

  • Note: the import onnx command does not work from the source checkout directory; in this case you'll see ModuleNotFoundError: No module named 'onnx.onnx_cpp2py_export'. Change into another directory to fix this error.

  • Building ONNX on Ubuntu works well, but on CentOS/RHEL and other ManyLinux systems, you might need to open the CMakeLists file and replace all instances of /lib with /lib64.

Testing

ONNX uses pytest as its test driver. To run the tests, first install pytest:

pip install pytest nbval

After installing pytest, use the following command to run tests.

pytest

Development

Check out the contributor guide for instructions.

License

Apache License v2.0

Code of Conduct

ONNX Open Source Code of Conduct

Issues
  • Serialization for Sequence and Map data types

    Serialization for Sequence and Map data types

    Purpose: Add serialization for inputs and outputs of Sequence and Map data types so relevant Sequence and Map operator unit tests can be enabled.

    • Proposing new onnx-data proto file with sequences and maps
    • Defining SequenceProto
    • Defining MapProto
    • related functions in cmd_tools, numpy_helper, and helper
    • Cleaning comments for whitespace
    • Adding example of Sequence test with SequenceInsert unit test

    Follow on from: https://github.com/onnx/onnx/pull/2249

    opened by vinitra-zz 107
  • Update resize op

    Update resize op

    1. Add "cubic" interpolation mode, and clarify the "linear", "bilinear" and "trilinear" (https://github.com/onnx/onnx/issues/1774)
    2. Update the tests: The current tests of resize linear follows the old behavior of TF, which is changed now. Related issue: https://github.com/tensorflow/tensorflow/issues/6720, https://github.com/onnx/onnx/issues/2070
    3. Add "coordinate_transformation_mode" attribute including "half_pixel", "align_corners", "asymmetric", "tf_crop_and_resize" and so on
    4. Add a new input 'sizes' as requested in #2062.
    5. Add two new attrs 'cubic_coeff_a' and 'exclude_outside' for compatibility with TensorFlow. There is no standard implementation of cubic interpolation: TensorFlow (and also MATLAB) sets a=-0.5 (in the legacy version of TF, a=-0.75) and exclude_outside=True, while PyTorch (and OpenCV) sets a=-0.75 and exclude_outside=False.

    Both the cubic interpolation mode and the align_corners attribute are supported by TensorFlow, PyTorch, and others.

    Note:

    • Some frameworks set output_dimension to round(input_dimension * scale) instead of floor(input_dimension * scale), and some frameworks re-assign the scale to output_dimension / input_dimension. I think it is not good to add two extra attributes to the resize op for these cases. This op already has many attributes, and the converters in frameworks have the ability and responsibility to make the scale compatible with the behavior of this op.

    • This proposal is fully compatible with TensorFlow and PyTorch. Please refer to the checking script for the attribute values; a node-construction sketch follows this item.

    operator 
    opened by daquexian 46
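    As a rough illustration (not part of the PR itself), this is how the attributes discussed above appear on a Resize node built with onnx.helper, using the opset 11+ signature where roi and scales are inputs; the tensor names X, roi, scales, and Y are placeholders:

    import onnx
    from onnx import helper

    # Cubic resize following the PyTorch/OpenCV convention discussed above.
    node = helper.make_node(
        "Resize",
        inputs=["X", "roi", "scales"],
        outputs=["Y"],
        mode="cubic",
        coordinate_transformation_mode="half_pixel",
        cubic_coeff_a=-0.75,  # the TF/MATLAB convention would be -0.5
        exclude_outside=0,    # the TF/MATLAB convention would be 1
    )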
  • dimension denotation

    dimension denotation

    As per https://github.com/onnx/onnx/issues/406

    I settled on the word standard_denotation because denotation seems to imply "direct meaning". This helps to convey the idea that such denotation is for the algorithms, not for some programmer's high-level perception. In other words, such dimension annotations will be taken and propagated "literally".

    opened by tjingrant 41
  • Move all training operators to a preview domain

    Move all training operators to a preview domain

    This PR creates a new ONNX domain, ai.onnx.preview.training, for storing training operators while ONNX Training is in preview. Because the ONNX Training spec is in preview and subject to change, it's better to put the related operators in a domain which does not interact with other operators. The main purpose is to allow development around these complicated operators while avoiding polluting other namespaces.

    operator training 
    opened by wschin 39
  • Align to Numpy broadcasting

    Align to Numpy broadcasting

    Change the specs (a short NumPy broadcasting demo follows this item)

    • [x] Add
    • [x] Div
    • [x] Mul
    • [x] Pow
    • [x] Sub
    • [x] And
    • [x] Or
    • [x] Xor
    • [x] Equal
    • [x] Greater
    • [x] Less
    • [x] Gemm
    • [x] Prelu

    Update the Node test cases

    • [x] Add
    • [x] Div
    • [x] Mul
    • [x] Pow
    • [x] Sub
    • [x] And
    • [x] Or
    • [x] Xor
    • [x] Equal
    • [x] Greater
    • [x] Less
    • [x] Gemm
    • [x] Prelu

    Fix ONNX optimizer

    • [x] Fix Fuse Add into Conv pass

    Update the converted test cases (Optional)

    Will do it later. Keep tracking in https://github.com/onnx/onnx/issues/905

    Test

    • [x] Make sure Caffe2 CI is green
    opened by houseroad 35
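    To make the target semantics concrete, here is a small NumPy demo of the multidirectional broadcasting these specs align to (NumPy itself serves as the reference behavior):

    import numpy as np

    a = np.ones((2, 3))   # shape (2, 3)
    b = np.arange(3.0)    # shape (3,), broadcast across the first axis
    c = a + b             # what the updated Add spec computes
    print(c.shape)        # (2, 3)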
  • Introduce shape inference

    Introduce shape inference

    as an optimizer pass, and a proof-of-concept shape inference implementation for Transpose.

    The way that we define the actual shape-inference logic is subject to change, but the way in which it's tied into the optimizer is pretty straightforward and probably won't need a big overhaul. (A sketch of today's public shape-inference API follows this item.)

    opened by anderspapitto 35
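    For reference, shape inference ultimately also became a public API rather than only an optimizer pass; a minimal sketch with today's onnx.shape_inference module (model.onnx is a placeholder path):

    import onnx
    from onnx import shape_inference

    model = onnx.load("model.onnx")
    inferred = shape_inference.infer_shapes(model)
    # inferred.graph.value_info now carries inferred types/shapes
    # for the graph's intermediate values.
    print(inferred.graph.value_info)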
  • Fix failed Windows DLL build and refine the corresponding logic

    Fix failed Windows DLL build and refine the corresponding logic

    The logic in onnx_pb.h is wrong, and the logic in CMakeLists.txt just follows the wrong one.

    ONNX_IMPORT is only set when ONNX_BUILD_SHARED_LIBS or ONNX_BUILD_MAIN_LIB is defined, but only used when ONNX_BUILD_SHARED_LIBS or ONNX_BUILD_MAIN_LIB is not defined. This is not the expected behavior and causes the Windows shared lib build to fail (CI doesn't check it).

    CI log: https://ci.appveyor.com/project/daquexian/dnnlibrary/builds/23876196#L3723 (without this patch, build fails on unresolved symbols of protobuf generated files) https://ci.appveyor.com/project/daquexian/dnnlibrary/builds/23876935#L13092 (with this patch, build passes)

    build 
    opened by daquexian 33
  • ONNX Optimization Rewrite

    ONNX Optimization Rewrite

    I'd like to start off this convo by providing a skeleton of what the optimization framework will look like.

    We first define a set of enums which describe general attributes of the pass. For example we have PassType (e.g. fuse/nop), PassEfficiency (e.g. partial, complete), PassOptimizationType (e.g. memory, compute, stability). These attributes of the pass will help us build better PassManagers.

    A pass contains a couple of methods. It can give you its name as well as its attributes. It has the ability to initialize a pass given a graph as well as finalize it. Finally, it contains the runPass method, which works both on IR and Proto (although we should think about only supporting IR; I don't see a fundamental benefit to optimizing over Proto).

    The runPass method returns a PostPassAnalysis which provides some information about what the pass has done (e.g. did initialization? did finalization? number of transforms applied?). This is useful when deciding to do things such as fixed-point optimization.

    One fundamental type of pass we implement is PredicateBasedPass. A lot of code reuse is happening in the code framework wrt DescendOnGraphAttributes. The way we solve this is by using a PredicateBasedPass where one implements the predicate for when the transform will be applied as well as the transform. Our backend takes care of DescendOnGraphAttributes/PassAnalysis in a way agnostic to the user writing the pass.

    @houseroad What do you think so far? Any design ideas which look wrong?

    opened by ArmenAg 33
  • PyTorch export crash with onnx >= 1.8.0 unless import onnx first on windows

    PyTorch export crash with onnx >= 1.8.0 unless import onnx first on windows

    Similar to ~#2808~ https://github.com/apple/coremltools/issues/920 and https://github.com/onnx/onnx/issues/2940

    The script crashes when onnx is imported after pytorch. It completes successfully when onnx is imported before pytorch, or when the installed onnx version is <= 1.7.0.

    # import onnx  # uncomment and pass
    import torch
    from onnx import ModelProto
    
    class M(torch.nn.Module):
        def forward(self, x, y):
            return x + y
    
    x = torch.randn(2, 3)
    y = torch.randn(2, 3)
    
    import io
    f = io.BytesIO()
    torch.onnx.export(M(), (x, y), f, verbose=True, input_names=['x', 'y'])
    

    Both packages are installed as shown below:

    pip install torch==1.8.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
    pip install onnx==1.8
    

    Based on previous issues, I suspect this is also related to different protobuf versions. Can someone explain what is going on underneath, and how to resolve & prevent this in the future?

    bug 
    opened by BowenBao 31
  • update mypy to latest (WIP)

    update mypy to latest (WIP)

    Description

    • Updates the mypy version from 0.600 to 0.910
    • Removes the protobuf stubs since latest mypy includes them
    • Removes six usages as Python 2 support is deprecated
    • Fixes issues as they arise

    Motivation and Context

    • Allows us to use modern mypy versions with onnx
    • Previous works and more context: #3211
    run release CIs 
    opened by stillmatic 30
  • ONNX in C/C++

    ONNX in C/C++

    Is there a way to open a model in ONNX format in C/C++ and access its details as easily as in Python? If not, is there any immediate plan for such support?

    opened by lburzawa 29
  • to extend OptionalHasElement and OptionalGetElement to accept tensor and sequence types

    to extend OptionalHasElement and OptionalGetElement to accept tensor and sequence types

    Description: extend OptionalHasElement and OptionalGetElement to accept tensor and sequence types.

    Motivation and Context: previously, the two optional ops accepted only optional types. This limits their use in cases where the input can be either optional or another type. This PR generalizes the two ops so they become more flexible.

    operator 
    opened by liqunfu 0
  • import onnx using jupyter-notebook fails

    import onnx using jupyter-notebook fails

    I installed onnx in my Jupyter notebook using !pip install onnx. When I try to import it with import onnx, it fails with the error "Couldn't build proto file into descriptor pool: duplicate file name (onnx/onnx-ml.proto)".

    bug 
    opened by shahidzk1 1
  • [Tracking] Deprecate ONNX Interface for Framework Integration (ONNXIFI)

    [Tracking] Deprecate ONNX Interface for Framework Integration (ONNXIFI)

    Deprecate the ONNX Interface for Framework Integration (ONNXIFI) because the related code hasn't been updated in quite a long time and there have recently been a few vulnerability issues in it (e.g., https://github.com/onnx/onnx/pull/4377). IMO, we should try to deprecate it if no one is really relying on it. Please let me know if anyone is using it. Thanks!

    infrastructure enhancement tracking 
    opened by jcwchen 1
  • How can I export decoder which takes a list of tensors as input to onnx

    How can I export decoder which takes a list of tensors as input to onnx

    Ask a Question

    Question

    Hello. First of all, I'm not good at English, so please understand me.

    I want to convert the weights of monodepth2 to ONNX, so I wrote the code below.

    This model is divided into two parts, encoder and decoder.

    The input of the encoder is an image tensor of shape [1,3,192,640]; the output is a list of five tensors. The input of the decoder is the output of the encoder, and when I evaluated the model, it worked well.

    But when I tried to convert it to ONNX, it failed.

    Evaluating works; exporting does not.

    import argparse, os
    import torch
    from torch.utils.data import DataLoader
    import onnx
    import numpy as np
    
    import networks
    from src.options import MonodepthOptions
    
    parser = argparse.ArgumentParser(description="onnx for monodepth2")
    parser.add_argument("--pretrained_model",
                        type=str,
                        help="put the path of pretrained model")
    
    args = parser.parse_args()
    
    
    encoder_path = os.path.join(args.pretrained_model,"encoder.pth")
    encoder = networks.ResnetEncoder(18, False)
    
    decoder_path = os.path.join(args.pretrained_model, "depth.pth")
    depth_decoder = networks.DepthDecoder(encoder.num_ch_enc)
    
    # encoder_dict = torch.load(encoder_path)
    # model_dict = encoder.state_dict()
    # encoder.load_state_dict({k: v for k, v in encoder_dict.items() if k in model_dict})
    
    # decoder_dict = torch.load(decoder_path)
    # depth_decoder.load_state_dict(decoder_dict)
    
    # encoder.cuda()
    # encoder.eval()
    
    depth_decoder.cuda()
    depth_decoder.eval()
    
    # # Set parameters in b, c, h, w order to match the model input size
    
    # f = torch.rand((1,3,192,640)).cuda()
    # images = encoder(f)
    
    f0_onnx = torch.rand((1, 64, 160, 256)).cuda()
    f1_onnx = torch.rand((1, 64, 80, 128)).cuda()
    f2_onnx = torch.rand((1, 128, 40, 64)).cuda()
    f3_onnx = torch.rand((1, 256, 20, 32)).cuda()
    f4_onnx = torch.rand((1, 512, 10, 16)).cuda()
    
    images = (f0_onnx, f1_onnx, f2_onnx, f3_onnx, f4_onnx)
    
    export_onnx_file = "monodepth.onnx"
    torch.onnx.export(depth_decoder,
                      images,
                      export_onnx_file,
                      export_params=True,
                      do_constant_folding=True,
                      opset_version=10,
                      input_names=['encoder_output_0', 'encoder_output_1', 'encoder_output_2', 'encoder_output_3', 'encoder_output_4'],
                      output_names=['decoder_output_0', 'decoder_output_1', 'decoder_output_2', 'decoder_output_final'],
                      dynamic_axes={'encoder_output_0': {0: 'batch_size'},
                                    'encoder_output_1': {0: 'batch_size'},
                                    'encoder_output_2': {0: 'batch_size'},
                                    'encoder_output_3': {0: 'batch_size'},
                                    'encoder_output_4': {0: 'batch_size'},
                                    'decoder_output_0': {0: 'batch_size'},
                                    'decoder_output_1': {0: 'batch_size'},
                                    'decoder_output_2': {0: 'batch_size'},
                                    'decoder_output_final': {0: 'batch_size'}})
    
    # onnx_decoder = onnx.load("monodepth.onnx")
    # onnx.checker.check_model(onnx_decoder)
    # print("Done: converting decoder to onnx format!")
    
    

    And this is the log:

    [email protected]:~/ws/src/mono# python3 to_onnx.py --pretrained_model files/weights/models/weights_1
    /usr/local/lib/python3.8/dist-packages/torchvision/models/_utils.py:135: UserWarning: Using 'weights' as positional parameter(s) is deprecated since 0.13 and will be removed in 0.15. Please use keyword parameter(s) instead.
      warnings.warn(
    /usr/local/lib/python3.8/dist-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing weights=None.
      warnings.warn(msg)
    Traceback (most recent call last):
      File "to_onnx.py", line 51, in <module>
        torch.onnx.export(depth_decoder,
      File "/usr/local/lib/python3.8/dist-packages/torch/onnx/__init__.py", line 350, in export
        return utils.export(
      File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 163, in export
        _export(
      File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 1074, in _export
        graph, params_dict, torch_out = _model_to_graph(
      File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 727, in _model_to_graph
        graph, params, torch_out, module = _create_jit_graph(model, args)
      File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 602, in _create_jit_graph
        graph, torch_out = _trace_and_get_graph_from_model(model, args)
      File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 517, in _trace_and_get_graph_from_model
        trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
      File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 1175, in _get_trace_graph
        outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 127, in forward
        graph, out = torch._C._create_graph_by_tracing(
      File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 118, in wrapper
        outs.append(self.inner(*trace_inputs))
      File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1118, in _slow_forward
        result = self.forward(*input, **kwargs)
    TypeError: forward() takes 2 positional arguments but 6 were given

    Is there a way to solve this problem?

    Further information

    model : monodepth2 https://github.com/nianticlabs/monodepth2

    os : docker nvidia/cuda:11.7.0-devel-ubuntu18.04


    question converters 
    opened by hyuny223 2
  • SVD and SVDVals ops

    SVD and SVDVals ops

    SVD and SVDVals cover the PyTorch, NumPy, and TensorFlow SVD.

    For computing just the singular values versus the whole factorization:

    • NumPy and TensorFlow: a single operation with a boolean, compute_uv
    • PyTorch: split into two operations, svd and svdvals

    I decided to split into two operations so the U and Vh outputs don't have to be optional. (A NumPy demo follows this item.)

    NumPy: https://numpy.org/doc/stable/reference/generated/numpy.linalg.svd.html
    PyTorch: https://pytorch.org/docs/stable/generated/torch.linalg.svd.html https://pytorch.org/docs/stable/generated/torch.linalg.svdvals.html
    TensorFlow: https://www.tensorflow.org/api_docs/python/tf/linalg/svd

    previous discussion: https://github.com/pytorch/pytorch/issues/81084 https://github.com/onnx/onnx/issues/3839

    opened by williamberman 0
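    A small NumPy demo of the two modes discussed above; compute_uv toggles between the full factorization and singular values only:

    import numpy as np

    a = np.random.rand(4, 3)
    u, s, vh = np.linalg.svd(a)                  # full factorization
    s_only = np.linalg.svd(a, compute_uv=False)  # singular values only
    assert np.allclose(s, s_only)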
Releases (v1.12.0)
  • v1.12.0(Jun 18, 2022)

    ONNX v1.12.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.

    Key Updates

    ai.onnx opset version increased to 17 with the following changes:

    • New operators (ai.onnx):
      - LayerNormalization (#4076)
      - SequenceMap (#3892)
      - Signal Operators: DFT, HannWindow, HammingWindow, BlackmanWindow, MelWeightMatrix, STFT (#3741)
    • Operator Updates (ai.onnx):
      - [Scan] Remove unused type constraint I for newer Scan (opset 9+) (#4012)

    Shape inference enhancements

    • Extend InferShapes to expose result of data propagation (#3879)
    • Update shape inference for constant of shape (#4141)
    • Catch missing input type in function shape inference (#4123)
    • Add shape inference for Expand using symbolic shape input (#3789)
    • Fix Expand shape inference: stop rank inference if the shape is symbolic (#4019)

    Bug fixes and infrastructure improvements

    • Fix a bug in _get_initializer_tensors() (#4118)
    • Fix bug of resizeShapeInference for Resize13 (#4140)
    • Fix bug in SCE function body (#4038)
    • Use correct pytest types in backend (#3990) (#3994)
    • Checker should validate the node's inputs/outputs have names when its formal parameter is Variadic (#3979)
    • Loose NumPy requirement to grant more flexibility (#4059)
    • Fix crash: Skip unused value_info for version_converter (#4079)
    • Use %d for integer in version_converter (#4182)
    • Extend parser to handle other types (#4136)

    Documentation updates

    • Add documentation about functions to IR.md (#4180)
    • Clarify add new op documentation (#4150)
    • Clarify NonZero behavior for scalar input in spec (#4113)
    • Update shape inference documentation (#4163)
    • Fix a minor typo in operator Gather documentation (#4125)
    • Fix typo in CIPipelines.md (#4157)
    • Fix typo in slice doc (#4117)
    • Fix grammar in documents (#4094)
    • Clearer description of Slice (#3908)
    • Add OperatorSetId definition in docs (#4039)
    • Clean up protocol buffer definitions (#4201)
    • Change the wrong words of second layer input (#4044)
    • Clarify that op_type is case sensitive (#4096)

    Installation

    You can upgrade to the latest release using pip install onnx --upgrade or build from source following the README instructions.

    Notes

    • Beware of the protobuf version gap issue (building onnx with protobuf>=3.12 is not compatible with older protobuf)

    Contributors

    Thanks to these individuals for their contributions in this release since last 1.11.0 release. (Contributor list obtained with: https://github.com/onnx/onnx/graphs/contributors?from=2022-02-08&to=2022-05-24&type=c): @jcwchen, @gramalingam, @xuzijian629, @garymm, @diyessi, @liqunfu, @jantonguirao, @daquexian, @fdwr, @andife, @wschin, @xadupre, @xkszltl, @snnn

    Source code(tar.gz)
    Source code(zip)
  • v1.11.0(Feb 17, 2022)

    ONNX v1.11.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.

    Key Updates

    ai.onnx opset version increased to 16 with the following changes:

    • New Operators (ai.onnx):
    • Operator Updates (ai.onnx):
      • Identity, add optional type support.
      • If, add optional data type support for output.
      • LeakyRelu, add bfloat16 type support.
      • Loop, add optional data type support for initial value and output.
      • PRelu, add bfloat16 type support.
      • RoiAlign, add an attribute coordinate_transformation_mode, correct the default behavior.
      • Scan, add bfloat16 type support for output.
      • ScatterElements, add reduction attribute.
      • ScatterND, add reduction attribute.
      • Where, extend Where op to permit bfloat16 types.
      • GreaterOrEqual, add bfloat16 type support.
      • LessOrEqual, add bfloat16 type support.

    ai.onnx.ml opset version increased to 3 with the following changes:

    New functionality:

    • A new Model Hub for users to get started with state-of-the-art pre-trained ONNX models from the ONNX Model Zoo or for researchers and model developers to share models. https://github.com/onnx/onnx/pull/3712
    • Add compose utility to help with creating and combining models out of several graphs (a usage sketch follows this list). https://github.com/onnx/onnx/pull/3820
    • Add FunctionBuilder utility class to help construct function ops. https://github.com/onnx/onnx/pull/3882
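    A minimal sketch of the compose utility mentioned above; the file names and the io_map pairing (connecting one model's output name to the other's input name) are placeholders:

    import onnx
    from onnx import compose

    m1 = onnx.load("preprocess.onnx")
    m2 = onnx.load("classifier.onnx")
    # Feed m1's output "features" into m2's input "input".
    merged = compose.merge_models(m1, m2, io_map=[("features", "input")])
    onnx.checker.check_model(merged)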

    Shape inference enhancements

    • Extend optional type inference. #3756
    • Make shape inference handle MapProto. #3772
    • Improve rank inference for Expand op. #3807
    • Enhance shape inference: ParseData/Transpose/QuantizeLinear. #3806
    • Honor existing dim_param in shape inference. #3896
    • Shape inference for functions. #3722
    • Use symbolic input for shape inference of ConstantOfShape. #3784

    Bug fixes and infrastructure improvements

    • Use MSVC Runtime as dll for official ONNX Windows release. #3644
    • Simplify common version converter adapter design patterns. #3761
    • Use scalar for OneHot's depth to prevent confusion. #3774
    • Correct wrong subgraph test example for If operator. #3798
    • [Dup] Add SpaceToDepth test cases. #3786
    • Fix error in Pad op convert. #3778
    • Fix some examples for ArgMax. #3851
    • Shape inference should not propagate missing optional outputs. #3815
    • Check negative index for attributes of Slice-1. #3810
    • Cleanup type cast related warnings. #3801
    • Replace whitelist by safelist. #3900
    • Fix weekly/Linux CI failures: correct skip list and remove old numpy related code. #3916
    • Fix old ConvTranspose shape inference and softmax upgrader. #3893
    • Fix Linux i686 Release CI failure due to the latest NumPy. #3918
    • Simplify function definition of context-dependent functions. #3882
    • Migration to using main branch. #3925
    • Append dim even both dim value and param are not set. #3828
    • Bump to 10.15 in AzurePipeline because 10.14 was deprecated. #3941
    • Six: remove all references. #3926
    • For issue 3849 to confirm that type check is performed during checker. #3902
    • Remove testing ort-nightly for Mac Python 3.6 due to unsupported ort-nightly. #3953
    • Mypy: update to 0.760 and remove vendored protobuf stubs. #3939
    • Upgrade Windows version in AzurePipeline since 2017 was deprecated. #3957
    • Version converter for Softmax should not produce empty shape. #3861
    • Fix Cppcheck warning about memset on NULL backend_ids. #3970
    • Bug fix of extractor which misses local functions. #3954
    • Add bfloat16 type to a few ops missing it. #3960

    Documentation updates

    • ONNX Hub Docs. #3712
    • Clarify definition of a tensor in IR docs. #3792
    • Document that Where supports multidirectional broadcasting. #3827
    • Sync build documentation in CONTRIBUTING.md. #3859
    • [CI][Doc] Add CI Pipelines doc/node tests verification. #3780
    • Remind release manager to remove old onnx-weekly packages after release. #3923
    • Fix the bug of shape in docs. #3927
    • Clean up README. #3961
    • Remove documentation about Python 2. #3963

    Installation

    You can upgrade to the latest release using pip install onnx --upgrade or build from source following the README instructions.

    Notes

    • Beware of the protobuf version gap issue (building onnx with protobuf>=3.12 is not compatible with older protobuf)

    Additional Notes

    • ONNX will drop Python 3.6 support in the next release because it has reached EOL.
    • ONNX will upgrade its NumPy version to 1.21.5 before next release to resolve vulnerability issue for old NumPy 1.16.6.
    • There will be infrastructure change to Linux packaging system to replace manylinux2010 with manylinux2014 or manylinux2.

    Contributors

    Thanks to these individuals for their contributions in this release since last 1.10.0 release. (Contributor list obtained with: https://github.com/onnx/onnx/graphs/contributors?from=2021-07-30&to=2022-02-08&type=c): @jcwchen, @gramalingam, @garymm, @mhamilton723, @TomWildenhain-Microsoft, @neginraoof, @xuzijian629, @liqunfu, @gwang-msft, @chudegao, @AlexandreEichenberger, @rajeevsrao, @matteosal, @stillmatic, @askhade, @liuyu21, @jantonguirao, @shinh, @kevinch-nv, @shubhambhokare1, @hwangdeyu, @jiafatom, @postrational, @snnn, @jackwish

    Source code(tar.gz)
    Source code(zip)
  • v1.10.2(Oct 26, 2021)

  • v1.10.1(Aug 2, 2021)

    This release is a patch release based on v1.10.0.

    Bug fix:

    • Include requirements.txt in source distribution https://github.com/onnx/onnx/pull/3623
    Source code(tar.gz)
    Source code(zip)
  • v1.10.0(Jul 31, 2021)

    ONNX v1.10.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.

    Key Updates

    • Added new Optional and SparseTensor types. https://github.com/onnx/onnx/pull/3407 https://github.com/onnx/onnx/pull/3398
    • Added model local functions to ModelProto. https://github.com/onnx/onnx/pull/3532
    • Shape inference enhancements for Reshape, Squeeze, NonZero, DynamicQuantizeLinear.
    • Introduce symbolic shape inference support. https://github.com/onnx/onnx/issues/3506
    • New version converter tests. https://github.com/onnx/onnx/pull/3344
    • Add aarch64 wheel build support. https://github.com/onnx/onnx/pull/3414
    • Update ONNX IR version to 8 and opset version to 15. https://github.com/onnx/onnx/pull/3587

    IR Updates

    • Added two new types to ONNX type system. Optional and SparseTensor https://github.com/onnx/onnx/pull/3407 https://github.com/onnx/onnx/pull/3398
    • Extend model proto to include model local functions. https://github.com/onnx/onnx/pull/3532

    Opset version 15

    • New Function Operators:
      • Bernoulli https://github.com/onnx/onnx/pull/3431
      • CastLike https://github.com/onnx/onnx/pull/3558
    • New Operators:
    • Operator Updates:
      • Add additional type constraints in BatchNormalization. https://github.com/onnx/onnx/pull/3545
      • Add bfloat16 support for Pow. https://github.com/onnx/onnx/pull/3412
      • Extend Shape to return a slice using optional attributes start,end. https://github.com/onnx/onnx/pull/3580

    API

    • Symbolic shape inference support. https://github.com/onnx/onnx/issues/3506
      • Symbol generation https://github.com/onnx/onnx/pull/3518
      • Data propagation https://github.com/onnx/onnx/pull/3551 https://github.com/onnx/onnx/pull/3593
    • Shape inference enhancements
      • Add shape inference for NonZero. https://github.com/onnx/onnx/pull/3364
      • Add shape inference for Dynamic QuantizeLinear. https://github.com/onnx/onnx/pull/3539
      • Update Reshape shape inference. https://github.com/onnx/onnx/pull/3592
      • Fix shape inference for Squeeze. https://github.com/onnx/onnx/pull/3516
        • Fix shape inference for Squeeze without axes. https://github.com/onnx/onnx/pull/3465
    • Expose model parser API in Python (onnx.parser); a usage sketch follows this list. https://github.com/onnx/onnx/pull/3540
    • Extend model proto to include model local functions.
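    A minimal sketch of the textual syntax accepted by the new onnx.parser API (the model itself is arbitrary):

    import onnx
    from onnx import parser

    model = parser.parse_model('''
        <ir_version: 8, opset_import: ["" : 15]>
        agraph (float[N, 128] X, float[128, 10] W) => (float[N, 10] Y)
        {
            T = MatMul(X, W)
            Y = Softmax(T)
        }
    ''')
    onnx.checker.check_model(model)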

    Infrastructure

    • Update protobuf version to 3.16. https://github.com/onnx/onnx/pull/3571
    • Add README contents to package description. https://github.com/onnx/onnx/pull/3376
    • Add requirements.txt to onnx repo. https://github.com/onnx/onnx/pull/3448
    • Add aarch64 wheel build support. https://github.com/onnx/onnx/pull/3414
    • Version converter support for recursion into subgraphs. https://github.com/onnx/onnx/pull/3474
    • Update ONNX examples to python3. https://github.com/onnx/onnx/pull/3450

    Bug fixes

    • Spec clarification for MatMulInteger and QLinearMatMul. https://github.com/onnx/onnx/pull/3585
    • Extend strict_model for ONNX checker. https://github.com/onnx/onnx/pull/3348
    • Always set the output of Shape to be rank-1. https://github.com/onnx/onnx/pull/3394
    • BatchNormalization outputs updated for training mode. https://github.com/onnx/onnx/pull/3379
    • Bugfix for proto utils and update checker error messages. https://github.com/onnx/onnx/pull/3373
    • Fix compilation warnings. https://github.com/onnx/onnx/pull/3616

    Installation

    You can upgrade to the latest release using pip install onnx --upgrade or build from source following the README instructions.

    Notes

    • Beware of the protobuf version gap issue (building onnx with protobuf>=3.12 is not compatible with older protobuf)

    Contributors

    Thanks to these individuals for their contributions in this release: @jcwchen, @askhade, @gramalingam, @neginraoof, @matteosal, @postrational, @garymm, @yuslepukhin, @fdwr, @jackwish, @manbearian, @etusien, @impactaky, @rajeevsrao, @prasanthpul, @take-cheeze, @chudegao, @mindest, @yufenglee, @annajung, @hwangdeyu, @calvinmccarter-at-lightmatter, @ashbhandare, @xuzijian629, @IceTDrinker, @mrry

    Source code(tar.gz)
    Source code(zip)
  • v1.9.0(Apr 19, 2021)

    ONNX v1.9.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! You may learn more about the project, who is involved and what tools are available at the onnx.ai site.

    Key Updates

    • Removed Optimizers from ONNX packages https://github.com/onnx/onnx/pull/3288
    • Selective schema loading by specific opset_version https://github.com/onnx/onnx/pull/3266
    • Updates to external data helpers (more options to control which attributes are converted to external data and whether a model should be saved; a usage sketch follows this list) https://github.com/onnx/onnx/pull/3280
    • New adapter for opset version converter https://github.com/onnx/onnx/pull/3343
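    A minimal sketch of the external data helpers mentioned above; the file names and the 1 KB size_threshold are arbitrary:

    import onnx
    from onnx.external_data_helper import convert_model_to_external_data

    model = onnx.load("big_model.onnx")
    # Move initializers larger than 1 KB out of the protobuf into one side file.
    convert_model_to_external_data(model, all_tensors_to_one_file=True,
                                   location="weights.bin", size_threshold=1024)
    onnx.save(model, "big_model_external.onnx")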

    Opset version 14

    • New operator:
      • HardSwish https://github.com/onnx/onnx/pull/3332
      • Trilu https://github.com/onnx/onnx/pull/3291
    • Extended supported types to include uint8, int8, uint16, and int16. https://github.com/onnx/onnx/pull/3334
    • Added allowzero attribute to Reshape operator https://github.com/onnx/onnx/pull/3113
    • Allowed recurrent operations to be batchwise https://github.com/onnx/onnx/pull/3217
    • Expanded CumSum to float16 and bfloat16 data types https://github.com/onnx/onnx/pull/3195
    • Expanded Relu to all signed data types https://github.com/onnx/onnx/pull/3141
    • Added training-mode support to BatchNorm https://github.com/onnx/onnx/pull/3333

    API

    • onnx.OnnxParser Parser for a textual syntax of ONNX models https://github.com/onnx/onnx/pull/3194

    Infrastructure

    • Removed Python 3.5 in all release pipelines https://github.com/onnx/onnx/pull/3353
    • Added Python 3.9 to all release pipelines https://github.com/onnx/onnx/pull/3352
    • Added weekly CI: provide ONNX TestPyPI packages for verification #3283
    • Reduced binary size for Linux and Mac package https://github.com/onnx/onnx/pull/3337
    • Added check for uploaded/generated backend test data in release CIs https://github.com/onnx/onnx/pull/3274
    • Enabled no exception build and updated exception handling for DataTypeUtils https://github.com/onnx/onnx/pull/3265

    Bug fixes

    • Added missing test for ConvInteger without padding https://github.com/onnx/onnx/pull/3288
    • Updated Resize op test to opset 13 https://github.com/onnx/onnx/pull/3361
    • Expanded ir_pb_converter to empty shape https://github.com/onnx/onnx/pull/3279

    Installation

    You can upgrade to the latest release using pip install onnx --upgrade or build from source following the instructions on GitHub.

    Notes

    • Be aware of the protobuf version gap issue (e.g., building onnx with protobuf>=3.12 is not compatible with older protobuf)

    Contributors

    Thanks to these individuals for their contributions in this release: @jcwchen, @askhade, @postrational, @etusien, @wschin, @prasanthpul, @gramalingam, @daquexian, @BowenBao, @pranav-prakash, @matteosal, @linkerzhang, @annajung, @neginraoof, @tianleiwu, @tomdol

    Source code(tar.gz)
    Source code(zip)
  • v1.8.1(Jan 30, 2021)

    This release is a patch release based on v1.8.0.

    Bug fixes:

    • https://github.com/onnx/onnx/pull/3169 To resolve a memory crash on Windows, register Python exceptions and update exception handling
    • https://github.com/onnx/onnx/pull/3171 Fix bugs in external data helpers and add size thresholds for converting
    • https://github.com/onnx/onnx/pull/2961 Fix build issues on some Linux distributions due to a hard dependency on python2
    • https://github.com/onnx/onnx/pull/3221 Fix mypy wrapper error while using ONNX as a submodule
    • Solve protobuf error while importing onnx on macOS Catalina.

    API change: onnx.shape_inference no longer throws a shape inference error by default. If you want to see shape inference errors, use onnx.shape_inference.infer_shapes(onnx_model, strict_mode=True).

    Release:

    • Mac: The minimum supported version of macOS has been moved from 10.9 to 10.12.
    • Pipelines: Linux and Mac release pipelines have been moved from Travis-CI under onnx/wheel_builder to GitHub Action under onnx/onnx
    Source code(tar.gz)
    Source code(zip)
  • v1.8.0(Nov 7, 2020)

    ONNX v1.8 is now available with exciting enhanced features! You may learn more about the project, who is involved and what tools are available at the onnx.ai site. We would like to thank every community member for contributing to the project!

    Key Updates

    • Windows conda package is now available in v1.8.0 Release (last supported version was v1.1.1)
    • Training
      • Added Differentiable tags to make Gradient operator better defined https://github.com/onnx/onnx/pull/2723, https://github.com/onnx/onnx/pull/2893, https://github.com/onnx/onnx/pull/2911, https://github.com/onnx/onnx/pull/2954
      • Removed GraphCall; eliminated need to implement GraphCall https://github.com/onnx/onnx/pull/2964
      • Created a tool and example for users to use TrainingInfoProto for training https://github.com/onnx/onnx/pull/3008
    • Shape Inference and Checker
      • Large model (>2GB model) support added for checker and shape_inference https://github.com/onnx/onnx/pull/2744
      • Graph level shape inference fixes to patch the IR gap introduced since IR version 4 https://github.com/onnx/onnx/pull/3023
      • Node level shape inference fixes for operators
    • Version Converter
      • More operators supported https://github.com/onnx/onnx/pull/2664
    • General Features
      • Added serialization for inputs and outputs of Sequence and Map data types https://github.com/onnx/onnx/pull/2581
      • Added programmatic access to version-table and extend make-model https://github.com/onnx/onnx/pull/2918
      • Added size check to make_tensor https://github.com/onnx/onnx/pull/2987

    Opset version 13

    API

    • onnx.shape_inference now accepts a model path and supports >2GB models for shape inference (a usage sketch follows). https://github.com/onnx/onnx/pull/3012
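    A minimal sketch of the path-based API; the file names are placeholders:

    import onnx.shape_inference

    # Reads the model from disk and writes the shape-inferred model back out,
    # avoiding the 2GB in-memory protobuf limit.
    onnx.shape_inference.infer_shapes_path("big_model.onnx", "big_model_inferred.onnx")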

    Infrastructure

    • CI improvements for reliability
    • Moved to AzurePipelines to speed up runs
    • pybind11 updated to 2.6.0 to prevent segmentation fault on Windows

    Bug fixes

    • https://github.com/onnx/onnx/pull/2888 Return empty string from ToDataTypeString() when tensor_data_type not found
    • https://github.com/onnx/onnx/pull/2946 Resolve segfault on Input without tensor data in ConstantofShape
    • https://github.com/onnx/onnx/pull/2950 Add nullptr check to type inference methods to avoid segfaults
    • https://github.com/onnx/onnx/pull/2983 Fix type inference issue (scalar initializers and Resize)
    • https://github.com/onnx/onnx/pull/3000 Fix ConvTranspose: enhance attribute check
    • https://github.com/onnx/onnx/pull/3005 Fix shape inference of scalar ConstantOfShape
    • https://github.com/onnx/onnx/pull/3014 Fix shape inference
    • https://github.com/onnx/onnx/pull/3023 IR gap issue has been fixed in checker and shape inference

    Installation

    You can simply upgrade using pip install onnx --upgrade or build from source following the instructions on GitHub.

    Notes

    • onnx.optimizer is moving to another repo: https://github.com/onnx/optimizer. It will be removed from onnx/onnx in ONNX 1.9.
    • onnx.version_converter has IR gap issue - cannot use input from initializer: https://github.com/onnx/onnx/pull/3007
    • onnx.shape_inference updates both output and value_info. It will only update the original output in future update: https://github.com/onnx/onnx/issues/3069

    Contributors

    Thanks to these individuals for their contributions in this release: jcwchen, askhade, wschin, vinitra, prasanthpul, gramalingam, daquexian, rajeevnalawadi, sveta-levitan, ashbhandare, chinhuang007, KsenijaS, shinh, BowenBao, shubhambhokare1, pranav-prakash, prabhat00155, pluradj, matteosal, jackwish, Yukigaru, H1Gdev, 462630221, natke, kevinch-nv, RandySheriffH, souptc, fdwr, HectorSVC, jspisak, codemzs, yuslepukhin, linkerzhang

    Source code(tar.gz)
    Source code(zip)
    onnx-1.8.0.tar.gz(4.98 MB)
    onnx-1.8.0.zip(4.97 MB)
  • v1.7.0(May 9, 2020)

    ONNX v1.7 is now available with exciting new features! We would like to thank everyone who contributed to this release! You may learn more about the project, who is involved and what tools are available at the onnx.ai site.

    Change Log

    Major changes and updates since the v1.6.0 release:

    Training Support, as a tech preview

    • A set of new training features are introduced to represent neural network models in the process of model training.
    • A protobuf message TrainingInfoProto is added to store training information, including the algorithm and initializers, as well as new operators Gradient and GraphCall and new functions describing most commonly used Loss functions and Optimizers, all in the domain ai.onnx.preview.training.
    • The new spec allows one to create a model training task or a partially trained model in one framework, then export it in ONNX and load into a runtime or another framework where the training can proceed, with the expectation of theoretically similar outcome to the model trained in the original framework.
    • Note the converters do not support training yet. The goal of this tech preview is to test the new spec and to enable converters to add full support in future releases.

    Operator changes

    • Opset has been updated to version 12.

    • Preview training opset has been added as version 1.

    • New operators:

      • ONNX
        • Celu (https://github.com/onnx/onnx/pull/2575) (https://github.com/onnx/onnx/pull/2573)
        • Einsum (https://github.com/onnx/onnx/pull/2504)
        • GreaterOrEqual (https://github.com/onnx/onnx/pull/2606)
        • LessOrEqual (https://github.com/onnx/onnx/pull/2606)
        • NegativeLogLikelihoodLoss (https://github.com/onnx/onnx/pull/2551) (https://github.com/onnx/onnx/pull/2573) (https://github.com/onnx/onnx/pull/2725)
        • SoftmaxCrossEntropyLoss (https://github.com/onnx/onnx/pull/2573) (https://github.com/onnx/onnx/pull/2667) (https://github.com/onnx/onnx/pull/2680) (https://github.com/onnx/onnx/pull/2690) (https://github.com/onnx/onnx/pull/2696) (https://github.com/onnx/onnx/pull/2700) (https://github.com/onnx/onnx/pull/2703) (https://github.com/onnx/onnx/pull/2725)
        • Pow (https://github.com/onnx/onnx/pull/2666)
      • ONNX preview training
        • Gradient (https://github.com/onnx/onnx/pull/2314)
        • GraphCall (https://github.com/onnx/onnx/pull/2314)
        • Adagrad (https://github.com/onnx/onnx/pull/1955)
        • Adam (https://github.com/onnx/onnx/pull/1970)
        • SG with Momentum (https://github.com/onnx/onnx/pull/1959)
    • Updated operators:

      • ONNX
        • ArgMax (https://github.com/onnx/onnx/pull/2461)
        • ArgMin (https://github.com/onnx/onnx/pull/2461)
        • Clip (https://github.com/onnx/onnx/pull/2532)
        • Constant (https://github.com/onnx/onnx/pull/2592)
        • Dropout (https://github.com/onnx/onnx/pull/2568) (https://github.com/onnx/onnx/pull/2725)
        • GatherND (https://github.com/onnx/onnx/pull/2585)
        • Max (https://github.com/onnx/onnx/pull/2608)
        • MaxPool (https://github.com/onnx/onnx/pull/2510)
        • Min (https://github.com/onnx/onnx/pull/2608)
        • ReduceMax (https://github.com/onnx/onnx/pull/2516)
        • ReduceMin (https://github.com/onnx/onnx/pull/2516)
    • General Features

      • Operator registration APIs are updated to support dynamic function body (sub-graph) registration. https://github.com/onnx/onnx/blob/d343755dfdfaae46ccdd1591076205e1bfaa67bf/onnx/defs/schema.h#L674
      • Function body graphs are extended to be able to rely on multiple external operator sets. https://github.com/onnx/onnx/blob/master/onnx/onnx-operators.proto#L77
      • Some of the operators (for example, all loss functions) added are actually “functions”, as it’s strongly advocated to add functions instead of (primitive) ops.
      • The model checker is enhanced (https://github.com/onnx/onnx/pull/2367)
        • Call shape-inference to do the extra-checking performed by the type-and-shape-inference methods of ops
        • Check that the typing constraints specified by the op schema are satisfied
        • Infer output types of nodes from the typing constraints specified by the op schema
      • Documentation enhancement
        • Add function description in IR.md (#2596)
        • Add external tensor data in IR.md (#2323)
        • Update documentation Split (#2544), QLinearConv (#2464), Loop (#2337), NonZero and Slice (#2429)

    Bug fixes

    • Fix the attribute types section in IR.md (#2590)
    • Fix a bug in ScatterND shape inference (#2577)
    • Copy sizes in some optimizers to remain shape information (#2574)
    • Fix the intermediate zero calculation for DynamicQuantizeLinear (#2556)
    • Fix Slice op’s shape inference logic (#2526)
    • Correct the order of arguments of InferShapes (#2500)
    • Fix the optimize pass of fuse_consecutive_transposes (#2471)
    • Fix fuse_consecutive_concat order bug in onnx optimizer (#2447)
    • Keep symbolic dims in Concat with a single input (#2418)
    • Fix broken error message string formatting in softmax shape inferencing (#2403)
    • Fix bug in function body verifier (#2390)
    • Fix shape inference for Split with split attribute (#2328)

    Installation

    You can simply upgrade with pip using the following command, or build from source following the instructions on GitHub.

    pip install onnx --upgrade

    Commits and Pull Requests Since v1.6.0

    You can find all the commits and pull requests on Github, https://github.com/onnx/onnx/pulls?q=is%3Apr+milestone%3A1.7+

    Additional Notes

    Python 2.7 support will be deprecated in the ONNX 1.8 release. Please plan accordingly.

    Source code(tar.gz)
    Source code(zip)
  • v1.6.0(Sep 28, 2019)

    ONNX v1.6 is now available! We would like to thank everybody who has contributed to this release! You may learn more about the project, who is involved and what tools are available at the onnx.ai site.

    Changelog

    Major changes and updates since the v1.5.0 release:

    Graph representation

    • Sequence and map types are now available in ONNX (previously only available in ONNX-ML). (#2249)
    • Sparse tensor support has been added. It should significantly reduce the storage size for models with many zero weights. See here for example usage. (#2019)
    • ONNX IR version updated to version 6 to reflect support for new types.

    Operators

    Bug Fixes

    • Fix resize shape inference issue in opset10 (#2294)
    • Fix extra collect_snippets warning (#2307)
    • Fix link to community docs in readme (#2261)
    • Fix segfault in tile shape inference (#2221)
    • Fix errors in RoiAlign shape inference code (#2167)
    • Fix globalpool output shape (#2147)
    • Fix inconsistency in describing graph's initializer (#2115)
    • Fix NN defs file (#2083)
    • Fix torchvision installation (#2054)
    • Fix bug that kernel_shape rather than effective_kernel_shape is used in dilated conv (#2043)
    • Fix auto_pad shape inference bug (#2028)
    • fix macro ONNX_DISALLOW_COPY_AND_ASSIGN bug (#2017)
    • Fix shape inference logic for TopK operator (#2005)
    • Fix a shapeinference bug in upsample v9/10 (#1969)

    Installation

    You can simply upgrade with pip using the following command, or build from source following the instructions on GitHub.

    pip install onnx --upgrade

    Commits and Pull Requests Since v1.5.0

    • Fix spec and shape inference for Unsqueeze op (#2347)
    • Bump NMS version for avoiding regression in existing models (#2348)
    • Relax IF's shape inference rule (#2345)
    • Clarify behavior in ConvTranspose (#2343)
    • Fix node test case model for Gemm scalar bias case (#2342)
    • Update pybind (#2340)
    • Update gen_doc script to validate proto3 files (#2122)
    • Fix some backend tests (#2335)
    • Gemm optional bias (#2330)
    • Changes for AIX platform (#1913)
    • Updated test cases for reshape (#2127)
    • Replace is by == (#2326)
    • Updated docs for strides and dilations attributes (#2291)
    • Revamped test cases for Gemm (#2060)
    • Add more shape inference tests for Logical operators to improve coverage (#2133)
    • Change incorrect use of ValueError to TypeError (#2304)
    • Support dynamic 'pads' and 'value' in Pad operator (#2031)
    • Update IR doc to clarify initializers are permitted as node inputs (#2320)
    • Avoid uses of special chars (#2315)
    • Regenerate ONNX proto and add release date to ver 6 IR (#2316)
    • Add description of default type about y_zero_point (#2110)
    • Support make_attribute empty string (#2129)
    • More unsqueeze tests (#2200)
    • Fix resize shape inference issue in opset10 (#2294)
    • Sequence related ops (#2249)
    • Add helper function update_inputs_outputs_dims to tools (#2148)
    • Update documentation about required input output types (#2310)
    • Shape inference for NMS (#2269)
    • Fix extra collect_snippets warning (#2277) (#2307)
    • Fix shape inference function (#2296)
    • Fix the buffer overflow problem in shape inference logic of Squeeze op
    • Support for negative indices in 'Gather'
    • Fix collect_snippets warnings (#2277)
    • Update printable_graph in helper.py to output details of initializers that do not have matching graph inputs. (#2135)
    • Test int64 input type for 'where' op (#2253)
    • Supporting negative axes for all existing onnx ops (#2281)
    • Update managingexperimentalops.md (#1981)
    • Fix link to community docs in readme (#2261)
    • Move map and sequence types to onnx domain
    • Improve compatibility with proto3 and enable reading attributes (#2288)
    • Remove type info for loop variadic input in Loop op used to compose the Range op (#2287)
    • Add Foundation WG to working-groups.md (#2276)
    • Fix testdata model for CumSum. Add exclusive attribute. (#2271)
    • Support GatherND operator in ONNX (#2106)
    • Support ScatterND operator in ONNX (#2220)
    • Add Det to ONNX (#2233)
    • Update the description of nearest_mode of resize op (#2257)
    • Adding sparse tensor to ONNX (#2019)
    • Support Range operator in ONNX (#2242)
    • Update resize op (#2057)
    • Add function to fuse dynamic quantization graph into 1 node (#2187)
    • Update logo_request.md (#2231)
    • Update Clip in opset 11 to support min/max as inputs instead of attributes (#2096)
    • Fix segfault in tile shape inference (#2221)
    • Update onehot shape inference to reflect the spec for depth input (#2224)
    • Add GatherElements Op and Rename ScatterElements (#2143)
    • Unique (#2141)
    • Clarify dimension variable scoping (#2211)
    • Liqun/topk sort (#2126)
    • Update document for NMS (#2193)
    • Handle negative 'axis' value in Split type and shape inferencing (#2177)
    • Depth to space shuffle order (#2163)
    • Minor updates to fix links in readme (#2189)
    • Add check to disallow squeezing input axes which are not 1 (#2204)
    • Clarify ambiguity in gather spec regarding indices expectation (#2202)
    • Fix some minor issues in IR.md and Versioning.md (#2108)
    • Skip install typing package for python >=3.5 (#2199)
    • Member Company logo guidelines (#2196)
    • Remove link to outdated issue for contributions wanted (#2186)
    • Create sigs.md (#2103)
    • Minor format update (#2180)
    • Add more types support for Equal op (#2176)
    • Update AddNewOP document. (#2172)
    • Add missing space (#2150)
    • Python api example typo fix (#2155)
    • Fix errors in RoiAlign shape inference code (#2167)
    • TensorProto::INT8 & INT16 were missed here (#2164)
    • Fix LabelEncoder's shape inference (#2170)
    • Fixing a unit test in Cumsum Operator (#2157)
    • [New Operator] CumSum (#2030)
    • Fix globalpool output shape (#2147)
    • Expose ONNX_ML build option to python (#2138)
    • Missing newline fix (#2128)
    • Avoid unnecessary copies of names by checker (#2098)
    • Update qlinear conv test (#2120)
    • Add shape inference for LinearClassifier (#2077)
    • Fix inconsistency in describing graph's initializer. The initializer (#2115)
    • Update codeowners to have community folder changes assigned to steering committee (#2104)
    • Fix Resize/Upsample Shape inference function (#2085)
    • Clarify shape inference requirements for new operators (#2088)
    • Fix NN defs file (#2083)
    • Fix typo s/depracted/deprecated/ (#2092)
    • Add shape inference for Tile op (#2076)
    • [New Operator] Round (#2053)
    • Add dilations support in ConvTranspose shape inference and update docs (#2068)
    • Fix typo (#2069)
    • Add a missing step when upgrading an operator (#2071)
    • Clarify the axis/size in pads
    • Fix wrong condition and add --user in update_doc.sh (#2050)
    • Add bit-shift operators for supporting hashing (#1931)
    • Add shape inference logic for Expand op (#2041)
    • Update qops tests (#2040)
    • Fix torchvision installation (#2054)
    • Fix bug that kernel_shape rather than effective_kernel_shape is used in dilated conv (#2043)
    • Changes done internally at Facebook (#2035)
    • Explicitly specify type of integers in the input tensor. (#2034)
    • Version Conversion of Min
    • Fix auto_pad shape inference bug (#2028)
    • Version Conversion from opset 8 to 9 (#2007)
    • Fix macro ONNX_DISALLOW_COPY_AND_ASSIGN bug (#2017)
    • Fix array range bug (#2015)
    • Relax constraint on subgraph input/output type and shape (#2009)
    • Fix shape inference logic for TopK operator (#2005)
    • Nullary variadic (#1889)
    • Removed setting MD/MDd flags manually through cmake. The MTd/MT part is still necessary. Looks like CI fails without it. (#1995)
    • Move NonMaxSuppression to object_detection folder (#2001)
    • Prevent using invalid iterator
    • Add shape inference for legacy auto_pad modes (#1988)
    • Move Quantization working group to completed state (#1980)
    • Define the IR acronym (#1985)
    • Fix shape inference (#1984)
    • Fixing some of Mod test cases (#1962)
    • Lint the docs name (#1982)
    • Fix a shape inference bug in upsample v9/10 (#1969)
    • Create managingexperimentalops (#1974)
    • Create archivefileformat doc based on the wiki equivalent (#1973)
    • Create NLPinONNXproposal (#1975)
    • Create ONNXIFIproposal (#1976)
    • Create onnxreleases (#1977)
    • Create functionsproposal (#1978)
    • Create typeannotations.md (#1979)

    Source code(tar.gz)
    Source code(zip)
    onnx-1.6.0.tar.gz(2.98 MB)
  • v1.5.0(Apr 24, 2019)

    ONNX v1.5 is now available! You may learn more about the project, who is involved and what tools are available at the onnx.ai site. We would like to thank every community member for contributing to the project!

    TL;DR

    The major changes/updates since the v1.4 release:

    • Opset 10 adds operators to support object detection models such as YOLO v3, Faster R-CNN, and SSD. Sample models will be added to the ONNX Model Zoo in upcoming weeks
    • The ONNX file format is updated to version 5
    • Quantization support (with a first set of operators; see the sketch after this list)
    • ONNX Function is promoted to an official feature to support composing operators, allowing more operators from other frameworks to be supported while limiting the introduction of new operators into the ONNX spec
    • Updates to existing ops, including shape inference fixes
    • All experimental ops are removed, and the concept of experimental ops has been deprecated
    • Python 3.7 wheels are now shipped
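
    To illustrate the new quantization operators, here is a minimal sketch (using the current onnx.helper API and opset 10; all names are illustrative) that wires QuantizeLinear and DequantizeLinear into a tiny round-trip graph:

        from onnx import TensorProto, checker, helper

        quant = helper.make_node("QuantizeLinear", ["x", "scale", "zero_point"], ["xq"])
        dequant = helper.make_node("DequantizeLinear", ["xq", "scale", "zero_point"], ["y"])
        graph = helper.make_graph(
            [quant, dequant], "quant_roundtrip",
            [helper.make_tensor_value_info("x", TensorProto.FLOAT, [4])],
            [helper.make_tensor_value_info("y", TensorProto.FLOAT, [4])],
            initializer=[
                helper.make_tensor("scale", TensorProto.FLOAT, [], [0.1]),
                helper.make_tensor("zero_point", TensorProto.UINT8, [], [128]),
            ],
        )
        model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 10)])
        checker.check_model(model)  # validates the graph against opset 10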

    How do I get the latest ONNX?

    You can upgrade via pip using the following command, or build from source from the latest code on GitHub: pip install onnx --upgrade

    Commits since the v1.4 release:

    • Fix shape inference for slice (#1950)
    • Fix shape inference for ConstantOfShape op (#1951)
    • Add NonMaxSuppression operator (#1703)
    • add node tests for quantized ops (#1944)
    • Fix test stat coverage script (#1948)
    • Add IsInf to detect infinity values (#1884)
    • Fix shape inference for matmul (#1941)
    • Shape Inference Tests for QOps (#1929)
    • Prevent unused variables from generating warnings across all platforms.  (#1930)
    • add title (#1919)
    • add quantization ops in onnx (#1908)
    • Create working-groups.md (#1916)
    • use ONNX_NAMESPACE::to_string instead of std::to_string (#1915)
    • Remove all the experimental ops (#1909)
    • opset converter backward compatibility support for opset versions 9 and 8 (#1847)
    • Create CODEOWNERS for automatic reviewer assignment for PRs (#1910)
    • Revert "quantization support in onnx (#1872)" (#1911)
    • quantization support in onnx (#1872)
    • Update LICENSE formatting and clarify # of WG chairs (#1907)
    • update the squeeze and unsqueeze doc (#1905)
    • fix the ir_version onnx-operators.proto (#1903)
    • fix testcase names of maxpool_2d_ceil and averagepool_2d_ceil (#1896)
    • Fix wrongly handled attribute in MVN and test generating scripts (#1877)
    • Add dilation attribute to MaxPool (#1864)
    • update copyright for open governance (#1885)
    • open governance (#1881)
    • Revert "Adding Reverse op (#1804)" (#1882)
    • Adding Reverse op (#1804)
    • update both core and ml docs (#1879)
    • fix the problems introduced in previous PRs in operator registration (#1878)
    • Skip the schema check on ops in non-standard domain (#1876)
    • Introduce Function Body Helper  (#1868)
    • Support down sampling for Upsample with scales < 1. (#1773)
    • Remove scaledtanh (#1866)
    • Add Ceil support for Max and Average Pooling (#1860)
    • Add testcase generator for functions (#1862)
    • Promote Thresholded Relu Op (#1856)
    • Update Slice with dynamic input & optional input steps (#1836)
    • Merge function into opschema (#1834)
    • Handle string comparison represented as np.objects (#1851)
    • remove global variable in header file (#1850)
    • fix the issue that the version was not bumped when changing its type constraint declaration. (#1848)
    • Change TopK operator to allow dynamic 'k' (#1829)
    • Remove exp op: Affine, ImageScaler, ParametricSoftplus, Crop. (#1832)
    • Fix shape inference when auto_pad is notset again (#1830)
    • More extendable Runner (#1809)
    • Infer shape of the second output of Dropout op (#1822)
    • Clarify dtype of Dropout's mask output (#1826)
    • Fix shape inference when auto_pad is notset (#1824)
    • update test data (#1825)
    • Add stringnormalizer operator to ONNX (#1745)
    • Support defined ONNX_ML in parent cmake files (#1821)
    • Delete OpsetVersionConverter.md which is a duplicate of VersionConverter.md (#1818)
    • [ONNXIFI]Add extension to be implementable (#1796)
    • Revert "Implement Op Annotation's for ONNX (#1648)" (#1812)
    • Enable ONNX_ML by default (#1810)
    • fix Greater and Less doc (#1811)
    • Implement Op Annotation's for ONNX (#1648)
    • Versioning doc update for Opset 9 (#1805)
    • add dilation case for ConvTranspose op (#1797)
    • allow removed experimental ops in the checker for now (#1792)
    • [ONNXIFI]Add extension of onnxSetIOAndRunGraph (#1781)
    • Bump docker image version from 230 to 238 (#1786)
    • Fix: setup.py is using wrong cmake build type (#1784)
    • Fix Cast testcase data (#1776)
    • Add ppc64le build (#1768)
    • Update Broadcasting.md (#1769)
    Source code(tar.gz)
    Source code(zip)
  • v1.4.1(Jan 23, 2019)

  • v1.4.0(Jan 23, 2019)

    We are excited to announce that the v1.4 release of ONNX is now available! If you aren't yet familiar with ONNX, you can learn more about the project, who is involved, and what tools are available at the onnx.ai site.

    TL;DR

    • The ONNX project now has more than 27 companies on board and 31 runtimes, converters, frameworks and other tools officially supporting ONNX.
    • This release adds several big features, including support for large models (larger than 2 GB) with externally stored tensor data (see the sketch after this list), enhanced support for control flow operators, and a test driver for ONNXIFI enabling C++ tests.
    • The IR version is bumped from 3 to 4, and the opset version from 8 to 9.
    • All told, this release includes 270+ commits since the last release.
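
    The large-model support works by moving big initializers out of the protobuf file. A minimal sketch with today's Python API (onnx.external_data_helper; the file names are hypothetical):

        import onnx
        from onnx.external_data_helper import convert_model_to_external_data

        model = onnx.load("big_model.onnx")
        convert_model_to_external_data(
            model,
            all_tensors_to_one_file=True,
            location="big_model.weights",  # raw tensor bytes land in this side file
            size_threshold=1024,           # only tensors above 1 KB are externalized
        )
        onnx.save_model(model, "big_model_external.onnx")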

    How do I get the latest ONNX?

    You can upgrade via pip using the following command, or build from source from the latest code on GitHub (our source of truth):

    pip install onnx --upgrade

    Quick update on what's happened since our last release:

    December 4, 2018 - ONNX Runtime for inferencing machine learning models open sourced by Microsoft ONNX Runtime, a high-performance inference engine for machine learning models in the ONNX format, is now open source. ONNX Runtime is the first publicly available inference engine that fully implements the ONNX specification, including the ONNX-ML profile. Python, C#, and C APIs are available for Linux, Windows, and Mac. ONNX Runtime can deliver an average performance gain of 2X for inferencing. Partners in the ONNX community including Intel and NVIDIA are actively integrating their technology with ONNX Runtime to enable more acceleration. READ MORE

    November 29, 2018 - ONNX.js for running ONNX models on browsers and Node.js ONNX.js, an open source JavaScript library for running ONNX models on browsers and on Node.js, is now available. It allows web developers to score pre-trained ONNX models directly on browsers, and has adopted WebAssembly and WebGL technologies for providing an optimized ONNX model inference runtime for both CPUs and GPUs. ONNX.js is the first solution to utilize multi-threading in a JavaScript-based AI inference engine (via Web Workers), offering significant performance improvements over existing solutions on CPU. READ MORE

    October 24, 2018 - CEVA Adds ONNX Support to CDNN Neural Network Compiler CEVA, Inc., the leading licensor of signal processing platforms and artificial intelligence processors for smarter, connected devices, today announced that the latest release of its award-winning CEVA Deep Neural Network (CDNN) compiler supports the Open Neural Network Exchange (ONNX) format. READ MORE

    October 16, 2018 - ONNX Runtime for inferencing machine learning models now in preview We are excited to release the preview of ONNX Runtime, a high-performance inference engine for machine learning models in the Open Neural Network Exchange (ONNX) format. ONNX Runtime is compatible with ONNX version 1.2 and comes in Python packages that support both CPU and GPU to enable inferencing using Azure Machine Learning service and on any Linux machine running Ubuntu 16. READ MORE

    September 6, 2018 - Synopsys Announces Support for the Open Neural Network Exchange Format in ARC MetaWare EV Development Toolkit Synopsys, Inc. today announced support for the Open Neural Network Exchange (ONNX) format in the upcoming release of its DesignWare® ARC® MetaWare EV Development Toolkit, a complete set of tools, runtime software and libraries to develop vision and artificial intelligence (AI) applications for ARC EV6x Embedded Vision Processor IP. READ MORE

    Commits since the v1.3 release (by area):

    New Operator and Operator Updates:
    Adding generator op ConstantLike (#1406)
    Supporting int32 and int64 in Less and Greater (#1390)
    fix AvgPool doc. add default value for count_include_pad (#1391)
    Add DynamicSlice experimental op (#1377)
    fix the doc for softmax (#1374)
    Fix the shape inference for concat (#1361)
    Add several hyperbolic function ops. (#1499)
    Add OneHot op to ONNX. (#1567)
    Fix MaxUnpool shape inference when output_shape is provided as input (#…
    Add type shape inferencing for the If operator (#1571)
    fix ConvTranspose spec (#1566)
    Change upsample operator to allow dynamic 'scales' (#1467)
    Fix output type bug in MaxUnpool definition. (#1553)
    Add Compress Op (#1454)
    Add MaxUnpool op to ONNX. (#1494)
    support more types for Gemm, Flatten and PRelu (#1472)
    deprecate no-spatial mode of BN (#1637)
    Add Where op. (#1569)
    Fix output_shape of a testcase for ConvTranspose (#1437)
    Adding EyeLike generator op. (#1428)
    Clarify the spec for convolution transpose input shape (#1413)
    Separate types of inputs 1 and 2 in OneHot op. (#1610)
    make output shape clear enough for Softmax family (#1634)
    fix batchnorm doc (#1633)
    Add Scatter op to ONNX (#1517)
    Add Erf operator for computing error function (#1675)
    Add IsNaN operator. (#1656)
    Add Sign Op (#1658)
    Update scan (#1653)
    add isnan data (#1685)
    Clarify some aspects of the Loop spec. (#1587)
    repair convtranspose shape inference (#1660)
    Remove ConstantLike op. Updates to ConstantOfShape op. (#1716)
    add constantofshape (#1582)
    Add Shrink operator (#1622)
    Scan test update (#1732)
    Scan output axes (#1737)
    Add NonZero op. (#1714)
    fix the test cases for constantofshape (#1746)
    Add sample implementation support (#1712)
    Update definition of Cast Op to support casting to/from string (#1704)
    Update ConstantOfShape op (#1744)
    Add TfIdfVectorizer operator to ONNX (#1721)

    ONNXIFI:
    ONNXIFI cpp test driver (#1290)
    Remove ONNXIFI_CHECK_RESULT from onnxRelease* functions (#1397)
    Change onnxifi test driver classname (#1396)
    Silence unused result warning in ONNXIFI wrapper cleanup. Fix #1344 (#…
    [ONNXIFI]Fix gtest assert (#1482)
    [ONNXIFI]Reliable memory of shape in test driver (#1480)
    onnxifi test driver bugs fixed (#1462)
    [ONNXIFI]gtest:expect to assert (#1456)
    [ONNXIFI]Fix the crash when weightCount = 0 (#1451)
    [ONNXIFI]Make TEST_P be able to show the test case name directly (#1487)
    [onnxifi] Make sure that backend handles run async. (#1599)
    Fix onnxifi test (#1617)

    Miscellaneous:
    bump up the node test to opset 9 (#1431)
    remove unindexed ConstantLike test case (#1432)
    Add node name for error & Fix typo (#1426)
    Fix the typo in the doc (#1427)
    Adding checker/typeshape inference logic for Function (#1423)
    [cmake] Allow adding extra source files to the onnx lib (#1439)
    Add the ability to deprecate an OpSchema (#1317)
    [Anderspapitto patch] fix the shape inference for broadcasting (#1368)
    external_data: Store large tensor values in separate files (#678)
    Add opaque type support (#1408)
    Fix checker logic (#1459)
    Add version table to Versioning.md to provide a clear mapping (#1418)
    serialized model data in test driver, ir version is now corrected (#1455)
    refresh onnx-ml.proto (#1448)
    Fix ONNX_NAMESPACE definition (#1444)
    Add BFLOAT16 data type (FLOAT32 truncated to 16 bits) (#1421)
    Use strings directly for casing as np.object w/o redundant StringHold
    Remove default value for 'dtype' attribute in ConstantLike op. (#1461)
    Fix TensorProto int32_data comment (#1509)
    fix ninja external (#1507)
    Shut up warnings about markers. (#1505)
    add the script (#1501)
    Minor cleanup in circleci build scripts (#1498)
    fix onnx checker to support proto3 models. (#1495)
    Add config files for CircleCI (#1490)
    Change function ownership to ONNX (#1493)
    maintain the integration of gtest arguments (#1491)
    Skip some warning for clang-cl (#1484)
    Make ONNX compatible with gcc-8 (#1488)
    Build with old version protobuf on Windows (#1486)
    Clean memory when failed test (#1476)
    Change Function registry flow; Get rid of whole-archive in compile (#…
    fix the bug of loading model input/output proto (#1477)
    Operator set versioning - tighten wording regarding breaking changes (#…
    add skip in gtest & update gtest version (#1473)
    Opaque type ToString() does not wrap the result into the supplied (#1468)
    Fix compiler warnings on unhandled bfloat16 switch case (#1470)
    Move the definition of the singleton DomainToVersionRange to .cc file (
    fix some issue with namespace (#1533)
    Remove Opaque type parameters as not needed. Adjust DataType handling. (
    Use vector instead of set to keep the order of the opt passes (#1524)
    Pin awscli to last known good version (#1518)
    Update docker image version used in CircleCI (#1511)
    Fix the mapping for Complex128 data type (#1422)
    add default value to doc (#1410)
    Fixup handling of captured values as graph outputs (#1411)
    [build] Add ONNX_API for protos in all cases (#1407)
    [compiler flag] Issue a warning if class has virtual method but missi…
    Add a virtual destructor to GraphInferencer (#1574)
    Add Scan type/shape inferencing (#1503)
    Add hook to InferenceContext to allow running type/shape inferencing … (
    Implemented shape inference for Gather (#1525)
    add eliminate nop monotone argmax pass (#1519)
    Enable -Wall -Wextra -Werror for CI (#1547)
    Introduce SparseTensor ML proto (#1554)
    In driver test check the return status of onnxGetBackendIDs (#1597)
    Make CI log less verbose (#1595)
    Loop type shape inferencing (#1591)
    add uint8 (#1590)
    Add domain as an optional parameter for make_node function (#1588)
    Remove unreachable code in shape_inference.h (#1585)
    fix a newline in Scan doc (#1541)
    allow variadic parameters of different types (#1615)
    Fix a bug in vector address access (#1598)
    Handle new types in the switch. (#1608)
    Bump docker image version to 230 used in CircleCI (#1606)
    type proto does not exactly match the type str, (#1545)
    Fix 'line break after binary operator' flake8 warnings. (#1550)
    remove inappropriate consts (#1632)
    Shape inference fix for broadcast, concat and scan (#1594)
    mark PROTOBUF_INCLUDE_DIRS as BUILD_INTERFACE (#1466)
    Add a capability to input/output unicode strings (#1734)
    Include guidance on adding new operators (#1416)
    Clarify namescopes in the presence of nested subgraphs (#1665)
    use an empty initializer to create map (#1643)
    Remove redundant const (#1639)
    Show the op's type and name when the shape inference is failed. (#1623)
    link the tutorial (#1650)
    Upgrade label encoder to support more input types (#1596)
    Add Doc about Adding New Operator into ONNX (#1647)
    Fix unused var warning (#1669)
    Changes done internally at Facebook (#1668)
    Replace np.long by np.int64 (#1664)
    Infer shape from data in Constant nodes (#1667)
    fix the const map initialization (#1662)
    Add scan test case (#1586)
    Add bfloat16 support. (#1699)
    ONNX does not maintain versions for experimental ops (#1696)
    Correct type of value_info in Graph (#1694)
    Fix typos (#1686)
    Use int instead of enum to store data type (#1626)
    fix broken link in VersionConverter.md (#1683)
    add a shape inference test for group conv (#1719)
    Set symbol visibility to hidden for non-Windows (#1707)
    [Minor] Fix Windows line ending in test coverage generating script (#…
    Support rtol and atol at the model granularity (#1723)
    turn rtol to 0.002 on densenet121, since AMD and Nvidia GPU's precion
    typos fixed: iutput -> input (#1726)
    print some information (#1724)
    Update README.md (#1722)
    Handle negative axis in scan shape inference (#1748)
    remove stale test cases (#1434)
    Show string names of data types instead of int IDs (#1749)
    Relax constraint that the initializers must be a subset of graph inputs (#1718)
    Fix typo in scan shape inferencing (#1753)

    Cheers!
    -The ONNX Team

    Source code(tar.gz)
    Source code(zip)
    onnx-1.4.0.tar.gz(2.65 MB)
  • v1.2.3(Sep 10, 2018)

  • v1.3.0(Aug 30, 2018)

    • ONNXIFI 1.0
    • Operator Set 8
      • Control Flow Operators graduated from experimental
      • Added new operator Expand
      • Updated operators Max, Min, Mean and Sum to support broadcasting
      • Support output indices in operator MaxPool
      • Various documentation improvements
    • Introduced Function concept for representing composed operators [experimental]
    • Enhanced shape inference
      • Support shape inference for Reshape operator with constant new shape
    • More ONNX optimization passes
      • Available passes are here
    • More operator backend tests
    • Opset Version Converter (see the sketch after this list)
      • Supported operators include: Add, Mul, Gemm, Relu, BatchNorm, Concat, Reshape, Sum, MaxPool, AveragePool, Dropout
      • All models in model zoo are covered, except tiny-yolo-v2 (PRelu needs adapter, WIP)
    • Quantization coming soon
      • We are currently working with the community to collect more feedback and finalize the design. We expect this to land quickly, and it will be released out of cycle if needed.
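
    A minimal sketch of driving the version converter from Python (API as exposed by current onnx releases; the model paths are hypothetical):

        import onnx
        from onnx import version_converter

        model = onnx.load("model_opset7.onnx")
        converted = version_converter.convert_version(model, 8)  # target opset 8
        onnx.save_model(converted, "model_opset8.onnx")
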
    Source code(tar.gz)
    Source code(zip)
    onnx-1.3.0.tar.gz(2.76 MB)
  • v1.2.2(Jun 18, 2018)

    This release is a patch release based on v1.2.1:

    Bug fixes:

    • #1040 - Update proto files
    • #1044 - Fix Operator tests (test data fix)
    • #1052 - Fix Proto3 issues
    • #1053 - Type and shape inference code fix
    • #1057 - Op schema code fix
    • #1058 - Remove empty model (test data fix)
    • #1060 - Type and shape inference code fix
    • #1063 - PReLU version fix
    • #1064 - Pytorch generated test case removal (test data fix)
    • #1069 - Remove erroneous documentation around maps and sequences (description only)
    • #1070 - Add more check for type and shape inference code
    • #1090 - Fix local region definition in LRN spec (description only)
    • #1102 - Add float16 support back for math and reduction ops
    • #1103 - Make RNN/LSTM/GRU treatment of recurrent weights consistent
    • #1104 - Remove/replace /MX with /WX for MSVC build (build fix)
    • #1105 - Add ignoring flags (build fix)
    • #1107 - Fix the LRS's doc (description only)
    Source code(tar.gz)
    Source code(zip)
  • v1.2.1(May 25, 2018)

    ONNX 1.2.1 release.

    The following changes have been made since the 1.1.2 release:

    IR Changes

    • Adds function and attribute reference (PR #802).
    • Adds dimension denotation (PR #443) and type denotation (PR #879).

    Operator Changes

    The operator set version of ONNX 1.2 is 7 for the ONNX domain and 1 for the ONNX-ML domain.

    • Type and shape inference functions added for all operators (see the sketch after this list).
    • Adds new operators:
      • Upsample (PR #861) – promoted from experimental; attributes and behavior updated to support an arbitrary number of dimensions
      • Identity (PR #892) – promoted from experimental
      • Acos, Asin, Atan, Cos, Sin, Tan (PR #869)
      • Multinomial (PR #897)
    • Removes FC (experimental) op (PR #977).
    • Moves to numpy broadcasting semantics (PR #907).
    • Clarifies “optional” semantics for input/output and adjusts RNN/GRU/LSTM/BatchNormalization/Dropout accordingly (PR #1006, PR #1014).
    • AveragePool – formulas for output shape updated (PR #751), extended to support average count including padding (PR #884)
    • BatchNormalization – clarify outputs can be n-dim (PR #733)
    • Cast – change to attr from string to int (PR #727)
    • ConstantFill (exp) – change value attr from optional to default value of 0 (PR #808)
    • InstanceNormalization – clarify outputs can be n-dim (PR #733)
    • MaxPool – formulas for output shape updated (PR #751)
    • AveragePool, MaxPool, Conv – update to support dimension denotation (PR #443)
    • Reshape – add output shape as an input (PR #608)
    • Size – change output from int to scalar tensor (PR #759)
    • Tile – replace tiles and axis inputs with repeats to match numpy (PR #757)
    • ZipMap – update type constraints from map to seq (PR #818)
    • Affine – add default values for alpha and beta attributes (PR #820)
    • FeatureVectorizer – update behavior (PR #843)
    • LinearClassifier – coefficient attribute is now required (PR #836)
    • RandomNormalLike, RandomUniformLike – change input type constraints and change behavior to copy shape instead of compute it (PR #846)
    • Selu – change default value of attributes to match other frameworks (PR #839)
    • ArgMax, ArgMin – specify default values for axis attribute (PR #847)
    • DepthToSpace, SpaceToDepth – blocksize attribute is now required (PR #847)
    • GRU, LSTM, RNN – specify default value for activation_* attributes (PR #847)
    • Reduce* – specify default behavior for axes attribute (PR #847)
    • Concat, Gather, Squeeze, Unsqueeze – accept any tensor type (PR #957)
    • Add, Div, Mul, Pow, Sub – enhance 1-element broadcast case (PR #902)
    • Pad – clarify pads attribute (PR #962)
    • LRN – specify default values and clarify behavior (PR #965)
    • ConvTranspose – clarify padding behavior and remove restriction on output_padding attribute (PR #1012)
    • All ops – updated type constraints (PR #666)
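
    A minimal sketch of invoking that type and shape inference from Python (using the onnx.shape_inference API as it exists today; the model path is hypothetical):

        import onnx
        from onnx import shape_inference

        model = onnx.load("model.onnx")
        inferred = shape_inference.infer_shapes(model)
        print(inferred.graph.value_info)  # types/shapes inferred for intermediate values
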
    Source code(tar.gz)
    Source code(zip)
  • v1.1.2(Apr 25, 2018)

    This release is a patch release based on v1.1.0 (v1.1.1):

    Bug fixes:

    • #775 - Align Python and C++ schema API for ONNX-ML
    • #781 - Fix some checker implementation not ideal for ONNX-ML
    • #799 - Update specs for ONNX ML
    Source code(tar.gz)
    Source code(zip)
  • v1.1.0(Mar 21, 2018)

    Change log for the release:

    • Operators fixed and added - cast, reshape, Pool, Shape, Size, concat, Pow, Slice, TopK, structured, reducible control flow (experimental), Unsqueeze (PR #497) (PR #436) (PR #496) (PR #529) (PR #513) (PR #525) (PR #390) (PR #532) (PR #587) (PR #569) (PR #552)

    • Test cases added and fixed - global avg, max pool, Slice, cast, pow, Concat, Reshape, TopK, softplus, softsign, softmax, logsoftmax, hardmax, transpose, Max, Min, Mean, Sum, +9 math operators, reciprocal, logic operators, Clip, Div, Mul, Pow, Sub; Elu, LeakyRelu, Selu, HardSigmoid, gather, Conv
      (PR #468) (PR #472) (PR #487) (PR #500) (PR #507) (PR #516) (PR #529) (PR #506) (PR #509) (PR #546) (PR #548) (PR #543) (PR #574) (PR #596)

    • Build Issues on various platforms:
      • Provide option to enforce /MD or /MT when building with MSVC (PR #602)
      • Fix ONNX library build for Windows
      • Add to_string for Android (PR #597)
      • Handle situations where protobuf is built on the fly (PR #592)
      • Fix CMakeLists on Windows (PR #589)
      • Travis tweaks to make sure the correct versions of python are installed (PR #584)
      • Improve CMakefile of ONNX (PR #563)
      • Don't include pybind11 if its target is already exported (PR #550)
      • Call gen_proto.py in cmake (PR #538)
      • Couple cmake fixes (PR #521)
      • Fix build on mac (PR #514)
      • setup cmake (PR #469)
      • Remove onnx-caffe2 reference (PR #558)

    • Naming and Convention changes:
      • Add ONNX_NAMESPACE around rnn/old.cc (PR #605)
      • Change the model file extension from .pb to .onnx (PR #541)
      • Make onnx namespace configurable (PR #484)

    • Bug Fixes:
      • Fix get_attribute_value cannot get g field bug (PR #599)
      • Fix treatment of optional inputs

    • Test Framework Changes:
      • Add outputs_info into run_node backend interface (PR #588)

    • IR Changes:
      • Add option to use customized protoc (PR #594)
      • Preserve value infos if they are needed (PR #561)
      • Check whether perm exists before using it (PR #559)
      • Adding int32, int64 and double input data types for featurevectorizer (PR #547)
      • Sort the attributes of NodeProto generated by make_node (PR #479)

    • Other Changes:
      • Change the cached model checking logic (PR #545)
      • Fix the way we find protobuf library (PR #539)
      • Modularize ONNX libraries (PR #528)
      • Printable Graph support for nested graphs + sugar (PR #483)
      • Lexical scoping in checker (PR #485)
      • osx travis support (PR #566)

    Source code(tar.gz)
    Source code(zip)
  • v1.0.1(Jan 26, 2018)

  • v1.0(Dec 6, 2017)

    This release is the first stable version of ONNX.

    This version also includes the ONNX-ML profile, an optional profile that extends ONNX with classic ML constructs.

    The following changes have been made since the 0.2 release:

    Spec Changes

    • Adds versioning documentation
    • Adds release management notes
    • Operator specs include samples

    IR Changes

    • Adds operator sets, imports and experimental operator support.
    • Adds an AttributeType enum, doc_string fields, and a domain field for NodeProto.
    • Adds named metadata properties to models.
    • Removes sparse tensor protos.
    • Checker now available in C++ with a Python wrapper (see the sketch below).
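
    A minimal sketch of calling the checker through that Python wrapper (API as in current onnx releases; the model path is hypothetical):

        import onnx

        model = onnx.load("model.onnx")
        onnx.checker.check_model(model)  # raises onnx.checker.ValidationError if invalid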

    Operator Changes

    • Adds Identity, Affine, ThresholdRelu, ScaledTanh, ParametricSoftplus, ImageScaler, MeanVarianceNormalization, Crop, Embedding, HardSigmoid, Mean, Clip, LogSoftmax, Hardmax, Softsign, Softplus, MatMul, InstanceNormalization, LRN, ReduceSumSquare, ReduceLogSum, ReduceL1, ReduceL2, RNN, GRU, LSTM, SpaceToDepth, DepthToSpace, Tile.
    • Adds And, Or, Xor, Greater, Less, Equal, Not.
    • Removes Caffe2ConvTranspose, SpatialBN, LRN, ChannelShuffle, RecurrentNetwork.
    • Replaces Normalization with LpNormalization.
    • Adds type constraints.
    • Much improved tests for operators and reporting.
    Source code(tar.gz)
    Source code(zip)
  • v0.2(Oct 10, 2017)

    Spec changes

    • Type and shape annotations for the model (required for inputs/outputs, optional for internal values)

    Breaking changes

    onnx.proto underwent breaking changes that make earlier serialized protobufs invalid. We commit to keeping all changes to the protobuf structure backward-compatible after this (v0.2) release.

    Specific changes:

    • Introduction of ModelProto to represent the top-level model in addition to GraphProto (see the sketch after this list)
    • Related API changes renaming graph to model
    • Addition of type and optional shape annotations for inputs and outputs of the graph
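
    A minimal sketch of that layering with today's onnx.helper API: a GraphProto with typed, shaped inputs/outputs, wrapped in a top-level ModelProto:

        from onnx import TensorProto, helper

        graph = helper.make_graph(
            [helper.make_node("Relu", ["x"], ["y"])],
            "tiny",
            [helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 3])],  # typed input
            [helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 3])],  # typed output
        )
        model = helper.make_model(graph, producer_name="example")  # ModelProto wraps the graph
        print(model.ir_version, model.graph.name)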

    Operator spec changes

    • Added Gemm
    • Added Pad
    • Added Constant (graduated from experimental to non-experimental)
    • In Conv and ConvTranspose renamed attribute “filter” to “weights”
    • In Elu added “alpha” attribute
    • Concat fixed output number from 2 to 1
    • Dropout changed output number from 2 to (1 or 2)
    • Added OptimizedRNN operator representing entire RNN stack similarly to CuDNN
    • ATen support as an experimental operator that allows directly representing any of PyTorch's tensor functions (which leverage ATen).

    New Tutorials

    • Usage of ATen operator for quick exporting from PyTorch to Caffe2: https://github.com/caffe2/caffe2/blob/master/caffe2/contrib/aten/docs/pytorch_to_caffe2.md
    • End-to-end demo of training in PyTorch and deployment to Caffe2: https://github.com/bwasti/AICamera/blob/master/Exporting%20Squeezenet%20to%20mobile.ipynb
    Source code(tar.gz)
    Source code(zip)