A lightweight, portable, pure C ONNX inference engine for embedded devices with hardware acceleration support.
The library's .c and .h files can be dropped into a project and compiled along with it. Before use, a `struct onnx_context_t *` must be allocated; you may also pass an array of `struct resolver_t *` to enable hardware acceleration.
The `filename` argument is the path to an ONNX model file:

```c
struct onnx_context_t * ctx = onnx_context_alloc_from_file(filename, NULL, 0);
```
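If you have a hardware-specific resolver, you can pass it in instead of `NULL`. A minimal sketch, assuming a user-supplied `my_npu_resolver` (hypothetical; operators it does not cover would be handled by the library's built-in software implementations):

```c
/* Hypothetical: my_npu_resolver is a user-supplied struct resolver_t *
 * implementing accelerated operators for a particular NPU. */
struct resolver_t * resolvers[] = { my_npu_resolver };
struct onnx_context_t * ctx = onnx_context_alloc_from_file(filename, resolvers, 1);
```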
Then you can get the input and output tensors using the `onnx_tensor_search` function:

```c
struct onnx_tensor_t * input = onnx_tensor_search(ctx, "input-tensor-name");
struct onnx_tensor_t * output = onnx_tensor_search(ctx, "output-tensor-name");
```
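Before running, copy your data into the input tensor. A sketch, assuming a float32 input and that the tensor struct exposes its raw buffer through `datas`/`ndata` fields (check the header for the exact layout); `get_next_sample` is a hypothetical data source:

```c
/* Fill the input tensor with float32 samples before inference. */
float * buf = (float *)input->datas;
for(size_t i = 0; i < input->ndata; i++)
	buf[i] = get_next_sample(i); /* hypothetical: your preprocessing */
```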
Once the input tensor has been filled in, you can run the inference engine using the `onnx_run` function; the result will be placed into the output tensor.
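The function takes only the inference context:

```c
void onnx_run(struct onnx_context_t * ctx);
```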
Finally, you must free the `struct onnx_context_t *` using the `onnx_context_free` function.
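```c
void onnx_context_free(struct onnx_context_t * ctx);
```

Putting the steps together, a minimal end-to-end sketch (the model path, tensor names, and float32 I/O are assumptions for illustration):

```c
#include <stdio.h>
#include <onnx.h>

int main(void)
{
	/* Allocate a context from a model file, with no hardware resolvers. */
	struct onnx_context_t * ctx = onnx_context_alloc_from_file("model.onnx", NULL, 0);
	if(!ctx)
		return 1;

	/* Hypothetical tensor names; use the names from your model. */
	struct onnx_tensor_t * input = onnx_tensor_search(ctx, "input");
	struct onnx_tensor_t * output = onnx_tensor_search(ctx, "output");
	if(!input || !output)
	{
		onnx_context_free(ctx);
		return 1;
	}

	/* Assume a float32 input; zero-fill it for demonstration. */
	float * in = (float *)input->datas;
	for(size_t i = 0; i < input->ndata; i++)
		in[i] = 0.0f;

	/* Run inference; the result lands in the output tensor. */
	onnx_run(ctx);

	/* Assume a float32 output; print the first element. */
	float * out = (float *)output->datas;
	printf("output[0] = %f\n", out[0]);

	onnx_context_free(ctx);
	return 0;
}
```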
To build the library, run:

```shell
cd libonnx
make
```
- The Chinese discussion posts
- The ONNX operators documentation
- The tutorials for creating ONNX models
- The pre-trained ONNX models
This library is free software; you can redistribute it and/or modify it under the terms of the MIT license. See the MIT License for details.