TengineGst is a streaming media analytics framework, built on the GStreamer multimedia framework, for creating complex media analytics pipelines.

Overview

TengineGst

Introduction

TengineGst is an analytics and inference framework from OPEN AI LAB, built on the GStreamer multimedia framework. It is used to create a wide range of multimedia processing pipelines: mature GStreamer plugins make it easy to assemble stable applications quickly, while Tengine accelerates the inference stage, letting you focus on the core AI business logic. The complete solution builds on:

  • the open-source GStreamer framework for pipeline management;
  • GStreamer plugins for input and output, such as media files and live streams from cameras or the network;
  • the broad ecosystem of mature GStreamer plugins, e.g. codecs and image processing;
  • Tengine deep-learning models (tmfile) converted from mainstream training frameworks such as Caffe, TensorFlow, ONNX, and Darknet.

The deep-learning inference plugins in TengineGst provide:

  • high-performance inference with deep-learning models, powered by Tengine;
  • visualization of inference results, drawing bounding boxes and labels for detected objects on top of the video stream;
  • publishing of inference results over standard protocols such as MQTT.

Architecture

(Diagrams: architecture, data flow, pipeline)

The plugins include:

  • streammux: merges multiple streams into one, so a single inference path can serve multiple streams;
  • streamdemux: splits the single inference result stream back into the corresponding individual streams; used together with streammux;
  • videoanalysis: the main inference plugin. It exposes a standard secondary-plugin interface so inference business logic can be loaded dynamically: in most cases, instead of writing a GStreamer plugin yourself, you only need to implement a business shared library and this plugin hands the inference work over to it. It is designed to work with business libraries modeled on "inferservice". If you are well versed in GStreamer plugin development, you can also write your own plugin and run inference directly;
  • mqtt: pumps inference results to an MQTT broker;
  • postprocess: simply overlays inference results onto the video stream.
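To illustrate how streammux and streamdemux pair up around a single videoanalysis instance, here is a hypothetical two-camera pipeline sketch. The request-pad names (mux.sink_0, demux.src_0, etc.) and the camera URLs are assumptions, not taken from the plugin documentation; run gst-inspect-1.0 streammux / streamdemux to see the actual pad templates before using this.

```shell
# Hypothetical sketch: two RTSP streams muxed into one inference path,
# then demuxed back out. Pad names are assumed; verify with gst-inspect-1.0.
gst-launch-1.0 \
  streammux name=mux ! videoanalysis businessdll=/dir/libinferservice.so ! streamdemux name=demux \
  rtspsrc location="rtsp://cam0" ! rtph264depay ! h264parse ! avdec_h264 ! mux.sink_0 \
  rtspsrc location="rtsp://cam1" ! rtph264depay ! h264parse ! avdec_h264 ! mux.sink_1 \
  demux.src_0 ! postprocess ! fakevideosink \
  demux.src_1 ! postprocess ! fakevideosink
```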

Business plugin

inferservice: calls the Tengine inference framework, loads the model, runs inference, and passes the results on to the analysis plugin. Any library built against the inferservice-style framework can be passed to the videoanalysis plugin as its business library: set the businessdll property to the path of the library to support a different algorithm workload.

Install the dependencies

sudo apt update
sudo apt install -y build-essential cmake pkg-config
sudo apt install -y gstreamer1.0-tools libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev gstreamer1.0-libav gstreamer1.0-plugins-bad gstreamer1.0-plugins-good
sudo apt install -y libssl-dev

The project currently builds in x86 and khadas environments; for other devices, simply swap in the matching Tengine inference library.

Build instructions

Building the dependency modules

Tengine
# See https://github.com/OAID/Tengine/tree/tengine-lite/doc/docs_zh/source_compile for building against other NPU targets; the example below builds for x86.
cd Tengine
mkdir build
cd build
cmake ..
make
make install
sudo cp -r install/include/ /usr/local/
sudo cp install/lib/* /usr/local/lib/
zlog
wget https://github.com/HardySimpson/zlog/archive/refs/tags/1.2.15.tar.gz
tar zxvf 1.2.15.tar.gz
cd zlog-1.2.15
make PREFIX=/usr/local
sudo make PREFIX=/usr/local install
mosquitto
git clone https://github.com/eclipse/mosquitto.git
cd mosquitto/lib
make && sudo make install
turbo-jpeg
git clone https://github.com/libjpeg-turbo/libjpeg-turbo.git
cd libjpeg-turbo
mkdir build
cd build
cmake ..
make && sudo make install

opencv

wget https://github.com/opencv/opencv/archive/3.4.16.zip
unzip 3.4.16.zip
cd opencv-3.4.16
mkdir build
cd build
cmake ..
make && sudo make install
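After installing the dependency libraries into /usr/local, it can help to refresh the dynamic linker cache so the plugins find them at runtime. On most Ubuntu systems /usr/local/lib is already in the default search path, in which case the first line is unnecessary:

```shell
# Ensure /usr/local/lib is searched, then refresh the linker cache.
echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/local.conf
sudo ldconfig
```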

Building the project

The file cross.cmake in the cmake directory contains the cross-compilation switch. In a cross-compilation environment, adjust the toolchain paths accordingly. The support libraries for khadas are already provided; just enable the switch.
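As a sketch of enabling the switch from the command line, assuming the switch is exposed as a CMake option (the variable name CROSS_COMPILE below is a placeholder; check cmake/cross.cmake for the real name, or toggle it inside that file directly):

```shell
# Hypothetical: enable the cross-compilation switch defined in cmake/cross.cmake.
mkdir build && cd build
cmake -DCROSS_COMPILE=ON ..
make
```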

To build:

mkdir build
cd build
cmake ..
make

Examples

For khadas cross-compilation, unpacking env-run.tar.gz saves some build steps. Copy the libraries from its run subdirectory onto the device, and copy the model files to the directories specified in the source code.

tar zxvf env-run.tar.gz

After the build finishes, the build directory contains aarch64/Release/lib, which holds all the plugins. Copy the plugin libraries into the GStreamer plugin directory, e.g.:

# khadas
cp aarch64/Release/lib/libgst* /usr/lib/aarch64-linux-gnu/gstreamer-1.0/
# x86 (adjust the source path to the output directory your x86 build produced)
cp aarch64/Release/lib/libgst* /usr/lib/x86_64-linux-gnu/gstreamer-1.0/

Run a command such as gst-inspect-1.0 mqtt to check that a plugin is registered. For the example plugins, copy the libraries from the run subdirectory into /usr/lib/aarch64-linux-gnu/ on the device, and copy the model files to /home/khadas/ (see the inferservice plugin); when packaging a release, these paths can be chosen freely.
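A quick way to confirm that all the plugins registered correctly (plugin names taken from the list above) is to inspect each one in a loop:

```shell
# Verify that each TengineGst plugin is visible to GStreamer.
for p in streammux streamdemux videoanalysis mqtt postprocess; do
  gst-inspect-1.0 "$p" > /dev/null 2>&1 && echo "ok: $p" || echo "missing: $p"
done
```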

Test command

gst-launch-1.0 rtspsrc location="rtsp://**" ! rtph264depay ! capsfilter caps="video/x-h264" ! h264parse ! avdec_h264 !  videoanalysis businessdll=/dir/libinferservice.so  ! postprocess ! mqtt username=** userpwd=** servip=** servport=1883 ! fakevideosink

When inference produces a result, the mqtt plugin sends it to the MQTT broker. MQTTBox can be used as an MQTT test tool: subscribe to the topic detect_result to view the inference results.
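Besides MQTTBox, the mosquitto command-line client works as well; the broker address and credentials below are placeholders to be replaced with your own:

```shell
# Subscribe to the inference-result topic and print each message with its topic.
mosquitto_sub -h broker.example.com -p 1883 -u user -P password -t detect_result -v
```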

Acknowledgements

License
