Deploy the YOLOX algorithm with DeepStream

Overview

YOLOX (Megvii-BaseDetection) deployed with DeepStream 💕 💥

This project is based on https://github.com/Megvii-BaseDetection/YOLOX and https://zhuanlan.zhihu.com/p/391693130.

News

Deployed YOLOX to DeepStream, FPS > 70 (2021-7-21)

System Requirements

CUDA 10.0+

TensorRT 7+

OpenCV 4.0+ (built with the opencv-contrib module)

OpenMP

DeepStream 5.0+

Installation

Make sure you have installed the dependencies listed above.

# clone project and submodule
git clone {this repo}

cd {this repo}/nvdsinfer_custom_impl_yolox/

make

This builds the custom library that nvinfer uses to parse the model output during post-processing.
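
The actual parsing code lives in nvdsinfer_custom_impl_yolox/nvdsparsebbox_yolox.cpp and handles the full YOLOX grid/stride decoding. As a rough illustration of what such a DeepStream custom box parser does, here is a minimal sketch, not the repo's code: the function name and the assumption of a single output tensor holding one (4 + 1 + num_classes) row per prediction, with boxes already decoded to the network input resolution, are illustrative only.

// Minimal illustrative sketch of a DeepStream custom box parser for YOLOX.
// NOT the repo's implementation (see nvdsparsebbox_yolox.cpp for that).
// Assumes one output tensor with rows of (cx, cy, w, h, objectness, class scores...).
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomYolox(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferParseObjectInfo> &objectList)
{
    if (outputLayersInfo.empty())
        return false;

    const NvDsInferLayerInfo &layer = outputLayersInfo[0];
    const float *data = reinterpret_cast<const float *>(layer.buffer);

    const int numClasses = detectionParams.numClassesConfigured;
    const int rowSize = 4 + 1 + numClasses;
    const int numPredictions = layer.inferDims.numElements / rowSize;

    for (int i = 0; i < numPredictions; ++i)
    {
        const float *row = data + i * rowSize;   // cx, cy, w, h, obj, scores...
        const float objectness = row[4];

        // Pick the best class for this prediction.
        int bestClass = 0;
        float bestScore = 0.f;
        for (int c = 0; c < numClasses; ++c)
        {
            const float score = objectness * row[5 + c];
            if (score > bestScore)
            {
                bestScore = score;
                bestClass = c;
            }
        }

        if (bestScore < detectionParams.perClassPreclusterThreshold[bestClass])
            continue;

        // Convert center/size to the left/top/width/height box DeepStream expects.
        NvDsInferParseObjectInfo obj;
        obj.classId = static_cast<unsigned int>(bestClass);
        obj.detectionConfidence = bestScore;
        obj.width = row[2];
        obj.height = row[3];
        obj.left = row[0] - row[2] / 2.f;
        obj.top = row[1] - row[3] / 2.f;
        objectList.push_back(obj);
    }

    (void)networkInfo; // unused in this simplified sketch
    return true;
}

DeepStream picks such a function up through the parse-bbox-func-name and custom-lib-path keys of the nvinfer config; see config_infer_primary.txt in this repo for the exact values it uses.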

Run

cd {this repo}

deepstream-app -c deepstream_app_config.txt

How to build the engine?

https://zhuanlan.zhihu.com/p/391693130
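
The linked article describes how the author produces model_trt.engine. If you instead export the model to ONNX (for example with export_onnx.py from the YOLOX repo, as some of the issues below mention), a serialized engine can also be built with the TensorRT C++ API. The following is only a generic sketch under that assumption; the file names yolox_s.onnx and model_trt.engine are placeholders, and this is not the workflow from the article.

// Generic sketch: build a serialized engine from an ONNX export with the
// TensorRT 7 C++ API. File names are placeholders.
#include <fstream>
#include <iostream>
#include "NvInfer.h"
#include "NvOnnxParser.h"

class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char *msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    using namespace nvinfer1;

    IBuilder *builder = createInferBuilder(gLogger);
    const auto explicitBatch =
        1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    INetworkDefinition *network = builder->createNetworkV2(explicitBatch);

    // Parse the ONNX file exported from the PyTorch checkpoint.
    auto parser = nvonnxparser::createParser(*network, gLogger);
    if (!parser->parseFromFile("yolox_s.onnx",
            static_cast<int>(ILogger::Severity::kWARNING)))
    {
        std::cerr << "Failed to parse ONNX model" << std::endl;
        return 1;
    }

    IBuilderConfig *config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1ULL << 30);   // 1 GiB
    // config->setFlag(BuilderFlag::kFP16);    // optional, if the GPU supports it

    ICudaEngine *engine = builder->buildEngineWithConfig(*network, *config);
    if (!engine)
    {
        std::cerr << "Engine build failed" << std::endl;
        return 1;
    }

    // Serialize to the file referenced by the nvinfer config.
    IHostMemory *serialized = engine->serialize();
    std::ofstream out("model_trt.engine", std::ios::binary);
    out.write(static_cast<const char *>(serialized->data()), serialized->size());

    serialized->destroy();
    engine->destroy();
    config->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}

Whichever route you take, the output tensor layout must match what the bounding-box parser expects; otherwise you will get no detections (see the "deploy in xavier" issue below).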

About License

For the third-party modules and DeepStream, you need to follow their licenses.

For the parts I wrote, you can do anything you want.

Issues
  • Some bugs when you use your own dataset to train model

    Very nice work, it saved me a lot of time!

    But there are some bugs in your code when using your own trained model:

    https://github.com/nanmi/YOLOX-deepstream/blob/96f44f9a5b5e450276a029e7580667136cbb2320/nvdsinfer_custom_impl_yolox/nvdsparsebbox_yolox.cpp#L168

    should be:

    const int basic_pos = anchor_idx * (num_class + 4 + 1);

    opened by lantudou 13
  • Results not shown

    Thanks for your work.

    I am currently using a Xavier NX. I want to use DeepStream there, but the detection results are not displayed.

    I generated the model_trt.engine file from the .pth model using the trt.py script provided at https://github.com/Megvii-BaseDetection/YOLOX.

    Do I have to get the engine file using the method you mentioned at https://zhuanlan.zhihu.com/p/391693130?

    Thank you

    opened by JustdoITcom 9
  • Segmentation fault (core dumped)

    I used your code to deploy my own trained YOLOX model. After compiling, I got the following error while running with deepstream_app_config.txt. What is going on?

    **PERF: FPS 0 (Avg)
    **PERF: 0.00 (0.00)
    ** INFO: <bus_callback:181>: Pipeline ready

    Opening in BLOCKING MODE
    Opening in BLOCKING MODE
    NvMMLiteOpen : Block : BlockType = 261
    NVMEDIA: Reading vendor.tegra.display-size : status: 6
    NvMMLiteBlockCreate : Block : BlockType = 261
    ** INFO: <bus_callback:167>: Pipeline running

    NvMMLiteOpen : Block : BlockType = 4
    ===== NVMEDIA: NVENC =====
    NvMMLiteBlockCreate : Block : BlockType = 4
    Segmentation fault (core dumped)

    opened by zzzfo 5
  • Installation error in jetson-nano

    When I ran: %cd YOLOX-deepstream/nvdsinfer_custom_impl_yolox/ !make


    In file included from nvdsparsebbox_yolox.cpp:25:0:
    /opt/nvidia/deepstream/deepstream-5.1/sources/includes/nvdsinfer_custom_impl.h:375:19: error: ‘IPluginFactory’ in namespace ‘nvcaffeparser1’ does not name a type
        nvcaffeparser1::IPluginFactory *pluginFactory;
                        ^~~~~~~~~~~~~~
    /opt/nvidia/deepstream/deepstream-5.1/sources/includes/nvdsinfer_custom_impl.h:376:19: error: ‘IPluginFactoryExt’ in namespace ‘nvcaffeparser1’ does not name a type
        nvcaffeparser1::IPluginFactoryExt *pluginFactoryExt;
                        ^~~~~~~~~~~~~~~~~
    /opt/nvidia/deepstream/deepstream-5.1/sources/includes/nvdsinfer_custom_impl.h:386:16: error: ‘IPluginFactory’ in namespace ‘nvuffparser’ does not name a type
        nvuffparser::IPluginFactory *pluginFactory;
                     ^~~~~~~~~~~~~~
    /opt/nvidia/deepstream/deepstream-5.1/sources/includes/nvdsinfer_custom_impl.h:387:16: error: ‘IPluginFactoryExt’ in namespace ‘nvuffparser’ does not name a type
        nvuffparser::IPluginFactoryExt *pluginFactoryExt;
                     ^~~~~~~~~~~~~~~~~
    Makefile:47: recipe for target 'nvdsparsebbox_yolox.o' failed
    make: *** [nvdsparsebbox_yolox.o] Error 1

    opened by ronnnhui 4
  • Seems like nvdsinfer_custom_impl.h is missing

    While building, I am getting:

    nvdsparsebbox_yolox.cpp:25:10: fatal error: nvdsinfer_custom_impl.h: No such file or directory
       25 | #include "nvdsinfer_custom_impl.h"
          |          ^~~~~~~~~~~~~~~~~~~~~~~~~
    compilation terminated.
    make: *** [Makefile:43: nvdsparsebbox_yolox.o] Error 1
    
    opened by shubhambaid 3
  • nvdsparsebbox_yolox.cpp:26:10: fatal error: nvdsinfer_custom_impl.h: No such file or directory

    make

    g++ -c -o nvdsparsebbox_yolox.o -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations -I../../includes -I/usr/local/cuda-10.2/include -I/home/nvidia/data/zhangbo/library/opencv-4.5.1/include/opencv4 -I/usr/lib/gcc/x86_64-linux-gnu/7/include/ -fopenmp nvdsparsebbox_yolox.cpp
    nvdsparsebbox_yolox.cpp:25:10: fatal error: opencv2/opencv.hpp: No such file or directory
       25 | #include <opencv2/opencv.hpp>
          |          ^~~~~~~~~~~~~~~~~~~~
    compilation terminated.
    make: *** [Makefile:43: nvdsparsebbox_yolox.o] Error 1

    Then I ran:

    sudo ln -s /usr/include/opencv4/opencv2 /usr/include/

    but then:

    g++ -c -o nvdsparsebbox_yolox.o -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations -I../../../includes -I/usr/local/cuda-10.2/include -I/home/nvidia/data/zhangbo/library/opencv-4.5.1/include/opencv4 -I/usr/lib/gcc/x86_64-linux-gnu/7/include/ -fopenmp nvdsparsebbox_yolox.cpp
    nvdsparsebbox_yolox.cpp:26:10: fatal error: nvdsinfer_custom_impl.h: No such file or directory
       26 | #include "nvdsinfer_custom_impl.h"
          |          ^~~~~~~~~~~~~~~~~~~~~~~~~
    compilation terminated.
    make: *** [Makefile:43: nvdsparsebbox_yolox.o] Error 1

    opened by Abandon-ht 3
  • make error

    g++ -c -o nvdsparsebbox_yolox.o -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations -I../../includes -I/usr/local/cuda-11.1/include -I/usr/local/include/opencv4/opencv2 -I/usr/lib/gcc/x86_64-linux-gnu/9/include/ -fopenmp nvdsparsebbox_yolox.cpp
    In file included from nvdsparsebbox_yolox.cpp:25:
    ../../includes/nvdsinfer_custom_impl.h:128:10: fatal error: NvCaffeParser.h: No such file or directory
      128 | #include "NvCaffeParser.h"
          |          ^~~~~~~~~~~~~~~~~
    compilation terminated.
    make: *** [Makefile:44: nvdsparsebbox_yolox.o] Error 1

    opened by geekplusaa 1
  • Tracker Error

    Thank you for your DeepStream parsing implementation.

    It works perfectly without a tracker (IOU, KLT, NvDCF).

    But when I try to use the tracker, I get an error.

    like this:

    (screenshot: tracker_error)

    I'd like to use a tracker. What should I do?


    I used the NVIDIA Docker container (deepstream:5.1-21.02-devel).

    GPU: 1080 Ti, 2080 Ti

    opened by KoPub 0
  • run YOLOX detector+DeepSORT w/ deepstream

    Hello,

    I want to run YOLOX tracking with DeepStream. I seem to have succeeded in running the sample detector + DeepSORT, but I am stuck configuring the app to use YOLOX instead of the Primary_Detector.

    Any ideas how I can run it successfully?

    My board is an NVIDIA Xavier AGX dev kit.

    My main concern is the error I receive when trying to build the yolox project: no header opencv2.h found.

    opened by tulbureandreit 7
  • deepstream-app -c deepstream_app_config.txt

    ** WARN: <parse_tracker:1198>: Unknown key 'display-tracking-id' for group [tracker]
    MobaXterm X11 proxy: Authorisation not recognised
    No EGL Display
    nvbufsurftransform: Could not get EGL display connection
    [email protected]:/data/download/HanYong/YOLOX/YOLOX-deepstream# unset DISPLAY
    [email protected]:/data/download/HanYong/YOLOX/YOLOX-deepstream# deepstream-app -c deepstream_app_config.txt
    ** WARN: <parse_tracker:1198>: Unknown key 'display-tracking-id' for group [tracker]
    Opening in BLOCKING MODE

    Runtime commands:
        h: Print this help
        q: Quit
        p: Pause
        r: Resume

    ** INFO: <bus_callback:181>: Pipeline ready

    Opening in BLOCKING MODE
    NvMMLiteOpen : Block : BlockType = 279
    NVMEDIA: Reading vendor.tegra.display-size : status: 6
    NvMMLiteBlockCreate : Block : BlockType = 279
    ** INFO: <bus_callback:167>: Pipeline running

    NvMMLiteOpen : Block : BlockType = 4
    ===== NVMEDIA: NVENC =====
    NvMMLiteBlockCreate : Block : BlockType = 4
    H264: Profile = 66, Level = 0
    avg bitrate=0 for CBR, force to CQP mode

    **PERF: FPS 0 (Avg)
    **PERF: 308.69 (307.27)
    **PERF: 356.50 (335.74)
    ** INFO: <bus_callback:204>: Received EOS. Exiting ...

    Quitting
    App run successful

    Here I get the output.mp4 video after detection, but no detection boxes appear in it. What is the reason?

    opened by SunYiLing123 1
  • Running deepstream-app -c deepstream_app_config.txt reports an error

    0:00:04.102224452  9223  0x12d86390  INFO  nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 1]: Use deserialized engine model: /data/download/HanYong/YOLOX/YOLOX-deepstream/model_trt.engine
    0:00:04.109017626  9223  0x12d86390  ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseLabelsFile() <nvdsinfer_context_impl.cpp:441> [UID = 1]: Could not open labels file:/data/download/HanYong/YOLOX/YOLOX-deepstream/labels.txt
    ERROR: parse label file:/data/download/HanYong/YOLOX/YOLOX-deepstream/labels.txt failed, nvinfer error:NVDSINFER_CONFIG_FAILED
    ERROR: init post processing resource failed, nvinfer error:NVDSINFER_CONFIG_FAILED
    ERROR: Infer Context failed to initialize post-processing resource, nvinfer error:NVDSINFER_CONFIG_FAILED
    ERROR: Infer Context prepare postprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED
    0:00:04.163979046  9223  0x12d86390  WARN  nvinfer gstnvinfer.cpp:781:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
    0:00:04.164053670  9223  0x12d86390  WARN  nvinfer gstnvinfer.cpp:781:gst_nvinfer_start:<primary_gie> error: Config file path: /data/download/HanYong/YOLOX/YOLOX-deepstream/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
    ** ERROR: main:651: Failed to set pipeline to PAUSED
    Quitting
    ERROR from primary_gie: Failed to create NvDsInferContext instance
    Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(781): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie: Config file path: /data/download/HanYong/YOLOX/YOLOX-deepstream/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
    App run failed

    Hello, what is the cause of this?

    opened by SunYiLing123 3
  • deploy in xavier

    Thanks for your excellent work. Can the code be deployed successfully on the Jetson platform? I converted the .pth model to ONNX with export_onnx.py from the original YOLOX repo and converted that to an engine with trtexec on the Jetson, but I get no result when detecting dog.jpg because there is no bbox output.

    No bbox conf bigger than 0.3

    So how can I get the correct engine file?

    Thanks.

    opened by AlanNewImage 31
Owner
Atypical AI worker
A project demonstration on how to use the GigE camera to do the DeepStream Yolo3 object detection

A project demonstration on how to use the GigE camera to do the DeepStream Yolo3 object detection, how to set up the GigE camera, and deployment for the DeepStream apps.

NVIDIA AI IOT 7 May 27, 2022
YOLOv4 accelerated with TensorRT and multi-stream input using DeepStream

Deepstream 5.1 YOLOv4 App This Deepstream application showcases YOLOv4 running at high FPS throughput! P.S - Click the gif to watch the entire video!

Akash James 31 Apr 21, 2022
Lite.AI 🚀🚀🌟 is a user-friendly C++ lib for awesome🔥🔥🔥 AI models based on onnxruntime, ncnn or mnn. YOLOX, YoloV5, YoloV4, DeepLabV3, ArcFace, CosFace, Colorization, SSD

Lite.AI 🚀🚀🌟 is a user-friendly C++ lib for awesome 🔥🔥🔥 AI models based on onnxruntime, ncnn or mnn. YOLOX🔥, YoloV5🔥, YoloV4🔥, DeepLabV3🔥, ArcFace🔥, CosFace🔥, Colorization🔥, SSD🔥, etc.

Def++ 1.7k Jun 27, 2022
Lite.AI 🚀🚀🌟 is a user friendly C++ lib of 60+ awesome AI models. YOLOX🔥, YoloV5🔥, YoloV4🔥, DeepLabV3🔥, ArcFace🔥, CosFace🔥, RetinaFace🔥, SSD🔥, etc.

Lite.AI 🚀🚀🌟 Introduction. Lite.AI 🚀🚀🌟 is a simple and user-friendly C++ library of awesome 🔥🔥🔥 AI models. It's a collection of personal

Def++ 1.7k Jul 2, 2022
Android yolox hand detect by ncnn

The yolox hand detection This is a sample ncnn android project, it depends on ncnn library and opencv https://github.com/Tencent/ncnn https://github.c

FeiGeChuanShu 11 Apr 1, 2022
Lite.AI.ToolKit 🚀🚀🌟: A lite C++ toolkit of awesome AI models such as RobustVideoMatting🔥, YOLOX🔥, YOLOP🔥 etc.

Lite.AI.ToolKit 🚀🚀🌟: A lite C++ toolkit of awesome AI models which contains 70+ models now. It's a collection of personal interests. Such as RVM, YOLOX, YOLOP, YOLOR, YoloV5, DeepLabV3, ArcFace, etc.

DefTruth 1.7k Jun 27, 2022
YOLOX + ROS2 object detection package

YOLOX-ROS YOLOX+ROS2 Foxy Supported List Base ROS1 C++ ROS1 Python ROS2 C++ ROS2 Python CPU ✅ CUDA ✅ CUDA (FP16) ✅ TensorRT (CUDA) ✅ OpenVINO ✅ MegEng

Ar-Ray 127 Jun 24, 2022
YoloX for a Jetson Nano 4 using ncnn.

YoloX Jetson Nano YoloX with the ncnn framework. Paper: https://arxiv.org/pdf/2107.08430.pdf Special made for a Jetson Nano, see Q-engineering deep le

Q-engineering 7 May 25, 2022
YoloX for a bare Raspberry Pi 4 using ncnn.

YoloX Raspberry Pi 4 YoloX with the ncnn framework. Paper: https://arxiv.org/pdf/2107.08430.pdf Special made for a bare Raspberry Pi 4, see Q-engineer

Q-engineering 4 Mar 31, 2022
Deploy SCRFD, an efficient high accuracy face detection approach, in your web browser with ncnn and webassembly

ncnn-webassembly-scrfd open https://nihui.github.io/ncnn-webassembly-scrfd and enjoy build and deploy Install emscripten

null 37 Jun 9, 2022
Deploy OcrLite in your web browser with ncnn and webassembly

ncnn-webassembly-ocrlite Requirements ncnn webassembly opencv-mobile webassembly 3.4.13 Build Install emscripten git clone https://github.com/emscript

SgDylan 22 Sep 5, 2021
shufflev2-yolov5: lighter, faster and easier to deploy

shufflev2-yolov5: lighter, faster and easier to deploy. Evolved from yolov5 and the size of model is only 1.7M (int8) and 3.3M (fp16). It can reach 10+ FPS on the Raspberry Pi 4B when the input size is 320×320~

pogg 1.1k Jun 25, 2022
VNOpenAI 22 Jun 15, 2022
Python Inference Script is a Python package that enables developers to author machine learning workflows in Python and deploy without Python.

Python Inference Script(PyIS) Python Inference Script is a Python package that enables developers to author machine learning workflows in Python and d

Microsoft 10 Feb 23, 2022
Movenet cpp deploy; model transformed from tensorflow

MoveNet-PaddleLite Adapted from PaddleDetection; MoveNet C++ deploy based on PaddleLite; MoveNet model transformed from TensorFlow; Introduction: MoveNet is an excellent open-source

null 9 May 28, 2022
Deploy ultralytics Yolov5 pretained model with C++ language

Introdution Deploy ultralytics Yolov5 pretained model with C++ language ; Env GCC 7.5 Opencv 4.5.4 Get ONNX Model go to yolov5 release page download y

Xee 32 Jun 11, 2022
TengineFactory - An algorithm deployment framework that lets you complete algorithm development at low cost, e.g. face detection, face landmarks.

Introduction: With the spread of artificial intelligence and the growing standardization of deep learning algorithms, a framework that enables low-code, rapid deployment with customizable solutions is the trend. Shortening the algorithm deployment cycle and lowering the barrier to deployment is an inevitable direction. TengineFactory is a fast, low-code algorithm deployment framework independently developed by OPEN AI LAB. We are committed to building a completely

OAID 88 May 16, 2022
An Efficient Implementation of Analytic Mesh Algorithm for 3D Iso-surface Extraction from Neural Networks

AnalyticMesh Analytic Marching is an exact meshing solution from neural networks. Compared to standard methods, it completely avoids geometric and top

Jiabao Lei 36 May 21, 2022