TensorRT int8 quantized deployment of the yolov5s 4.0 model, measured at 3.3 ms per frame!

Overview

TensorRT model inference

git clone https://github.com/Wulingtian/yolov5_tensorrt_int8.git (a star would be appreciated)

cd yolov5_tensorrt_int8

vim CMakeLists.txt

Set the USER_DIR parameter to your own home directory.

vim yolov5s_infer.cc and modify the following parameters:

output_name1 output_name2 output_name3: the yolov5 model has three outputs

You can check the model's output names with netron:

pip install netron to install netron

vim netron_yolov5s.py and paste in the following content:

import netron

netron.start('<path to the simplified onnx model>', port=3344)

Run python netron_yolov5s.py to view the model output names.
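As an alternative to netron, the binding names can also be read directly from the serialized .trt engine through the TensorRT runtime API. The standalone sketch below is not code from this repository: the Logger class, the file handling and the program name are assumptions, and it relies on the TensorRT 7/8 binding API (getNbBindings / getBindingName).

// list_bindings.cc: hypothetical helper, not part of this repository.
// Deserializes a .trt engine and prints every binding name, which is another
// way to check the values for output_name1/2/3.
#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <vector>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
};

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: list_bindings <engine.trt>" << std::endl; return 1; }

    // Read the serialized engine into memory.
    std::ifstream file(argv[1], std::ios::binary);
    if (!file) { std::cerr << "cannot open " << argv[1] << std::endl; return 1; }
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    Logger logger;
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size());
    if (!engine) { std::cerr << "failed to deserialize engine" << std::endl; return 1; }

    // One input binding plus the three yolov5 output bindings are expected.
    for (int i = 0; i < engine->getNbBindings(); ++i) {
        std::cout << (engine->bindingIsInput(i) ? "input : " : "output: ")
                  << engine->getBindingName(i) << std::endl;
    }

    engine->destroy();
    runtime->destroy();
    return 0;
}

The names printed for the output bindings are the values to fill into output_name1, output_name2 and output_name3.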

trt_model_path: the quantized TensorRT inference engine (the .trt file under the models_save directory)

test_img: path to a test image

INPUT_W INPUT_H: width and height of the input image

NUM_CLASS: number of classes the model was trained on

NMS_THRESH: NMS threshold

CONF_THRESH: confidence threshold

That completes the parameter configuration.
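Once everything is filled in, the user-editable block near the top of yolov5s_infer.cc should look roughly like the sketch below. This is only an illustration: the 640x640 input size, 80 classes, the 0.45/0.25 thresholds and the file paths are placeholder values for a typical COCO-trained yolov5s, while the output names "output", "417" and "437" are the ones visible in the build log quoted in the comments further down.

// Sketch of the user-editable parameters in yolov5s_infer.cc (example values only).
// Declaring the strings as const char* avoids the -Wwrite-strings warnings seen in
// the build log quoted below; the repository's own code declares them as char*.
const char* output_name1 = "output";    // first yolov5 output, as shown by netron
const char* output_name2 = "417";       // second output
const char* output_name3 = "437";       // third output
const char* trt_model_path = "../models_save/yolov5s_int8.trt";  // quantized engine (placeholder path)
const char* test_img = "../test.jpg";   // test image path (placeholder)

static const int INPUT_W = 640;         // input image width
static const int INPUT_H = 640;         // input image height
static const int NUM_CLASS = 80;        // number of classes the model was trained on
static const float NMS_THRESH = 0.45f;  // NMS IoU threshold
static const float CONF_THRESH = 0.25f; // confidence threshold

With the parameters in place, build and run: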

mkdir build

cd build

cmake ..

make

Run ./YoloV5sEngine; it prints the average inference time and saves the prediction image to the current directory. With that, deployment is complete!
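For context on how an average figure such as the 3.3 ms in the title is usually obtained: the engine is run many times after a short warm-up and the total wall-clock time is divided by the number of runs. The sketch below is a generic std::chrono timing helper, not the timing code used by YoloV5sEngine; dummyInference is a stand-in for the real TensorRT execution call.

// Generic latency measurement sketch (not from this repository).
#include <chrono>
#include <functional>
#include <iostream>

// Average latency in milliseconds over `iters` runs, after `warmup` discarded runs.
double averageLatencyMs(const std::function<void()>& runInference,
                        int warmup = 10, int iters = 100) {
    for (int i = 0; i < warmup; ++i) runInference();   // discard warm-up runs

    auto start = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < iters; ++i) runInference();
    auto end = std::chrono::high_resolution_clock::now();

    std::chrono::duration<double, std::milli> total = end - start;
    return total.count() / iters;
}

int main() {
    // Stand-in workload; in the real program this would be one TensorRT
    // inference (enqueue plus stream synchronize) on a preprocessed frame.
    auto dummyInference = [] {
        volatile double x = 0;
        for (int i = 0; i < 100000; ++i) x += i;
    };
    std::cout << "average inference time: "
              << averageLatencyMs(dummyInference) << " ms" << std::endl;
    return 0;
}

Whether preprocessing and NMS are included in such a number depends on where the timer is placed around the per-frame work.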

You might also like...
C++ library based on tensorrt integration

YoloV5 inference in 3 lines of code: a TensorRT C++ library that supports the latest tensorRT8.0 with up-to-date parser operator support; supports static explicit batch size and dynamic implicit batch size (which the official release does not); supports custom plugins and simplifies the plugin implementation process; supports fp32, fp16 and int8 builds; refactored code structure; prints

A multi object tracking Library Based on tensorrt

YoloV5_JDE_TensorRT_for_Track Introduction A multi object detect and track Library Based on tensorrt. A TensorRT-based multi-object detection and tracking fusion library that supports simultaneous multi-target pedestrian detection and tracking, and of course can also

(ROS) YOLO detection with TensorRT, utilizing tkDNN

tkDNN-ROS YOLO object detection with ROS and TensorRT using tkDNN Currently, only YOLO is supported. Comparison of performance and other YOLO implemen

An Out-of-the-Box TensorRT-based Framework for High Performance Inference with C++/Python Support

Real-time object detection with YOLOv5 and TensorRT

YOLOv5-TensorRT The goal of this library is to provide an accessible and robust method for performing efficient, real-time inference with YOLOv5 using

Hardware-accelerated DNN model inference ROS2 packages using NVIDIA Triton/TensorRT for both Jetson and x86_64 with CUDA-capable GPU.

Isaac ROS DNN Inference Overview This repository provides two NVIDIA GPU-accelerated ROS2 nodes that perform deep learning inference using custom mode

An R3D network implemented with TensorRT

r3d_TensorRT An r3d network implemented with TensorRT8.x. The weights of the model come from PyTorch. A description of the models in PyTorch can be fo

The MOT implement by Solov2+DeepSORT with C++ (Libtorch, TensorRT).

Tracking-Solov2-Deepsort This project implements Multi-Object Tracking (MOT) based on SOLOv2 and DeepSORT with C++. The instance segmentation model S

In this repo, we deployed SOLOv2 to TensorRT with C++.

Solov2-TensorRT-CPP In this repo, we deployed SOLOv2 to TensorRT with C++. See the video: https://www.bilibili.com/video/BV1rQ4y1m7mx Requirements Ubun

Comments
  • make compile error

    (tensorrt) [email protected]:~/yolov5/yolov5_tensorrt_int8/build$ make
    Scanning dependencies of target YoloV5sEngine
    [ 50%] Building CXX object CMakeFiles/YoloV5sEngine.dir/yolov5s_infer.cc.o
    /home/bowen/yolov5/yolov5_tensorrt_int8/yolov5s_infer.cc:18:22: warning: ISO C++ forbids converting a string constant to ‘char*’ [-Wwrite-strings]
    18 | char* output_name1 = "output";
    /home/bowen/yolov5/yolov5_tensorrt_int8/yolov5s_infer.cc:19:22: warning: ISO C++ forbids converting a string constant to ‘char*’ [-Wwrite-strings]
    19 | char* output_name2 = "417";
    /home/bowen/yolov5/yolov5_tensorrt_int8/yolov5s_infer.cc:20:22: warning: ISO C++ forbids converting a string constant to ‘char*’ [-Wwrite-strings]
    20 | char* output_name3 = "437";
    /home/bowen/yolov5/yolov5_tensorrt_int8/yolov5s_infer.cc:21:24: warning: ISO C++ forbids converting a string constant to ‘char*’ [-Wwrite-strings]
    21 | char* trt_model_path = "/home/bowen/yolov5/yolov5_tensorrt_int8/models/yolov5s-4.0-int8-relu.trt";
    [100%] Linking CXX executable YoloV5sEngine
    /usr/bin/ld: CMakeFiles/YoloV5sEngine.dir/yolov5s_infer.cc.o: in function `renderBoundingBox(cv::Mat, std::vector<Bbox, std::allocator<Bbox> > const&)':
    yolov5s_infer.cc:(.text+0x7a2): undefined reference to `cv::Mat::Mat(cv::Mat&&)'
    /usr/bin/ld: CMakeFiles/YoloV5sEngine.dir/yolov5s_infer.cc.o: in function `preprocess_img(cv::Mat&)':
    yolov5s_infer.cc:(.text+0x17dd): undefined reference to `cv::Mat::Mat(int, int, int)'
    /usr/bin/ld: yolov5s_infer.cc:(.text+0x184f): undefined reference to `cv::Mat::Mat(int, int, int, cv::Scalar_<double> const&)'
    /usr/bin/ld: CMakeFiles/YoloV5sEngine.dir/yolov5s_infer.cc.o: in function `main':
    yolov5s_infer.cc:(.text+0x1eaa): undefined reference to `cv::Mat::Mat()'
    collect2: error: ld returned 1 exit status
    make[2]: *** [CMakeFiles/YoloV5sEngine.dir/build.make:84: YoloV5sEngine] Error 1
    make[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/YoloV5sEngine.dir/all] Error 2
    make: *** [Makefile:84: all] Error 2
    (tensorrt) [email protected]:~/yolov5/yolov5_tensorrt_int8/build$

    opened by neverstoplearn 2
  • make build error, using the provided opencv-3.4.2

    [ 50%] Building CXX object CMakeFiles/YoloV5sEngine.dir/yolov5s_infer.cc.o
    In file included from /usr/local/opencv-3.4.2/include/opencv2/calib3d.hpp:48:0,
                     from /usr/local/opencv-3.4.2/include/opencv2/opencv.hpp:56,
                     from /home/wangyuanwen/tensorrt_test/yolov5_tensorrt_int8/yolov5s_infer.cc:6:
    /usr/local/opencv-3.4.2/include/opencv2/features2d.hpp:50:10: fatal error: opencv2/flann/miniflann.hpp: No such file or directory
    #include "opencv2/flann/miniflann.hpp"
    compilation terminated.
    CMakeFiles/YoloV5sEngine.dir/build.make:62: recipe for target 'CMakeFiles/YoloV5sEngine.dir/yolov5s_infer.cc.o' failed
    make[2]: *** [CMakeFiles/YoloV5sEngine.dir/yolov5s_infer.cc.o] Error 1
    CMakeFiles/Makefile2:72: recipe for target 'CMakeFiles/YoloV5sEngine.dir/all' failed
    make[1]: *** [CMakeFiles/YoloV5sEngine.dir/all] Error 2
    Makefile:83: recipe for target 'all' failed
    make: *** [all] Error 2

    opened by Aruen24 5
  • How to generate LibMyTtrEngine-trt721.so

    The file is too large, so I put it on Baidu Netdisk (link: https://pan.baidu.com/s/1sF8vZ1JyBvk5Z_IUBP3CgA password: qlgk)

    Steps:
    cd Generate_LibMyTtrEngine-trt721/src
    vim CMakeLists.txt and set the CUDA directory (e.g. /usr/local/cuda-11.0/include)
    mkdir build
    cd build
    cmake ..
    make
    The libMyTtrEngine-trt721.so shared library is generated under the Generate_LibMyTtrEngine-trt721/bin directory.

    opened by Wulingtian 0
yolov5s nnie

yolov5-nnie yolov5s nnie YOLOv5 pytorch -> onnx -> caffe -> .wk 1. The model is yolov5s, with the focus layer replaced by a conv layer with stride 2; the reshape and permute layers were also adjusted. For the detailed modification process, see this author's article: https:

wllkk 26 Sep 16, 2022
Implement yolov5 with Tensorrt C++ api, and integrate batchedNMSPlugin. A Python wrapper is also provided.

yolov5 Original codes from tensorrtx. I modified the yololayer and integrated batchedNMSPlugin. A yolov5s.wts is provided for fast demo. How to genera

weiwei zhou 46 Dec 6, 2022
TensorRT implementation of RepVGG models from RepVGG: Making VGG-style ConvNets Great Again

RepVGG RepVGG models from "RepVGG: Making VGG-style ConvNets Great Again" https://arxiv.org/pdf/2101.03697.pdf For the Pytorch implementation, you can

weiwei zhou 69 Sep 10, 2022
Deep Learning API and Server in C++11 support for Caffe, Caffe2, PyTorch,TensorRT, Dlib, NCNN, Tensorflow, XGBoost and TSNE

Open Source Deep Learning Server & API DeepDetect (https://www.deepdetect.com/) is a machine learning API and server written in C++11. It makes state

JoliBrain 2.4k Dec 30, 2022
Simple samples for TensorRT programming

Introduction This is a collection of simplified TensorRT samples to get you started with TensorRT programming. Most of the samples are written in C++,

NVIDIA Corporation 675 Jan 6, 2023
Support Yolov4/Yolov3/Centernet/Classify/Unet. use darknet/libtorch/pytorch to onnx to tensorrt

ONNX-TensorRT Yolov4/Yolov3/CenterNet/Classify/Unet Implementation Yolov4/Yolov3 centernet INTRODUCTION you have the trained model file from the darkn

null 172 Dec 29, 2022
Using TensorRT to accelerate yolov5 inference on vs2015

1. Environment: CUDA10.2, TensorRT7.2, OpenCV3.4 (included in the project, no installation needed), vs2015. Download the related project: https://github.com/wang-xinyu/tensorrtx.git 2. Generate the yolov5s.wts file. Before generating yolov5s.wts, you first need to download the model

null 16 Apr 19, 2022
Inference framework for MoE layers based on TensorRT with Python binding

InfMoE Inference framework for MoE-based models, based on a TensorRT custom plugin named MoELayerPlugin (including Python binding) that can run infere

Shengqi Chen 34 Nov 25, 2022
TensorRT for Scaled YOLOv4(yolov4-csp.cfg)

TensoRT Scaled YOLOv4 TensorRT for Scaled YOLOv4 (yolov4-csp.cfg). Many people have already written TensorRT versions of YOLO, so I wrote one too. Test environment: ubuntu 18.04, pytorch 1.7.1, jetpack 4.4, CUDA 11.0

Bolano 10 Jul 30, 2021
YOLOv4 accelerated wtih TensorRT and multi-stream input using Deepstream

Deepstream 5.1 YOLOv4 App This Deepstream application showcases YOLOv4 running at high FPS throughput! P.S - Click the gif to watch the entire video!

Akash James 35 Nov 10, 2022