TensorRT implementation of RepVGG models from RepVGG: Making VGG-style ConvNets Great Again

Overview

RepVGG

RepVGG models from "RepVGG: Making VGG-style ConvNets Great Again" https://arxiv.org/pdf/2101.03697.pdf

For the PyTorch implementation, you can refer to DingXiaoH/RepVGG

How to run

  1. Generate the .wts file.
git clone https://github.com/DingXiaoH/RepVGG.git
cd RepVGG

You may convert a trained model into the inference-time structure with

python convert.py [weights file of the training-time model to load] [path to save] -a [model name]

For example,

python convert.py RepVGG-B2-train.pth RepVGG-B2-deploy.pth -a RepVGG-B2
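
What convert.py performs is the structural re-parameterization from the paper: each block's BatchNorm is folded into its conv, and the 3x3, 1x1 (zero-padded to 3x3) and identity branches are summed into a single 3x3 kernel plus bias. Below is a minimal NumPy sketch of that math; the function and argument names are illustrative only, the real logic lives in the official repvgg.py.

import numpy as np

def fuse_conv_bn(kernel, gamma, beta, mean, var, eps=1e-5):
    # Fold BatchNorm into the preceding conv: scale the kernel by gamma/std
    # and turn the BN shift into a conv bias.
    std = np.sqrt(var + eps)
    fused_kernel = kernel * (gamma / std).reshape(-1, 1, 1, 1)
    fused_bias = beta - gamma * mean / std
    return fused_kernel, fused_bias

def merge_branches(k3, b3, k1, b1, kid, bid):
    # Sum the 3x3 branch, the 1x1 branch (padded so its weight sits at the
    # kernel center) and the identity branch (expressed as an identity 3x3 kernel).
    k1_padded = np.pad(k1, ((0, 0), (0, 0), (1, 1), (1, 1)))
    return k3 + k1_padded + kid, b3 + b1 + bid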

Then copy TensorRT-RepVGG/gen_wts.py to RepVGG and generate the .wts file, for example

python gen_wts.py -w RepVGG-B2-deploy.pth -s RepVGG-B2.wts
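
The .wts file is the plain-text weight format used by tensorrtx-style projects: a header line with the number of tensors, then one line per tensor with its name, element count, and float32 values encoded as hex. A rough sketch of such an export is shown below; this is an assumption about what gen_wts.py does, not the script itself, and the paths are only examples.

import struct
import torch

# Load the converted (deploy) checkpoint; gen_wts.py may load it differently.
state_dict = torch.load("RepVGG-B2-deploy.pth", map_location="cpu")
with open("RepVGG-B2.wts", "w") as f:
    f.write(f"{len(state_dict)}\n")
    for name, tensor in state_dict.items():
        values = tensor.reshape(-1).cpu().numpy()
        f.write(f"{name} {len(values)}")
        for v in values:
            # Each float32 is written as big-endian hex, one token per value.
            f.write(" " + struct.pack(">f", float(v)).hex())
        f.write("\n")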
  2. Build and run.
cd TensorRT-RepVGG
mkdir build
cd build
cmake ..
make

sudo ./repvgg -s RepVGG-B2  // serialize model to plan file i.e. 'RepVGG-B2.engine'
sudo ./repvgg -d RepVGG-B2  // deserialize plan file and run inference
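
The -s step builds the network from RepVGG-B2.wts and serializes it into a TensorRT plan file, and the -d step deserializes that plan and runs inference through the compiled binary. If you want a quick sanity check that the plan file is valid, the TensorRT Python API can deserialize the same engine; this is not part of the repo, and it assumes Python bindings matching your TensorRT version (7.x/8.x era) are installed.

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("RepVGG-B2.engine", "rb") as f, trt.Runtime(logger) as runtime:
    # A successful deserialization confirms the plan is usable on this
    # TensorRT version and GPU.
    engine = runtime.deserialize_cuda_engine(f.read())

# Binding inspection uses the TensorRT 7.x/8.x API.
print([engine.get_binding_name(i) for i in range(engine.num_bindings)])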