Sample app code for LPR deployment on DeepStream

Overview

Sample for Car License Plate Recognition


Description

This sample shows how to use cascaded models for detection and classification with DeepStream SDK version 5.0.1 or later. All models in this sample are TLT 3.0 models.

PGIE (car detection) -> SGIE (license plate detection) -> SGIE (license plate recognition)
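
The cascade is wired together through gst-nvinfer unique IDs: each secondary inference stage operates on the objects produced by the stage before it. A minimal sketch of the relevant config keys (the key names are standard gst-nvinfer properties; the values here are illustrative — the actual config files ship with the sample):

```ini
# PGIE config (car detection)
[property]
gie-unique-id=1
process-mode=1          ; primary: run on full frames

# SGIE config (license plate detection, on cars found by the PGIE)
[property]
gie-unique-id=2
process-mode=2          ; secondary: run on detected objects
operate-on-gie-id=1

# SGIE config (license plate recognition, on plates found by the LPD stage)
[property]
gie-unique-id=3
process-mode=2
operate-on-gie-id=2
```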

LPR/LPD application

This pipeline is based on the three TLT models below.

For more details on the TLT 3.0 LPD and LPR models and on TLT training, please refer to the TLT documentation.

Performance

The table below shows the end-to-end performance of processing 1080p videos with this sample application.

Device         Number of streams   Batch Size   Total FPS
Jetson Nano    1                   1            9.2
Jetson NX      3                   3            80.31
Jetson Xavier  5                   5            146.43
T4             14                  14           447.15
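
Since the number of streams and the batch size scale together, per-stream throughput is simply Total FPS divided by the number of streams. A quick sketch (plain Python, numbers copied from the table above):

```python
# Per-stream throughput implied by the table: total FPS / number of streams.
perf = {
    "Jetson Nano":   (1, 9.2),
    "Jetson NX":     (3, 80.31),
    "Jetson Xavier": (5, 146.43),
    "T4":            (14, 447.15),
}
for device, (streams, total_fps) in perf.items():
    print(f"{device}: {total_fps / streams:.2f} FPS per stream")
```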

Prerequisites

  • DeepStream SDK 5.0.1

    Make sure the deepstream-test1 sample runs successfully to verify your DeepStream installation.

  • tlt-converter

    Download the x86 or Jetson tlt-converter compatible with your platform from the following links.

Platform    Compute                                 Link
x86 + GPU   CUDA 10.2 / cuDNN 8.0 / TensorRT 7.1    link
x86 + GPU   CUDA 10.2 / cuDNN 8.0 / TensorRT 7.2    link
x86 + GPU   CUDA 11.0 / cuDNN 8.0 / TensorRT 7.1    link
x86 + GPU   CUDA 11.0 / cuDNN 8.0 / TensorRT 7.2    link
Jetson      JetPack 4.4                             link
Jetson      JetPack 4.5                             link

Download

  1. Download the project with SSH or HTTPS
    // SSH
    git clone git@github.com:NVIDIA-AI-IOT/deepstream_lpr_app.git
    // or HTTPS
    git clone https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app.git
  2. Prepare the models and TensorRT engine
    cd deepstream_lpr_app/

For US car plate recognition

    ./download_us.sh
    // DS5.0.1 gst-nvinfer cannot generate TRT engine for LPR model, so generate it with tlt-converter
    ./tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 \
           models/LP/LPR/us_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/lpr_us_onnx_b16.engine
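
The `-p` argument gives the input tensor name plus the minimum, optimum, and maximum shapes for the engine's dynamic batch dimension; the maximum batch of 16 is why the engine file is named with a `b16` suffix. A small illustrative parser (plain Python, not part of the toolchain) showing how the spec decomposes:

```python
# Decompose a tlt-converter "-p" optimization-profile spec:
# <tensor-name>,<min-shape>,<opt-shape>,<max-shape> (shapes are NxCxHxW).
spec = "image_input,1x3x48x96,4x3x48x96,16x3x48x96"
name, *shape_strs = spec.split(",")
min_shape, opt_shape, max_shape = (
    tuple(int(d) for d in s.split("x")) for s in shape_strs
)
print(name, min_shape, opt_shape, max_shape)
# image_input (1, 3, 48, 96) (4, 3, 48, 96) (16, 3, 48, 96)
```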

For Chinese car plate recognition

    ./download_ch.sh
    // DS5.0.1 gst-nvinfer cannot generate TRT engine for LPR model, so generate it with tlt-converter
    ./tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 \
           models/LP/LPR/ch_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/lpr_ch_onnx_b16.engine

Build and Run

    make
    cd deepstream-lpr-app

For US car plate recognition

    cp dict_us.txt dict.txt

For Chinese car plate recognition

    cp dict_ch.txt dict.txt

Run the application

    ./deepstream-lpr-app <1:US car plate model|2: Chinese car plate model> \
         <1: output as h264 file| 2:fakesink 3:display output> <0:ROI disable|1:ROI enable> \
         <input mp4 file name> ... <input mp4 file name> <output file name>

A sample of US car plate recognition:

    ./deepstream-lpr-app 1 2 0 us_car_test2.mp4 us_car_test2.mp4 output.264

A sample of Chinese car plate recognition:

    ./deepstream-lpr-app 2 2 0 ch_car_test.mp4 ch_car_test.mp4 output.264

Notice

  1. This sample application only supports mp4 files containing H264 video as input.
  2. For Chinese plate recognition, make sure the OS supports the Chinese language.
  3. The second argument of deepstream-lpr-app should be 2 (fakesink) for performance tests.
  4. The trafficcamnet and LPD models are INT8 models, while the LPR model is an FP16 model.
Comments
  • Create lpr engine file

    Hi, I am working on a license plate recognition problem. When I run the DeepStream app I am facing the following issue:

    Starting pipeline

    0:00:00.209260524 226 0x1a87b20 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 3]: Trying to create engine from model files
    WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
    WARNING: [TRT]: ShapedWeights.cpp:173: Weights td_dense/kernel:0 has been transposed with permutation of (1, 0)! If you plan on overwriting the weights with the Refitter API, the new weights must be pre-transposed.
    WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
    WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
    python3: /dvs/p4/build/sw/rel/gpgpu/MachineLearning/myelin_trt8/src/compiler/optimizer/cublas_impl.cpp:477: void add_heuristic_results_to_tactics(std::vector<cublasLtMatmulHeuristicResult_t>&, std::vector<myelin::ir::tactic_attribute_t>&, myelin::ir::tactic_attribute_t&, bool): Assertion `false && "Invalid size written"' failed.
    Aborted (core dumped)

    I am using DS 6.0. Can anyone please help me solve this issue?

    opened by vinodbukya6 7
  • Fix wrong confidence values

    Confidence is currently transferred from the TensorRT network to DeepStream in a wrong way, resulting in random values.

    The LPR network has two output layers:

    1   OUTPUT kINT32 tf_op_layer_ArgMax 24              
    2   OUTPUT kFLOAT tf_op_layer_Max 24              
    

    The first is a 24x1 vector that contains detected characters, the second is a 24x1 vector that contains the confidence of each detected character.

    At the moment, confidence of each detected character is extracted by using the detected character as key to the confidence vector.

    int curr_data = outputStrBuffer[seq_id];
    ...
    bank_softmax_max[valid_bank_count] = outputConfBuffer[curr_data];
    

    i.e., if the 2nd element of the detection vector is the character 'Z', which is the 34th character listed in the dict file, its confidence is assumed to be the 34th element of the confidence vector (which doesn't even exist), while it should be the one corresponding to the position at which the character was detected (in this case, the 2nd element of the confidence vector).

    This should be:

    int curr_data = outputStrBuffer[seq_id];
    ...
    bank_softmax_max[valid_bank_count] = outputConfBuffer[seq_id];
    

    This patch fixes the issue.

    opened by aler9 3
  • Hello

    Can I replace the car detection, LPD, and LPR models with my own models? I want to replace the car recognition model with a high-speed rail model that I trained with YOLO.

    opened by NidhoggBo 0
  • cannot find -lnvds_yml_parser

    cc -o deepstream-lpr-app deepstream_lpr_app.o deepstream_nvdsanalytics_meta.o ds_yml_parse.o `pkg-config --libs gstreamer-1.0` -L/opt/nvidia/deepstream/deepstream/lib/ -lnvdsgst_meta -lnvds_meta -lm -lstdc++ -lnvds_yml_parser -lyaml-cpp -lgstrtspserver-1.0 -Wl,-rpath,/opt/nvidia/deepstream/deepstream/lib/
    /usr/bin/ld: cannot find -lnvds_yml_parser
    collect2: error: ld returned 1 exit status
    Makefile:74: recipe for target 'deepstream-lpr-app' failed
    make[1]: *** [deepstream-lpr-app] Error 1
    make[1]: Leaving directory '/media/csitc/M2/projects/deepstream_lpr_app/deepstream-lpr-app'
    Makefile:2: recipe for target 'all' failed

    opened by wycrystal 0
  • Element Could not be created. Exiting.

    I've gotten this message after running the execution script. I've reinstalled DeepStream twice and confirmed all the include files are available when executing with sudo. I even downloaded a sample video and renamed it to match the expected name, since a sample video was not included with the package. I've followed the instructions verbatim twice. I'm running on an AGX Orin with DeepStream 6.1.

    Trying various parameters:

    vetted@ORIN1:/deepstream_lpr_app/deepstream-lpr-app$ sudo ./deepstream-lpr-app 1 2 0 us_car_test2.mp4 us_car_test2.mp4 output.264
    [sudo] password for vetted: One element could not be created. Exiting.
    vetted@ORIN1:/deepstream_lpr_app/deepstream-lpr-app$ sudo ./deepstream-lpr-app 1 3 0 us_car_test2.mp4 us_car_test2.mp4 output.264
    One element could not be created. Exiting.
    vetted@ORIN1:/deepstream_lpr_app/deepstream-lpr-app$ sudo ./deepstream-lpr-app 1 3 0 us_car_test2.mp4 us_car_test2.mp4
    One element could not be created. Exiting.
    vetted@ORIN1:/deepstream_lpr_app/deepstream-lpr-app$ sudo ./deepstream-lpr-app 1 2 0 us_car_test2.mp4 us_car_test2.mp4
    One element could not be created. Exiting.
    vetted@ORIN1:/deepstream_lpr_app/deepstream-lpr-app$ sudo ./deepstream-lpr-app 1 1 0 us_car_test2.mp4 us_car_test2.mp4 output.264
    One element could not be created. Exiting.
    vetted@ORIN1:/deepstream_lpr_app/deepstream-lpr-app$ sudo ./deepstream-lpr-app 1 1 1 us_car_test2.mp4 us_car_test2.mp4 output.264
    One element could not be created. Exiting.

    opened by VettedSJohnston 0
  • Can't find the output file

    Hello, how are you? Thanks for contributing to this project. I ran this project using DeepStream 6.0.1 on Jetson (JetPack 4.6.1).

    The detected & recognized info is printed on terminal stdout, but I can NOT find the output video file.

    opened by rose-jinyang 0
  • wrong value "0:ROI enable" in ./deepstream-lpr-app -h

    ./deepstream-lpr-app -h
    Usage: ./deepstream-lpr-app [1:us model|2: ch_model] [1:file sink|2:fakesink|3:display sink] [0:ROI disable|0:ROI enable] <In mp4 filename> <in mp4 filename> ... <out H264 filename>

    It should be 1:ROI enable.

    opened by peterjpxie 0
Owner
NVIDIA AI IOT