Sample app code for LPR deployment on DeepStream

Overview

Sample for Car License Plate Recognition


Description

This sample shows how to use cascaded models for detection and classification with DeepStream SDK version 5.0.1 or later. All models in this sample are TLT 3.0 models.

PGIE (car detection) -> SGIE (car license plate detection) -> SGIE (car license plate recognition)

LPR/LPD application

This pipeline is based on three TLT models: TrafficCamNet (car detection), LPDNet (license plate detection), and LPRNet (license plate recognition).

For more details on the TLT 3.0 LPD and LPR models and on TLT training, please refer to the TLT documentation.
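The cascaded PGIE → SGIE → SGIE flow above amounts to nested loops over detections: plates are searched only inside car boxes, and recognition runs only on plate crops. A minimal toy sketch of that control flow (the stub detectors and names here are hypothetical placeholders, not the DeepStream API):

```c
#include <stdio.h>
#include <string.h>

/* Toy bounding box; real DeepStream attaches NvDsObjectMeta instead. */
typedef struct { int x, y, w, h; } Box;

/* Hypothetical stubs standing in for the three TLT models. */
static int detect_cars(Box out[], int max) {            /* PGIE */
    (void)max;
    out[0] = (Box){100, 50, 400, 200};                  /* one fake car */
    return 1;
}
static int detect_plates(Box car, Box out[], int max) { /* SGIE: LPD */
    (void)max;
    out[0] = (Box){car.x + 150, car.y + 120, 96, 48};   /* plate inside car */
    return 1;
}
static void recognize_plate(Box plate, char *text, size_t n) { /* SGIE: LPR */
    (void)plate;
    snprintf(text, n, "ABC1234");                       /* placeholder string */
}

/* Run the cascade over one frame; returns the number of plates read. */
int run_cascade(char results[][16], int max_results) {
    Box cars[8], plates[8];
    int n_results = 0;
    int n_cars = detect_cars(cars, 8);
    for (int c = 0; c < n_cars; c++) {
        int n_plates = detect_plates(cars[c], plates, 8);
        for (int p = 0; p < n_plates && n_results < max_results; p++)
            recognize_plate(plates[p], results[n_results++], 16);
    }
    return n_results;
}
```

In the real pipeline, gst-nvinfer performs this crop-and-infer cascade automatically based on the `operate-on-gie-id` settings in the config files.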

Performance

The table below shows the end-to-end performance of processing 1080p videos with this sample application.

Device         Number of streams  Batch Size  Total FPS
Jetson Nano    1                  1           9.2
Jetson NX      3                  3           80.31
Jetson Xavier  5                  5           146.43
T4             14                 14          447.15

Prerequisites

  • DeepStream SDK 5.0.1

    Make sure the deepstream-test1 sample runs successfully to verify your DeepStream installation.

  • tlt-converter

    Download the x86 or Jetson tlt-converter compatible with your platform from the following links.

Platform Compute Link
x86 + GPU CUDA 10.2/cuDNN 8.0/TensorRT 7.1 link
x86 + GPU CUDA 10.2/cuDNN 8.0/TensorRT 7.2 link
x86 + GPU CUDA 11.0/cuDNN 8.0/TensorRT 7.1 link
x86 + GPU CUDA 11.0/cuDNN 8.0/TensorRT 7.2 link
Jetson JetPack 4.4 link
Jetson JetPack 4.5 link

Download

  1. Download the project with SSH or HTTPS
    # SSH
    git clone git@github.com:NVIDIA-AI-IOT/deepstream_lpr_app.git
    # or HTTPS
    git clone https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app.git
  2. Prepare models and the TensorRT engine
    cd deepstream_lpr_app/

For US car plate recognition

    ./download_us.sh
    # DS 5.0.1 gst-nvinfer cannot generate a TRT engine for the LPR model, so generate it with tlt-converter
    ./tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 \
           models/LP/LPR/us_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/lpr_us_onnx_b16.engine

For Chinese car plate recognition

    ./download_ch.sh
    # DS 5.0.1 gst-nvinfer cannot generate a TRT engine for the LPR model, so generate it with tlt-converter
    ./tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 \
           models/LP/LPR/ch_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/lpr_ch_onnx_b16.engine

Build and Run

    make
    cd deepstream-lpr-app

For US car plate recognition

    cp dict_us.txt dict.txt

For Chinese car plate recognition

    cp dict_ch.txt dict.txt

Run the application

    ./deepstream-lpr-app <1:US car plate model|2:Chinese car plate model> \
         <1:output as h264 file|2:fakesink|3:display output> <0:ROI disable|1:ROI enable> \
         <input mp4 file name> ... <input mp4 file name> <output file name>

A sample of US car plate recognition:

./deepstream-lpr-app 1 2 0 us_car_test2.mp4 us_car_test2.mp4 output.264

A sample of Chinese car plate recognition:

./deepstream-lpr-app 2 2 0 ch_car_test.mp4 ch_car_test.mp4 output.264

Notice

  1. This sample application only supports MP4 files containing H.264 video as input.
  2. For Chinese plate recognition, make sure the OS supports the Chinese language.
  3. The second argument of deepstream-lpr-app should be 2 (fakesink) for performance tests.
  4. The TrafficCamNet and LPD models are INT8 models; the LPR model is an FP16 model.

Issues
  • Create lpr engine file

    Hi, I am working on a license plate recognition problem. When I run the DeepStream app, I am facing the following issue.

    Starting pipeline

    0:00:00.209260524 226 0x1a87b20 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 3]: Trying to create engine from model files
    WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
    WARNING: [TRT]: ShapedWeights.cpp:173: Weights td_dense/kernel:0 has been transposed with permutation of (1, 0)! If you plan on overwriting the weights with the Refitter API, the new weights must be pre-transposed.
    WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
    WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
    python3: /dvs/p4/build/sw/rel/gpgpu/MachineLearning/myelin_trt8/src/compiler/optimizer/cublas_impl.cpp:477: void add_heuristic_results_to_tactics(std::vector<cublasLtMatmulHeuristicResult_t>&, std::vector<myelin::ir::tactic_attribute_t>&, myelin::ir::tactic_attribute_t&, bool): Assertion `false && "Invalid size written"' failed.
    Aborted (core dumped)

    I am using DS 6.0. Can anyone please help me solve this issue?

    opened by vinodbukya6 7
  • Can't find the output file

    Hello, how are you? Thanks for contributing to this project. I ran this project using DeepStream 6.0.1 on Jetson (JetPack 4.6.1).

    The detected and recognized info is printed on terminal stdout, but I cannot find the output video file.

    opened by rose-jinyang 0
  • wrong value "0:ROI enable" in ./deepstream-lpr-app -h

    ./deepstream-lpr-app -h
    Usage: ./deepstream-lpr-app [1:us model|2: ch_model] [1:file sink|2:fakesink|3:display sink] [0:ROI disable|0:ROI enable] <In mp4 filename> <in mp4 filename> ... <out H264 filename>
    

    It should be 1:ROI enable

    opened by peterjpxie 0
  • Fix wrong confidence values

    Confidence is currently transferred from the TensorRT network to DeepStream incorrectly, resulting in random values.

    The LPR network has two output layers:

    1   OUTPUT kINT32 tf_op_layer_ArgMax 24              
    2   OUTPUT kFLOAT tf_op_layer_Max 24              
    

    The first is a 24x1 vector containing the detected characters; the second is a 24x1 vector containing the confidence of each detected character.

    At the moment, the confidence of each detected character is extracted by using the detected character as an index into the confidence vector.

    int curr_data = outputStrBuffer[seq_id];
    ...
    bank_softmax_max[valid_bank_count] = outputConfBuffer[curr_data];
    

    i.e., if the 2nd element of the detection vector is the character 'Z', which is the 34th character listed in the dict file, its confidence is assumed to be the 34th element of the confidence vector (which doesn't even exist), while it should be the element corresponding to the position at which the character was detected (in this case, the 2nd element of the confidence vector).

    This should be:

    int curr_data = outputStrBuffer[seq_id];
    ...
    bank_softmax_max[valid_bank_count] = outputConfBuffer[seq_id];
    

    This patch fixes the issue.
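    The difference between the two indexing schemes can be shown with a small standalone sketch (the function names and toy buffers below are illustrative, not the actual parser code):

    ```c
    #include <stddef.h>

    /* Toy stand-ins for the two LPR output layers described above:
     * argmax[i] holds the id of the character detected at sequence
     * position i, conf[i] holds the confidence at that position. */
    #define SEQ_LEN 24

    /* Correct: index the confidence vector by sequence position. */
    float char_confidence(const int *argmax, const float *conf, int seq_id) {
        (void)argmax;  /* the character id is NOT a valid index here */
        return conf[seq_id];
    }

    /* Buggy variant matching the original code, kept for comparison:
     * uses the character id as an index into the confidence vector,
     * which reads past the end whenever the dict has more than
     * SEQ_LEN characters (e.g. id 34 for 'Z'). */
    float char_confidence_buggy(const int *argmax, const float *conf, int seq_id) {
        return conf[argmax[seq_id]];  /* out of range for id >= SEQ_LEN */
    }
    ```

    Since the US and Chinese dicts both contain well over 24 characters, the buggy variant routinely reads out of bounds, which explains the random confidence values.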

    opened by aler9 0
  • deepstream-lpr-app: /dvs/p4/build/sw/rel/gpgpu/MachineLearning/myelin_trt8/src/compiler/optimizer/cublas_impl.cpp:477: void add_heuristic_results_to_tactics(std::vector<cublasLtMatmulHeuristicResult_t>&, std::vector<myelin::ir::tactic_attribute_t>&, myelin::ir::tactic_attribute_t&, bool): Assertion `false &&

    This app runs fine in the DeepStream devel container [nvcr.io/nvidia/deepstream:6.0-devel].

    When I run the same app in the DeepStream base container, I face the issue below.

    [email protected]:/app/deepstream_lpr_app/deepstream-lpr-app# ./deepstream-lpr-app 1 2 0 /app/metro_Trim.mp4 out.h264
    Request sink_0 pad from streammux
    Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
    Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
    Now playing: 1
    ERROR: [TRT]: 1: [graphContext.h::MyelinGraphContext::24] Error Code 1: Myelin (cuBLASLt error 1 querying major version.)
    ERROR: [TRT]: 1: [graphContext.h::MyelinGraphContext::24] Error Code 1: Myelin (cuBLASLt error 1 querying major version.)
    ERROR: nvdsinfer_backend.cpp:394 Failed to setOptimizationProfile with idx:0 
    ERROR: nvdsinfer_backend.cpp:228 Failed to initialize TRT backend, nvinfer error:NVDSINFER_INVALID_PARAMS
    0:00:02.993528390 15515 0x558d7dc0fe30 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1896> [UID = 3]: create backend context from engine from file :/app/deepstream_lpr_app/models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine failed
    0:00:02.994778147 15515 0x558d7dc0fe30 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 3]: deserialize backend context from engine from file :/app/deepstream_lpr_app/models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine failed, try rebuild
    0:00:02.994800887 15515 0x558d7dc0fe30 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 3]: Trying to create engine from model files
    WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
    WARNING: [TRT]: ShapedWeights.cpp:173: Weights td_dense/kernel:0 has been transposed with permutation of (1, 0)! If you plan on overwriting the weights with the Refitter API, the new weights must be pre-transposed.
    WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
    WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
    deepstream-lpr-app: /dvs/p4/build/sw/rel/gpgpu/MachineLearning/myelin_trt8/src/compiler/optimizer/cublas_impl.cpp:477: void add_heuristic_results_to_tactics(std::vector<cublasLtMatmulHeuristicResult_t>&, std::vector<myelin::ir::tactic_attribute_t>&, myelin::ir::tactic_attribute_t&, bool): Assertion `false && "Invalid size written"' failed.
    Aborted (core dumped)
    
    opened by imSrbh 1
Owner

NVIDIA AI IOT