yolov5 onnx caffe

Overview

Environment

Ubuntu: 18.04

CUDA: 10.0

cuDNN: 7.6.5

Caffe: 1.0

OpenCV: 3.4.2

Anaconda3: 5.2.0

I have uploaded the required installation packages to Baidu Cloud; they can be downloaded from https://pan.baidu.com/s/17bjiU4H5O36psGrHlFdM7A (password: br7h).

Installing CUDA and cuDNN

See my other deployment article (TensorRT int8 quantized deployment of the yolov5s 4.0 model).

Installing Anaconda

chmod +x Anaconda3-5.2.0-Linux-x86_64.sh (the installer is in the Baidu Cloud link above)

./Anaconda3-5.2.0-Linux-x86_64.sh

Press ENTER, then press q to jump to the end of the license text.

Type yes to accept the license agreement.

Keep the default installation path.

Let the installation run.

Add the Anaconda path to the .bashrc of the user you are working as, for example:

export PATH=/home/willer/anaconda3/bin:$PATH

Installing Caffe

git clone https://github.com/Wulingtian/yolov5_caffe.git

cd yolov5_caffe

Enter the following on the command line:

export CPLUS_INCLUDE_PATH=/home/<your_username>/anaconda3/include/python3.6m

make all -j8

make pycaffe -j8

vim ~/.bashrc

export PYTHONPATH=/home/<your_username>/yolov5_caffe/python:$PYTHONPATH

source ~/.bashrc
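
To verify the build, you can try importing pycaffe from the Anaconda Python you compiled against. This is only a quick sanity check, assuming Caffe 1.0 exposes __version__; if the import fails, see the pitfalls below.

import caffe                    # resolves via the PYTHONPATH entry added above
print(caffe.__version__)        # should print 1.0.0 for a successful Caffe 1.0 build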

Pitfalls hit during compilation

libstdc++.so.6: version `GLIBCXX_3.4.21' not found

Solution: fix for libstdc++.so.6: version `GLIBCXX_3.4.21' not found

ImportError: No module named google.protobuf.internal

Solution: fix for ImportError: No module named google.protobuf.internal

wrap_python.hpp:50:23: fatal error: pyconfig.h: No such file or dir

Solution: fix for caffe wrap_python.hpp:50:23: fatal error: pyconfig.h: No such file or dir

Converting the YOLOv5s model to an ONNX model

Install onnx and onnx-simplifier with pip:

pip install onnx

pip install onnx-simplifier

Clone the official YOLOv5 repository:

git clone https://github.com/ultralytics/yolov5.git

To train your own model, follow the official YOLOv5 documentation; once training finishes you will have a weights file.

cd yolov5

vim models/export.py and change opset_version to 10
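
For reference, the export call inside models/export.py looks roughly like the sketch below; the exact argument list differs between YOLOv5 releases, and model, img and f are placeholders for names that script already defines, so the only edit needed is the opset_version value.

torch.onnx.export(model, img, f, verbose=False,
                  opset_version=10,            # change this value to 10
                  input_names=['images'],
                  output_names=['output'])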

python models/export.py --weights <path-to-trained-weights> --img-size <training-image-size>

python -m onnxsim <exported-onnx-model> yolov5s-simple.onnx to produce the final simplified ONNX model
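
Optionally, you can confirm the simplified model is still well formed before converting it; a small sketch using the onnx checker:

import onnx

model = onnx.load("yolov5s-simple.onnx")   # the simplified model from the previous step
onnx.checker.check_model(model)            # raises an exception if the graph is invalid
print("ONNX model is valid")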

Converting the ONNX model to a Caffe model

git clone https://github.com/Wulingtian/yolov5_onnx2caffe.git

cd yolov5_onnx2caffe

vim convertCaffe.py

Set onnx_path (the ONNX model obtained from the conversion above), prototxt_path (where the Caffe prototxt will be saved), and caffemodel_path (where the Caffe caffemodel will be saved).
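
As an illustration, the three variables might be set like this; the paths below are placeholders, while the variable names are the ones convertCaffe.py already defines:

onnx_path = "yolov5s-simple.onnx"        # simplified ONNX model produced above
prototxt_path = "yolov5s.prototxt"       # where the Caffe network definition will be written
caffemodel_path = "yolov5s.caffemodel"   # where the Caffe weights will be written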

python convertCaffe.py produces the converted Caffe model.
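
You can optionally sanity-check the converted model by loading it with pycaffe. A quick sketch; the blob names depend on the converter output, so they are printed rather than assumed:

import caffe

net = caffe.Net("yolov5s.prototxt", "yolov5s.caffemodel", caffe.TEST)  # paths set in convertCaffe.py
print(net.inputs)                              # name(s) of the input blob(s)
for name, blob in net.blobs.items():           # shape of every blob in the network
    print(name, blob.data.shape)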

Caffe model inference

Go to the yolov5_caffe directory.

cd tools

vim caffe_yolov5s.cpp

Set the following parameters:

INPUT_W (model input width)

INPUT_H (model input height)

NUM_CLASS (number of classes in the model; for example, my model detects safety helmets with a single class, so it is set to 1; do not add a background class)

NMS_THRESH (non-maximum suppression threshold)

CONF_THRESH (class confidence threshold)

prototxt_path (path to the Caffe model's prototxt file)

caffemodel_path (path to the Caffe model's caffemodel file)

pic_path (path to the image to run prediction on)

Go back to the yolov5_caffe root directory.

make -j8

cd build

./tools/caffe_yolov5s outputs the average inference time and saves the prediction image to the current directory; with that, the deployment is complete!


Comments
  • Cannot convert the ONNX model to a Caffe model

    Hello, I built the environment successfully following the instructions. When I run python convertCaffe.py it fails with:

    Traceback (most recent call last):
      File "convertCaffe.py", line 6, in <module>
        import caffe
      File "/home/bkuser/work2/data/liudongbo/yolov5_caffe/python/caffe/__init__.py", line 1, in <module>
        from .pycaffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, RMSPropSolver, AdaDeltaSolver, AdamSolver, NCCL, Timer
      File "/home/bkuser/work2/data/liudongbo/yolov5_caffe/python/caffe/pycaffe.py", line 13, in <module>
        from ._caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver,
    ImportError: dynamic module does not define module export function (PyInit__caffe)

    Running python2 convertCaffe.py hits the same PyInit__caffe error, and then:

    Traceback (most recent call last):
      File "convertCaffe.py", line 8, in <module>
        import onnx
    ImportError: No module named onnx

    I then ran pip2 install onnx, but it failed. Could you give me some advice?

    opened by lwxGitHub123 0
  • Undefined reference to boost::detail::set_tss_data; which Boost version did you build against?

    .build_release/lib/libcaffe.so: undefined reference to `boost::detail::set_tss_data(void const*, void (*)(void (*)(void*), void*), void (*)(void*), void*, bool)'
    collect2: error: ld returned 1 exit status
    Makefile:636: recipe for target '.build_release/tools/caffe.bin' failed
    make: *** [.build_release/tools/caffe.bin] Error 1

    The same undefined reference and link failure is reported for upgrade_net_proto_binary.bin, upgrade_solver_proto_text.bin, caffemodel2txt.bin, caffe_yolov5s.bin, extract_features.bin, compute_image_mean.bin and upgrade_net_proto_text.bin.

    opened by huangzongmou 2
  • C++11 problem when running make all -j8

    When I run make all -j8 I get:

    tools/caffe_yolov5s.cpp:246:5: warning: identifier 'nullptr' is a keyword in C++11 [-Wc++0x-compat]
      Net caffe_net(prototxt_path, caffe::TEST, 0, nullptr);
    tools/caffe_yolov5s.cpp: In function 'std::vector initAnchors()':
    tools/caffe_yolov5s.cpp:62:13: error: 'class std::vector' has no member named 'emplace_back'
      anchors.emplace_back(anchor);

    Environment: Ubuntu 18.04, CUDA 10.0, cuDNN 7.6.5, Caffe 1.0, OpenCV 3.4.2, Anaconda3 5.2.0

    opened by cqchenqianqc 0