YOLO5Face.lite.ai.toolkit

Overview

Examples of running YOLO5Face face detection (with landmarks) using the 🍅 🍅 Lite.AI.ToolKit C++ toolkit (https://github.com/DefTruth/lite.ai.toolkit), including the ONNXRuntime C++, MNN, TNN and NCNN versions.

If you find it useful, please give it a Star ⭐️ 🌟 to show your support~ 🙃 🤪 🍀

2. C++ Source Code

The YOLO5Face C++ source code includes four backends (ONNXRuntime, MNN, TNN and NCNN) and can be found in the lite.ai.toolkit toolkit. This project mainly shows how to run YOLO5Face face detection directly on top of lite.ai.toolkit. Note that it was built against liblite.ai.toolkit.v0.1.0.dylib compiled on macOS: macOS users can directly use the liblite.ai.toolkit.v0.1.0 dynamic library and the other dependency libraries bundled with this project, while users on other systems need to download the source from lite.ai.toolkit and compile it themselves. The lite.ai.toolkit C++ toolkit currently contains 70+ popular open-source models; it is a side project collecting models I came across while learning, so I won't go into detail here. Feel free to check it out if you are interested.

The ONNXRuntime C++, MNN, TNN and NCNN inference implementations have all been tested and work; feel free to use them~

3. Model Files

3.1 ONNX Model Files

The ONNX files can be downloaded from the link I provide (Baidu Drive, code: 8gin), or directly from this repository.

| Class | Pretrained ONNX Files | Rename or Converted From (Repo) | Size |
| :---- | :-------------------- | :------------------------------ | :--- |
| lite::cv::face::detect::YOLO5Face | yolov5face-blazeface-640x640.onnx | YOLO5Face | 3.4Mb |
| lite::cv::face::detect::YOLO5Face | yolov5face-l-640x640.onnx | YOLO5Face | 181Mb |
| lite::cv::face::detect::YOLO5Face | yolov5face-m-640x640.onnx | YOLO5Face | 83Mb |
| lite::cv::face::detect::YOLO5Face | yolov5face-n-0.5-320x320.onnx | YOLO5Face | 2.5Mb |
| lite::cv::face::detect::YOLO5Face | yolov5face-n-0.5-640x640.onnx | YOLO5Face | 4.6Mb |
| lite::cv::face::detect::YOLO5Face | yolov5face-n-640x640.onnx | YOLO5Face | 9.5Mb |
| lite::cv::face::detect::YOLO5Face | yolov5face-s-640x640.onnx | YOLO5Face | 30Mb |

3.2 MNN Model Files

The MNN model files can be downloaded from Baidu Drive (code: 9v63), or directly from this repository.

| Class | Pretrained MNN Files | Rename or Converted From (Repo) | Size |
| :---- | :------------------- | :------------------------------ | :--- |
| lite::mnn::cv::face::detect::YOLO5Face | yolov5face-blazeface-640x640.mnn | YOLO5Face | 3.4Mb |
| lite::mnn::cv::face::detect::YOLO5Face | yolov5face-l-640x640.mnn | YOLO5Face | 181Mb |
| lite::mnn::cv::face::detect::YOLO5Face | yolov5face-m-640x640.mnn | YOLO5Face | 83Mb |
| lite::mnn::cv::face::detect::YOLO5Face | yolov5face-n-0.5-320x320.mnn | YOLO5Face | 2.5Mb |
| lite::mnn::cv::face::detect::YOLO5Face | yolov5face-n-0.5-640x640.mnn | YOLO5Face | 4.6Mb |
| lite::mnn::cv::face::detect::YOLO5Face | yolov5face-n-640x640.mnn | YOLO5Face | 9.5Mb |
| lite::mnn::cv::face::detect::YOLO5Face | yolov5face-s-640x640.mnn | YOLO5Face | 30Mb |

3.3 TNN Model Files

The TNN model files can be downloaded from Baidu Drive (code: 6o6k), or directly from this repository.

| Class | Pretrained TNN Files | Rename or Converted From (Repo) | Size |
| :---- | :------------------- | :------------------------------ | :--- |
| lite::tnn::cv::face::detect::YOLO5Face | yolov5face-blazeface-640x640.opt.tnnproto&tnnmodel | YOLO5Face | 3.4Mb |
| lite::tnn::cv::face::detect::YOLO5Face | yolov5face-l-640x640.opt.tnnproto&tnnmodel | YOLO5Face | 181Mb |
| lite::tnn::cv::face::detect::YOLO5Face | yolov5face-m-640x640.opt.tnnproto&tnnmodel | YOLO5Face | 83Mb |
| lite::tnn::cv::face::detect::YOLO5Face | yolov5face-n-0.5-320x320.opt.tnnproto&tnnmodel | YOLO5Face | 2.5Mb |
| lite::tnn::cv::face::detect::YOLO5Face | yolov5face-n-0.5-640x640.opt.tnnproto&tnnmodel | YOLO5Face | 4.6Mb |
| lite::tnn::cv::face::detect::YOLO5Face | yolov5face-n-640x640.opt.tnnproto&tnnmodel | YOLO5Face | 9.5Mb |
| lite::tnn::cv::face::detect::YOLO5Face | yolov5face-s-640x640.opt.tnnproto&tnnmodel | YOLO5Face | 30Mb |

3.4 NCNN Model Files

The NCNN model files can be downloaded from Baidu Drive (code: sc7f), or directly from this repository.

| Class | Pretrained NCNN Files | Rename or Converted From (Repo) | Size |
| :---- | :-------------------- | :------------------------------ | :--- |
| lite::ncnn::cv::face::detect::YOLO5Face | yolov5face-l-640x640.opt.param&bin | YOLO5Face | 181Mb |
| lite::ncnn::cv::face::detect::YOLO5Face | yolov5face-m-640x640.opt.param&bin | YOLO5Face | 83Mb |
| lite::ncnn::cv::face::detect::YOLO5Face | yolov5face-n-0.5-320x320.opt.param&bin | YOLO5Face | 2.5Mb |
| lite::ncnn::cv::face::detect::YOLO5Face | yolov5face-n-0.5-640x640.opt.param&bin | YOLO5Face | 4.6Mb |
| lite::ncnn::cv::face::detect::YOLO5Face | yolov5face-n-640x640.opt.param&bin | YOLO5Face | 9.5Mb |
| lite::ncnn::cv::face::detect::YOLO5Face | yolov5face-s-640x640.opt.param&bin | YOLO5Face | 30Mb |

4. API Documentation

In lite.ai.toolkit, YOLO5Face is implemented by the following classes:

class LITE_EXPORTS lite::cv::face::detect::YOLO5Face;
class LITE_EXPORTS lite::mnn::cv::face::detect::YOLO5Face;
class LITE_EXPORTS lite::tnn::cv::face::detect::YOLO5Face;
class LITE_EXPORTS lite::ncnn::cv::face::detect::YOLO5Face;

Each of these classes currently exposes a single public interface, detect, which performs face detection.

public:
    /**
     * @param mat cv::Mat BGR format
     * @param detected_boxes_kps vector of BoxfWithLandmarks to catch detected boxes and landmarks.
     * @param score_threshold default 0.25f, only keep the result which >= score_threshold.
     * @param iou_threshold default 0.45f, iou threshold for NMS.
     * @param topk default 400, maximum output boxes after NMS.
     */
    void detect(const cv::Mat &mat, std::vector<lite::types::BoxfWithLandmarks> &detected_boxes_kps,
                float score_threshold = 0.25f, float iou_threshold = 0.45f,
                unsigned int topk = 400);

Input parameters of the detect interface:

  • mat: cv::Mat, BGR format.
  • detected_boxes_kps: a vector of BoxfWithLandmarks. Each element contains a box (Boxf) with members such as x1, y1, x2, y2, label and score, plus landmarks holding the 5 face keypoints in its points member, a std::vector<cv::Point2f> (see the sketch right after this list).
  • score_threshold: classification (quality) score threshold, default 0.25; boxes with a lower score are discarded.
  • iou_threshold: IoU threshold used for NMS, default 0.45.
  • topk: default 400; only the top-k detections are kept.
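
For illustration, here is a minimal sketch of consuming the detect output, assuming the member names described above (box with x1/y1/x2/y2/score, and landmarks.points as a std::vector<cv::Point2f>); the exact layout is defined by lite::types::BoxfWithLandmarks in lite.ai.toolkit:

#include "lite/lite.h"
#include <iostream>

// Iterate over the detections returned by detect(): one BoxfWithLandmarks per face.
static void print_faces(const std::vector<lite::types::BoxfWithLandmarks> &detected_boxes_kps)
{
    for (const auto &face : detected_boxes_kps)
    {
        // bounding box corners and confidence score
        std::cout << "box: (" << face.box.x1 << "," << face.box.y1 << ") -> ("
                  << face.box.x2 << "," << face.box.y2 << ") score: " << face.box.score << std::endl;
        // the 5 face landmarks
        for (const auto &p : face.landmarks.points)
            std::cout << "  landmark: (" << p.x << "," << p.y << ")" << std::endl;
    }
}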

5. Usage Examples

The tests below use yolov5face-n-640x640.onnx (yolov5n-face), the nano version of the model; you can also try the other variants.

5.1 ONNXRuntime Version

#include "lite/lite.h"

static void test_default()
{
    std::string onnx_path = "../hub/onnx/cv/yolov5face-n-640x640.onnx"; // yolov5n-face
    std::string test_img_path = "../resources/4.jpg";
    std::string save_img_path = "../logs/4.jpg";
    
    auto *yolov5face = new lite::cv::face::detect::YOLO5Face(onnx_path);
    
    std::vector<lite::types::BoxfWithLandmarks> detected_boxes;
    cv::Mat img_bgr = cv::imread(test_img_path);
    yolov5face->detect(img_bgr, detected_boxes);

    lite::utils::draw_boxes_with_landmarks_inplace(img_bgr, detected_boxes);

    cv::imwrite(save_img_path, img_bgr);

    std::cout << "Default Version Done! Detected Face Num: " << detected_boxes.size() << std::endl;

    delete yolov5face;
}
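
The optional parameters documented in Section 4 can also be passed explicitly. A small variation of the example above (same model and image paths assumed), raising the score threshold and capping the number of outputs:

#include "lite/lite.h"
#include <iostream>

static void test_default_with_thresholds()
{
    std::string onnx_path = "../hub/onnx/cv/yolov5face-n-640x640.onnx"; // yolov5n-face
    cv::Mat img_bgr = cv::imread("../resources/4.jpg");

    auto *yolov5face = new lite::cv::face::detect::YOLO5Face(onnx_path);

    std::vector<lite::types::BoxfWithLandmarks> detected_boxes;
    // keep only faces with score >= 0.40, use 0.45 IoU for NMS, return at most 100 boxes
    yolov5face->detect(img_bgr, detected_boxes, 0.40f, 0.45f, 100);

    std::cout << "Detected Face Num: " << detected_boxes.size() << std::endl;
    delete yolov5face;
}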
  

5.2 MNN Version

#include "lite/lite.h"

static void test_mnn()
{
#ifdef ENABLE_MNN
    std::string mnn_path = "../hub/mnn/cv/yolov5face-n-640x640.mnn"; // yolov5n-face
    std::string test_img_path = "../resources/12.jpg";
    std::string save_img_path = "../logs/12.jpg";
    
    auto *yolov5face = new lite::mnn::cv::face::detect::YOLO5Face(mnn_path);
    
    std::vector<lite::types::BoxfWithLandmarks> detected_boxes;
    cv::Mat img_bgr = cv::imread(test_img_path);
    yolov5face->detect(img_bgr, detected_boxes);

    lite::utils::draw_boxes_with_landmarks_inplace(img_bgr, detected_boxes);

    cv::imwrite(save_img_path, img_bgr);

    std::cout << "MNN Version Done! Detected Face Num: " << detected_boxes.size() << std::endl;

    delete yolov5face;
#endif
}

5.3 TNN Version

#include "lite/lite.h"

static void test_tnn()
{
#ifdef ENABLE_TNN
    std::string proto_path = "../hub/tnn/cv/yolov5face-n-640x640.opt.tnnproto"; // yolov5n-face
    std::string model_path = "../hub/tnn/cv/yolov5face-n-640x640.opt.tnnmodel";
    std::string test_img_path = "../resources/9.jpg";
    std::string save_img_path = "../logs/9.jpg";
    
    auto *yolov5face = new lite::tnn::cv::face::detect::YOLO5Face(proto_path, model_path);
    
    std::vector<lite::types::BoxfWithLandmarks> detected_boxes;
    cv::Mat img_bgr = cv::imread(test_img_path);
    yolov5face->detect(img_bgr, detected_boxes);

    lite::utils::draw_boxes_with_landmarks_inplace(img_bgr, detected_boxes);

    cv::imwrite(save_img_path, img_bgr);

    std::cout << "TNN Version Done! Detected Face Num: " << detected_boxes.size() << std::endl;

    delete yolov5face;
#endif
}

5.4 NCNN Version

#include "lite/lite.h"

static void test_ncnn()
{
#ifdef ENABLE_NCNN
    std::string param_path = "../hub/ncnn/cv/yolov5face-n-640x640.opt.param"; // yolov5n-face
    std::string bin_path = "../hub/ncnn/cv/yolov5face-n-640x640.opt.bin";
    std::string test_img_path = "../resources/1.jpg";
    std::string save_img_path = "../logs/1.jpg";
    
    auto *yolov5face = new lite::ncnn::cv::face::detect::YOLO5Face(param_path, bin_path, 1, 640, 640);
    
    std::vector<lite::types::BoxfWithLandmarks> detected_boxes;
    cv::Mat img_bgr = cv::imread(test_img_path);
    yolov5face->detect(img_bgr, detected_boxes);

    lite::utils::draw_boxes_with_landmarks_inplace(img_bgr, detected_boxes);

    cv::imwrite(save_img_path, img_bgr);

    std::cout << "NCNN Version Done! Detected Face Num: " << detected_boxes.size() << std::endl;

    delete yolov5face;
#endif
}
  • Output results: the annotated images are written to the ../logs/ directory (see save_img_path in the examples above).

6. Build and Run

On macOS you can build and run this project directly without downloading any extra dependencies. On other systems you first need to download the source from lite.ai.toolkit and build the lite.ai.toolkit.v0.1.0 dynamic library.

git clone --depth=1 https://github.com/DefTruth/YOLO5Face.lite.ai.toolkit.git
cd YOLO5Face.lite.ai.toolkit 
sh ./build.sh
  • CMakeLists.txt settings
cmake_minimum_required(VERSION 3.17)
project(YOLO5Face.lite.ai.toolkit)

set(CMAKE_CXX_STANDARD 11)

# setting up lite.ai.toolkit
set(LITE_AI_DIR ${CMAKE_SOURCE_DIR}/lite.ai.toolkit)
set(LITE_AI_INCLUDE_DIR ${LITE_AI_DIR}/include)
set(LITE_AI_LIBRARY_DIR ${LITE_AI_DIR}/lib)
include_directories(${LITE_AI_INCLUDE_DIR})
link_directories(${LITE_AI_LIBRARY_DIR})

set(OpenCV_LIBS
        opencv_highgui
        opencv_core
        opencv_imgcodecs
        opencv_imgproc
        opencv_video
        opencv_videoio
        )
# add your executable
set(EXECUTABLE_OUTPUT_PATH ${CMAKE_SOURCE_DIR}/examples/build)

add_executable(lite_yolo5face examples/test_lite_yolo5face.cpp)
target_link_libraries(lite_yolo5face
        lite.ai.toolkit
        onnxruntime
        MNN  # needed only if lite.ai.toolkit was built with ENABLE_MNN=ON  (default OFF)
        ncnn # needed only if lite.ai.toolkit was built with ENABLE_NCNN=ON (default OFF)
        TNN  # needed only if lite.ai.toolkit was built with ENABLE_TNN=ON  (default OFF)
        ${OpenCV_LIBS})  # link lite.ai.toolkit & other libs.
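
The add_executable target above compiles examples/test_lite_yolo5face.cpp, which ties the per-backend test functions from Section 5 together. A minimal sketch of such an entry point, assuming those functions are defined in the same source file (the actual file in this repository may include more cases, as the log below suggests):

#include <iostream>

// Assumes test_default(), test_mnn(), test_tnn() and test_ncnn() from Section 5
// are defined in this same translation unit (examples/test_lite_yolo5face.cpp).
static void test_lite()
{
    test_default();
    test_mnn();
    test_tnn();
    test_ncnn();
}

int main()
{
    std::cout << "Testing Start ..." << std::endl;
    test_lite();
    std::cout << "Testing Successful !" << std::endl;
    return 0;
}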
  • Build and test output:
[ 50%] Building CXX object CMakeFiles/lite_yolo5face.dir/examples/test_lite_yolo5face.cpp.o
[100%] Linking CXX executable lite_yolo5face
[100%] Built target lite_yolo5face
Testing Start ...
LITEORT_DEBUG LogId: ../hub/onnx/cv/yolov5face-n-640x640.onnx
=============== Input-Dims ==============
input_node_dims: 1
input_node_dims: 3
input_node_dims: 640
input_node_dims: 640
=============== Output-Dims ==============
Output: 0 Name: output Dim: 0 :1
Output: 0 Name: output Dim: 1 :25200
Output: 0 Name: output Dim: 2 :16
========================================
generate_bboxes_kps num: 2824
Default Version Done! Detected Face Num: 326
LITEORT_DEBUG LogId: ../hub/onnx/cv/yolov5face-n-640x640.onnx
=============== Input-Dims ==============
input_node_dims: 1
input_node_dims: 3
input_node_dims: 640
input_node_dims: 640
=============== Output-Dims ==============
Output: 0 Name: output Dim: 0 :1
Output: 0 Name: output Dim: 1 :25200
Output: 0 Name: output Dim: 2 :16
========================================
generate_bboxes_kps num: 253
ONNXRuntime Version Done! Detected Face Num: 16
LITEMNN_DEBUG LogId: ../hub/mnn/cv/yolov5face-n-640x640.mnn
=============== Input-Dims ==============
        **Tensor shape**: 1, 3, 640, 640, 
Dimension Type: (CAFFE/PyTorch/ONNX)NCHW
=============== Output-Dims ==============
getSessionOutputAll done!
Output: output:         **Tensor shape**: 1, 25200, 16, 
========================================
generate_bboxes_kps num: 71
MNN Version Done! Detected Face Num: 5
LITENCNN_DEBUG LogId: ../hub/ncnn/cv/yolov5face-n-640x640.opt.param
generate_bboxes_kps num: 34
NCNN Version Done! Detected Face Num: 2
LITETNN_DEBUG LogId: ../hub/tnn/cv/yolov5face-n-640x640.opt.tnnproto
=============== Input-Dims ==============
input: [1 3 640 640 ]
Input Data Format: NCHW
=============== Output-Dims ==============
output: [1 25200 16 ]
========================================
generate_bboxes_kps num: 98
TNN Version Done! Detected Face Num: 7
Testing Successful !
