Lite.AI.ToolKit 🚀🚀🌟: A lite C++ toolkit of awesome AI models such as RobustVideoMatting🔥, YOLOX🔥, YOLOP🔥 etc.

Overview

Lite.AI.ToolKit 🚀 🚀 🌟 : A lite C++ toolkit of awesome AI models.


English | 中文

Lite.AI.ToolKit 🚀 🚀 🌟 : A lite C++ toolkit of awesome AI models, which contains 70+ models now, such as RVM, YOLOX, YOLOP, YOLOR, YoloV5, DeepLabV3, ArcFace, etc. It's a collection of personal interests. Lite.AI.ToolKit is based on ONNXRuntime C++ by default. I do have plans to reimplement it with ncnn and MNN, but not soon. Currently, I mainly focus on ease of use. Developers who need higher performance can build further optimizations on top of the C++ implementation and ONNX files provided by this repo. Welcome to open a new PR 👏 👋 if you want to add a new model to this repo.

Core Features 🚀 🚀 🌟

Latest Release: 👉 lite.ai.toolkit.macos.v0.1.0
Quick Start: 👉 lite.ai.toolkit.demo & Quick Start Examples
Usage: 👉 lite.ai.toolkit.examples

Important Notes !!!

Expand for More Notes.

Contents.

1. Build Lite.AI.ToolKit

Build the shared lib of Lite.AI.ToolKit for MacOS from source. Note that Lite.AI.ToolKit uses onnxruntime as the default backend, because onnxruntime supports most of the ONNX operators.

Linux and Windows.

⚠️ Lite.AI.ToolKit does not directly support Linux and Windows yet. For Linux and Windows, you need to build or download (if official builds are available) the shared libs of OpenCV and ONNXRuntime first, and put them into the third_party directory. Please refer to the build docs for third_party.

  • Windows: You can reference to issue#6
  • Linux: The Docs and Docker image for Linux will be coming soon ~ issue#2
  • Happy News !!! : 🚀 You can download the latest official ONNXRuntime builds for Windows, Linux, MacOS and ARM !!! Both CPU and GPU versions are available, so you no longer need to build it from source. Download the official builds from v1.8.1. I currently use version 1.7.0 for Lite.AI.ToolKit, which you can download from v1.7.0, but version 1.8.1 should also work, I guess ~ 🙃 🤪 🍀 . For OpenCV, build it from source (Linux) or download the official build (Windows) from OpenCV 4.5.3. Then put the includes and libs into the third_party directory of Lite.AI.ToolKit; see the sketch below for one possible layout.
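A sketch of the resulting third_party layout (the exact structure is an assumption here; check the third_party build docs for what build.sh actually expects):
third_party
├── opencv
│   ├── include
│   └── lib
└── onnxruntime
    ├── include
    └── lib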
    git clone --depth=1 https://github.com/DefTruth/lite.ai.toolkit.git  # latest
    cd lite.ai.toolkit && sh ./build.sh  # On MacOS, you can use the built OpenCV and ONNXRuntime libs in this repo.
Expand for more details on how to link the shared lib of Lite.AI.ToolKit.
cd ./build/lite.ai.toolkit/lib && otool -L liblite.ai.toolkit.0.0.1.dylib 
liblite.ai.toolkit.0.0.1.dylib:
        @rpath/liblite.ai.toolkit.0.0.1.dylib (compatibility version 0.0.1, current version 0.0.1)
        @rpath/libopencv_highgui.4.5.dylib (compatibility version 4.5.0, current version 4.5.2)
        @rpath/libonnxruntime.1.7.0.dylib (compatibility version 0.0.0, current version 1.7.0)
        ...
cd ../ && tree .
├── bin
├── include
│   ├── lite
│   │   ├── backend.h
│   │   ├── config.h
│   │   └── lite.h
│   └── ort
└── lib
    └── liblite.ai.toolkit.0.0.1.dylib
  • Run the built examples:
cd ./build/lite.ai.toolkit/bin && ls -lh | grep lite
-rwxr-xr-x  1 root  staff   301K Jun 26 23:10 liblite.ai.toolkit.0.0.1.dylib
...
-rwxr-xr-x  1 root  staff   196K Jun 26 23:10 lite_yolov4
-rwxr-xr-x  1 root  staff   196K Jun 26 23:10 lite_yolov5
...
./lite_yolov5
LITEORT_DEBUG LogId: ../../../hub/onnx/cv/yolov5s.onnx
=============== Input-Dims ==============
...
detected num_anchors: 25200
generate_bboxes num: 66
Default Version Detected Boxes Num: 5
  • To link the lite.ai.toolkit shared lib, you need to make sure that OpenCV and onnxruntime are linked correctly, like this:
cmake_minimum_required(VERSION 3.17)
project(testlite.ai.toolkit)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_BUILD_TYPE debug)
# link opencv.
set(OpenCV_DIR ${CMAKE_SOURCE_DIR}/opencv/lib/cmake/opencv4)
find_package(OpenCV 4 REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
# link onnxruntime.
set(ONNXRUNTIME_DIR ${CMAKE_SOURCE_DIR}/onnxruntime/)
set(ONNXRUNTIME_INCLUDE_DIR ${ONNXRUNTIME_DIR}/include)
set(ONNXRUNTIME_LIBRARY_DIR ${ONNXRUNTIME_DIR}/lib)
include_directories(${ONNXRUNTIME_INCLUDE_DIR})
link_directories(${ONNXRUNTIME_LIBRARY_DIR})
# link lite.ai.toolkit.
set(LITEHUB_DIR ${CMAKE_SOURCE_DIR}/lite.ai.toolkit)
set(LITEHUB_INCLUDE_DIR ${LITEHUB_DIR}/include)
set(LITEHUB_LIBRARY_DIR ${LITEHUB_DIR}/lib)
include_directories(${LITEHUB_INCLUDE_DIR})
link_directories(${LITEHUB_LIBRARY_DIR})
# add your executable
add_executable(lite_yolov5 test_lite_yolov5.cpp)
target_link_libraries(lite_yolov5 lite.ai.toolkit onnxruntime ${OpenCV_LIBS})

A minimal example showing how to link the shared lib of Lite.AI.ToolKit correctly for your own project can be found at lite.ai.toolkit.demo.
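Assuming the layout implied by the CMakeLists.txt above (with the opencv, onnxruntime and lite.ai.toolkit folders placed next to it), a typical out-of-source build is then just (a sketch; the paths and generator are assumptions):

mkdir build && cd build
cmake .. && make -j4
./lite_yolov5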

2. Model Zoo.

Lite.AI.ToolKit contains 70+ AI models with 150+ frozen pretrained .onnx files now. Most of the onnx files were converted by myself. You can use them through the lite::cv::Type::Class syntax, such as lite::cv::detection::YoloV5. More details can be found at Examples for Lite.AI.ToolKit.
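For instance, the namespace path encodes the task and the class name encodes the model family; a minimal sketch (the onnx file paths are assumptions):

auto *detector = new lite::cv::detection::YoloV5("yolov5s.onnx");                    // object detection
auto *recognition = new lite::cv::faceid::GlintArcFace("ms1mv3_arcface_r100.onnx");  // face recognition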

Expand Details for Namespace and Lite.AI.ToolKit modules.

Namespace Details
lite::cv::detection Object Detection. one-stage and anchor-free detectors, YoloV5, YoloV4, SSD, etc.
lite::cv::classification Image Classification. DenseNet, ShuffleNet, ResNet, IBNNet, GhostNet, etc.
lite::cv::faceid Face Recognition. ArcFace, CosFace, CurricularFace, etc. ❇️
lite::cv::face Face Analysis. detect, align, pose, attr, etc. ❇️
lite::cv::face::detect Face Detection. UltraFace, RetinaFace, FaceBoxes, PyramidBox, etc. ❇️
lite::cv::face::align Face Alignment. PFLD(106), FaceLandmark1000(1000 landmarks), PRNet, etc. ❇️
lite::cv::face::pose Head Pose Estimation. FSANet, etc. ❇️
lite::cv::face::attr Face Attributes. Emotion, Age, Gender. EmotionFerPlus, VGG16Age, etc. ❇️
lite::cv::segmentation Object Segmentation. Such as FCN, DeepLabV3, etc. ⚠️
lite::cv::style Style Transfer. Contains neural style transfer now, such as FastStyleTransfer. ⚠️
lite::cv::matting Image Matting. Object and Human matting. ⚠️
lite::cv::colorization Colorization. Make gray images become RGB. ⚠️
lite::cv::resolution Super Resolution. ⚠️

Lite.AI.ToolKit's Classes and Pretrained Files.

Correspondence between the classes in Lite.AI.ToolKit and pretrained model files can be found at lite.ai.toolkit.hub.onnx.md. For example, the pretrained model files for lite::cv::detection::YoloV5 and lite::cv::detection::YoloX are listed as follows.

Class Pretrained ONNX Files Rename or Converted From (Repo) Size
lite::cv::detection::YoloV5 yolov5l.onnx yolov5 ( 🔥 🔥 💥 ↑) 188Mb
lite::cv::detection::YoloV5 yolov5m.onnx yolov5 ( 🔥 🔥 💥 ↑) 85Mb
lite::cv::detection::YoloV5 yolov5s.onnx yolov5 ( 🔥 🔥 💥 ↑) 29Mb
lite::cv::detection::YoloV5 yolov5x.onnx yolov5 ( 🔥 🔥 💥 ↑) 351Mb
lite::cv::detection::YoloX yolox_x.onnx YOLOX ( 🔥 🔥 !!↑) 378Mb
lite::cv::detection::YoloX yolox_l.onnx YOLOX ( 🔥 🔥 !!↑) 207Mb
lite::cv::detection::YoloX yolox_m.onnx YOLOX ( 🔥 🔥 !!↑) 97Mb
lite::cv::detection::YoloX yolox_s.onnx YOLOX ( 🔥 🔥 !!↑) 34Mb
lite::cv::detection::YoloX yolox_tiny.onnx YOLOX ( 🔥 🔥 !!↑) 19Mb
lite::cv::detection::YoloX yolox_nano.onnx YOLOX ( 🔥 🔥 !!↑) 3.5Mb

This means that you can load any one of the yolov5*.onnx and yolox_*.onnx files for your application through the same Lite.AI.ToolKit class, such as YoloV5, YoloX, etc.

auto *yolov5 = new lite::cv::detection::YoloV5("yolov5x.onnx");  // for server
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5l.onnx"); 
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5m.onnx");  
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5s.onnx");  // for mobile device 
auto *yolox = new lite::cv::detection::YoloX("yolox_x.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_l.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_m.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_s.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_tiny.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_nano.onnx");  // 3.5Mb only !
  • Downloads:
    Baidu Drive (code: 8gin) && Google Drive.
    Note: I cannot upload all the *.onnx files because of the storage limitation of Google Drive (15G).

  • Object Detection.

Class Size From Awesome File Type State Usage
YoloV5 28M yolov5 🔥 🔥 💥 detection demo
YoloV3 236M onnx-models 🔥 🔥 🔥 detection demo
TinyYoloV3 33M onnx-models 🔥 🔥 🔥 detection demo
YoloV4 176M YOLOv4... 🔥 🔥 🔥 detection demo
SSD 76M onnx-models 🔥 🔥 🔥 detection demo
SSDMobileNetV1 27M onnx-models 🔥 🔥 🔥 detection demo
YoloX 3.5M YOLOX 🔥 🔥 new↑ detection demo
TinyYoloV4VOC 22M yolov4-tiny... 🔥 🔥 detection demo
TinyYoloV4COCO 22M yolov4-tiny... 🔥 🔥 detection demo
YoloR 39M yolor 🔥 🔥 new↑ detection demo
ScaledYoloV4 270M ScaledYOLOv4 🔥 🔥 🔥 detection demo
EfficientDet 15M ...EfficientDet... 🔥 🔥 🔥 detection demo
EfficientDetD7 220M ...EfficientDet... 🔥 🔥 🔥 detection demo
EfficientDetD8 322M ...EfficientDet... 🔥 🔥 🔥 detection demo
YOLOP 30M YOLOP 🔥 🔥 new↑ detection demo
  • Face Recognition.
Class Size From Awesome File Type State Usage
GlintArcFace 92M insightface 🔥 🔥 🔥 faceid demo
GlintCosFace 92M insightface 🔥 🔥 🔥 faceid demo
GlintPartialFC 170M insightface 🔥 🔥 🔥 faceid demo
FaceNet 89M facenet... 🔥 🔥 🔥 faceid demo
FocalArcFace 166M face.evoLVe... 🔥 🔥 🔥 faceid demo
FocalAsiaArcFace 166M face.evoLVe... 🔥 🔥 🔥 faceid demo
TencentCurricularFace 249M TFace 🔥 🔥 faceid demo
TencentCifpFace 130M TFace 🔥 🔥 faceid demo
CenterLossFace 280M center-loss... 🔥 🔥 faceid demo
SphereFace 80M sphere... 🔥 🔥 faceid ✅️ demo
PoseRobustFace 92M DREAM 🔥 🔥 faceid ✅️ demo
NaivePoseRobustFace 43M DREAM 🔥 🔥 faceid ✅️ demo
MobileFaceNet 3.8M MobileFace... 🔥 🔥 faceid demo
CavaGhostArcFace 15M cavaface... 🔥 🔥 faceid demo
CavaCombinedFace 250M cavaface... 🔥 🔥 faceid demo
MobileSEFocalFace 4.5M face_recog... 🔥 🔥 faceid demo
  • Matting.
Class Size From Awesome File Type State Usage
RobustVideoMatting 14M RobustVideoMatting 🔥 🔥 🔥 latest↑ matting demo
⚠️ Expand More Details for Lite.AI.ToolKit's Model Zoo.
  • Face Detection.
Class Size From Awesome File Type State Usage
UltraFace 1.1M Ultra-Light... 🔥 🔥 🔥 face::detect demo
RetinaFace 1.6M ...Retinaface 🔥 🔥 🔥 face::detect demo
FaceBoxes 3.8M FaceBoxes 🔥 🔥 face::detect demo
  • Face Alignment.
Class Size From Awesome File Type State Usage
PFLD 1.0M pfld_106_... 🔥 🔥 face::align demo
PFLD98 4.8M PFLD... 🔥 🔥 face::align ✅️ demo
MobileNetV268 9.4M ...landmark 🔥 🔥 face::align ✅️ demo
MobileNetV2SE68 11M ...landmark 🔥 🔥 face::align ✅️ demo
PFLD68 2.8M ...landmark 🔥 🔥 face::align ✅️ demo
FaceLandmark1000 2.0M FaceLandm... 🔥 face::align ✅️ demo
  • Head Pose Estimation.
Class Size From Awesome File Type State Usage
FSANet 1.2M ...fsanet... 🔥 face::pose demo
  • Face Attributes.
Class Size From Awesome File Type State Usage
AgeGoogleNet 23M onnx-models 🔥 🔥 🔥 face::attr demo
GenderGoogleNet 23M onnx-models 🔥 🔥 🔥 face::attr demo
EmotionFerPlus 33M onnx-models 🔥 🔥 🔥 face::attr demo
VGG16Age 514M onnx-models 🔥 🔥 🔥 face::attr demo
VGG16Gender 512M onnx-models 🔥 🔥 🔥 face::attr demo
SSRNet 190K SSR_Net... 🔥 face::attr demo
EfficientEmotion7 15M face-emo... 🔥 face::attr ✅️ demo
EfficientEmotion8 15M face-emo... 🔥 face::attr demo
MobileEmotion7 13M face-emo... 🔥 face::attr demo
ReXNetEmotion7 30M face-emo... 🔥 face::attr demo
  • Classification.
Class Size From Awesome File Type State Usage
EfficientNetLite4 49M onnx-models 🔥 🔥 🔥 classification demo
ShuffleNetV2 8.7M onnx-models 🔥 🔥 🔥 classification demo
DenseNet121 30.7M torchvision 🔥 🔥 🔥 classification demo
GhostNet 20M torchvision 🔥 🔥 🔥 classification demo
HdrDNet 13M torchvision 🔥 🔥 🔥 classification demo
IBNNet 97M torchvision 🔥 🔥 🔥 classification demo
MobileNetV2 13M torchvision 🔥 🔥 🔥 classification demo
ResNet 44M torchvision 🔥 🔥 🔥 classification demo
ResNeXt 95M torchvision 🔥 🔥 🔥 classification demo
  • Segmentation.
Class Size From Awesome File Type State Usage
DeepLabV3ResNet101 232M torchvision 🔥 🔥 🔥 segmentation demo
FCNResNet101 207M torchvision 🔥 🔥 🔥 segmentation demo
  • Style Transfer.
Class Size From Awesome File Type State Usage
FastStyleTransfer 6.4M onnx-models 🔥 🔥 🔥 style demo
  • Colorization.
Class Size From Awesome File Type State Usage
Colorizer 123M colorization 🔥 🔥 🔥 colorization demo
  • Super Resolution.
Class Size From Awesome File Type State Usage
SubPixelCNN 234K ...PIXEL... 🔥 resolution demo

3. Examples for Lite.AI.ToolKit.

More examples can be found at lite.ai.toolkit.examples. Clicking ▶️ will show you more examples for the specific topic you are interested in.

Example0: Object Detection using YoloV5. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_yolov5_1.jpg";

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path); 
  std::vector<lite::cv::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);
  lite::cv::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);
  delete yolov5;
}

The output is:

Or you can use the newest 🔥 🔥 detectors of the YOLO series, YOLOX or YoloR. They produce similar results.


Example1: Video Matting using RobustVideoMatting2021 🔥 🔥 🔥 . Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/rvm_mobilenetv3_fp32.onnx";
  std::string video_path = "../../../examples/lite/resources/test_lite_rvm_0.mp4";
  std::string output_path = "../../../logs/test_lite_rvm_0.mp4";
  
  auto *rvm = new lite::cv::matting::RobustVideoMatting(onnx_path, 16); // 16 threads
  std::vector<lite::cv::types::MattingContent> contents;
  // 1. video matting.
  rvm->detect_video(video_path, output_path, contents, false, 0.4f);
  delete rvm;
}

The output is:



Example2: 1000 Facial Landmarks Detection using FaceLandmarks1000. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/FaceLandmark1000.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
  std::string save_img_path = "../../../logs/test_lite_face_landmarks_1000.jpg";
    
  auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);

  lite::cv::types::Landmarks landmarks;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_landmarks_1000->detect(img_bgr, landmarks);
  lite::cv::utils::draw_landmarks_inplace(img_bgr, landmarks);
  cv::imwrite(save_img_path, img_bgr);
  
  delete face_landmarks_1000;
}

The output is:


Example3: Colorization using colorization. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/eccv16-colorizer.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_colorizer_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_eccv16_colorizer_1.jpg";
  
  auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
  
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::cv::types::ColorizeContent colorize_content;
  colorizer->detect(img_bgr, colorize_content);
  
  if (colorize_content.flag) cv::imwrite(save_img_path, colorize_content.mat);
  delete colorizer;
}

The output is:



Example4: Face Recognition using ArcFace. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx";
  std::string test_img_path0 = "../../../examples/lite/resources/test_lite_faceid_0.png";
  std::string test_img_path1 = "../../../examples/lite/resources/test_lite_faceid_1.png";
  std::string test_img_path2 = "../../../examples/lite/resources/test_lite_faceid_2.png";

  auto *glint_arcface = new lite::cv::faceid::GlintArcFace(onnx_path);

  lite::cv::types::FaceContent face_content0, face_content1, face_content2;
  cv::Mat img_bgr0 = cv::imread(test_img_path0);
  cv::Mat img_bgr1 = cv::imread(test_img_path1);
  cv::Mat img_bgr2 = cv::imread(test_img_path2);
  glint_arcface->detect(img_bgr0, face_content0);
  glint_arcface->detect(img_bgr1, face_content1);
  glint_arcface->detect(img_bgr2, face_content2);

  if (face_content0.flag && face_content1.flag && face_content2.flag)
  {
    float sim01 = lite::cv::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content1.embedding);
    float sim02 = lite::cv::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content2.embedding);
    std::cout << "Detected Sim01: " << sim  << " Sim02: " << sim02 << std::endl;
  }

  delete glint_arcface;
}

The output is:

Detected Sim01: 0.721159 Sim02: -0.0626267
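For reference, the similarity printed above is the cosine similarity of the two face embeddings, i.e. their normalized dot product; a minimal standalone sketch (lite::cv::utils::math::cosine_similarity<float> is assumed to behave like this):

#include <cmath>
#include <cstddef>
#include <vector>

// a standalone sketch of cosine similarity over two embeddings;
// values near 1 suggest the same person, values near 0 or below a different person
template <typename T>
T cosine_similarity(const std::vector<T> &a, const std::vector<T> &b)
{
  T dot = 0, norm_a = 0, norm_b = 0;
  for (std::size_t i = 0; i < a.size() && i < b.size(); ++i)
  {
    dot += a[i] * b[i];
    norm_a += a[i] * a[i];
    norm_b += b[i] * b[i];
  }
  // small epsilon guards against division by zero for all-zero embeddings
  return dot / (std::sqrt(norm_a) * std::sqrt(norm_b) + static_cast<T>(1e-12));
}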


Example5: Face Detection using UltraFace. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ultraface-rfb-640.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ultraface.jpg";
  std::string save_img_path = "../../../logs/test_lite_ultraface.jpg";

  auto *ultraface = new lite::cv::face::detect::UltraFace(onnx_path);

  std::vector<lite::cv::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ultraface->detect(img_bgr, detected_boxes);
  lite::cv::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);

  delete ultraface;
}

The output is:

⚠️ Expand All Examples for Each Topic in Lite.AI.ToolKit
3.1 Expand Examples for Object Detection.

3.1 Object Detection using YoloV5. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_yolov5_1.jpg";
  
  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
  std::vector<lite::cv::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);
  lite::cv::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);
  delete yolov5;
}

The output is:

Or you can use the newest 🔥 🔥 detector of the YOLO series, YOLOX. It produces similar results.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolox_s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolox_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_yolox_1.jpg";

  auto *yolox = new lite::cv::detection::YoloX(onnx_path); 
  std::vector<lite::cv::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolox->detect(img_bgr, detected_boxes);
  lite::cv::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);
  delete yolox;
}

The output is:

More classes for general object detection.

auto *detector = new lite::cv::detection::YoloX(onnx_path);  // Newest YOLO detector !!! 2021-07
auto *detector = new lite::cv::detection::YoloV4(onnx_path); 
auto *detector = new lite::cv::detection::YoloV3(onnx_path); 
auto *detector = new lite::cv::detection::TinyYoloV3(onnx_path); 
auto *detector = new lite::cv::detection::SSD(onnx_path); 
auto *detector = new lite::cv::detection::YoloV5(onnx_path); 
auto *detector = new lite::cv::detection::YoloR(onnx_path);  // Newest YOLO detector !!! 2021-05
auto *detector = new lite::cv::detection::TinyYoloV4VOC(onnx_path); 
auto *detector = new lite::cv::detection::TinyYoloV4COCO(onnx_path); 
auto *detector = new lite::cv::detection::ScaledYoloV4(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDet(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDetD7(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDetD8(onnx_path); 
auto *detector = new lite::cv::detection::YOLOP(onnx_path); 
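All the detectors above share the same detect(img, boxes) call shape, so one helper can drive any of them; a minimal sketch (the shared signature is an assumption based on the examples here, there is no common base class):

#include "lite/lite.h"

// run any of the detectors above through one function template,
// relying only on the matching constructor and detect() signature
template <typename Detector>
static void run_detection(const std::string &onnx_path, const std::string &img_path,
                          const std::string &save_path)
{
  auto *detector = new Detector(onnx_path);
  std::vector<lite::cv::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(img_path);
  detector->detect(img_bgr, detected_boxes);
  lite::cv::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_path, img_bgr);
  delete detector;
}

// e.g. run_detection<lite::cv::detection::YoloX>("yolox_s.onnx", "in.jpg", "out.jpg");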
3.2 Expand Examples for Face Recognition.

3.2 Face Recognition using ArcFace. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx";
  std::string test_img_path0 = "../../../examples/lite/resources/test_lite_faceid_0.png";
  std::string test_img_path1 = "../../../examples/lite/resources/test_lite_faceid_1.png";
  std::string test_img_path2 = "../../../examples/lite/resources/test_lite_faceid_2.png";

  auto *glint_arcface = new lite::cv::faceid::GlintArcFace(onnx_path);

  lite::cv::types::FaceContent face_content0, face_content1, face_content2;
  cv::Mat img_bgr0 = cv::imread(test_img_path0);
  cv::Mat img_bgr1 = cv::imread(test_img_path1);
  cv::Mat img_bgr2 = cv::imread(test_img_path2);
  glint_arcface->detect(img_bgr0, face_content0);
  glint_arcface->detect(img_bgr1, face_content1);
  glint_arcface->detect(img_bgr2, face_content2);

  if (face_content0.flag && face_content1.flag && face_content2.flag)
  {
    float sim01 = lite::cv::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content1.embedding);
    float sim02 = lite::cv::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content2.embedding);
    std::cout << "Detected Sim01: " << sim  << " Sim02: " << sim02 << std::endl;
  }

  delete glint_arcface;
}

The output is:

Detected Sim01: 0.721159 Sim02: -0.0626267

More classes for face recognition.

auto *recognition = new lite::cv::faceid::GlintCosFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintArcFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintPartialFC(onnx_path); // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::FaceNet(onnx_path);
auto *recognition = new lite::cv::faceid::FocalArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::FocalAsiaArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::TencentCurricularFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::TencentCifpFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::CenterLossFace(onnx_path);
auto *recognition = new lite::cv::faceid::SphereFace(onnx_path);
auto *recognition = new lite::cv::faceid::PoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::NaivePoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileFaceNet(onnx_path); // 3.8Mb only !
auto *recognition = new lite::cv::faceid::CavaGhostArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::CavaCombinedFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileSEFocalFace(onnx_path); // 4.5Mb only !
3.3 Expand Examples for Segmentation.

3.3 Segmentation using DeepLabV3ResNet101. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/deeplabv3_resnet101_coco.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_deeplabv3_resnet101.png";
  std::string save_img_path = "../../../logs/test_lite_deeplabv3_resnet101.jpg";

  auto *deeplabv3_resnet101 = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path, 16); // 16 threads

  lite::cv::types::SegmentContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  deeplabv3_resnet101->detect(img_bgr, content);

  if (content.flag)
  {
    cv::Mat out_img;
    cv::addWeighted(img_bgr, 0.2, content.color_mat, 0.8, 0., out_img);
    cv::imwrite(save_img_path, out_img);
    if (!content.names_map.empty())
    {
      for (auto it = content.names_map.begin(); it != content.names_map.end(); ++it)
      {
        std::cout << it->first << " Name: " << it->second << std::endl;
      }
    }
  }
  delete deeplabv3_resnet101;
}

The output is:

More classes for segmentation.

auto *segment = new lite::cv::segmentation::FCNResNet101(onnx_path);
3.4 Expand Examples for Face Attributes Analysis.

3.4 Age Estimation using SSRNet. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ssrnet.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ssrnet.jpg";
  std::string save_img_path = "../../../logs/test_lite_ssrnet.jpg";

  lite::cv::face::attr::SSRNet *ssrnet = new lite::cv::face::attr::SSRNet(onnx_path);

  lite::cv::types::Age age;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ssrnet->detect(img_bgr, age);
  lite::cv::utils::draw_age_inplace(img_bgr, age);
  cv::imwrite(save_img_path, img_bgr);
  std::cout << "Default Version Done! Detected SSRNet Age: " << age.age << std::endl;

  delete ssrnet;
}

The output is:

More classes for face attributes analysis.

auto *attribute = new lite::cv::face::attr::AgeGoogleNet(onnx_path);  
auto *attribute = new lite::cv::face::attr::GenderGoogleNet(onnx_path); 
auto *attribute = new lite::cv::face::attr::EmotionFerPlus(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Age(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Gender(onnx_path);
auto *attribute = new lite::cv::face::attr::EfficientEmotion7(onnx_path); // 7 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::EfficientEmotion8(onnx_path); // 8 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::MobileEmotion7(onnx_path); // 7 emotions
auto *attribute = new lite::cv::face::attr::ReXNetEmotion7(onnx_path); // 7 emotions
3.5 Expand Examples for Image Classification.

3.5 1000 Classes Classification using DenseNet. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/densenet121.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_densenet.jpg";

  auto *densenet = new lite::cv::classification::DenseNet(onnx_path);

  lite::cv::types::ImageNetContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  densenet->detect(img_bgr, content);
  if (content.flag)
  {
    const unsigned int top_k = content.scores.size();
    if (top_k > 0)
    {
      for (unsigned int i = 0; i < top_k; ++i)
        std::cout << i + 1
                  << ": " << content.labels.at(i)
                  << ": " << content.texts.at(i)
                  << ": " << content.scores.at(i)
                  << std::endl;
    }
  }
  delete densenet;
}

The output is:

More classes for image classification.

auto *classifier = new lite::cv::classification::EfficientNetLite4(onnx_path);  
auto *classifier = new lite::cv::classification::ShuffleNetV2(onnx_path); 
auto *classifier = new lite::cv::classification::GhostNet(onnx_path);
auto *classifier = new lite::cv::classification::HdrDNet(onnx_path);
auto *classifier = new lite::cv::classification::IBNNet(onnx_path);
auto *classifier = new lite::cv::classification::MobileNetV2(onnx_path); 
auto *classifier = new lite::cv::classification::ResNet(onnx_path); 
auto *classifier = new lite::cv::classification::ResNeXt(onnx_path);
3.6 Expand Examples for Face Detection.

3.6 Face Detection using UltraFace. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ultraface-rfb-640.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ultraface.jpg";
  std::string save_img_path = "../../../logs/test_lite_ultraface.jpg";

  auto *ultraface = new lite::cv::face::detect::UltraFace(onnx_path);

  std::vector<lite::cv::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ultraface->detect(img_bgr, detected_boxes);
  lite::cv::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);

  delete ultraface;
}

The output is:

More classes for face detection.

auto *detector = new lite::face::detect::UltraFace(onnx_path);  // 1.1Mb only !
auto *detector = new lite::face::detect::FaceBoxes(onnx_path);  // 3.8Mb only ! 
auto *detector = new lite::face::detect::RetinaFace(onnx_path);  // 1.6Mb only ! CVPR2020
3.7 Expand Examples for Colorization.

3.7 Colorization using colorization. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/eccv16-colorizer.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_colorizer_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_eccv16_colorizer_1.jpg";
  
  auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
  
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::cv::types::ColorizeContent colorize_content;
  colorizer->detect(img_bgr, colorize_content);
  
  if (colorize_content.flag) cv::imwrite(save_img_path, colorize_content.mat);
  delete colorizer;
}

The output is:


3.8 Expand Examples for Head Pose Estimation.

3.8 Head Pose Estimation using FSANet. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/fsanet-var.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fsanet.jpg";
  std::string save_img_path = "../../../logs/test_lite_fsanet.jpg";

  auto *fsanet = new lite::cv::face::pose::FSANet(onnx_path);
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::cv::types::EulerAngles euler_angles;
  fsanet->detect(img_bgr, euler_angles);
  
  if (euler_angles.flag)
  {
    lite::cv::utils::draw_axis_inplace(img_bgr, euler_angles);
    cv::imwrite(save_img_path, img_bgr);
    std::cout << "yaw:" << euler_angles.yaw << " pitch:" << euler_angles.pitch << " row:" << euler_angles.roll << std::endl;
  }
  delete fsanet;
}

The output is:

3.9 Expand Examples for Face Alignment.

3.9 1000 Facial Landmarks Detection using FaceLandmarks1000. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/FaceLandmark1000.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
  std::string save_img_path = "../../../logs/test_lite_face_landmarks_1000.jpg";
    
  auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);

  lite::cv::types::Landmarks landmarks;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_landmarks_1000->detect(img_bgr, landmarks);
  lite::cv::utils::draw_landmarks_inplace(img_bgr, landmarks);
  cv::imwrite(save_img_path, img_bgr);
  
  delete face_landmarks_1000;
}

The output is:

More classes for face alignment.

auto *align = new lite::cv::face::align::PFLD(onnx_path);  // 106 landmarks
auto *align = new lite::cv::face::align::PFLD98(onnx_path);  // 98 landmarks
auto *align = new lite::cv::face::align::PFLD68(onnx_path);  // 68 landmarks
auto *align = new lite::cv::face::align::MobileNetV268(onnx_path);  // 68 landmarks
auto *align = new lite::cv::face::align::MobileNetV2SE68(onnx_path);  // 68 landmarks
auto *align = new lite::cv::face::align::FaceLandmark1000(onnx_path);  // 1000 landmarks !
3.10 Expand Examples for Style Transfer.

3.10 Style Transfer using FastStyleTransfer. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/style-candy-8.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fast_style_transfer.jpg";
  std::string save_img_path = "../../../logs/test_lite_fast_style_transfer_candy.jpg";
  
  auto *fast_style_transfer = new lite::cv::style::FastStyleTransfer(onnx_path);
 
  lite::cv::types::StyleContent style_content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  fast_style_transfer->detect(img_bgr, style_content);

  if (style_content.flag) cv::imwrite(save_img_path, style_content.mat);
  delete fast_style_transfer;
}

The output is:


3.11 Expand Examples for Image Matting.

3.11 Video Matting using RobustVideoMatting. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/rvm_mobilenetv3_fp32.onnx";
  std::string video_path = "../../../examples/lite/resources/test_lite_rvm_0.mp4";
  std::string output_path = "../../../logs/test_lite_rvm_0.mp4";
  
  auto *rvm = new lite::cv::matting::RobustVideoMatting(onnx_path, 16); // 16 threads
  std::vector<lite::cv::types::MattingContent> contents;
  // 1. video matting.
  rvm->detect_video(video_path, output_path, contents);
  delete rvm;
}

The output is:


4. Lite.AI.ToolKit API Docs.

4.1 Default Version APIs.

More details of the Default Version APIs can be found at api.default.md. For example, the interface for YoloV5 is:

lite::cv::detection::YoloV5

void detect(const cv::Mat &mat, std::vector<lite::cv::types::Boxf> &detected_boxes,
            float score_threshold = 0.25f, float iou_threshold = 0.45f,
            unsigned int topk = 100, unsigned int nms_type = NMS::OFFSET);
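Based on the defaults above, a call that overrides them might look like this (a sketch; the paths and threshold values are arbitrary assumptions):

#include "lite/lite.h"

static void test_tuned()
{
  auto *yolov5 = new lite::cv::detection::YoloV5("yolov5s.onnx"); // path is an assumption
  std::vector<lite::cv::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread("test.jpg"); // path is an assumption
  // score_threshold=0.35, iou_threshold=0.5, topk=50 (example values, tune for your data)
  yolov5->detect(img_bgr, detected_boxes, 0.35f, 0.5f, 50);
  delete yolov5;
}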
Expand for ONNXRuntime, MNN and NCNN version APIs.

4.2 ONNXRuntime Version APIs.

More details of the ONNXRuntime Version APIs can be found at api.onnxruntime.md. For example, the interface for YoloV5 is:

lite::onnxruntime::cv::detection::YoloV5

void detect(const cv::Mat &mat, std::vector<lite::cv::types::Boxf> &detected_boxes,
            float score_threshold = 0.25f, float iou_threshold = 0.45f,
            unsigned int topk = 100, unsigned int nms_type = NMS::OFFSET);
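The explicit ONNXRuntime-namespace class mirrors the default lite::cv one, since onnxruntime is the default backend; a minimal sketch of using it (the file paths are assumptions):

#include "lite/lite.h"

static void test_onnxruntime()
{
  auto *yolov5 = new lite::onnxruntime::cv::detection::YoloV5("yolov5s.onnx");
  std::vector<lite::cv::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread("test.jpg");
  yolov5->detect(img_bgr, detected_boxes);
  delete yolov5;
}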

4.3 MNN Version APIs.

(todo ⚠️: not implemented yet, coming soon.)

lite::mnn::cv::detection::YoloV5

lite::mnn::cv::detection::YoloV4

lite::mnn::cv::detection::YoloV3

lite::mnn::cv::detection::SSD

...

4.4 NCNN Version APIs.

(todo ⚠️: not implemented yet, coming soon.)

lite::ncnn::cv::detection::YoloV5

lite::ncnn::cv::detection::YoloV4

lite::ncnn::cv::detection::YoloV3

lite::ncnn::cv::detection::SSD

...

5. Other Docs.

Expand More Details for Other Docs.

5.1 Docs for ONNXRuntime.

5.2 Docs for third_party.

Other build documents for different engines and different targets will be added later.

Library Target Docs
OpenCV mac-x86_64 opencv-mac-x86_64-build.zh.md
OpenCV android-arm opencv-static-android-arm-build.zh.md
onnxruntime mac-x86_64 onnxruntime-mac-x86_64-build.zh.md
onnxruntime android-arm onnxruntime-android-arm-build.zh.md
NCNN mac-x86_64 todo ⚠️
MNN mac-x86_64 todo ⚠️
TNN mac-x86_64 todo ⚠️

6. License.

The code of Lite.AI.ToolKit is released under the GPL-3.0 License.

7. References.

Many thanks to the following projects. All of Lite.AI.ToolKit's models are sourced from these repos.

Expand for More References.

Citations.

Cite it as follows if you use Lite.AI.ToolKit. Star 🌟 👆🏻 this repo if it helps you ~ 🙃 🤪 🍀

@misc{lite.ai.toolkit2021,
  title={lite.ai.toolkit: A lite C++ toolkit of awesome AI models.},
  url={https://github.com/DefTruth/lite.ai.toolkit},
  note={Open-source software available at https://github.com/DefTruth/lite.ai.toolkit},
  author={Yan Jun},
  year={2021}
}
Issues
  • 🎃 Tutorial for configuring the lite.ai.toolkit library on Linux

    Hello! I have successfully compiled lite.ai.toolkit on Linux, but when compiling yolox with g++ I got the following error: ERROR1. And if I don't use -I, the header files can't be found since all the paths are relative, for example: fatal error: lite/ort/core/ort_core.h: No such file or directory

    documentation question Linux 
    opened by FL77N 50
  • Windows VS2019 build error

    core\ort_types.h(272,1): error C2440: 'initializing': cannot convert from 'ortcv::types::BoundingBoxType<int,double>' to 'ortcv::types::BoundingBoxType<int,float>'

    bug enhancement Windows 
    opened by xinsuinizhuan 19
  • Linux Build Error

    I got this error while building on Linux:

    /usr/bin/ld: cannot find -lopencv_core
    /usr/bin/ld: cannot find -lopencv_imgproc
    /usr/bin/ld: cannot find -lopencv_imgcodecs
    /usr/bin/ld: cannot find -lopencv_video
    /usr/bin/ld: cannot find -lopencv_videoio
    /usr/bin/ld: cannot find -lonnxruntime
    collect2: error: ld returned 1 exit status
    

    Please help! thanks!

    opened by AthanatiusC 13
  • Runtime Version Detected Sim always the same even when the person is changed

    Hi,

    I successfully compiled it on macOS.

    While trying the face recognition algorithms, I noticed that the ONNX Runtime version's Detected Sim is always the same, even when the person is changed.

    i.e.: lite_glint_arcface.cpp, model: std::string onnx_path = "../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx";

    person a - person b:

    LITEORT_DEBUG LogId: ../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx
    =============== Input-Dims ==============
    input_node_dims: 1
    input_node_dims: 3
    input_node_dims: 112
    input_node_dims: 112
    =============== Output-Dims ==============
    Output: 0 Name: embedding Dim: 0 :1
    Output: 0 Name: embedding Dim: 1 :512
    [ WARN:0] global /Users/yanjunqiu/Desktop/third_party/library/opencv/modules/core/src/matrix_expressions.cpp (1334) assign OpenCV/MatExpr: processing of multi-channel arrays might be changed in the future: https://github.com/opencv/opencv/issues/16739
    Default Version Detected Sim: 0.415043
    Default Version Detected Dist: 1.08163
    LITEORT_DEBUG LogId: ../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx
    ... (Input/Output-Dims banner repeated)
    ONNXRuntime Version Detected Sim: 0.0349244

    person-x - person-c:

    LITEORT_DEBUG LogId: ../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx
    ... (Input/Output-Dims banner repeated)
    Default Version Detected Sim: 0.0609607
    Default Version Detected Dist: 1.37043
    LITEORT_DEBUG LogId: ../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx
    ... (Input/Output-Dims banner repeated)
    ONNXRuntime Version Detected Sim: 0.0349244

    insightface 
    opened by MyraBaba 11
  • Compile problem for ARM

    Hi,

    on macOS all is good and compiles successfully, but Raspberry Pi IoT Ubuntu has an issue.

    It gives:

    /usr/bin/ld: /home/pi/Projects/lite.ai.toolkit/build/lite.ai.toolkit/lib/liblite.ai.toolkit.so: undefined reference to `cv::Mat::Mat()'

    The liblite examples fail with the same undefined reference to `cv::Mat::Mat()'.

    When I inspect liblite.ai.toolkit.so with ldd, it is linked to OpenCV:

    ldd /home/pi/Projects/lite.ai.toolkit/build/lite.ai.toolkit/lib/liblite.ai.toolkit.so
        linux-vdso.so.1 (0x0000007f9e95c000)
        libopencv_video.so.4.5 => /usr/local/lib/libopencv_video.so.4.5 (0x0000007f9e762000)
        libopencv_videoio.so.4.5 => /usr/local/lib/libopencv_videoio.so.4.5 (0x0000007f9e6e9000)
        libonnxruntime.so.1.11.0 => /home/pi/USBA/onnxruntime/build/Linux/RelWithDebInfo/libonnxruntime.so.1.11.0 (0x0000007f9dd30000)
        libopencv_calib3d.so.4.5 => /usr/local/lib/libopencv_calib3d.so.4.5 (0x0000007f9db7b000)
        libopencv_features2d.so.4.5 => /usr/local/lib/libopencv_features2d.so.4.5 (0x0000007f9dabc000)
        libopencv_flann.so.4.5 => /usr/local/lib/libopencv_flann.so.4.5 (0x0000007f9da50000)
        libopencv_imgcodecs.so.4.5 => /usr/local/lib/libopencv_imgcodecs.so.4.5 (0x0000007f9d7e6000)
        libopencv_imgproc.so.4.5 => /usr/local/lib/libopencv_imgproc.so.4.5 (0x0000007f9d3b2000)
        libopencv_core.so.4.5 => /usr/local/lib/libopencv_core.so.4.5 (0x0000007f9d036000)
        ... (the remaining system, GStreamer and codec libraries are omitted)

    I couldn't solve it.

    Best

    help wanted ARM 
    opened by MyraBaba 9
  • yolox_nano speed issue

    I tested the inference speed of the YOLOX series with your code framework. All models other than yolox_nano infer at normal speed, but with the nano model the inference speed is even lower than yolox_s. All the onnx files were converted from pth files trained on the official COCO dataset. I noticed that YOLOX has a section of extra code when it defines the nano model (in ./exps/default/nano.py). Could this have an effect?

    question YOLOX:Inference 
    opened by 1VeniVediVeci1 9
  • Getting input dimension info from an MNN model crashes immediately

    In BasicMNNHandler::initialize_handler, several functions such as batch(), channel() and height() are called to get the model's input dimension info, and they all fail for me. Debugging shows that dim holds no data at all, and batch() crashes as soon as it returns dim[0]. I am using MNN 1.2.0 with the models you uploaded to the cloud drive. I tried nanodet, retinaface and other models, and none of them return dimension info. Where could the problem be?

    opened by MatchX 8
  • Using cv::imshow / cv::waitKey gives an error: ld: symbol(s) not found for architecture x86_64

    #include "lite/lite.h"
    #include <opencv2/highgui.hpp> // cv::imshow / cv::waitKey live in highgui
    #include <iostream>

    int main()
    {
      std::string onnx_path = "/Users/also/Downloads/lite.ai.toolkit-main/hub/onnx/cv/FaceLandmark1000.onnx";

      lite::cv::face::align::FaceLandmark1000 *face_landmarks_1000 =
          new lite::cv::face::align::FaceLandmark1000(onnx_path);

      lite::types::Landmarks landmarks;

      cv::VideoCapture cap;
      cv::Mat im;
      cap.open(1); // camera index 1, as in the original report
      if (!cap.isOpened())
      {
        std::cerr << "Cannot open the camera." << std::endl;
        return 0;
      }

      while (true)
      {
        cap >> im;
        cv::Mat img_bgr = im.clone();

        face_landmarks_1000->detect(img_bgr, landmarks);
        lite::utils::draw_landmarks_inplace(img_bgr, landmarks);

        std::cout << "Default Version Done! Detected Landmarks Num: "
                  << landmarks.points.size() << std::endl;

        cv::imshow("result", img_bgr);
        if ((cv::waitKey(2) & 0xFF) == 'q')
          break;
      }

      delete face_landmarks_1000;
      return 0;
    }
    
    Undefined symbols for architecture x86_64:
      "cv::imshow(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, cv::_InputArray const&)", referenced from:
          _main in test_lite_face_landmarks_1000.cpp.o
      "cv::waitKey(int)", referenced from:
          _main in test_lite_face_landmarks_1000.cpp.o
    ld: symbol(s) not found for architecture x86_64
    clang: error: linker command failed with exit code 1 (use -v to see invocation)
    make[3]: *** [lite.ai.toolkit/bin/lite_face_landmarks_1000] Error 1
    make[2]: *** [examples/lite/CMakeFiles/lite_face_landmarks_1000.dir/all] Error 2
    make[1]: *** [examples/lite/CMakeFiles/lite_face_landmarks_1000.dir/rule] Error 2
    make: *** [lite_face_landmarks_1000] Error 2
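    (A hedged note, not from the thread: cv::imshow and cv::waitKey are defined in OpenCV's highgui module, so an undefined-symbol error for just these two functions usually means the target is linked against a set of OpenCV libs that omits highgui, or against a highgui built for a different architecture. Adding opencv_highgui to the example's link libraries, e.g. via target_link_libraries in the examples' CMakeLists.txt (the exact file is an assumption), typically resolves it.)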
    
    opened by chfeizy 8
  • 👉 Windows10 build error (for Windows users)

    👉 Windows10 build error (for Windows users)

    References for Windows10 users

    For usage on Windows, you can refer to the following discussions:

    • 👉 #6
    • 👉 #10
    • 👉 #32
    • 👉 #48
    • 👉 #39
    • 👉 #77

    Also, note the difference between win32 and system32: lite.ai.toolkit currently does not consider 32-bit systems. In addition, after building on Windows you need to manually copy the dependent libraries into build/lite.ai.toolkit/lib and build/lite.ai.toolkit/bin, and check and fix the model file paths, for example the backslashes in Windows paths.

    Search issues about windows

    (screenshot: searching the repo's issues for "windows")

    opened by DefTruth 8
  • Self-built MNN Vulkan GPU library has no effect on Win10

    Self-built MNN Vulkan GPU library has no effect on Win10

    I built this project successfully on Win10 and it runs fine. I am currently running inference through MNN, but the CPU version of MNN cannot reach real time, so I wanted to try a GPU build of MNN. Since Vulkan was already installed on this machine, I built MNN with CPU + Vulkan and swapped the GPU-enabled MNN library into the project, but inference still runs on the CPU (judging by the load shown in Task Manager and by the image processing speed). I also set schedule_config.backupType = MNN_FORWARD_VULKAN; on the MNN::ScheduleConfig (it was MNN_FORWARD_CPU before), still with no effect. Is there a way to make RVM use the GPU under MNN? And between MNN and ONNXRUNTIME, which one would you recommend?
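    (A hedged note, not from the thread: in MNN::ScheduleConfig, backupType only selects the fallback backend; the primary backend is chosen through the type field, so setting backupType alone leaves inference on the default CPU backend. MNN also falls back silently to the backup when the library was built without the requested backend, which can mask the problem. A minimal sketch:)

    // `net` is an existing MNN::Interpreter* (e.g. from Interpreter::createFromFile).
    MNN::ScheduleConfig schedule_config;
    schedule_config.type = MNN_FORWARD_VULKAN;    // primary backend: Vulkan
    schedule_config.backupType = MNN_FORWARD_CPU; // fallback for unsupported ops
    MNN::Session *session = net->createSession(schedule_config);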

    opened by yyl9510 7
  • Support yolact, yolactEdge or other real-time instance segmentation models

    Support yolact, yolactEdge or other real-time instance segmentation models

    As the title says,

    Following these two posts, I successfully converted the model to ONNX: https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_YOLACT.html and https://github.com/Ma-Dan/yolact/tree/onnx

    The problem is that after converting to ONNX the speed drops quite a bit (30ms vs 125ms). The main cost is that ONNX has to move the data from the GPU back to the CPU. I tried handling NMS with torchvision.nms, but that was even slower (over 1000ms), and I am not sure what went wrong. An article I saw on Zhihu mentions that "post-processing gets turned into a big pile of glue ops in ONNX, which is very fragmented and runs inefficiently in the framework"; I wonder whether that is the reason.

    Do you know of a good way to optimize this part? Thanks.
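    (A hedged note, not from the thread: a common workaround is to export only the network itself to ONNX and run the decoding + NMS in plain C++ on the raw outputs, which keeps the "glue op" post-processing out of the graph. A minimal standalone hard-NMS sketch; this is illustrative code, not code from this repo:)

    #include <algorithm>
    #include <vector>

    struct Box { float x1, y1, x2, y2, score; };

    // Intersection-over-union of two axis-aligned boxes.
    static float iou(const Box &a, const Box &b)
    {
      const float ix1 = std::max(a.x1, b.x1), iy1 = std::max(a.y1, b.y1);
      const float ix2 = std::min(a.x2, b.x2), iy2 = std::min(a.y2, b.y2);
      const float inter = std::max(0.f, ix2 - ix1) * std::max(0.f, iy2 - iy1);
      const float area_a = (a.x2 - a.x1) * (a.y2 - a.y1);
      const float area_b = (b.x2 - b.x1) * (b.y2 - b.y1);
      return inter / (area_a + area_b - inter + 1e-6f);
    }

    // Keep the highest-scoring boxes, dropping any box that overlaps a kept one.
    static std::vector<Box> hard_nms(std::vector<Box> boxes, float iou_thresh)
    {
      std::sort(boxes.begin(), boxes.end(),
                [](const Box &a, const Box &b) { return a.score > b.score; });
      std::vector<Box> keep;
      std::vector<bool> removed(boxes.size(), false);
      for (size_t i = 0; i < boxes.size(); ++i)
      {
        if (removed[i]) continue;
        keep.push_back(boxes[i]);
        for (size_t j = i + 1; j < boxes.size(); ++j)
          if (!removed[j] && iou(boxes[i], boxes[j]) > iou_thresh)
            removed[j] = true;
      }
      return keep;
    }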

    TODO 
    opened by stereomatchingkiss 7
  • [FAQ]: Author's answer 👉 How to composite a new background using the alpha from a matting model?

    [FAQ]: Author's answer 👉 How to composite a new background using the alpha from a matting model?

    Since quite a few people have asked about this, I have added some helper functions to lite for background compositing, though they have not been merged into the main branch yet. The details are in the logic below; the core is standard alpha compositing, out = alpha * fgr + (1 - alpha) * bgr, applied element-wise. (Performance optimization is not a concern for now; if you need it, you can apply your own optimizations on top of this logic:)

    void lite::utils::swap_background(const cv::Mat &fgr_mat, const cv::Mat &pha_mat,
                                      const cv::Mat &bgr_mat, cv::Mat &out_mat,
                                      bool fgr_is_already_mul_pha)
    {
      // user-friendly method for background swap.
      if (fgr_mat.empty() || pha_mat.empty() || bgr_mat.empty()) return;
      const unsigned int fg_h = fgr_mat.rows;
      const unsigned int fg_w = fgr_mat.cols;
      const unsigned int bg_h = bgr_mat.rows;
      const unsigned int bg_w = bgr_mat.cols;
      const unsigned int ph_h = pha_mat.rows;
      const unsigned int ph_w = pha_mat.cols;
      const unsigned int channels = fgr_mat.channels();
      if (channels != 3) return; // only support 3 channels.
      const unsigned int num_elements = fg_h * fg_w * channels;
    
      cv::Mat bg_mat_copy, ph_mat_copy, fg_mat_copy;
      if (bg_h != fg_h || bg_w != fg_w)
        cv::resize(bgr_mat, bg_mat_copy, cv::Size(fg_w, fg_h));
      else bg_mat_copy = bgr_mat; // ref only.
      if (ph_h != fg_h || ph_w != fg_w)
        cv::resize(pha_mat, ph_mat_copy, cv::Size(fg_w, fg_h));
      else ph_mat_copy = pha_mat; // ref only.
      if (ph_mat_copy.channels() == 1)
        cv::cvtColor(ph_mat_copy, ph_mat_copy, cv::COLOR_GRAY2BGR); // 0.~1.
      // convert mats to float32 points.
      if (bg_mat_copy.type() != CV_32FC3) bg_mat_copy.convertTo(bg_mat_copy, CV_32FC3); // 0.~255.
      if (ph_mat_copy.type() != CV_32FC3) ph_mat_copy.convertTo(ph_mat_copy, CV_32FC3); // 0.~1.
      if (fgr_mat.type() != CV_32FC3) fgr_mat.convertTo(fg_mat_copy, CV_32FC3); // 0.~255.
      else fg_mat_copy = fgr_mat; // ref only
    
      // element wise operations.
      out_mat = fg_mat_copy.clone();
      const float *fg_ptr = (float *) fg_mat_copy.data;
      const float *bg_ptr = (float *) bg_mat_copy.data;
      const float *ph_ptr = (float *) ph_mat_copy.data;
      float *mutable_out_ptr = (float *) out_mat.data;
    
      // TODO: add omp support instead of native loop.
      if (!fgr_is_already_mul_pha)
        for (unsigned int i = 0; i < num_elements; ++i)
          mutable_out_ptr[i] = fg_ptr[i] * ph_ptr[i] + (1.f - ph_ptr[i]) * bg_ptr[i];
      else
        for (unsigned int i = 0; i < num_elements; ++i)
          mutable_out_ptr[i] = fg_ptr[i] + (1.f - ph_ptr[i]) * bg_ptr[i];
    
      if (!out_mat.empty() && out_mat.type() != CV_8UC3)
        out_mat.convertTo(out_mat, CV_8UC3);
    }
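    (Side note, not part of the function above: once all three mats are CV_32FC3 and the same size, the element-wise loop also has a compact OpenCV matrix-expression equivalent; a sketch for the straight-alpha case, with fg, bg and ph standing for the prepared foreground, background and 3-channel alpha mats:)

    cv::Mat inv_ph = cv::Scalar::all(1.0) - ph; // (1 - alpha)
    cv::Mat out = fg.mul(ph) + bg.mul(inv_ph);  // alpha compositing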
    

    Usage example (MODNet is still under development; this is only a reference example)

    static void test_default()
    {
      std::string onnx_path = "../../../hub/onnx/cv/modnet_photographic_portrait_matting-512x512.onnx";
      std::string test_img_path = "../../../examples/lite/resources/test_lite_matting_input.jpg";
      std::string test_bgr_path = "../../../examples/lite/resources/test_lite_matting_bgr.jpg";
      std::string save_fgr_path = "../../../logs/test_lite_modnet_fgr.jpg";
      std::string save_pha_path = "../../../logs/test_lite_modnet_pha.jpg";
      std::string save_merge_path = "../../../logs/test_lite_modnet_merge.jpg";
      std::string save_swap_path = "../../../logs/test_lite_modnet_swap.jpg";
    
      lite::cv::matting::MODNet *modnet =
          new lite::cv::matting::MODNet(onnx_path, 16); // 16 threads
    
      lite::types::MattingContent content;
      cv::Mat img_bgr = cv::imread(test_img_path);
      cv::Mat bgr_mat = cv::imread(test_bgr_path);
    
      // 1. image matting.
      modnet->detect(img_bgr, content, true);
    
      if (content.flag)
      {
        if (!content.fgr_mat.empty()) cv::imwrite(save_fgr_path, content.fgr_mat);
        if (!content.pha_mat.empty()) cv::imwrite(save_pha_path, content.pha_mat * 255.);
        if (!content.merge_mat.empty()) cv::imwrite(save_merge_path, content.merge_mat);
        // swap background
        cv::Mat out_mat;
        lite::utils::swap_background(content.fgr_mat, content.pha_mat, bgr_mat, out_mat, true);
        if (!out_mat.empty())
        {
          cv::imwrite(save_swap_path, out_mat);
          std::cout << "Saved Swap Image Done!" << std::endl;
        }
    
        std::cout << "Default Version MGMatting Done!" << std::endl;
      }
    
      delete modnet;
    }
    

    Example results

    • Composite image: test_lite_modnet_swap
    • Original image: test_lite_matting_input
    • Background image: test_lite_matting_bgr
    enhancement question 
    opened by DefTruth 0
  • 🔥 Windows10 VS2019 CUDA 11.1: setting up the lite.ai.toolkit library

    🔥 Windows10 VS2019 CUDA 11.1: setting up the lite.ai.toolkit library

    Additional notes

    First of all, many thanks to @zhanghongyong123456 for this detailed tutorial on setting up lite.ai.toolkit on Windows ~ extremely helpful! Windows users can start with this tutorial, together with the discussions in the related issues below.

    References for Windows10 users

    For usage on Windows, you can refer to the following discussions:

    • 👉 #6
    • 👉 #10
    • 👉 #32
    • 👉 #48
    • 👉 #39
    • 👉 #77
    • 👉 #242

    Also, note the difference between win32 and system32: lite.ai.toolkit currently does not consider 32-bit systems. In addition, after building on Windows you need to manually copy the dependent libraries into build/lite.ai.toolkit/lib and build/lite.ai.toolkit/bin, and check and fix the model file paths, for example the backslashes in Windows paths.

    Search issues about windows

    (screenshot: searching the repo's issues for "windows")


    Windows10 VS2019 CUDA 11.1: setting up the lite.ai.toolkit library

    Author: @zhanghongyong123456

    The original tutorial follows.


    Step 1: set up the third-party dependencies.

    1.1 OpenCV: following this blog post, download the official prebuilt package, extract it, and add it to the environment variables; I used opencv 4.5.5: https://blog.csdn.net/xgocn/article/details/104170088
    1.2 ONNXRuntime: just follow this blog post: https://blog.csdn.net/qq_44747572/article/details/121340735?spm=1001.2014.3001.5501 Pay attention to your own CUDA version here; I used the official onnxruntime-win-x64-gpu-1.9.0 build.
    1.3 TNN: simply download and extract the release package from GitHub: https://github.com/Tencent/TNN/releases/tag/v0.3.0 https://github.com/Tencent/TNN/releases/download/v0.3.0/tnn-v0.3.0-windows.zip (screenshot)
    1.4 MNN: download the source from GitHub and build it yourself, following this blog post. You can build Debug or Release (both can be built from the same folder); as for whether you need to switch the language setting, try building first and only change it if the build fails: https://blog.csdn.net/ouyangfushu/article/details/96476245 Open "x64 Native Tools Command Prompt for VS 2019" and build directly; I skipped the blog's first and second steps and it still worked. (screenshot) cmake -G "NMake Makefiles" -DCMAKE_BUILD_TYPE=Release .. You only need to set Debug or Release; I used the Release build from here on.
    1.5 ncnn: follow this guide: https://zhuanlan.zhihu.com/p/391609325 Special note: when setting up Protobuf, I built Debug and then Release in the same source folder, and even if that step itself succeeds, the later builds will fail. So if you want both Release and Debug versions, I strongly recommend building them from two separate source folders; the same applies when building the ncnn source itself in Debug and Release. When building Vulkan you need to set the environment variable, otherwise before building ncnn you must run set VULKAN_SDK=C:/VulkanSDK/1.3.204.0 (with your own VulkanSDK install path), as in the screenshot. (screenshot)
    1.6 Set up the lite.ai.toolkit library itself:
    1.6.1 Clone the project. (screenshot)
    1.6.2 Replace the bundled third-party headers (screenshots): opencv; tnn; onnxruntime, downloading the source matching your own version (I used onnxruntime-win-x64-gpu-1.9.0, so I downloaded the corresponding source to replace it); MNN, using the MNN folder under include in the source tree; ncnn, using the build/install/include/ncnn folder from your own build.
    1.6.3 Copy each dependency's libs and dlls into the source folder, under lite.ai.toolkit/lib (at first there was no windows subfolder there, and I forget whether I deleted it or it was added later; once I noticed it, I copied the libs and dlls into both lib and its windows subfolder): opencv, tnn, onnxruntime, MNN (I copied the Release version; later I also built the MinSizeRel version), ncnn, and protobuf (copied as well, since the build failed many times and I just copied everything that was used). (screenshots)
    1.6.4 Configuration: modify the OpenCV config. (screenshot)
    1.6.5 CMake. Method 1: the command line. (screenshots)

    Method 2: the cmake GUI. (screenshots)

    1.6.6 Open the project and prepare to build (screenshots): configure the header paths, the lib paths, and the lib file names. These are my lib names: opencv_world455.lib onnxruntime.lib MNN.lib ncnn.lib TNN.lib kernel32.lib user32.lib gdi32.lib winspool.lib shell32.lib ole32.lib oleaut32.lib uuid.lib comdlg32.lib advapi32.lib libprotobuf.lib vulkan-1.lib glslang.lib SPIRV.lib OGLCompiler.lib OSDependent.lib MachineIndependent.lib GenericCodeGen.lib. Then apply the error fixes. (screenshot)

    1.6.7 Build. (screenshot)
    1.6.8 Done. (screenshot) There may be some things that are not quite right; corrections are welcome.
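    (A hedged addition, not part of the original tutorial: once the build succeeds, a minimal smoke test in the style of the repo's own examples can verify the setup; the model and image paths below are placeholders.)

    #include "lite/lite.h"

    int main()
    {
      std::string onnx_path = "yolov5s.onnx"; // placeholder path
      std::string img_path = "test.jpg";      // placeholder path

      auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
      std::vector<lite::types::Boxf> detected_boxes;
      cv::Mat img_bgr = cv::imread(img_path);
      yolov5->detect(img_bgr, detected_boxes); // run one detection pass
      lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
      cv::imwrite("result.jpg", img_bgr);

      delete yolov5;
      return 0;
    }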

    documentation GPU Windows 
    opened by zhanghongyong123456 18
  • 👉CONTRIBUTING (How to add your own model?)

    👉CONTRIBUTING (How to add your own model?)

    This issue explains how to add your own model to lite.ai.toolkit. lite.ai.toolkit integrates a number of fairly recent base models, such as face detection, face recognition, matting, face attribute analysis, image classification, facial landmark detection, image colorization, object detection and so on, which can be used directly in concrete scenarios. But the models in lite.ai.toolkit are inevitably limited, and in your specific scenario you may have an optimized model, for example an object detector you trained yourself that works better. So how do you add your model to lite.ai.toolkit, so that you can both use its existing capabilities and cover your own scenario? That is what this issue covers. If you have questions, ask them here and I will answer as best I can~

    good first issue 
    opened by DefTruth 11
Releases(v0.1.1)
Owner
DefTruth
Keep learning ☕️😜🙃
DefTruth
Lite.AI 🚀🚀🌟 is a user friendly C++ lib of 60+ awesome AI models. YOLOX🔥, YoloV5🔥, YoloV4🔥, DeepLabV3🔥, ArcFace🔥, CosFace🔥, RetinaFace🔥, SSD🔥, etc.

Lite.AI 🚀 🚀 🌟 Introduction. Lite.AI 🚀 🚀 🌟 is a simple and user-friendly C++ library of awesome 🔥 🔥 🔥 AI models. It's a collection of personal

Def++ 2k Aug 12, 2022
Lite.AI 🚀🚀🌟 is a user-friendly C++ lib for awesome🔥🔥🔥 AI models based on onnxruntime, ncnn or mnn. YOLOX, YoloV5, YoloV4, DeepLabV3, ArcFace, CosFace, Colorization, SSD

Lite.AI 🚀🚀🌟 is a user-friendly C++ lib for awesome🔥🔥🔥 AI models based on onnxruntime, ncnn or mnn. YOLOX🔥, YoloV5🔥, YoloV4🔥, DeepLabV3🔥, ArcFace🔥, CosFace🔥, Colorization🔥, SSD🔥, etc.

Def++ 2k Aug 9, 2022
YOLOP running in Android by ncnn

YOLOP-NCNN ports the YOLOP model to NCNN; the project includes a VS test under Windows and an Android implementation. YOLOP: vehicle detection + drivable-area segmentation + lane-line segmentation, a three-in-one network designed on top of the YOLO series. The official project is here: https://github.com/hustvl/YOLOP Project details VS2

WuJinxuan 26 Aug 9, 2022
KSAI Lite is a deep learning inference framework of kingsoft, based on tensorflow lite

KSAI Lite English | 简体中文 KSAI Lite is a lightweight, flexible, high-performance and easily extensible deep learning inference framework built on top of tensorflow lite, targeting multiple hardware platforms including mobile, embedded and server. KSAI Lite is already used in Kingsoft Office's internal business and is gradually supporting Kingsoft

null 75 Apr 14, 2022
Deploy the yolox algorithm using deepstream

YOLOX (Megvii-BaseDetection) Deploy DeepStream. This project is based on https://github.com/Megvii-BaseDetection/YOLOX and https://zhuanlan.zhihu.com/

null 75 Jul 14, 2022
Android yolox hand detect by ncnn

The yolox hand detection. This is a sample ncnn Android project; it depends on the ncnn library and OpenCV. https://github.com/Tencent/ncnn https://github.c

FeiGeChuanShu 12 Jul 18, 2022
YOLOX + ROS2 object detection package

YOLOX-ROS YOLOX+ROS2 Foxy Supported List Base ROS1 C++ ROS1 Python ROS2 C++ ROS2 Python CPU ✅ CUDA ✅ CUDA (FP16) ✅ TensorRT (CUDA) ✅ OpenVINO ✅ MegEng

Ar-Ray 132 Aug 9, 2022
YoloX for a Jetson Nano 4 using ncnn.

YoloX Jetson Nano YoloX with the ncnn framework. Paper: https://arxiv.org/pdf/2107.08430.pdf Specially made for a Jetson Nano, see Q-engineering deep le

Q-engineering 8 Aug 2, 2022
YoloX for a bare Raspberry Pi 4 using ncnn.

YoloX Raspberry Pi 4 YoloX with the ncnn framework. Paper: https://arxiv.org/pdf/2107.08430.pdf Specially made for a bare Raspberry Pi 4, see Q-engineer

Q-engineering 5 Jun 24, 2022
VNOpenAI 23 Jul 31, 2022
YOLO5Face.lite.ai.toolkit

YOLO5Face 2021 with MNN/NCNN/TNN/ONNXRuntime C++!

DefTruth 30 Jul 11, 2022
Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit

CNTK: The Microsoft Cognitive Toolkit (https://cntk.ai) is a unified deep learning toolkit that describes

Microsoft 17.2k Aug 6, 2022
Zenotech 7 Nov 13, 2020
Insight Toolkit (ITK) is an open-source, cross-platform toolkit for N-dimensional scientific image processing, segmentation, and registration

ITK: The Insight Toolkit C++ Python Linux macOS Windows Linux (Code coverage) Links Homepage Download Discussion Software Guide Help Examples Issue tr

Insight Software Consortium 1.1k Aug 8, 2022
Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit

The Microsoft Cognitive Toolkit is a unified deep learning toolkit that describes neural networks as a series of computational steps via a directed graph.

Microsoft 17.2k Aug 9, 2022
Vowpal Wabbit is a machine learning system which pushes the frontier of machine learning with techniques such as online, hashing, allreduce, reductions, learning2search, active, and interactive learning.

This is the Vowpal Wabbit fast online learning code. Why Vowpal Wabbit? Vowpal Wabbit is a machine learning system which pushes the frontier of machin

Vowpal Wabbit 8k Aug 4, 2022
Header-only library for using Keras models in C++.

frugally-deep Use Keras models in C++ with ease Table of contents Introduction Usage Performance Requirements and Installation FAQ Introduction Would

Tobias Hermann 882 Aug 7, 2022
TensorRT implementation of RepVGG models from RepVGG: Making VGG-style ConvNets Great Again

RepVGG RepVGG models from "RepVGG: Making VGG-style ConvNets Great Again" https://arxiv.org/pdf/2101.03697.pdf For the Pytorch implementation, you can

weiwei zhou 67 Aug 4, 2022