Lite.AI 🚀🚀🌟 is a user-friendly C++ library of awesome 🔥🔥🔥 AI models based on onnxruntime, ncnn, or MNN: YOLOX, YoloV5, YoloV4, DeepLabV3, ArcFace, CosFace, Colorization, SSD, etc.

Overview

Lite.AI 🚀 🚀 🌟


Introduction.

Lite.AI 🚀 🚀 🌟 is a simple and user-friendly C++ library of awesome 🔥 🔥 🔥 AI models. It's a collection of personal interests, such as YOLOX, YoloV5, YoloV4, DeepLabV3, ArcFace, etc. It only depends on OpenCV and the commonly used inference engines onnxruntime, ncnn, and MNN, and it uses onnxruntime C++ by default. Lite.AI covers object detection, face detection, style transfer, face alignment, face recognition, segmentation, colorization, face attribute analysis, image classification, matting, and more. You can use these awesome models simply through the lite::cv::Type::Class syntax, such as lite::cv::detection::YoloV5. Have a nice trip ~ 🙃 🤪 🍀
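A quick glance at the usage, as a minimal sketch (the model path and image paths here are placeholders; see the Model Zoo and Examples sections for real files):

#include "lite/lite.h"

static void test_quick_glance()
{
  auto *yolov5 = new lite::cv::detection::YoloV5("yolov5s.onnx");  // placeholder model path
  std::vector<lite::cv::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread("test.jpg");                        // placeholder input image
  yolov5->detect(img_bgr, detected_boxes);                         // run inference
  lite::cv::utils::draw_boxes_inplace(img_bgr, detected_boxes);    // draw results in place
  cv::imwrite("result.jpg", img_bgr);                              // placeholder output path
  delete yolov5;
}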

Important Notes !!!

Contents.

1. Dependencies.

Mac OS.

Install the OpenCV and onnxruntime libraries using Homebrew, or download the prebuilt dependencies from this repo. See third_party and build-docs1 for more details.

  brew update
  brew install opencv
  brew install onnxruntime
Expand for More Details of Dependencies.

Linux.

  • todo ⚠️

Windows.

  • todo ⚠️

Inference Engine Plans:

  • doing:
    ❇️ onnxruntime
  • todo:
    ⚠️ NCNN
    ⚠️ MNN
    ⚠️ OpenMP

2. Build Lite.AI.

Build the shared lib of Lite.AI for macOS from source. Note that Lite.AI uses onnxruntime as the default backend because onnxruntime supports most ONNX operators.

Linux and Windows.

⚠️ Lite.AI does not directly support Linux and Windows yet. For Linux and Windows, you need to build the shared libs of OpenCV and onnxruntime first and put them into the third_party directory. Please refer to build-docs1 for third_party. The documents and Docker image for Linux are coming soon ~

  • Clone Lite.AI from source:
git clone --depth=1 https://github.com/DefTruth/lite.ai.git  # latest
  • Build shared lib.
cd lite.ai
sh ./build.sh
cd ./build/lite.ai/lib && otool -L liblite.ai.0.0.1.dylib 
liblite.ai.0.0.1.dylib:
        @rpath/liblite.ai.0.0.1.dylib (compatibility version 0.0.1, current version 0.0.1)
        @rpath/libopencv_highgui.4.5.dylib (compatibility version 4.5.0, current version 4.5.2)
        @rpath/libonnxruntime.1.7.0.dylib (compatibility version 0.0.0, current version 1.7.0)
        ...
Expand for more details on how to link the shared lib of Lite.AI.
cd ../ && tree .
├── bin
├── include
│   ├── lite
│   │   ├── backend.h
│   │   ├── config.h
│   │   └── lite.h
│   └── ort
└── lib
    └── liblite.ai.0.0.1.dylib
  • Run the built examples:
cd ./build/lite.ai/bin && ls -lh | grep lite
-rwxr-xr-x  1 root  staff   301K Jun 26 23:10 liblite.ai.0.0.1.dylib
...
-rwxr-xr-x  1 root  staff   196K Jun 26 23:10 lite_yolov4
-rwxr-xr-x  1 root  staff   196K Jun 26 23:10 lite_yolov5
...
./lite_yolov5
LITEORT_DEBUG LogId: ../../../hub/onnx/cv/yolov5s.onnx
=============== Input-Dims ==============
...
detected num_anchors: 25200
generate_bboxes num: 66
Default Version Detected Boxes Num: 5
  • To link the lite.ai shared lib, you need to make sure that OpenCV and onnxruntime are linked correctly, for example:
cmake_minimum_required(VERSION 3.17)
project(testlite.ai)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_BUILD_TYPE debug)
# link opencv.
set(OpenCV_DIR ${CMAKE_SOURCE_DIR}/opencv/lib/cmake/opencv4)
find_package(OpenCV 4 REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
# link onnxruntime.
set(ONNXRUNTIME_DIR ${CMAKE_SOURCE_DIR}/onnxruntime/)
set(ONNXRUNTIME_INCLUDE_DIR ${ONNXRUNTIME_DIR}/include)
set(ONNXRUNTIME_LIBRARY_DIR ${ONNXRUNTIME_DIR}/lib)
include_directories(${ONNXRUNTIME_INCLUDE_DIR})
link_directories(${ONNXRUNTIME_LIBRARY_DIR})
# link lite.ai.
set(LITEHUB_DIR ${CMAKE_SOURCE_DIR}/lite.ai)
set(LITEHUB_INCLUDE_DIR ${LITEHUB_DIR}/include)
set(LITEHUB_LIBRARY_DIR ${LITEHUB_DIR}/lib)
include_directories(${LITEHUB_INCLUDE_DIR})
link_directories(${LITEHUB_LIBRARY_DIR})
# add your executable
add_executable(lite_yolov5 test_lite_yolov5.cpp)
target_link_libraries(lite_yolov5 lite.ai onnxruntime ${OpenCV_LIBS})

A minimal example showing how to link the shared lib of Lite.AI correctly into your own project can be found at lite.ai-release.
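For reference, the test_lite_yolov5.cpp referenced by add_executable above could be as simple as the following sketch (the model and image paths are placeholders you would adapt to your project):

#include <iostream>
#include "lite/lite.h"

int main()
{
  std::string onnx_path = "yolov5s.onnx";  // placeholder: your model file
  std::string img_path = "test.jpg";       // placeholder: your test image

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
  std::vector<lite::cv::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(img_path);
  yolov5->detect(img_bgr, detected_boxes);
  std::cout << "Detected Boxes Num: " << detected_boxes.size() << std::endl;

  delete yolov5;
  return 0;
}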

3. Model Zoo.

3.1 Namespace and Lite.AI modules.

Lite.AI now contains 50+ AI models with 100+ frozen pretrained .onnx files, covering different fields of computer vision. Click the Expand ▶️ button for more details; a short sketch after the namespace list below illustrates the layout.

Expand Details for Namespace and Lite.AI modules.
Namespace Details
lite::cv::detection Object Detection. one-stage and anchor-free detectors, YoloV5, YoloV4, SSD, etc.
lite::cv::classification Image Classification. DenseNet, ShuffleNet, ResNet, IBNNet, GhostNet, etc.
lite::cv::faceid Face Recognition. ArcFace, CosFace, CurricularFace, etc. ❇️
lite::cv::face Face Analysis. detect, align, pose, attr, etc. ❇️
lite::cv::face::detect Face Detection. UltraFace, RetinaFace, FaceBoxes, PyramidBox, etc. ❇️
lite::cv::face::align Face Alignment. PFLD(106), FaceLandmark1000(1000 landmarks), PRNet, etc. ❇️
lite::cv::face::pose Head Pose Estimation. FSANet, etc. ❇️
lite::cv::face::attr Face Attributes. Emotion, Age, Gender. EmotionFerPlus, VGG16Age, etc. ❇️
lite::cv::segmentation Object Segmentation. Such as FCN, DeepLabV3, etc. ⚠️
lite::cv::style Style Transfer. Contains neural style transfer now, such as FastStyleTransfer. ⚠️
lite::cv::matting Image Matting. Object and Human matting. ⚠️
lite::cv::colorization Colorization. Make gray images become RGB. ⚠️
lite::cv::resolution Super Resolution. ⚠️
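
As a quick illustration of this namespace layout, here is a minimal sketch constructing one class from several modules (all model paths below are placeholders; use the corresponding files from the Model Zoo):

auto *detector   = new lite::cv::detection::YoloV5("yolov5s.onnx");
auto *classifier = new lite::cv::classification::ResNet("resnet.onnx");
auto *recognizer = new lite::cv::faceid::GlintArcFace("ms1mv3_arcface_r100.onnx");
auto *aligner    = new lite::cv::face::align::PFLD("pfld.onnx");
auto *segmenter  = new lite::cv::segmentation::DeepLabV3ResNet101("deeplabv3_resnet101_coco.onnx");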

3.2 Lite.AI's Classes and Pretrained Files.

Correspondence between the classes in Lite.AI and the pretrained model files can be found at lite.ai.hub.onnx.md. For example, the pretrained model files for lite::cv::detection::YoloV5 and lite::cv::detection::YoloX are listed as follows.

Expand Examples for Lite.AI's Classes and Pretrained Files.
Class Pretrained ONNX Files Rename or Converted From (Repo) Size
lite::cv::detection::YoloV5 yolov5l.onnx yolov5 ( 🔥 🔥 💥 ↑) 188Mb
lite::cv::detection::YoloV5 yolov5m.onnx yolov5 ( 🔥 🔥 💥 ↑) 85Mb
lite::cv::detection::YoloV5 yolov5s.onnx yolov5 ( 🔥 🔥 💥 ↑) 29Mb
lite::cv::detection::YoloV5 yolov5x.onnx yolov5 ( 🔥 🔥 💥 ↑) 351Mb
lite::cv::detection::YoloX yolox_x.onnx YOLOX ( 🔥 🔥 !!↑) 378Mb
lite::cv::detection::YoloX yolox_l.onnx YOLOX ( 🔥 🔥 !!↑) 207Mb
lite::cv::detection::YoloX yolox_m.onnx YOLOX ( 🔥 🔥 !!↑) 97Mb
lite::cv::detection::YoloX yolox_s.onnx YOLOX ( 🔥 🔥 !!↑) 34Mb
lite::cv::detection::YoloX yolox_tiny.onnx YOLOX ( 🔥 🔥 !!↑) 19Mb
lite::cv::detection::YoloX yolox_nano.onnx YOLOX ( 🔥 🔥 !!↑) 3.5Mb

This means that you can load any one of the yolov5*.onnx and yolox_*.onnx files through the same Lite.AI classes, such as YoloV5, YoloX, etc., according to your application.

auto *yolov5 = new lite::cv::detection::YoloV5("yolov5x.onnx");  // for server
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5l.onnx"); 
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5m.onnx");  
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5s.onnx");  // for mobile device 
auto *yolox = new lite::cv::detection::YoloX("yolox_x.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_l.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_m.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_s.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_tiny.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_nano.onnx");  // 3.5Mb only !

3.3 Model Zoo for Lite.AI.

Note that the models here are all from third-party projects. Most of the models were converted by Lite.AI. In Lite.AI, different names for the same algorithm mean that the corresponding models come from different repositories, different implementations, or different training data. ✅ means passed the test, and ⚠️ means not implemented yet but coming soon. For classes denoted ✅, you can use them through the lite::cv::Type::Class syntax, such as lite::cv::detection::YoloV5. More details can be found at Examples for Lite.AI.
(Baidu Drive code: 8gin)

  • Object Detection.
Class Size From Awesome File Type State Usage
YoloV5 28M yolov5 🔥 🔥 💥 detection demo
YoloV3 236M onnx-models 🔥 🔥 🔥 detection demo
TinyYoloV3 33M onnx-models 🔥 🔥 🔥 detection demo
YoloV4 176M YOLOv4... 🔥 🔥 🔥 detection demo
SSD 76M onnx-models 🔥 🔥 🔥 detection demo
SSDMobileNetV1 27M onnx-models 🔥 🔥 🔥 detection demo
YoloX 3.5M YOLOX 🔥 🔥 new↑ detection demo
⚠️ Expand More Details for Lite.AI's Model Zoo.
  • Classification.
Class Size From Awesome File Type State Usage
EfficientNetLite4 49M onnx-models 🔥 🔥 🔥 classification demo
ShuffleNetV2 8.7M onnx-models 🔥 🔥 🔥 classification demo
DenseNet121 30.7M torchvision 🔥 🔥 🔥 classification demo
GhostNet 20M torchvision 🔥 🔥 🔥 classification demo
HdrDNet 13M torchvision 🔥 🔥 🔥 classification demo
IBNNet 97M torchvision 🔥 🔥 🔥 classification demo
MobileNetV2 13M torchvision 🔥 🔥 🔥 classification demo
ResNet 44M torchvision 🔥 🔥 🔥 classification demo
ResNeXt 95M torchvision 🔥 🔥 🔥 classification demo
  • Face Detection.
Class Size From Awesome File Type State Usage
UltraFace 1.1M Ultra-Light... 🔥 🔥 🔥 face::detect demo
MobileV1RetinaFace - ...Retinaface 🔥 🔥 🔥 face::detect ⚠️ -
ResNetRetinaFace - ...Retinaface 🔥 🔥 🔥 face::detect ⚠️ -
FaceBoxes - FaceBoxes 🔥 🔥 face::detect ⚠️ -
  • Face Alignment.
Class Size From Awesome File Type State Usage
PFLD 1.0M pfld_106_... 🔥 🔥 face::align demo
PFLD98 4.8M PFLD... 🔥 🔥 face::align ✅️ demo
MobileNetV268 9.4M ...landmark 🔥 🔥 face::align ✅️ demo
MobileNetV2SE68 11M ...landmark 🔥 🔥 face::align ✅️ demo
PFLD68 2.8M ...landmark 🔥 🔥 face::align ✅️ demo
FaceLandmark1000 2.0M FaceLandm... 🔥 face::align ✅️ demo
  • Face Attributes.
Class Size From Awesome File Type State Usage
AgeGoogleNet 23M onnx-models 🔥 🔥 🔥 face::attr demo
GenderGoogleNet 23M onnx-models 🔥 🔥 🔥 face::attr demo
EmotionFerPlus 33M onnx-models 🔥 🔥 🔥 face::attr demo
VGG16Age 514M onnx-models 🔥 🔥 🔥 face::attr demo
VGG16Gender 512M onnx-models 🔥 🔥 🔥 face::attr demo
SSRNet 190K SSR_Net... 🔥 face::attr demo
EfficientEmotion7 15M face-emo... 🔥 face::attr ✅️ demo
EfficientEmotion8 15M face-emo... 🔥 face::attr demo
MobileEmotion7 13M face-emo... 🔥 face::attr demo
ReXNetEmotion7 30M face-emo... 🔥 face::attr demo
  • Face Recognition.
Class Size From Awesome File Type State Usage
GlintArcFace 92M insightface 🔥 🔥 🔥 faceid demo
GlintCosFace 92M insightface 🔥 🔥 🔥 faceid demo
GlintPartialFC 170M insightface 🔥 🔥 🔥 faceid demo
FaceNet 89M facenet... 🔥 🔥 🔥 faceid demo
FocalArcFace 166M face.evoLVe... 🔥 🔥 🔥 faceid demo
FocalAsiaArcFace 166M face.evoLVe... 🔥 🔥 🔥 faceid demo
TencentCurricularFace 249M TFace 🔥 🔥 faceid demo
TencentCifpFace 130M TFace 🔥 🔥 faceid demo
CenterLossFace 280M center-loss... 🔥 🔥 faceid demo
SphereFace 80M sphere... 🔥 🔥 faceid ✅️ demo
PoseRobustFace 92M DREAM 🔥 faceid ✅️ demo
NaivePoseRobustFace 43M DREAM 🔥 🔥 faceid ✅️ demo
MobileFaceNet 3.8M MobileFace... 🔥 🔥 faceid demo
CavaGhostArcFace 15M cavaface... 🔥 🔥 faceid demo
CavaCombinedFace 250M cavaface... 🔥 🔥 faceid demo
MobileSEFocalFace 4.5M face_recog... 🔥 🔥 faceid demo
  • Head Pose Estimation.
Class Size From Awesome File Type State Usage
FSANet 1.2M ...fsanet... 🔥 face::pose demo
  • Segmentation.
Class Size From Awesome File Type State Usage
DeepLabV3ResNet101 232M torchvision 🔥 🔥 🔥 segmentation demo
FCNResNet101 207M torchvision 🔥 🔥 🔥 segmentation demo
  • Style Transfer.
Class Size From Awesome File Type State Usage
FastStyleTransfer 6.4M onnx-models 🔥 🔥 🔥 style demo
  • Colorization.
Class Size From Awesome File Type State Usage
Colorizer 123M colorization 🔥 🔥 🔥 colorization demo
  • Super Resolution.
Class Size From Awesome File Type State Usage
SubPixelCNN 234K ...PIXEL... 🔥 resolution demo

4. Examples for Lite.AI.

More examples can be found at lite.ai-demos. Note that the default backend for Lite.AI is onnxruntime, because onnxruntime supports most ONNX operators. Clicking the Expand ▶️ button will show you more examples for the specific topic you are interested in.

Example0: Object Detection using YoloV5. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_yolov5_1.jpg";

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path); 
  std::vector<lite::cv::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);
  
  lite::cv::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);  
  
  delete yolov5;
}

The output is:

Or you can use the newest 🔥 🔥 detector in the YOLO series, YOLOX. It gives similar results.


Example1: 1000 Facial Landmarks Detection using FaceLandmark1000. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/FaceLandmark1000.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
  std::string save_img_path = "../../../logs/test_lite_face_landmarks_1000.jpg";
    
  auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);

  lite::cv::types::Landmarks landmarks;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_landmarks_1000->detect(img_bgr, landmarks);
  lite::cv::utils::draw_landmarks_inplace(img_bgr, landmarks);
  cv::imwrite(save_img_path, img_bgr);
  
  delete face_landmarks_1000;
}

The output is:


Example2: Colorization using colorization. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/eccv16-colorizer.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_colorizer_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_eccv16_colorizer_1.jpg";
  
  auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
  
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::cv::types::ColorizeContent colorize_content;
  colorizer->detect(img_bgr, colorize_content);
  
  if (colorize_content.flag) cv::imwrite(save_img_path, colorize_content.mat);
  delete colorizer;
}

The output is:


⚠️ Expand All Examples for Each Topic in Lite.AI.
4.1 Expand Examples for Object Detection.

4.1 Object Detection using YoloV5. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_yolov5_1.jpg";

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
  std::vector<lite::cv::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);

  lite::cv::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);

  delete yolov5;
}

The output is:

Or you can use the newest 🔥 🔥 detector in the YOLO series, YOLOX. It gives similar results.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolox_s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolox_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_yolox_1.jpg";

  auto *yolox = new lite::cv::detection::YoloX(onnx_path); 
  std::vector<lite::cv::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolox->detect(img_bgr, detected_boxes);
  
  lite::cv::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);  
  
  delete yolox;
}

The output is:

More classes for general object detection.

auto *detector = new lite::cv::detection::YoloX(onnx_path); // new !!!
auto *detector = new lite::cv::detection::YoloV4(onnx_path); 
auto *detector = new lite::cv::detection::YoloV3(onnx_path); 
auto *detector = new lite::cv::detection::TinyYoloV3(onnx_path); 
auto *detector = new lite::cv::detection::SSD(onnx_path); 
auto *detector = new lite::cv::detection::SSDMobileNetV1(onnx_path); 
4.2 Expand Examples for Face Recognition.

4.2 Face Recognition using ArcFace. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx";
  std::string test_img_path0 = "../../../examples/lite/resources/test_lite_faceid_0.png";
  std::string test_img_path1 = "../../../examples/lite/resources/test_lite_faceid_1.png";
  std::string test_img_path2 = "../../../examples/lite/resources/test_lite_faceid_2.png";

  auto *glint_arcface = new lite::cv::faceid::GlintArcFace(onnx_path);

  lite::cv::types::FaceContent face_content0, face_content1, face_content2;
  cv::Mat img_bgr0 = cv::imread(test_img_path0);
  cv::Mat img_bgr1 = cv::imread(test_img_path1);
  cv::Mat img_bgr2 = cv::imread(test_img_path2);
  glint_arcface->detect(img_bgr0, face_content0);
  glint_arcface->detect(img_bgr1, face_content1);
  glint_arcface->detect(img_bgr2, face_content2);

  if (face_content0.flag && face_content1.flag && face_content2.flag)
  {
    float sim01 = lite::cv::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content1.embedding);
    float sim02 = lite::cv::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content2.embedding);
    std::cout << "Detected Sim01: " << sim  << " Sim02: " << sim02 << std::endl;
  }

  delete glint_arcface;
}

The output is:

Detected Sim01: 0.721159 Sim02: -0.0626267

More classes for face recognition.

auto *recognition = new lite::cv::faceid::GlintCosFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintArcFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintPartialFC(onnx_path); // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::FaceNet(onnx_path);
auto *recognition = new lite::cv::faceid::FocalArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::FocalAsiaArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::TencentCurricularFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::TencentCifpFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::CenterLossFace(onnx_path);
auto *recognition = new lite::cv::faceid::SphereFace(onnx_path);
auto *recognition = new lite::cv::faceid::PoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::NaivePoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileFaceNet(onnx_path); // 3.8Mb only !
auto *recognition = new lite::cv::faceid::CavaGhostArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::CavaCombinedFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileSEFocalFace(onnx_path); // 4.5Mb only !
4.3 Expand Examples for Segmentation.

4.3 Segmentation using DeepLabV3ResNet101. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/deeplabv3_resnet101_coco.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_deeplabv3_resnet101.png";
  std::string save_img_path = "../../../logs/test_lite_deeplabv3_resnet101.jpg";

  auto *deeplabv3_resnet101 = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path, 16); // 16 threads

  lite::cv::types::SegmentContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  deeplabv3_resnet101->detect(img_bgr, content);

  if (content.flag)
  {
    cv::Mat out_img;
    cv::addWeighted(img_bgr, 0.2, content.color_mat, 0.8, 0., out_img);
    cv::imwrite(save_img_path, out_img);
    if (!content.names_map.empty())
    {
      for (auto it = content.names_map.begin(); it != content.names_map.end(); ++it)
      {
        std::cout << it->first << " Name: " << it->second << std::endl;
      }
    }
  }
  delete deeplabv3_resnet101;
}

The output is:

More classes for segmentation.

auto *segment = new lite::cv::segmentation::FCNResNet101(onnx_path);
4.4 Expand Examples for Face Attributes Analysis.

4.4 Age Estimation using SSRNet . Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ssrnet.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ssrnet.jpg";
  std::string save_img_path = "../../../logs/test_lite_ssrnet.jpg";

  lite::cv::face::attr::SSRNet *ssrnet = new lite::cv::face::attr::SSRNet(onnx_path);

  lite::cv::types::Age age;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ssrnet->detect(img_bgr, age);
  lite::cv::utils::draw_age_inplace(img_bgr, age);
  cv::imwrite(save_img_path, img_bgr);
  std::cout << "Default Version Done! Detected SSRNet Age: " << age.age << std::endl;

  delete ssrnet;
}

The output is:

More classes for face attributes analysis.

auto *attribute = new lite::cv::face::attr::AgeGoogleNet(onnx_path);  
auto *attribute = new lite::cv::face::attr::GenderGoogleNet(onnx_path); 
auto *attribute = new lite::cv::face::attr::EmotionFerPlus(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Age(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Gender(onnx_path);
auto *attribute = new lite::cv::face::attr::EfficientEmotion7(onnx_path); // 7 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::EfficientEmotion8(onnx_path); // 8 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::MobileEmotion7(onnx_path); // 7 emotions
auto *attribute = new lite::cv::face::attr::ReXNetEmotion7(onnx_path); // 7 emotions
4.5 Expand Examples for Image Classification.

4.5 1000 Classes Classification using DenseNet. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/densenet121.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_densenet.jpg";

  auto *densenet = new lite::cv::classification::DenseNet(onnx_path);

  lite::cv::types::ImageNetContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  densenet->detect(img_bgr, content);
  if (content.flag)
  {
    const unsigned int top_k = content.scores.size();
    if (top_k > 0)
    {
      for (unsigned int i = 0; i < top_k; ++i)
        std::cout << i + 1
                  << ": " << content.labels.at(i)
                  << ": " << content.texts.at(i)
                  << ": " << content.scores.at(i)
                  << std::endl;
    }
  }
  delete densenet;
}

The output is:

More classes for image classification.

auto *classifier = new lite::cv::classification::EfficientNetLite4(onnx_path);  
auto *classifier = new lite::cv::classification::ShuffleNetV2(onnx_path); 
auto *classifier = new lite::cv::classification::GhostNet(onnx_path);
auto *classifier = new lite::cv::classification::HdrDNet(onnx_path);
auto *classifier = new lite::cv::classification::IBNNet(onnx_path);
auto *classifier = new lite::cv::classification::MobileNetV2(onnx_path); 
auto *classifier = new lite::cv::classification::ResNet(onnx_path); 
auto *classifier = new lite::cv::classification::ResNeXt(onnx_path);
4.6 Expand Examples for Face Detection.

4.6 Face Detection using UltraFace. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ultraface-rfb-640.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ultraface.jpg";
  std::string save_img_path = "../../../logs/test_lite_ultraface.jpg";

  auto *ultraface = new lite::cv::face::detect::UltraFace(onnx_path);

  std::vector<lite::cv::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ultraface->detect(img_bgr, detected_boxes);
  lite::cv::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);

  delete ultraface;
}

The output is:

4.7 Expand Examples for Colorization.

4.7 Colorization using colorization. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/eccv16-colorizer.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_colorizer_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_eccv16_colorizer_1.jpg";
  
  auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
  
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::cv::types::ColorizeContent colorize_content;
  colorizer->detect(img_bgr, colorize_content);
  
  if (colorize_content.flag) cv::imwrite(save_img_path, colorize_content.mat);
  delete colorizer;
}

The output is:


4.8 Expand Examples for Head Pose Estimation.

4.8 Head Pose Estimation using FSANet. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/fsanet-var.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fsanet.jpg";
  std::string save_img_path = "../../../logs/test_lite_fsanet.jpg";

  auto *fsanet = new lite::cv::face::pose::FSANet(onnx_path);
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::cv::types::EulerAngles euler_angles;
  fsanet->detect(img_bgr, euler_angles);
  
  if (euler_angles.flag)
  {
    lite::cv::utils::draw_axis_inplace(img_bgr, euler_angles);
    cv::imwrite(save_img_path, img_bgr);
    std::cout << "yaw:" << euler_angles.yaw << " pitch:" << euler_angles.pitch << " row:" << euler_angles.roll << std::endl;
  }
  delete fsanet;
}

The output is:

4.9 Expand Examples for Face Alignment.

4.9 1000 Facial Landmarks Detection using FaceLandmark1000. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/FaceLandmark1000.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
  std::string save_img_path = "../../../logs/test_lite_face_landmarks_1000.jpg";
    
  auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);

  lite::cv::types::Landmarks landmarks;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_landmarks_1000->detect(img_bgr, landmarks);
  lite::cv::utils::draw_landmarks_inplace(img_bgr, landmarks);
  cv::imwrite(save_img_path, img_bgr);
  
  delete face_landmarks_1000;
}

The output is:

More classes for face alignment.

auto *align = new lite::cv::face::align::PFLD(onnx_path);  // 106 landmarks
auto *align = new lite::cv::face::align::PFLD98(onnx_path);  // 98 landmarks
auto *align = new lite::cv::face::align::PFLD68(onnx_path);  // 68 landmarks
auto *align = new lite::cv::face::align::MobileNetV268(onnx_path);  // 68 landmarks
auto *align = new lite::cv::face::align::MobileNetV2SE68(onnx_path);  // 68 landmarks
auto *align = new lite::cv::face::align::FaceLandmark1000(onnx_path);  // 1000 landmarks !
4.10 Expand Examples for Style Transfer.

4.10 Style Transfer using FastStyleTransfer. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/style-candy-8.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fast_style_transfer.jpg";
  std::string save_img_path = "../../../logs/test_lite_fast_style_transfer_candy.jpg";
  
  auto *fast_style_transfer = new lite::cv::style::FastStyleTransfer(onnx_path);
 
  lite::cv::types::StyleContent style_content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  fast_style_transfer->detect(img_bgr, style_content);

  if (style_content.flag) cv::imwrite(save_img_path, style_content.mat);
  delete fast_style_transfer;
}

The output is:


4.11 Expand Examples for Image Matting.
  • todo ⚠️

5. Lite.AI API Docs.

5.1 Default Version APIs.

More details of the Default Version APIs can be found at default-version-api-docs. For example, the interface for YoloV5 is:

lite::cv::detection::YoloV5

void detect(const cv::Mat &mat, std::vector<types::Boxf> &detected_boxes,
            float score_threshold = 0.25f, float iou_threshold = 0.45f,
            unsigned int topk = 100, unsigned int nms_type = NMS::OFFSET);
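
A hedged usage sketch of this interface with non-default thresholds (model and image paths are placeholders):

auto *yolov5 = new lite::cv::detection::YoloV5("yolov5s.onnx");   // placeholder model path
std::vector<lite::cv::types::Boxf> detected_boxes;
cv::Mat img_bgr = cv::imread("test.jpg");                         // placeholder input image
// stricter score threshold (0.40), IoU threshold 0.50, keep at most 50 boxes, default NMS type
yolov5->detect(img_bgr, detected_boxes, 0.40f, 0.50f, 50);
delete yolov5;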
Expand for ONNXRuntime, MNN and NCNN version APIs.

5.2 ONNXRuntime Version APIs.

More details of the ONNXRuntime Version APIs can be found at onnxruntime-version-api-docs. For example, the interface for YoloV5 is:

lite::onnxruntime::cv::detection::YoloV5

void detect(const cv::Mat &mat, std::vector<types::Boxf> &detected_boxes,
            float score_threshold = 0.25f, float iou_threshold = 0.45f,
            unsigned int topk = 100, unsigned int nms_type = NMS::OFFSET);
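
The ONNXRuntime-namespaced classes are used the same way as the default ones; a minimal sketch, assuming the box type is shared with the default version since onnxruntime is the default backend (model and image paths are placeholders):

auto *yolov5 = new lite::onnxruntime::cv::detection::YoloV5("yolov5s.onnx");  // placeholder model path
std::vector<lite::cv::types::Boxf> detected_boxes;  // assumed to be the same type as the default version
cv::Mat img_bgr = cv::imread("test.jpg");           // placeholder input image
yolov5->detect(img_bgr, detected_boxes);
delete yolov5;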

5.3 MNN Version APIs.

(todo ⚠️: not implemented yet, coming soon.)

lite::mnn::cv::detection::YoloV5

lite::mnn::cv::detection::YoloV4

lite::mnn::cv::detection::YoloV3

lite::mnn::cv::detection::SSD

...

5.4 NCNN Version APIs.

(todo ⚠️: not implemented yet, coming soon.)

lite::ncnn::cv::detection::YoloV5

lite::ncnn::cv::detection::YoloV4

lite::ncnn::cv::detection::YoloV3

lite::ncnn::cv::detection::SSD

...

6. Other Docs.

Expand More Details for Other Docs.

6.1 Docs for ONNXRuntime.

6.2 Docs for third_party.

Other build documents for different engines and different targets will be added later.

Library Target Docs
OpenCV mac-x86_64 opencv-mac-x86_64-build.zh.md
OpenCV android-arm opencv-static-android-arm-build.zh.md
onnxruntime mac-x86_64 onnxruntime-mac-x86_64-build.zh.md
onnxruntime android-arm onnxruntime-android-arm-build.zh.md
NCNN mac-x86_64 todo ⚠️
MNN mac-x86_64 todo ⚠️
TNN mac-x86_64 todo ⚠️

7. References.

Many thanks to the following projects. All of Lite.AI's models are sourced from these repos.

Expand More Details for References.

Star 🌟 👆🏻 this repo if it helps you ~

License.

The code of Lite.AI is released under the MIT License.

Citations.

If you use this library in your project, please cite it as follows.

@code{lite.ai2021,
  title={Lite.AI: A simple and user friendly C++ library of awesome AI models.},
  url={https://github.com/DefTruth/lite.ai},
  note={Open-source software available at https://github.com/DefTruth/lite.ai},
  author={YanJun},
  year={2021}
}