MNNSuperGlue: SuperGlue C++ Inference with MNN

Overview

A C++ implementation of SuperGlue keypoint matching with MNN. Original paper: "SuperGlue: Learning Feature Matching with Graph Neural Networks" (CVPR 2020, Oral). Original PyTorch code: https://github.com/magicleap/SuperGluePretrainedNetwork

Build and Run

  • CMake & make

    1. Edit the MNN library path in CMakeLists.txt to match your local installation.
    2. mkdir build
    3. cd build
    4. cmake ../
    5. make
    
  • Run (a minimal sketch of the MNN calls the demo builds on follows this list)

    ./build/kptsdet
    
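For orientation, below is a minimal sketch of the MNN session flow a demo like kptsdet builds on. It is an illustration under assumptions, not the repository's actual code: the model filename, error messages, and tensor handling are placeholders; see src/ for the real SuperPoint and SuperGlue classes.

    #include <MNN/Interpreter.hpp>
    #include <iostream>
    #include <memory>

    int main() {
        // Load the converted model (the filename is an assumption; adjust to your layout).
        std::shared_ptr<MNN::Interpreter> net(
            MNN::Interpreter::createFromFile("SuperPoint.mnn"));
        if (!net) {
            std::cerr << "Failed to load SuperPoint.mnn" << std::endl;
            return 1;
        }

        // Pin the CPU backend; the CUDA backend lacks some required operators
        // (see the Future Work section below).
        MNN::ScheduleConfig config;
        config.type = MNN_FORWARD_CPU;
        MNN::Session* session = net->createSession(config);

        // Fetch the input tensor, fill it with a normalized grayscale image, run,
        // then read back the outputs for keypoint/descriptor post-processing.
        MNN::Tensor* input = net->getSessionInput(session, nullptr);
        (void)input; // ... copy image data into `input` here ...
        net->runSession(session);
        MNN::Tensor* output = net->getSessionOutput(session, nullptr);
        (void)output;
        return 0;
    }

MNN picks the backend from ScheduleConfig::type when the session is created, which is why staying on MNN_FORWARD_CPU sidesteps the unsupported CUDA operators discussed next.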

Future Work

Some operators used by these models are not supported by the upstream MNN CUDA backend, so the code has only been tested on the CPU backend so far. CUDA implementations of the missing operators may be added when time allows; anyone interested is welcome to help complete them.

Acknowledgements

The code in this repository is a port of the PyTorch code in https://github.com/magicleap/SuperGluePretrainedNetwork; refer to the original PyTorch version for implementation details. For the underlying theory, see the SuperPoint paper: https://arxiv.org/pdf/1712.07629.pdf

Comments
  • Segmentation fault while creating a session

    I have the same issue: creating a session at superpoint.cpp line 14,
    this->session = this->net->createSession(this->config);, ends in a segmentation fault. Terminal output:

    L2Norm Not Creat !!!
    L2Norm Not Creat !!!
    SuperPointNet Creat Done !!!
    L2Norm Creat Done !!!
    Segmentation fault (core dumped)

    Has anyone overcome this issue? Thanks in advance.

    opened by Mehdi-96 0
  • Segmentation fault when initializing SuperPoint

    Hello, when execution reaches this step, this->superPoint = std::shared_ptr<SuperPoint> (new SuperPoint(superpoint_model_name));, the following error is raised:

    L2Norm Not Creat !!!
    SuperGlueNet Creat Done !!!
    L2Norm Creat Done !!!
    Signal: SIGSEGV (Segmentation fault)

    I re-downloaded SuperPoint.mnn but it still fails. How can this be resolved? Note: I am not sure whether this is related to my having commented out libMNN_Cuda_Main.so. (A likely cause is sketched after this comment list.)

    opened by Ironbrotherstyle 8
  • C3861 getRealDim

    error C3861: 'getRealDim': identifier not found  SuperGlue  D:\ProgramData\MNNSuperGlue\src\superpoint.cpp  72
    error C3861: 'getRealDim': identifier not found  kptsdet  D:\ProgramData\MNNSuperGlue\src\superpoint.cpp  72

    Have you ever had this problem?

    opened by jiangxf0929 1
  • pth to mnn conversion script

    This code base ships ready-made .mnn weight files, which are parsed by the custom C++ module and then run with the MNN inference engine. In one of your comments you mentioned first converting the .pt file to ONNX and then converting that to .mnn weights. Those .pt-to-ONNX conversion scripts would be extremely helpful for porting the model to other inference engine frameworks.

    opened by andro-galexy 1
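Both segfault reports above match a common failure mode: MNN::Interpreter::createFromFile returns nullptr when the .mnn file is missing or truncated, and the subsequent createSession call then crashes. Below is a hedged sketch of a guard; loadSession is an illustrative helper, not part of this repository.

    #include <MNN/Interpreter.hpp>
    #include <iostream>
    #include <memory>
    #include <string>

    // Hypothetical helper: create a session only after verifying the model loaded.
    MNN::Session* loadSession(const std::string& modelPath,
                              std::shared_ptr<MNN::Interpreter>& net) {
        net.reset(MNN::Interpreter::createFromFile(modelPath.c_str()));
        if (!net) {
            // Without this check, createSession() dereferences a null pointer
            // and dies with SIGSEGV, as in the reports above.
            std::cerr << "Failed to load " << modelPath
                      << ": check the path and re-download the .mnn file" << std::endl;
            return nullptr;
        }
        MNN::ScheduleConfig config;
        config.type = MNN_FORWARD_CPU; // CUDA backend is unsupported, see Future Work
        return net->createSession(config);
    }

If the model loads but the crash persists, a version mismatch between the MNN converter that produced the .mnn file and the MNN runtime linked into the binary is another cause worth checking.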