cvnp: pybind11 casts between numpy and OpenCV, possibly with shared memory

Overview

cvnp: pybind11 casts and transformers between numpy and OpenCV, possibly with shared memory

Explicit transformers between cv::Mat / cv::Matx and numpy.ndarray, with or without shared memory

Notes:

  • When going from Python to C++ (nparray_to_mat), the memory is always shared
  • When going from C++ to Python (mat_to_nparray), you have to specify whether you want to share memory via the boolean parameter share_memory
    pybind11::array mat_to_nparray(const cv::Mat& m, bool share_memory);
    cv::Mat         nparray_to_mat(pybind11::array& a);

    template<typename _Tp, int _rows, int _cols>
    pybind11::array matx_to_nparray(const cv::Matx<_Tp, _rows, _cols>& m, bool share_memory);

    template<typename _Tp, int _rows, int _cols>
    void            nparray_to_matx(pybind11::array &a, cv::Matx<_Tp, _rows, _cols>& out_matrix);
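
For instance, a bound function can use these transformers explicitly: convert the incoming array to a cv::Mat without copying, process it, and return the result as a copying array. The sketch below is illustrative only: the function name binarize, the module name cvnp_example, and the header path cvnp/cvnp.h are assumptions; only mat_to_nparray and nparray_to_mat come from the signatures above.

#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
#include <opencv2/imgproc.hpp>
#include "cvnp/cvnp.h"   // assumed main header of cvnp

namespace py = pybind11;

py::array binarize(py::array& image, double threshold_value)
{
    cv::Mat input = cvnp::nparray_to_mat(image);   // shares memory with the numpy array
    cv::Mat output;
    cv::threshold(input, output, threshold_value, 255., cv::THRESH_BINARY);
    // output is a local Mat: do not share its memory, ask for a copy instead
    return cvnp::mat_to_nparray(output, false);
}

PYBIND11_MODULE(cvnp_example, m)
{
    m.def("binarize", &binarize, py::arg("image"), py::arg("threshold_value"));
}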

Warning: be extremely cautious about the lifetime of your matrices when using shared memory! For example, the code below is guaranteed to produce undefined behavior, and may cause a crash much later.

pybind11::array make_array()
{
    cv::Mat m(cv::Size(10, 10), CV_8UC1);               // create a local matrix
    pybind11::array a = cvnp::mat_to_nparray(m, true);  // create a pybind array from it,
                                                        // sharing its memory
    return a;
}  // Here be dragons, when closing the scope!
   // m goes out of scope and its buffer is freed,
   // so the returned array points to freed memory!
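
One simple way to avoid this, sketched below under the same setup, is to pass share_memory = false, so that the returned array owns its own copy of the data (the function name make_array_safe is illustrative):

pybind11::array make_array_safe()
{
    cv::Mat m(cv::Size(10, 10), CV_8UC1);
    return cvnp::mat_to_nparray(m, false);  // share_memory = false: the array owns a copy,
                                            // so it remains valid after m is destroyed
}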

Automatic casts:

Without shared memory

  • Casts without shared memory between cv::Mat, cv::Matx, cv::Vec and numpy.ndarray
  • Casts without shared memory for simple types, between cv::Size, cv::Point, cv::Point3 and python tuple
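
With these casters registered, plain OpenCV types can appear directly in bound signatures and are converted by copy. A minimal sketch (the function and module names below are illustrative, not part of cvnp):

#include <pybind11/pybind11.h>
#include <opencv2/core.hpp>
#include "cvnp/cvnp.h"   // assumed header that registers the casters

cv::Point translate(const cv::Point& p, const cv::Size& offset)
{
    return { p.x + offset.width, p.y + offset.height };
}

PYBIND11_MODULE(cvnp_example, m)
{
    // From Python: translate((1, 2), (10, 20)) returns (11, 22)
    m.def("translate", &translate);
}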

With shared memory

  • Casts with shared memory between cvnp::Mat_shared, cvnp::Matx_shared, cvnp::Vec_shared and numpy.ndarray

When you want to cast with shared memory, use these wrappers, which can easily be constructed from their OpenCV counterparts. They are defined in cvnp/cvnp_shared_mat.h.

Make sure that your matrices' lifetime is sufficient (never share the memory of a temporary matrix!)
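
For example, a long-lived matrix owned by a C++ object can be exposed to Python without copying by returning a cvnp::Mat_shared built from it. The sketch below makes assumptions beyond the source (the ImageHolder class, its method, the module name, and the exact Mat_shared constructor); only Mat_shared and cvnp/cvnp_shared_mat.h come from this section.

#include <pybind11/pybind11.h>
#include <opencv2/core.hpp>
#include "cvnp/cvnp_shared_mat.h"

struct ImageHolder
{
    cv::Mat image = cv::Mat(cv::Size(640, 480), CV_8UC3, cv::Scalar(0, 0, 0));

    // Wrap the member matrix; as long as the Python-side ImageHolder is alive,
    // the memory shared with the returned numpy array stays valid.
    cvnp::Mat_shared image_shared() { return cvnp::Mat_shared(image); }
};

PYBIND11_MODULE(cvnp_example, m)
{
    pybind11::class_<ImageHolder>(m, "ImageHolder")
        .def(pybind11::init<>())
        .def("image_shared", &ImageHolder::image_shared);
}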

Supported matrix types

Since OpenCV supports a subset of numpy types, here is the table of supported types:

➜ python
>>> import cvnp
>>> cvnp.print_types_synonyms()
  cv_depth   cv_depth_name   np_format   np_format_long
     0          CV_8U           B         np.uint8  
     1          CV_8S           b         np.int8   
     2          CV_16U          H        np.uint16  
     3          CV_16S          h         np.int16  
     4          CV_32S          i         np.int32  
     5          CV_32F          f          float    
     6          CV_64F          d        np.float64
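
A quick way to check this mapping from C++ is to run a conversion through cvnp itself. The standalone sketch below embeds a Python interpreter just for illustration; the header path cvnp/cvnp.h is an assumption.

#include <cassert>
#include <cstdint>
#include <pybind11/embed.h>
#include <pybind11/numpy.h>
#include <opencv2/core.hpp>
#include "cvnp/cvnp.h"

int main()
{
    pybind11::scoped_interpreter guard{};   // numpy arrays need a live interpreter
    cv::Mat m(cv::Size(4, 3), CV_16UC1, cv::Scalar(0));
    pybind11::array a = cvnp::mat_to_nparray(m, false);
    // CV_16U maps to format 'H', i.e. np.uint16 (see the table above)
    assert(a.dtype().is(pybind11::dtype::of<std::uint16_t>()));
    assert(a.itemsize() == sizeof(std::uint16_t));
    return 0;
}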

How to use it in your project

  1. Add cvnp to your project. For example:

cd external
git submodule add https://github.com/pthom/cvnp.git

  2. Link it to your python module:

In your python module CMakeLists, add:

add_subdirectory(path/to/cvnp)
target_link_libraries(your_target PRIVATE cvnp)

  3. (Optional) If you want to import the declared functions in your module:

Write this in your main module code:

void pydef_cvnp(pybind11::module& m);

PYBIND11_MODULE(your_module, m)
{
    ....
    ....
    ....
    pydef_cvnp(m);
}

You will get two simple functions:

  • cvnp.list_types_synonyms()
  • cvnp.print_types_synonyms()
>>> import cvnp
>>> import pprint
>>> pprint.pprint(cvnp.list_types_synonyms(), indent=2, width=120)
[ {'cv_depth': 0, 'cv_depth_name': 'CV_8U', 'np_format': 'B', 'np_format_long': 'np.uint8'},
  {'cv_depth': 1, 'cv_depth_name': 'CV_8S', 'np_format': 'b', 'np_format_long': 'np.int8'},
  {'cv_depth': 2, 'cv_depth_name': 'CV_16U', 'np_format': 'H', 'np_format_long': 'np.uint16'},
  {'cv_depth': 3, 'cv_depth_name': 'CV_16S', 'np_format': 'h', 'np_format_long': 'np.int16'},
  {'cv_depth': 4, 'cv_depth_name': 'CV_32S', 'np_format': 'i', 'np_format_long': 'np.int32'},
  {'cv_depth': 5, 'cv_depth_name': 'CV_32F', 'np_format': 'f', 'np_format_long': 'float'},
  {'cv_depth': 6, 'cv_depth_name': 'CV_64F', 'np_format': 'd', 'np_format_long': 'np.float64'}]

Build and test

These steps are only for development and testing of this package; they are not required in order to use it in another project.

Build

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

mkdir build
cd build

# if you do not have a global install of OpenCV and pybind11
conan install .. --build=missing
# if you do have a global install of OpenCV, but not pybind11
conan install ../conanfile_pybind_only.txt --build=missing

cmake ..
make

Test

In the build dir, run:

cmake --build . --target test

Deep clean

rm -rf build
rm -rf venv
rm -rf .pytest_cache
rm *.so
rm *.pyd

Notes

Thanks to Dan Mašek who gave me some inspiration here: https://stackoverflow.com/questions/60949451/how-to-send-a-cvmat-to-python-over-shared-memory

This code is intended to be integrated into your own pip package. As such, no pip tooling is provided.

Issues
  • Question: supporting non-continuous Mat?

    I tried copying some of your code to my project. When I ran my Python program, I got the error

    ValueError: Only continuous Mats supported.

    from the method mat_to_nparray(const cv::Mat& m), where it checks if (!m.isContinuous()).
    What needs to be modified to support non-continuous Mats?

    opened by brianm-sra