
Overview
MediaPipe


Live ML anywhere

MediaPipe offers cross-platform, customizable ML solutions for live and streaming media.

• End-to-end acceleration: built-in fast ML inference and processing, accelerated even on common hardware.
• Build once, deploy anywhere: a unified solution works across Android, iOS, desktop/cloud, web and IoT.
• Ready-to-use solutions: cutting-edge ML solutions demonstrating the full power of the framework.
• Free and open source: framework and solutions both under Apache 2.0, fully extensible and customizable.

ML solutions in MediaPipe

Solution                | Android | iOS | C++ | Python | JS | Coral
----------------------- | ------- | --- | --- | ------ | -- | -----
Face Detection          | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
Face Mesh               | ✅ | ✅ | ✅ | ✅ | ✅ |
Iris                    | ✅ | ✅ | ✅ |    |    |
Hands                   | ✅ | ✅ | ✅ | ✅ | ✅ |
Pose                    | ✅ | ✅ | ✅ | ✅ | ✅ |
Holistic                | ✅ | ✅ | ✅ | ✅ | ✅ |
Hair Segmentation       | ✅ |    | ✅ |    |    |
Object Detection        | ✅ | ✅ | ✅ |    |    | ✅
Box Tracking            | ✅ | ✅ | ✅ |    |    |
Instant Motion Tracking | ✅ |    |    |    |    |
Objectron               | ✅ |    | ✅ | ✅ | ✅ |
KNIFT                   | ✅ |    |    |    |    |
AutoFlip                |    |    | ✅ |    |    |
MediaSequence           |    |    | ✅ |    |    |
YouTube 8M              |    |    | ✅ |    |    |

See also MediaPipe Models and Model Cards for ML models released in MediaPipe.

MediaPipe in Python

MediaPipe offers customizable Python solutions as a prebuilt Python package on PyPI, which can be installed simply with pip install mediapipe. It also provides tools for users to build their own solutions. Please see MediaPipe in Python for more info.
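
As a minimal sketch of the Python solution API, the following runs Face Mesh on a single image (the path image.jpg is a placeholder):

    import cv2
    import mediapipe as mp

    # Run Face Mesh on one image; static_image_mode treats each input
    # independently instead of tracking across a video stream.
    image = cv2.imread("image.jpg")  # placeholder path; imread returns BGR
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as face_mesh:
        # MediaPipe expects RGB input, while OpenCV loads BGR.
        results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    if results.multi_face_landmarks:
        # Landmark x/y are normalized to [0, 1] by image width/height.
        lm = results.multi_face_landmarks[0].landmark[0]
        print(f"x={lm.x:.3f} y={lm.y:.3f} z={lm.z:.3f}")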

MediaPipe on the Web

MediaPipe on the Web is an effort to run the same ML solutions built for mobile and desktop also in web browsers. The official API is under construction, but the core technology has been proven effective. Please see MediaPipe on the Web in Google Developers Blog for details.

You can use the following links to load a demo in the MediaPipe Visualizer, and there click the "Runner" icon in the top bar, as shown below. The demos use your webcam video as input, which is processed entirely locally in real time and never leaves your device.

(Screenshot: the "Runner" icon in the MediaPipe Visualizer top bar.)

Getting started

Learn how to install MediaPipe and build example applications, and start exploring our ready-to-use solutions that you can further extend and customize.

The source code is hosted in the MediaPipe GitHub repository, and you can run code search using Google Open Source Code Search.

Publications

Videos

Events

Community

Alpha disclaimer

MediaPipe is currently in alpha at v0.7. We may still be making breaking API changes and expect to reach stable APIs by v1.0.

Contributing

We welcome contributions. Please follow these guidelines.

We use GitHub issues for tracking requests and bugs. Please post questions to Stack Overflow using the mediapipe tag.

Comments
  • How to render different face effect

    In the face effect module I can see 3D data such as glasses.pbtxt, facepaint.pngblob and glasses.pngblob.

    I am trying to add a few more models to experiment with, but I couldn't find any documentation or information about the pngblob data. It seems that the 3D model is generated at runtime from glasses.pbtxt, with glasses.pngblob used as the texture. Can you please confirm whether that is right, and how it happens?

    Question 1: Can you please provide any documentation of the pngblob datatype? How can I create a new 3D model (pngblob / binarypb) to render on the face? The most common formats for 3D model data are OBJ, FBX, etc. Is there any way to convert these formats to binarypb / pngblob?

    Question 2: It is mentioned in gl_animation_overlay_calculator.cc that .obj.uuu files can be created using the mentioned SimpleObjEncryptor, but I couldn't find it. Can you please specify where to find it?

    ANIMATION_ASSET (String, required):
    //     Path of animation file to load and render. Should be generated by
    //     //java/com/google/android/apps/motionstills/SimpleObjEncryptor with
    //     --compressed_mode=true.  See comments and documentation there for more
    //     information on custom .obj.uuu file format.
    
    type:research calculators solution:face mesh 
    opened by ashrvstv 68
  • Mediapipe CodePens don't run on iOS Safari

    Hello all,

    I have a project using MediaPipe Hands on iOS and I've been trying to update from the tfjs model to the new MediaPipe API, but even when I enable WebGL2 it still fails to work. I've made sure I'm requesting camera permission properly via getUserMedia. Wondering if anyone has any ideas on what's going wrong.

    Here's the codepen that I'm testing: https://codepen.io/aionkov/pen/MWjEqWa

    Here's the console:

    [Warning] I1223 11:05:16.032000 1 gl_context_webgl.cc:146] Successfully created a WebGL context with major version 3 and handle 3 (hands_solution_wasm_bin.js, line 9)
    [Warning] I1223 11:05:16.034000 1 gl_context.cc:340] GL version: 3.0 (OpenGL ES 3.0 (WebGL 2.0)) (hands_solution_wasm_bin.js, line 9)
    [Warning] W1223 11:05:16.034000 1 gl_context.cc:794] Drishti OpenGL error checking is disabled (hands_solution_wasm_bin.js, line 9)
    [Warning] E1223 11:05:16.711000 1 calculator_graph.cc:775] INTERNAL: CalculatorGraph::Run() failed in Run: (hands_solution_wasm_bin.js, line 9)
    [Warning] Calculator::Open() for node "handlandmarktrackinggpu__handlandmarkgpu__InferenceCalculator" failed: [GL_INVALID_FRAMEBUFFER_OPERATION]: The framebuffer object is not complete.: glCreateShader in third_party/tensorflow/lite/delegates/gpu/gl/gl_shader.cc:50 [type.googleapis.com/mediapipe.StatusList='\n\x84\x02\x08\r\x12\xff\x01\x43\x61lculator::Open() for node "handlandmarktrackinggpu__handlandmarkgpu__InferenceCalculator" failed: [GL_INVALID_FRAMEBUFFER_OPERATION]: The framebuffer object is not complete.: glCreateShader in third_party/tensorflow/lite/delegates/gpu/gl/gl_shader.cc:50'] (hands_solution_wasm_bin.js, line 9)
    [Warning] F1223 11:05:16.712000 1 solutions_wasm.embind.cc:585] Check failed: ::util::OkStatus() == (graph_->WaitUntilIdle()) (OK vs. INTERNAL: CalculatorGraph::Run() failed in Run: (hands_solution_wasm_bin.js, line 9)
    [Warning] Calculator::Open() for node "handlandmarktrackinggpu__handlandmarkgpu__InferenceCalculator" failed: [GL_INVALID_FRAMEBUFFER_OPERATION]: The framebuffer object is not complete.: glCreateShader in third_party/tensorflow/lite/delegates/gpu/gl/gl_shader.cc:50 [type.googleapis.com/mediapipe.StatusList='\n\x84\x02\x08\r\x12\xff\x01\x43\x61lculator::Open() for node "handlandmarktrackinggpu__handlandmarkgpu__InferenceCalculator" failed: [GL_INVALID_FRAMEBUFFER_OPERATION]: The framebuffer object is not complete.: glCreateShader in third_party/tensorflow/lite/delegates/gpu/gl/gl_shader.cc:50']) (hands_solution_wasm_bin.js, line 9)
    [Warning] *** Check failure stack trace: *** (hands_solution_wasm_bin.js, line 9)
    [Warning] undefined (hands_solution_wasm_bin.js, line 9)
    [Error] Unhandled Promise Rejection: RuntimeError: abort(undefined) at
    [email protected]://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/hands_solution_wasm_bin.js:9:67558
    [email protected]://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/hands_solution_wasm_bin.js:9:67737
    [email protected]://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/hands_solution_wasm_bin.js:9:41049
    [email protected]://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/hands_solution_wasm_bin.js:9:179948
    [email protected][wasm code]

    .wasm-function[10471]@[wasm code] .wasm-function[10466]@[wasm code] .wasm-function[10461]@[wasm code] .wasm-function[10458]@[wasm code] .wasm-function[10474]@[wasm code] .wasm-function[515]@[wasm code] .wasm-function[502]@[wasm code]

    [email protected][wasm code] [native code] SolutionWasm$send https://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/hands.js:33:352 [email protected]://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/hands.js:10:295 https://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/hands.js:11:90 [email protected]://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/hands.js:22:322 [email protected][native code] (evaluating 'new WebAssembly.RuntimeError(what)') (anonymous function) (hands_solution_wasm_bin.js:9:41099) promiseReactionJob

    platform:javascript 
    opened by ionif 59
  • ModuleNotFoundError: No module named 'mediapipe.python._framework_bindings' on Raspberry Pi 3

    Hello, I'm having problems using MediaPipe on my Raspberry Pi 3. Using "import mediapipe" gives no error; however, using "mp_drawing = mp.solutions.drawing_utils" (for example) gives the following error message: ModuleNotFoundError: No module named 'mediapipe.python._framework_bindings'

    My installation method was:

    1. sudo apt install ffmpeg python3-opencv python3-pip ;
    2. sudo apt install libxcb-shm0 libcdio-paranoia-dev libsdl2-2.0-0 libxv1 libtheora0 libva-drm2 libva-x11-2 libvdpau1 libharfbuzz0b libbluray2 libatlas-base-dev libhdf5-103 libgtk-3-0 libdc1394-22 libopenexr25 ;
    3. sudo pip3 install mediapipe-rpi3 .

    I'm using a Raspberry Pi 3B with Debian Bullseye (32-bit); my Python version is 3.9.2 and the OpenCV version is 4.2.1. P.S. the file name is "maos.py".

    Does anyone know what might be causing this error? (I attached an image of the error as well, for clarity.)

    type:build/install platform:python stalled 
    opened by goulaoalex 55
  • Accessing landmarks, tracking multiple hands, and enabling depth on desktop

    Hello,

    I found out about MediaPipe after seeing Google's blog post regarding hand tracking. Currently, I am working on using MediaPipe to build a cross-platform interface that uses gestures to control multiple systems. I am using the desktop CPU example as a base for moving forward, and I have successfully retrieved the hand landmarks. I just want to ensure that I am retrieving them in the most efficient and proper way.

    The process I use is as follows:

    1. Create a listener of class OutputStreamPoller which listens for the hand_landmarks output stream in the HandLandmark subgraph.
    2. If there is an available packet, load the packet into a variable of class mediapipe::Packet using the .Next() method of the OutputStreamPoller class.
    3. Use the .Get() method of the Packet class and load into another variable called hand_landmarks.
    4. Loop through the variable and retrieve the x, y, and z coordinates and place them into a vector for processing.

    Is this process correct or is there a better way to go about retrieving the coordinates of the hand landmarks?

    I have additional questions, but I am unsure if I should place them in a separate issue. I will ask them here but please let me know if I should open a separate issue.

    1. In the hand tracking examples, only a single hand is detected. How would I alter the build so that it can detect multiple hands (specifically 2)?
    2. How would I enable the desktop implementations of hand tracking to capture depth (similar to how the Android/iOS 3D builds output z coordinates)?
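
    For what it's worth, the Python solution API wraps the same graph and handles the packet polling internally; a minimal sketch (webcam index 0 assumed) that reads landmark coordinates for up to two hands:

    import cv2
    import mediapipe as mp

    # process() runs the graph synchronously and returns the landmark
    # packets as Python objects, so no OutputStreamPoller is needed.
    cap = cv2.VideoCapture(0)  # webcam index 0 assumed
    with mp.solutions.hands.Hands(max_num_hands=2) as hands:
        ok, frame = cap.read()
        if ok:
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            for hand in results.multi_hand_landmarks or []:
                coords = [(lm.x, lm.y, lm.z) for lm in hand.landmark]
                print(len(coords), "landmarks, first:", coords[0])
    cap.release()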
    solution:hands platform:desktop 
    opened by JECBello 51
  • Temperature check on possible memory leak in Holistic JS solution

    Hello! I'm using the Holistic web solution in a WebRTC streaming application.

    We've occasionally seen Holistic processing crash in a way that seems indicative of a memory leak (stack trace image attached).

    I'm currently unsure about whether the Holistic processing crash is the cause or a symptom of another issue. I've done some memory profiling, but haven't found a reliable way of reproducing it yet.

    Since I don't have too much visibility into the Mediapipe JS internals, I was just hoping to get a temperature check on whether the Mediapipe team thinks:

    1. This issue could be related to the Mediapipe internals / you've seen it before in other contexts
    2. This is definitely not a Mediapipe issue and likely a bug in my application logic

    Basically just trying to determine where to invest additional debugging / investigation efforts.

    FWIW I also came across this thread https://bugs.chromium.org/p/chromium/issues/detail?id=1174675 which indicates there could be some memory leak issues in Chromium that can affect use cases like WebRTC, but the rate of leakage described there seems too slow compared to what I'm perceiving in my application.

    Thanks in advance for your help! Sorry that I'm not able to provide any more details besides that single stack trace. Please let me know if you need any additional information and I'd be happy to circle back with it.

    type:support platform:javascript solution:holistic 
    opened by codylieu 40
  • Hand tracking landmarks - Z value range

    I am failing to find any kind of documentation or example that explains the exact definition/behavior of the estimated Z coordinates returned by the hand tracking graph.

    We're able to successfully extract the landmark data as X, Y and Z coordinates. The X and Y coordinates are clearly normalized, but the Z coordinates take values for which I have no reference (they are not normalized; they are sometimes negative, sometimes positive, and don't appear to adhere to any coherent scale). What is clear: they are most likely relative to each other.

    Could somebody shed some light on the estimated Z coordinates, especially the scale they adhere to?
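
    For reference, the current Hands documentation describes z as a relative depth with the wrist as the origin, with a magnitude roughly on the same scale as the normalized x values (smaller means closer to the camera); a minimal sketch to inspect this (the image path hand.jpg is a placeholder):

    import cv2
    import mediapipe as mp

    # The wrist (landmark 0) is the z origin, so its z should be near zero;
    # other landmarks report depth relative to it, not in metric units.
    with mp.solutions.hands.Hands(static_image_mode=True) as hands:
        image = cv2.imread("hand.jpg")  # placeholder path
        results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
        for hand in results.multi_hand_landmarks or []:
            print(f"wrist z={hand.landmark[0].z:.4f}, "
                  f"index tip z={hand.landmark[8].z:.4f}")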

    type:support solution:hands 
    opened by Tectu 37
  • AAR built against opencv-4.5.1 crashes: dlopen failed: cannot locate symbol "__subtf3"

    System information (Please provide as much relevant information as possible)

    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04, Android 11, iOS 14.4):
    • MediaPipe version: v0.8.6
    • Bazel version: 4.1.0
    • Solution (e.g. FaceMesh, Pose, Holistic): hand tracking AAR
    • Programming Language and version (e.g. C++, Python, Java): Java

    Describe the expected behavior:

    My goal was to create an AAR and use the hand-tracking AAR in Android Studio with Gradle.

    When I replace the .aar file in the libs folder (from the face_detection and multi_hand_tracking demo/example projects) with the one that I generated following the steps in https://google.github.io/mediapipe/getting_started/android_archive_library.html

    I get the following crash at System.loadLibrary("mediapipe_jni"):

    java.lang.UnsatisfiedLinkError: dlopen failed: cannot locate symbol "__subtf3" referenced by "/data/app/~~IXWF_H6noRNss6OBiB8kZQ==/com.my.mediapipe.apps.myapplication-Eds2VNkZzhvjhYw-zhjNaw==/lib/arm64/libmediapipe_jni.so"...

    So I switched to OpenCV 4, but I need opencv-4.5.1:

    sed -i -e 's:3.4.3/opencv-3.4.3:4.5.1/opencv-4.5.1:g' WORKSPACE
    sed -i -e 's:libopencv_java3:libopencv_java4:g' third_party/opencv_android.BUILD

    The opencv-4.5.1 I use is different from the opencv-4.0.1 in the example, but it still reports the same error. With nm -D libmediapipe_jni.so | grep subtf3, the symbol is still there.

    type:build/install platform:android solution:hands android::aar 
    opened by chensisi0730 33
  • AttributeError: module 'mediapipe' has no attribute 'solutions'

    Has anyone had this error when importing the mediapipe library?

    AttributeError: partially initialized module 'mediapipe' has no attribute 'solutions' (most likely due to a circular import)

    import cv2
    import mediapipe as mp
    mp_drawing = mp.solutions.drawing_utils
    mp_face_mesh = mp.solutions.face_mesh
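
    The "partially initialized module" wording usually means a local file or folder named mediapipe is shadowing the installed package, so the import resolves to your own script instead; a quick diagnostic sketch (nothing MediaPipe-specific assumed):

    # If a file named mediapipe.py sits next to your script, Python imports
    # it instead of the installed package. __file__ shows which module won.
    import mediapipe as mp
    print(mp.__file__)  # should point into site-packages, not your project

    If it points at your own file, rename that file (and remove its __pycache__ entries) and the import should work again.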
    
    type:support MediaPipe platform:python 
    opened by H4ckerman666 31
  • How do you use MediaPipe on the web in your own web app?

    I don't need the visualizer, I just want to be able to run MediaPipe Hands with multi-hand support in my web app. From my understanding, the code is compiled into wasm and then run from a web app. How would I include the Hands with multi-hand support app in my own web application?

    solution:hands MediaPipe platform:javascript 
    opened by delebash 31
  • Unable to load the hand detection model

    I am trying to test the given model in my sample Android application. When trying to load the model I face this issue:

    java.lang.IllegalStateException: Internal error: Unexpected failure when preparing tensor allocations: Encountered unresolved custom op: Convolution2DTransposeBias. Node number 165 (Convolution2DTransposeBias) failed to prepare.

    Code:

    AssetFileDescriptor fileDescriptor = activity.getAssets().openFd("palm_detection.tflite");

    solution:hands 
    opened by srishtigoelroposo 31
  • How to support a YOLO model for object detection?

    I've trained a YOLO model for object detection and I want to integrate the model into MediaPipe. Is this supported or not? If integration is supported, how do I do it? Could you give me advice? Thanks a lot.

    type:feature solution:object detection type:others 
    opened by bugmany 30
  • In some cases I got 2 Left hands (or Right)

    I use MediaPipe version 0.8.11 and Hands detection with these settings:

    hands = mpHands.Hands(
                    static_image_mode= False, 
                    max_num_hands = 2,
                    min_detection_confidence= 0.8,
                    min_tracking_confidence = 0.8
            )
    

    .....

        if results.multi_hand_landmarks:
            for handType, handLms in zip(results.multi_handedness, results.multi_hand_landmarks):
                mpDraw.draw_landmarks(img, handLms, mpHands.HAND_CONNECTIONS)
                print(handType)
            hand1_new = results.multi_hand_landmarks[0].landmark
            if len(results.multi_hand_landmarks) == 2:
                hand2_new = results.multi_hand_landmarks[1].landmark
    

    Python, Windows, Jupyter Notebook.

    Describe the current behavior: In some cases (when the hand is visible as a fist) I get 2 Left hands, both with good probabilities. It's also possible with the Right hand.

    Describe the expected behavior: I guess there should be only one Left hand if the coordinates are the same.
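
    When both detections come back with the same handedness label, one workaround is to keep only the higher-scoring detection per label; a minimal sketch (the dedupe_by_handedness helper is hypothetical, written against the results object from the snippet above):

    def dedupe_by_handedness(results):
        # Keep at most one hand per handedness label ("Left"/"Right"),
        # preferring the higher classification score.
        best = {}
        for handType, handLms in zip(results.multi_handedness or [],
                                     results.multi_hand_landmarks or []):
            label = handType.classification[0].label
            score = handType.classification[0].score
            if label not in best or score > best[label][0]:
                best[label] = (score, handLms)
        return {label: lms for label, (score, lms) in best.items()}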


    type:bug 
    opened by PavelAgurov 0
  • iOS example apps not building

    macOS 12.4 / Xcode 13.4.1

    MediaPipe Tulsi version: 0.20221005.88

    • Bazel version: 5.2.0
    • XCode and Tulsi versions (if iOS):

    Describe the problem:

    Trying to load an example iOS app onto an iPhone using the MediaPipe on iOS tutorial.

    After running Tulsi, Xcode loads. I select an example app (e.g. PoseTrackingGpuApp) and select automatic signing and a team.

    Then I run python3 mediapipe/examples/ios/link_local_profiles.py in a terminal in the mediapipe repo directory.

    Then I build.

    The build fails with these errors:

    Creating symlink bazel-out/applebin_ios-ios_arm64-dbg-ST-2967bd56a867/bin/mediapipe/examples/ios/posetrackinggpu/PoseTrackingGpuApp-intermediates/embedded.mobileprovision failed: 1 input file(s) do not exist

    Creating symlink bazel-out/applebin_ios-ios_arm64-dbg-ST-2967bd56a867/bin/mediapipe/examples/ios/posetrackinggpu/PoseTrackingGpuApp-intermediates/embedded.mobileprovision failed: missing input file '//mediapipe:provisioning_profile.mobileprovision'

    type:build/install 
    opened by frey1esm 0
  • HTMLImageElement exception

    Please make sure that this is a bug, and also refer to the troubleshooting and FAQ documentation before raising any issues.

    System information (Please provide as much relevant information as possible)

    • Have I written custom code (as opposed to using a stock example script provided in MediaPipe): Yes
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04, Android 11, iOS 14.4): Mac OS Ventura M1
    • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
    • Browser and version (e.g. Google Chrome, Safari) if the issue happens on browser: Safari 16.1 (18614.2.9.1.12)
    • Programming Language and version ( e.g. C++, Python, Java): Javascript
    • MediaPipe version: https://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/pose.js
    • Bazel version (if compiling from source): N/A
    • Solution ( e.g. FaceMesh, Pose, Holistic ): Pose
    • Android Studio, NDK, SDK versions (if issue is related to building in Android environment): N/A
    • Xcode & Tulsi version (if issue is related to building for iOS): N/A

    Describe the current behavior:

    Trying to pass an HTMLImageElement to pose.send throws an unhandled exception:

    [Error] Unhandled Promise Rejection: RuntimeError: abort(undefined) at [email protected]://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/pose_solution_wasm_bin.js:9:69350 [email protected]://cdn.jsdelivr.net/npm/@mediapipe/[email protected] ua (pose.js:14:438) (anonymous function) (pose.js:15:192) e (pose.js:41:501) promiseReactionJob

    Describe the expected behavior: It should work just like a frame of webcam video.

    Standalone code to reproduce the issue: Provide a reproducible test case that is the bare minimum necessary to replicate the problem. If possible, please share a link to Colab/repo link /any notebook:

    https://gist.github.com/wheelie33/03499b02c60edbbedc4239a13f690cb7

    Other info / Complete Logs : Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached

    type:bug 
    opened by wheelie33 0
  • yarn add @mediapipe/selfie_segmentation no longer works

    Please make sure that this is a build/installation issue, and also refer to the troubleshooting documentation before raising any issues.

    System information (Please provide as much relevant information as possible)

    • OS Platform and Distribution (e.g. Linux Ubuntu 16.04, Android 11, iOS 14.4): Mac OS Ventura 13.0.1 (22A400)
    • Compiler version (e.g. gcc/g++ 8 /Apple clang version 12.0.0): dunno, not using it?
    • Programming Language and version ( e.g. C++ 14, Python 3.6, Java ): node -v 12.22.12
    • Installed using virtualenv? pip? Conda? (if python): no
    • MediaPipe version: @latest
    • Bazel version: N/A
    • XCode and Tulsi versions (if iOS): N/A
    • Android SDK and NDK versions (if android): N/A
    • Android AAR ( if android): N/A
    • OpenCV version (if running on desktop): huh?

    Describe the problem:

    I can no longer install this package using yarn.

    Provide the exact sequence of commands / steps that you executed before running into the problem:

    yarn add @mediapipe/selfie_segmentation produces an error:

    Complete Logs: Include Complete Log information or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached:

    % yarn add @mediapipe/[email protected]
    yarn add v1.22.19
    [1/5] Validating package.json...
    [2/5] Resolving packages...
    [3/5] Fetching packages...
    error An unexpected error occurred: "https://registry.yarnpkg.com/@mediapipe/selfie_segmentation/-/selfie_segmentation-0.1.1632777926.tgz: incorrect data check".

    type:build/install 
    opened by geoidesic 2
  • pip install mediapipe giving an old version of Mediapipe

    Hi,

    When I install MediaPipe through pip and then try to use it with the Python code on the Face Mesh tutorial page, it gives me a lot of errors. It turns out that pip installs an older version of MediaPipe, because many of the files on the MediaPipe GitHub are not in the pip version.

    I tried copy-pasting some files and lines of code here and there, hoping to resolve the issue. My code finally ran without throwing any errors, but face_mesh.process() doesn't give me anything for results.multi_face_landmarks (NoneType).

    Any help?

    opened by shreshtashetty 0
  • About face effect: how can I capture a frame?

    I want to get a ByteBuffer from the surfaceTexture. This surface is initialized in CameraXPreviewHelper and then passed down to the C++ code, but I'm having a bit of trouble getting each frame out whenever the surface is rendered. I'm thinking in two directions:

    1. Use the EGLContext from EglManager with the ImageReader.OnImageAvailableListener callback, fired whenever the surface is rendered.
    2. I believe that effect_renderer_calculator.cc contains the things I need.

    How can I do this?

    Thanks a lot!

    type:support framework solution:face geometry 
    opened by hopdd 0
Related projects

• UE4 MediaPipe plugin: Win64; 2D features: Face, Iris, Hands, Pose, Holistic; 3D features: Face Mesh, World Pose; demo video: https://www.youtub… (223 stars, Dec 1, 2022)
• Example Qt application demonstrating how to integrate MediaPipe as a dynamic library into a Qt application on Linux (39 stars, Nov 26, 2022)
• A MediaPipe Hands demo for Android, inferred with ncnn; APK demo: https://pan.baidu.com/s/1ArAMH… (FeiGeChuanShu, 47 stars, Nov 29, 2022)
• A project to apply MediaPipe to more AI chips, by Houmo (www.houmo.ai) (38 stars, Nov 20, 2022)
• MediaPipe.NET.Runtime: the native library package for MediaPipe.NET, the first half of the port of MediaPipeUnityPlugin (Vignette, 14 stars, Oct 12, 2022)
• C++ Live Toolkit (CLT): a lightweight set of tools for on-the-fly compilation and execution of C++ code (MondeO, 1 star, Jan 4, 2022)
• Anomaly detection on dynamic (time-evolving) graphs in a real-time, streaming manner: detecting intrusions (DoS and DDoS attacks), fraud, and fake-rating anomalies (Stream-AD, 695 stars, Nov 27, 2022)
• CNStream: a streaming framework for building Cambricon machine learning pipelines (Cambricon Technologies, 192 stars, Nov 23, 2022)
• ONNX Runtime: a cross-platform, high-performance ML inference and training accelerator compatible with PyTorch and TensorFlow/Keras, as well as classical libraries such as scikit-learn (Microsoft, 7.8k stars, Nov 30, 2022)
• RapidASR: a cross-platform implementation of Wenet ASR based on ONNXRuntime and Wenet, with easier APIs for calling Wenet models (RapidAI-NG, 97 stars, Nov 17, 2022)
• Insight Toolkit (ITK): an open-source, cross-platform toolkit for N-dimensional scientific image processing, segmentation, and registration (Insight Software Consortium, 1.1k stars, Dec 4, 2022)
• Advent of Code 2021 optimized C++ solutions, a work in progress (Andrew Skalski, 10 stars, Aug 19, 2022)
• E-Box solutions, Batch 2023: code for the assessments on the E-Box learning platform (Mukesh.T, 7 stars, Jul 15, 2022)
• Visionaray: a C++-based, cross-platform ray tracing library (Stefan Zellmann, 411 stars, Nov 24, 2022)
• RapidOCR (捷智OCR): a cross-platform OCR library based on PaddleOCR and OnnxRuntime (RapidAI-NG, 686 stars, Dec 1, 2022)
• ClanLib: a cross-platform C++ toolkit library with a primary focus on game creation, open source and free for commercial use (Kenneth Gangstø, 306 stars, Nov 24, 2022)
• Gesture Recognition Toolkit (GRT): a cross-platform, open-source C++ machine learning library designed for real-time gesture recognition (Nicholas Gillian, 791 stars, Nov 27, 2022)
• The Forge: a cross-platform rendering framework supporting Windows 10 / 7 (DirectX 12 / Vulkan 1.1, DirectX Ray Tracing, DirectX 11), Linux, macOS / iOS, Android, XBOX, PS4, PS5, Switch, and Quest 2 (The Forge / Confetti, 3.3k stars, Nov 30, 2022)
• Flutter Random Face Generator: a cross-platform (web, Android, iOS) Flutter app that generates faces of people who don't actually exist (Aditya, 88 stars, Nov 26, 2022)