Azure Percept DK advanced topics

Azure Percept DK Advanced Development

Please note! The experiences in this repository should be considered preview/beta. Significant portions of these experiences are subject to change without warning, and no part of this code should be considered stable.

Please consider providing feedback via this questionnaire. Your feedback will help us continue to fine-tune and improve the advanced tools experience.

Overview

This repository holds all the code and documentation for advanced development using the Azure Percept DK. In this repository, you will find:

  • azureeyemodule: The code for the azureeyemodule, which is the IoT module responsible for running the AI workload on the Percept DK.
  • machine-learning-notebooks: Example Python notebooks that show how to train a few example neural networks from scratch (or via transfer learning) and get them onto your device.
  • Model and Data Protection: Azure Percept currently supports AI model and data protection as a preview feature.

General Workflow

One of the main uses of this repository is bringing your own custom computer vision pipeline to your Azure Percept DK. The general flow is:

  • Use whatever version of whatever DL framework you want (TensorFlow 2.x or 1.x, PyTorch, etc.)
  • Develop your custom DL model and save it to a format that can be converted to OpenVINO IR or an OpenVINO Myriad X blob. Make sure your ops/layers are supported by OpenVINO 2021.1; see here for a compatibility matrix.
  • Use OpenVINO to convert it to IR or blob format.
    • We recommend using the OpenVINO Workbench to convert your model to OpenVINO IR (or to download a common, pretrained model from their model zoo).
    • You can use the scripts/run_workbench.sh script on Unix systems to run the workbench, or just run its single command in PowerShell on Windows.
    • You can use a Docker container to convert IR to blob for our device. See the scripts/compile_ir.sh script and use it as a reference; note that you will need to modify it if your network has multiple output layers.
  • Develop a C++ subclass, using the examples we already have. See the azureeyemodule folder for how to do this.
  • The azureeyemodule is the IoT module running on the device that is responsible for inference. It needs to obtain your model somehow: for development, you can package your model into a custom azureeyemodule image and have the custom program run it directly, or you can have the module pull a model down through its module twin (again, see the azureeyemodule folder for more details).
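The conversion steps above can be sketched as a shell fragment. This is a hedged illustration, not a script from this repo: the model name, output directory, and tool locations are placeholders, and in practice the Model Optimizer and myriad_compile live inside your OpenVINO 2021.1 install (or the Workbench / compile containers). The SHAVE/CMX flags mirror the values used elsewhere in this document.

```shell
# Hypothetical sketch of the model conversion flow. We only assemble and
# print the commands here, since the tools live inside the OpenVINO install.
MODEL_NAME="mymodel"   # placeholder: your model's base name
IR_DIR="ir_output"     # placeholder: where the IR files will land

# Step 1: the Model Optimizer converts a framework model (e.g. ONNX) to IR.
MO_CMD="python3 mo.py --input_model ${MODEL_NAME}.onnx --output_dir ${IR_DIR}"

# Step 2: myriad_compile turns the IR into a Myriad X blob. 8 SHAVE cores and
# 8 CMX slices match the settings used in this repo's examples.
COMPILE_CMD="myriad_compile -m ${IR_DIR}/${MODEL_NAME}.xml -o ${IR_DIR}/${MODEL_NAME}.blob \
-VPU_NUMBER_OF_SHAVES 8 -VPU_NUMBER_OF_CMX_SLICES 8 -ip U8 -op FP32"

# Print rather than execute; run the printed commands inside your OpenVINO
# environment (after sourcing setupvars.sh).
echo "$MO_CMD"
echo "$COMPILE_CMD"
```

If your network has multiple output layers, the myriad_compile invocation needs adjusting, as noted for scripts/compile_ir.sh above.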

Model URLs

The Azure Percept DK's azureeyemodule supports a few AI models out of the box. The default model is a Single Shot Detector (SSD) trained for general object detection on the COCO dataset, but a few others can run without any hassle. Here are the links for the models that we officially guarantee (because we host them and test them on every release).

To use these models, you can download them through the Azure Percept Studio, or you can paste the URLs into your Module Twin as the value for "ModelZipUrl".
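As a sketch of the module-twin route, the "ModelZipUrl" value could be set with the Azure CLI's IoT extension. The hub name and device ID below are placeholders, and the command is printed rather than executed; drop the echo and run it directly once the placeholders point at your real hub and device.

```shell
# Hypothetical example: point the azureeyemodule's module twin at one of the
# hosted model zips. Hub and device names are placeholders.
HUB_NAME="my-iot-hub"
DEVICE_ID="my-percept-dk"
MODEL_URL="https://aedsamples.blob.core.windows.net/vision/aeddevkitnew/openpose.zip"

# Assemble the Azure CLI command (requires the azure-iot CLI extension).
CMD="az iot hub module-twin update \
--hub-name ${HUB_NAME} \
--device-id ${DEVICE_ID} \
--module-id azureeyemodule \
--set properties.desired.ModelZipUrl=${MODEL_URL}"

# Printed rather than executed here, so this sketch runs anywhere.
echo "$CMD"
```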

  • Faster RCNN ResNet 50 (Intel Open Model Zoo; Apache 2.0): https://aedsamples.blob.core.windows.net/vision/aeddevkitnew/faster-rcnn-resnet50.zip
  • Open Pose (Intel Open Model Zoo; Apache 2.0): https://aedsamples.blob.core.windows.net/vision/aeddevkitnew/openpose.zip
  • Optical Character Recognition (Intel Open Model Zoo; Apache 2.0): https://aedsamples.blob.core.windows.net/vision/aeddevkitnew/ocr.zip
  • Person Detection (Intel Open Model Zoo; Apache 2.0): https://aedsamples.blob.core.windows.net/vision/aeddevkitnew/person-detection-retail-0013.zip
  • Product Detection (Custom Vision; Apache 2.0): https://aedsamples.blob.core.windows.net/vision/aeddevkitnew/product-detection.zip
  • SSD General (Intel Open Model Zoo; Apache 2.0): https://aedsamples.blob.core.windows.net/vision/aeddevkitnew/ssdlite-mobilenet-v2.zip
  • Tiny YOLOv2 General (Intel Open Model Zoo; Apache 2.0): https://aedsamples.blob.core.windows.net/vision/aeddevkitnew/tiny-yolo-v2.zip
  • Unet for Semantic Segmentation of Bananas, for this notebook (trained from scratch; GPLv3): https://aedsamples.blob.core.windows.net/vision/aeddevkitnew/binary-unet.zip
  • Vehicle Detection (Custom Vision; Apache 2.0): https://aedsamples.blob.core.windows.net/vision/aeddevkitnew/vehicle-detection.zip

Contributing

This repository follows the Microsoft Code of Conduct.

Please see the CONTRIBUTING.md file for instructions on how to contribute to this repository.

Trademark Notice

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

Reporting Security Vulnerabilities

Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, including Microsoft, Azure, DotNet, AspNet, and Xamarin.

If you believe you have found a security vulnerability in any Microsoft-owned repository that meets Microsoft's definition of a security vulnerability, please report it to us as described below.

Please do not report security vulnerabilities through public GitHub issues.

Instead, please report them to the Microsoft Security Response Center (MSRC) at https://msrc.microsoft.com/create-report.

If you prefer to submit without logging in, send email to [email protected]. If possible, encrypt your message with our PGP key; please download it from the Microsoft Security Response Center PGP Key page.

You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at microsoft.com/msrc.

Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:

  • Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
  • Full paths of source file(s) related to the manifestation of the issue
  • The location of the affected source code (tag/branch/commit or direct URL)
  • Any special configuration required to reproduce the issue
  • Step-by-step instructions to reproduce the issue
  • Proof-of-concept or exploit code (if possible)
  • Impact of the issue, including how an attacker might exploit the issue

This information will help us triage your report more quickly.

If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our Microsoft Bug Bounty Program page for more details about our active programs.

We prefer all communications to be in English.

Microsoft follows the principle of Coordinated Vulnerability Disclosure.

Comments
  • [BUG] RunHistory initialization failed: libffi.so.7: cannot open shared object file: No such file or directory

    Describe the bug: While following the steps in the Banana Tutorial under the section "Running the notebook", I am seeing this error appear:

    "error": { "code": "ServiceError", "severity": null, "message": "AzureMLCompute job failed.\nServiceError: runTaskLetTask failed because: libffi.so.7: cannot open shared object file: No such file or directory\n\tReason: Job failed with non-zero exit Code", "messageFormat": null, "messageParameters": null, "referenceCode": null, "detailsUri": null, "target": null, "details": [], "innerError": null, "debugInfo": null, "additionalInfo": null }, "correlation": { "operation": "cf742550df05044dbf2b80b3397f31d7", "request": "8ae5bc2bda1f4e59" }, "environment": "australiaeast", "location": "australiaeast", "time": "2021-07-16T05:26:14.7993082+00:00", "componentName": "execution-worker"

    }

    The notebook I am running is SemanticSegmentationUNet.ipynb

    Screenshots: three MicrosoftTeams screenshots attached in the original issue.

    bug 
    opened by chrisbossard 8
  • Add inference script to the deployed model/ access the video frame

    Hi, is there a way to add an inference script (like when deploying a model in Azure ML) to do pre/postprocessing? If not, is there a way to access the frames of the camera video stream?

    What I want to achieve is to use a pre-trained model deployed in a container to detect an object in an image frame, then do some postprocessing and send the result with the image frame to another deployed container.

    opened by mouhannadali 4
  • [BUG] stoi error when running compile_and_test.[sh|ps1] scripts

    Describe the bug: Following the PyTorch from Scratch Tutorial, I get the following error when trying to run detection through the mock-eye-module:

    terminate called after throwing an instance of 'std::invalid_argument'  
      what():  stoi
    

    The model used is pulled from the OpenVINO Workbench zoo; it is the default SSD model used to test the mock-eye-module container. Hence the error should not be related to the model ssd_mobilenet_v2_coco.

    The mock-eye-module by itself compiles fine.

    Based on simple debugging, the error seems to come from the G-API at execution time.

    To Reproduce: Follow all steps in Prerequisites, with the model converted and the video person-bicycle-car-detection.mp4 downloaded.

    I run the following on mac:

    ./scripts/compile_and_test.sh --video=test-artifacts/person-bicycle-car-detection.mp4 --weights=test-artifacts/ssd_mobilenet_v2_coco.bin --xml=test-artifacts/ssd_mobilenet_v2_coco.xml
    

    I run the following on windows:

    ./scripts/compile_and_test.ps1 -ipaddr <your IP address> -xml test-artifacts/ssd_mobilenet_v2_coco.xml -video test-artifacts/person-bicycle-car-detection.mp4
    

    Both return:

    [setupvars.sh] OpenVINO environment initialized
    -- The C compiler identification is GNU 7.5.0
    -- The CXX compiler identification is GNU 7.5.0
    -- Check for working C compiler: /usr/bin/cc
    -- Check for working C compiler: /usr/bin/cc -- works
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - done
    -- Detecting C compile features
    -- Detecting C compile features - done
    -- Check for working CXX compiler: /usr/bin/c++
    -- Check for working CXX compiler: /usr/bin/c++ -- works
    -- Detecting CXX compiler ABI info
    -- Detecting CXX compiler ABI info - done
    -- Detecting CXX compile features
    -- Detecting CXX compile features - done
    -- Found OpenCV: /opt/intel/openvino_2021.1.110/opencv (found version "4.5.0") found components:  gapi highgui 
    -- Found PkgConfig: /usr/bin/pkg-config (found version "0.29.1") 
    -- Checking for module 'gstreamer-1.0>=1.14'
    --   Found gstreamer-1.0, version 1.16.2
    -- Found InferenceEngine: /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libinference_engine.so (Required is at least version "2.0") 
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /home/openvino/tmp/build
    Scanning dependencies of target mock_eye_app
    [ 14%] Building CXX object CMakeFiles/mock_eye_app.dir/main.cpp.o
    [ 28%] Building CXX object CMakeFiles/mock_eye_app.dir/kernels/ssd_kernels.cpp.o
    [ 42%] Building CXX object CMakeFiles/mock_eye_app.dir/kernels/utils.cpp.o
    [ 57%] Building CXX object CMakeFiles/mock_eye_app.dir/modules/device.cpp.o
    [ 71%] Building CXX object CMakeFiles/mock_eye_app.dir/modules/objectdetection/object_detectors.cpp.o
    [ 85%] Building CXX object CMakeFiles/mock_eye_app.dir/modules/parser.cpp.o
    [100%] Linking CXX executable mock_eye_app
    [100%] Built target mock_eye_app
    Cannot open labelfile /home/openvino/tmp/labels.txt
    Labels will not be available.
    terminate called after throwing an instance of 'std::invalid_argument'
      what():  stoi
    

    Expected behavior: Based on the documentation, a window should pop up and stream a video with object detection overlaid.

    Additional context: The script compile_and_test.sh expects an "example" folder in the mock-eye-module, which is not included when cloning the repo. Also, "mock-eye-module-debug" pulls "openvino/ubuntu18_runtime:latest" instead of "openvino/ubuntu18_runtime:2021.1".

    bug 
    opened by sadhoss 3
  • [BUG]

    Step 1 of the tutorial fails.

    It states "create a new folder in your [AML] workspace"

    1. There is nothing called "workspace", but I do see an Azure Machine Learning (AML) Studio

    2. There is no 'folder' option in the Create New menu

    3. There really should be a link in step one to assist new developers to the correct 'workspace'.

    Essentially, the entire tutorial is out of reach because step 1 cannot be completed.

    bug 
    opened by gitTinker 3
  • [Need Help] myriad_compile step is failing in my environment

    Describe the bug: Not sure if it's a bug; my environment is not the same. I get the repeated error message below when doing the myriad_compile step: E: [xLinkUsb] [ 367214] [myriad_compile] usb_find_device_with_bcd:266 Library has not been initialized when loaded

    To Reproduce: Run up to Step 9 of https://github.com/microsoft/azure-percept-advanced-development/blob/main/machine-learning-notebooks/train-from-scratch/SemanticSegmentationUNet.ipynb. I'm doing the steps locally (not on an Azure ML remote compute instance). I have the model built locally. Next, I was taking the model through the conversion steps in the OpenVINO container (openvino/ubuntu18_dev:2021.3). The prior steps, listed below, succeeded:

    • pytorch model to onnx - OK
    • onnx to IR - OK
    • IR to blob - Fail

    The exact command is

    source /opt/intel/openvino_2021/bin/setupvars.sh

    /opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64/myriad_compile \
        -m intel/bananas.xml -o intel/bananas.blob -VPU_NUMBER_OF_SHAVES 8 -VPU_NUMBER_OF_CMX_SLICES 8 -ip U8 -op FP32

    Thank you for taking a look.

    Update: the error seems harmless, as a blob file was still generated.

    bug 
    opened by aurotripathy 3
  • Add Next Tutorial

    This tutorial shows you how to:

    1. Train a PyTorch model in AML (not really the focus of the tutorial)
    2. Convert the model to OpenVINO
    3. Double check it still works
    4. Create a G-API graph for your custom model and run it on your PC
    5. Transition that graph to the Percept DK and run the custom model there

    Two things to note:

    1. The neural network is not great. I have not tuned it yet, but I can do that in my downtime. I don't want to block getting the tutorial up waiting for me to tune the network.
    2. I recommend people read the first tutorial before tackling this one.
    opened by MaxStrange 3
  • Audio stream or azureearmodule

    Hi,

    great news with this development SDK. However, I was wondering where you document the customization possibilities for the audio module. I would be interested in getting access to the audio stream. Do you plan to have a similar module to the azureeyemodule?

    Thanks

    enhancement 
    opened by stefan-balke 3
  • Unable to start tutorial

    While attempting the prerequisites, executing "run_workbench.ps1", I get this error.

    `.\run_workbench.ps1 : File C:\percept\azure-percept-advanced-development\scripts\run_workbench.ps1 cannot be loaded. The file C:\percept\azure-percept-advanced-development\scripts\run_workbench.ps1 is not digitally signed. You cannot run this script on the current system. For more information about running scripts and setting execution policy, see about_Execution_Policies at https:/go.microsoft.com/fwlink/?LinkID=135170. At line:1 char:1

    .\run_workbench.ps1
        + CategoryInfo          : SecurityError: (:) [], PSSecurityException
        + FullyQualifiedErrorId : UnauthorizedAccess`

    How do I digitally sign this script for you?

    opened by gitTinker 2
  • Bump pillow from 8.1.2 to 8.2.0 in /machine-learning-notebooks/transfer-learning-custom-azureml

    Bumps pillow from 8.1.2 to 8.2.0.

    Release notes

    Sourced from pillow's releases.

    8.2.0

    https://pillow.readthedocs.io/en/stable/releasenotes/8.2.0.html

    Changes

    Dependencies

    Deprecations

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    8.2.0 (2021-04-01)

    • Added getxmp() method #5144 [UrielMaD, radarhere]

    • Add ImageShow support for GraphicsMagick #5349 [latosha-maltba, radarhere]

    • Do not load transparent pixels from subsequent GIF frames #5333 [zewt, radarhere]

    • Use LZW encoding when saving GIF images #5291 [raygard]

    • Set all transparent colors to be equal in quantize() #5282 [radarhere]

    • Allow PixelAccess to use Python int when parsing x and y #5206 [radarhere]

    • Removed Image._MODEINFO #5316 [radarhere]

    • Add preserve_tone option to autocontrast #5350 [elejke, radarhere]

    • Fixed linear_gradient and radial_gradient I and F modes #5274 [radarhere]

    • Add support for reading TIFFs with PlanarConfiguration=2 #5364 [kkopachev, wiredfool, nulano]

    • Deprecated categories #5351 [radarhere]

    • Do not premultiply alpha when resizing with Image.NEAREST resampling #5304 [nulano]

    • Dynamically link FriBiDi instead of Raqm #5062 [nulano]

    • Allow fewer PNG palette entries than the bit depth maximum when saving #5330 [radarhere]

    • Use duration from info dictionary when saving WebP #5338 [radarhere]

    • Stop flattening EXIF IFD into getexif() #4947 [radarhere, kkopachev]

    ... (truncated)

    Commits
    • e0e353c 8.2.0 version bump
    • ee635be Merge pull request #5377 from hugovk/security-and-release-notes
    • 694c84f Fix typo [ci skip]
    • 8febdad Review, typos and lint
    • fea4196 Reorder, roughly alphabetic
    • 496245a Fix BLP DOS -- CVE-2021-28678
    • 22e9bee Fix DOS in PSDImagePlugin -- CVE-2021-28675
    • ba65f0b Fix Memory DOS in ImageFont
    • bb6c11f Fix FLI DOS -- CVE-2021-28676
    • 5a5e6db Fix EPS DOS on _open -- CVE-2021-28677
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 2
  • azure percept point to new iothub

    If I create a brand new IoT Hub with a different name, how do I get Azure Percept to work again?

    I tried editing the device_connection_string in /etc/iotedge/config.yaml; it connects to the device in the new IoT Hub, but it only has two modules: edgeAgent and edgeHub.

    opened by plantoscloud 2
  • Fix IoT Hub messages to be JSON format and utf-8 encoded.

    Originally the module was sending content to IoT Hub in base64 format, and the content is not human-readable if routed to Azure Storage (Fig. 1). This pull request fixes the content encoding from base64 to UTF-8 and makes it readable (Fig. 2).

    Fig. 1 and Fig. 2: before/after screenshots attached in the original pull request.

    opened by RuinedStar 2
  • Bump pillow from 8.3.2 to 9.3.0 in /machine-learning-notebooks/transfer-learning-custom-azureml

    Bumps pillow from 8.3.2 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • There are multiple links in your Readme that lead to a 404 error [BUG]

    opened by mtyeager 0
  • [QUESTION] - How to interact with the bluetooth module

    I am trying to find a way to combine data from the vision module with data from the Bluetooth module. It is quite hard to determine the right approach for using the Bluetooth module. I was hoping to find some guidance on this here.

    enhancement 
    opened by dgcaron 0
  • [BUG] Code in ssd.cpp result in compiling error

    Describe the bug: The code in azure-percept-advanced-development/ssd.cpp (main branch) results in a compile error when using the camera feed.

    To Reproduce: Recompile the azureeyemodule using the current code.

    Expected behavior: No compile error.

    Additional context We modified the following to make it work:

    // before: auto pipeline = graph.compileStreaming(cv::compile_args(networks, kernels, cv::gapi::mx::mvcmdFile{ this->mvcmd })); 
    auto pipeline = graph.compileStreaming(cv::gapi::mx::Camera::params(), cv::compile_args(networks, kernels, cv::gapi::mx::mvcmdFile{ this->mvcmd }));
    ...
    // before: pipeline.setSource<cv::gapi::wip::GCaptureSource>(video_file_path);
    pipeline.setSource(cv::gapi::wip::make_src<cv::gapi::mx::Camera>()); 
    ...
    // before: pipeline.setSource<cv::gapi::wip::GCaptureSource>(0); 
    pipeline.setSource(cv::gapi::wip::make_src<cv::gapi::mx::Camera>());
    
    bug 
    opened by xiaolul 0
  • [FEATURE-REQUEST] Document Enhancement Request

    When reading the document, we feel the following aspects are not clear enough. Some points mentioned below are not easy to find:

    • An overview page that shows people e2e structure of IoT edge modules running on Azure Percept, including relationships between different modules, providing links to different resources.
    • A well-structured documentation that gives instructions to developers and machine learning engineers (or data scientists) on how to add custom modules.
    • A clear documentation that shows how to compile azureeyemodule end-to-end on Windows and Linux OS. Current document provided in github is not clear in the following aspects:
      • Differences between native compile and non-native compile?
      • When to use a native compile method and when to use non-native compile method?
      • How to revert (or fix) if any steps go wrong? (reset?)
    • The documentation doesn't point out which modules must be stopped if a custom azureeyemodule needs to be re-deployed -- we need to stop both the azureeyemodule and the IoT agent
    • The document doesn't tell readers that rtsp:3000 works when you customize it in the native way.
    • The document doesn't state that readers can check port 8554 using the VLC player
    • The document should clearly state expected drivers for external devices, for example, whether a driver for external monitor is expected.
    • The document should be revisited and updated frequently based on the OpenVINO update, and check if the recommended approaches of converting models to blob are still valid.
    • The document should state that it is possible to convert a model on the device using the built-in code provided by the azureeyemodule, along with the required model files and formats. Currently, when reading the code, we understand it is possible to do so, but no instructions or sample model files are provided.
    • It would be nice if the document provided a reference link to OpenCV G-API
    • It would be nice if the document shows the structure of the model zip file required by azureeyemodule.
    • It would be nice if the hardware spec could also provide the GFLOPs besides TOPs. Or provide latency benchmark on the default models provided in azureeyemodule repo.
    enhancement 
    opened by xiaolul 0
  • [BUG] Couldn't find the default object detection model in OpenVINO

    Describe the bug: I tried to find the OpenVINO version of the default object detection model, ssd_mobilenet_v2_coco.blob, but couldn't find it.

    In README.md, it says the default SSD model is from here. However, when I checked the labels, the model on the Percept has 183 labels while the model in OpenVINO has only 92. Where is the OpenVINO version of the default ssd_mobilenet_v2_coco.blob?

    To Reproduce None

    Expected behavior: The labels on Azure Percept should be the same as the labels in OpenVINO.

    Screenshots None

    Logs None

    Additional context None

    bug 
    opened by tsuting 2
Releases
  • 2112-1(Dec 15, 2021)

    Release notes:

    • Updated azureeye base image to include a patch for the Eye SoM firmware. This fix may increase the stability of the azureeyemodule, which has been having stability issues since the firmware was upgraded in 2108-1. Not all firmware stability issues have been fixed by this patch, and we are actively working with Intel to increase the stability of the firmware.
    • Fixed a bug where horizontal gray lines could be seen in some images uploaded as part of data collection for the retraining loop.
    • Updated the azureeyemodule to use a non-root user by default. This is a security best practice, and was required by our security team.
  • 2108-1(Sep 1, 2021)

    This release does the following things:

    1. Updates the firmware in the base image to the latest one from Intel (May release).
    2. Enables UVC (USB Video Class) camera as input source instead of the packaged camera.
    3. Fixes a bug where connecting a client to the H.264 stream would crash the azureeyemodule after about 7 minutes.
    4. Adds the ability to turn off the H.264 stream by setting "H264Stream": false in the module twin.
  • 2106-2 (Aug 4, 2021)

    This release adds support for time-aligning the inferences of slow neural networks with the video stream. This adds latency to the video stream approximately equal to the latency of the neural network, but results in the inferences (bounding boxes, for example) being drawn over the video in the correct locations.

    To enable this feature, add "TimeAlignRTSP": true to your module twin in the Azure Portal.

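Like the other module-twin options, the time-alignment setting is a desired property on the azureeyemodule. A minimal sketch of the twin fragment, assuming the standard IoT Hub module twin structure (only the property name is taken from the release note):

```json
{
  "properties": {
    "desired": {
      "TimeAlignRTSP": true
    }
  }
}
```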
  • 2104-2 (May 11, 2021)

    This release adds some bug fixes:

    • Fix IoT Hub message format to be UTF-8 encoded JSON (previously it was Base64-encoded and largely unusable)
    • Fix bug with Custom Vision classifier (previously, the Custom Vision classifier models were not working properly - they were interpreting the wrong dimension of the output tensor as the class confidences, which led to always predicting a single class, regardless of confidence)
    • Update H.264 to use TCP instead of UDP, which is a requirement for LVA integration
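Since this release, messages from the module arrive as UTF-8 encoded JSON, so a downstream consumer can decode them with plain standard-library calls. A minimal sketch (the payload fields here are illustrative examples, not the module's actual message schema):

```python
import json

# Raw message body as delivered by IoT Hub (bytes on the wire).
# The fields below are hypothetical, for illustration only.
payload = b'{"label": "person", "confidence": 0.87}'

# As of the 2104-2 release the body is plain UTF-8 JSON:
inference = json.loads(payload.decode("utf-8"))
print(inference["label"], inference["confidence"])
```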
Owner
Microsoft
Open source projects and samples from Microsoft