Mixed Reality Laser Tag

Mixed reality VR laser tag using Oculus Quest 2 and OAK-D depth cameras. First prize winner for the North America region in the OpenCV AI Competition 2021.

Copyright 2021 Bart Trzynadlowski

Overview

This is the source code to my Mixed Reality Laser Tag project, which won first prize for North America in the OpenCV AI Competition 2021. Mixed reality experiences that build a virtual world conforming to the exact geometry of a physical space are mind-blowingly awesome. However, this has traditionally required the use of expensive motion tracking cameras costing thousands of dollars each. The capital costs for a single venue like The Void can be as high as $500K USD or more. Can something comparable be achieved using inexpensive off-the-shelf depth cameras like the $200 OAK-D? This project proves we can get most of the way there at a fraction of the cost.

Please view the project video for an overview of the system in action and an explanation of its principles of operation. The system itself is a Rube Goldberg machine of components.

System diagram

Keep in mind this project is a very rough proof-of-concept developed in a matter of weeks. I don't expect anyone to actually attempt to run this code but it is technically possible to do so. I hope it serves as an inspiration for your own cool mixed reality projects.

NOTE: All licensed assets have been stripped out of this repository and replaced with "gray box" versions.

Setup Instructions

The physical setup and build process are documented here to the best of my ability. There is no automated build system. Windows on x86-64 is the only supported platform.

Camera Setup

At least one OAK-D device is required and up to two are supported (extending the code to support more would not be difficult). When using two cameras, their fields of view must overlap slightly. To record my video, I placed two cameras on a speaker stand, as shown below.

Camera configuration

When using two cameras, a registration procedure involving a calibration target must be performed. The calibration target is a particular AprilTag printed on a rigid, matte 22 x 28 inch poster board. A PDF of the target, suitable for printing at, e.g., FedEx, is available in assets/calibration_board. Note that the actual printed size may differ; mine came out about half an inch shorter along each dimension. The AprilTag itself is smaller than the poster board, and its precise size should be measured and updated in src/python/vision/apriltags.py accordingly.

Wall Setup

For each wall in your convex play space, print the AprilTag in assets/walls on an 8.5 x 11 inch sheet of paper. The AprilTag itself should measure 15.3 cm on a side if printed correctly. You can measure and adjust the value in src/python/vision/apriltags.py as needed. Paste a tag onto each wall and make sure that the paper is as flat as possible against the wall. Any bowing of the paper will degrade the accuracy of the estimated pose. It is also possible to use a second tag (ID code 2 at assets/apriltags/tag36_11_00002.png) printed at a much larger size. A PDF for this is not provided but I printed it on poster paper at 43.7 cm. You can use any size you like provided you update apriltags.py accordingly.
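
The accuracy of the estimated wall poses depends directly on this measured size. As an illustration of how the printed tag size feeds into pose estimation (the project builds the third-party apriltag C library itself, described later; the pupil-apriltags package and the camera intrinsics below are stand-ins for illustration only):

# Illustration only: why the measured tag size matters for pose estimation.
# Uses the pupil-apriltags package (pip install pupil-apriltags); the camera
# intrinsics here are placeholders, not values from this project.
import cv2
from pupil_apriltags import Detector

TAG_SIZE_M = 0.153                                # measured tag side length, in meters
fx, fy, cx, cy = 860.0, 860.0, 640.0, 360.0       # hypothetical camera intrinsics

detector = Detector(families="tag36h11")
gray = cv2.cvtColor(cv2.imread("wall_frame.png"), cv2.COLOR_BGR2GRAY)

for det in detector.detect(gray, estimate_tag_pose=True,
                           camera_params=(fx, fy, cx, cy), tag_size=TAG_SIZE_M):
    # pose_R / pose_t are the tag's orientation and position in the camera frame;
    # an incorrect TAG_SIZE_M scales pose_t and shifts every detected wall.
    print(det.tag_id, det.pose_t.ravel())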

Python Setup

Two different Python environments are required for running the laser tag server and the object classification service. I recommend using Anaconda to manage this. Once installed, create the first environment, for the laser tag server, as follows:

conda create -n mrlasertag python=3.8

Then, activate this environment:

conda activate mrlasertag

While in this environment, install the required packages. At the root of the project directory is requirements.txt. Install the packages enumerated therein:

pip install -r requirements.txt

Next, open a new Anaconda Command Prompt window and create a second environment for the classifier (note the different Python version):

conda create -n maskrcnn python=3.6

Activate the environment:

conda activate maskrcnn

And in thirdparty/maskrcnn, install the required packages for this environment.

cd thirdparty\maskrcnn
pip install -r requirements.txt

Building AprilTag Library for Python Server

The build process assumes MSYS2 or an equivalent UNIX-style terminal is being used. Ensure both cmake and mingw32-make are installed. To build, type:

mingw32-make

This will build the third-party apriltag library. To keep things as simple as possible, it is assumed that the library will not be modified and therefore, its source files are not tracked by the Makefile. They are built from a clean start each time if and only if the build output in the bin directory is missing. If you find yourself needing to make a change in the library, make sure to delete the build output before running mingw32-make again:

mingw32-make clean

Building Vuforia UWP Camera Driver DLL

Visual Studio 2019 is required for this step. For now, the Vuforia UWP camera driver project is not integrated with the Makefile and must be built manually. Open src/model_tracker/vuforia_driver/build/DriverTemplate.sln in Visual Studio. Make sure the selected configuration is Release and the architecture is x64, then build the solution (Ctrl+Shift+B). This will produce src/model_tracker/vuforia_driver/build/Release/DriverTemplate.dll, which we will later copy over to the Unity controller tracking app build directory.

Building VuforiaControllerTracker UWP App

Open the VuforiaControllerTracker Unity project. Select File then Build Settings. Make sure the selected platform is Universal Windows Platform, the target device is PC, and the architecture is x64.

Build settings

Click Build. When prompted for a folder, create a new folder named App, located at unity/VuforiaControllerTracker/App. Once finished, enter the src/model_tracker directory and run copy_driver.bat. From a Windows Command Prompt (rather than MSYS2), this would look like:

cd src\model_tracker
copy_driver.bat

This will copy the Vuforia UWP camera driver DLL, DriverTemplate.dll, to two locations in the generated application build folder.

Finally, build the application by opening unity/VuforiaControllerTracker/App/VFModelTarget.sln.

In Visual Studio's Solution Explorer, right click on VFModelTarget (Universal Windows), select Publish and then Create App Packages....

Select Sideloading when prompted for the distribution method. For the signing method, the option to use a default certificate should be available as: Yes, use the current certificate. Finally, at the Select and configure packages dialog, make sure only the x64 architecture is selected with the Release (x64) configuration. Click Create and the package should be generated. You will then be able to open the folder and double-click on the .appx file to install and then launch it.

When installing the app package while a previous installation exists, the following error may occur:

App installation failed with error message: The current user has already installed an unpackaged version of this app. A packaged version cannot replace this. The conflicting package is Template3D and it was published by CN=DefaultCompany. (0x80073cfb)

To remove the existing package, open a PowerShell window with admin permissions and run the following command to get the full package name:

get-appxpackage -name Template3D -AllUsers

Then remove it, replacing <PackageFullName> with the name obtained from the previous command.

remove-appxpackage -package <PackageFullName> -AllUsers

The package should now install successfully.

Building the LaserTag Unity App

The HMD scene located in Assets/Scenes of the LaserTag Unity project can be deployed to an Oculus Quest 2 by building an APK or can be run from the editor using Oculus Link. When building an APK, make sure only the HMD scene is included and follow the procedure recommended by Oculus.

Build settings for Oculus Quest 2

There is also a Spectator scene which is intended to be run on the PC. It can be run directly from the editor or built as a standalone Windows binary.

NOTE: The host address of the server is hard coded to 192.168.0.100 and must be changed before building to match your server PC address. This is located on the Network game object under either HMDApp or SpectatorApp, depending on which scene you are in. The port should not be changed because it is hard-coded to 6810 in src/python/networking/tcp.py.

Network game object in hierarchy / Network inspector
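
To confirm the address you set is actually reachable before building and deploying, a quick check from another machine on the LAN is enough. This is only a sketch of a connectivity test; the actual message protocol lives in src/python/networking/tcp.py:

# Sketch: verify the laser tag server endpoint is reachable on the LAN.
# Replace the address with your server PC's; the port is fixed at 6810.
import socket

SERVER_ADDRESS = ("192.168.0.100", 6810)

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(3.0)
    try:
        sock.connect(SERVER_ADDRESS)
        print("Server reachable at %s:%d" % SERVER_ADDRESS)
    except OSError as exc:
        print("Could not connect:", exc)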

Running

If using two cameras, camera registration must be performed initially and any time the cameras are moved. The calibration board must be captured by both cameras simultaneously before the app can be terminated. The calibration procedure is shown briefly in the project video. Run the registration process using the following command in the mrlasertag environment:

conda activate mrlasertag
python -m src.python.registercams --file=assets/registration.txt
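
Conceptually, registration works because both cameras observe the same calibration tag at the same instant: the tag's pose in each camera's frame can be composed to give the relative pose between the cameras. The sketch below shows the idea with placeholder poses; it is not the project's actual registration code.

# Sketch of two-camera registration from a shared calibration tag.
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder tag poses as each camera would report them from AprilTag detection
# (identity rotation; tag 1.5 m in front of camera A, 1.2 m in front of camera B).
T_camA_tag = make_T(np.eye(3), np.array([0.0, 0.0, 1.5]))
T_camB_tag = make_T(np.eye(3), np.array([0.3, 0.0, 1.2]))

# Pose of camera B expressed in camera A's frame; applying this transform maps
# points seen by camera B into camera A's coordinate system.
T_camA_camB = T_camA_tag @ np.linalg.inv(T_camB_tag)
print(T_camA_camB)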

When running the laser tag system, the VuforiaControllerTracker app should be launched first, followed by the classifier, which must be run in the maskrcnn environment:

conda activate maskrcnn
python -m src.python.classifier

Launch the server next from the mrlasertag environment:

conda activate mrlasertag
python -m src.python.mrlasertag -device=14442C10A165C0D200

Replace 14442C10A165C0D200 with the device ID of one of your cameras. Device IDs can be obtained by running registercams.
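
If you just want to list the attached device IDs without starting registration, the depthai Python API can enumerate them directly. This is a convenience sketch; registercams prints the same information:

# Sketch: enumerate attached OAK-D devices and print their IDs
# using the depthai package installed in the mrlasertag environment.
import depthai as dai

for device_info in dai.Device.getAllAvailableDevices():
    print(device_info.getMxId())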

Finally, deploy the HMD scene of the LaserTag Unity app to one or more Quest 2 headsets. A spectator may also be brought up.

Headset Registration

Each time the Quest 2 LaserTag app is launched, headset registration must be performed. This is shown in the video. The procedure is:

  • Press B on the right-hand controller to enter registration mode.
  • Observe the VuforiaControllerTracker window by lifting the headset up slightly (do not remove it completely or the app will be suspended), which should now display the OAK-D camera stream.
  • Hold the controller in front of the camera and ensure the tracker window detects it and renders a virtual copy with the exact same position and orientation. Press A on the controller to capture a point.
  • Capture at least 2-3 more points (3-4 total), ensuring that the controller is being tracked each time.
  • Press B again when satisfied to end registration. Now the detected walls and objects should pop into existence at the correct locations.
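
Conceptually, this step collects pairs of controller positions, one as reported by the headset and one as reported by the camera, and solves for the rigid transform aligning the two coordinate systems. The project's own registration math may differ in detail, but the standard approach given three or more corresponding points is a Kabsch/SVD fit, sketched here with hypothetical points:

# Sketch: align corresponding points from two coordinate systems (e.g., controller
# positions in headset space vs. camera space) with the Kabsch/SVD method.
import numpy as np

def rigid_align(src, dst):
    """Return R (3x3) and t (3,) such that R @ src[i] + t approximates dst[i]."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical captured points: headset-space vs. camera-space positions.
headset_pts = np.array([[0.0, 1.0, 0.5], [0.4, 1.1, 0.6], [0.2, 0.9, 0.8], [0.5, 1.3, 0.4]])
camera_pts = np.array([[1.2, 0.1, 2.0], [1.6, 0.2, 2.1], [1.4, 0.0, 2.3], [1.7, 0.4, 1.9]])
R, t = rigid_align(headset_pts, camera_pts)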

Troubleshooting

  • If the Quest 2 app appears frozen upon launch, it is likely having trouble connecting to the server. I lazily left the blocking connection logic in the Unity main thread, which will completely freeze the application while connecting. Make sure you specified the correct IP address on your LAN prior to building and deploying.
  • If the VuforiaControllerTracker window displays only a black screen during headset registration, this likely means the Python server was not able to open the shared memory buffer because the UWP application ID on your machine differs from mine. This ID is hard-coded into the shared memory object handle strings in src/python/vision/vuforia.py. You can obtain the ID on your system by uncommenting the following code in src/model_tracker/vuforia_driver/src/RefDriverImpl.cpp, re-building the DLL and VuforiaControllerTracker, and running it. The named object path to use in vuforia.py will be printed as a debug string which, unfortunately, requires using an application like DebugView to observe.
  /*
  // This code gets the app container named object path
  wchar_t objectPath[1024];
  unsigned long objectPathLength = 0;
  if (!GetAppContainerNamedObjectPath(nullptr, nullptr, 1024, objectPath, &objectPathLength))
  {
    Platform::log("Failed to get app container's named object path!");
  }
  else
  {
    std::wstring ws(objectPath, objectPathLength);
    using convert_type = std::codecvt_utf8<wchar_t>;
    std::wstring_convert<convert_type, wchar_t> converter;
    std::string path = converter.to_bytes(ws);
    Platform::log(Util::Format() << "App container named object path: " << path);
  }
  */

Acknowledgments

Mixed Reality Laser Tag includes third-party code, including the apriltag library built by the Makefile and the Mask R-CNN code under thirdparty/maskrcnn.
