Isaac ROS Common

Overview

Isaac ROS common utilities and scripts for use in conjunction with the Isaac ROS suite of packages.

Docker Scripts

run_dev.sh creates a dev environment with ROS2 installed and key versions of NVIDIA frameworks prepared for both x86_64 and Jetson. By default, the directory /workspaces/isaac_ros-dev in the container is mapped from ~/workspaces/isaac_ros-dev on the host machine if it exists, or otherwise from the current working directory where the script was invoked. The host directory the container maps to can be set explicitly by passing the desired path as the first argument:

scripts/run_dev.sh <path to host directory>
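
A minimal first-time setup might look like the following sketch; the workspace path is the default described above, and the repository URL is assumed to be the public isaac_ros_common repository, so adjust both to your own layout:

    # Sketch: clone into the default workspace location and launch the dev container
    mkdir -p ~/workspaces/isaac_ros-dev/src
    cd ~/workspaces/isaac_ros-dev/src
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git
    # With no argument, run_dev.sh maps ~/workspaces/isaac_ros-dev if it exists
    ./isaac_ros_common/scripts/run_dev.sh
    # Or map an explicit host directory instead
    ./isaac_ros_common/scripts/run_dev.sh ~/other_workspace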
   

   

System Requirements

This script is designed and tested to be compatible with ROS2 Foxy on Jetson hardware as well as on x86_64 systems with an NVIDIA GPU.

Jetson

  • AGX Xavier or Xavier NX
  • JetPack 4.6

x86_64

  • Discrete GPU with CUDA 11.1+ support
  • VPI 1.1.11
  • Ubuntu 20.04+

You must first install the NVIDIA Container Toolkit to make use of the Docker container development/runtime environment.
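
For reference only, a rough sketch of how the toolkit was commonly installed on an x86_64 Ubuntu host at the time is shown below; package names and repository setup change over time, so follow NVIDIA's current installation guide, and note that JetPack already ships the container runtime on Jetson:

    # Illustrative sketch: add NVIDIA's container runtime apt repository and install nvidia-docker2
    distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
    curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
    curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
        sudo tee /etc/apt/sources.list.d/nvidia-docker.list
    sudo apt-get update && sudo apt-get install -y nvidia-docker2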

Configure nvidia-container-runtime as the default runtime for Docker by editing /etc/docker/daemon.json to include the following:

    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"

and then restarting Docker:

    sudo systemctl daemon-reload && sudo systemctl restart docker
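
One way to sanity-check the change is to inspect the runtimes Docker reports; the expected output below is indicative rather than exact:

    # Confirm that the nvidia runtime is registered and set as the default
    docker info | grep -i runtime
    # Expected to show lines similar to:
    #   Runtimes: nvidia runc
    #   Default Runtime: nvidia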

Note: For best performance on Jetson, ensure that power settings are configured appropriately (Power Management for Jetson).
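
For example, the maximum-performance profile can usually be selected with nvpmodel and jetson_clocks; the mode index below is only an assumption (MAXN is mode 0 on many Xavier modules), so check the available modes for your board first:

    # Query the current power mode, then select the maximum-performance mode and lock clocks
    sudo nvpmodel -q
    sudo nvpmodel -m 0
    sudo jetson_clocks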

Troubleshooting

run_dev.sh on x86 fails with vpi-lib-1.1.11-cuda11-x86_64-linux.deb is not a Debian format archive

When building a Docker image, run_dev.sh may fail because the VPI Debian files appear to be invalid. Debian packages for VPI on x86 are stored in Isaac ROS using git-lfs; these files need to be fetched in order to install VPI in the Docker image.

Symptoms

dpkg-deb: error: 'vpi-lib-1.1.11-cuda11-x86_64-linux.deb' is not a Debian format archive
dpkg: error processing archive vpi-lib-1.1.11-cuda11-x86_64-linux.deb (--install):
 dpkg-deb --control subprocess returned error exit status 2
Errors were encountered while processing:
 vpi-lib-1.1.11-cuda11-x86_64-linux.deb

Solution

Run git lfs pull in each Isaac ROS repository you have checked out, especially isaac_ros_common, to ensure all of the large binary files have been downloaded.
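
As a small sketch, assuming your Isaac ROS repositories are checked out under ~/workspaces/isaac_ros-dev/src (adjust the path to your layout):

    # Fetch LFS-tracked binaries (such as the VPI .deb files) in every Isaac ROS checkout
    cd ~/workspaces/isaac_ros-dev/src
    for repo in isaac_ros_*/; do
        (cd "$repo" && git lfs pull)
    done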

Updates

Date Changes
2021-10-20 Migrated to NVIDIA-ISAAC-ROS, added isaac_ros_nvengine and isaac_ros_nvengine_interfaces packages
2021-08-11 Initial release to NVIDIA-AI-IOT
Comments
  • Error when running docker

    Hello, I am trying to use ISAAC ROS environment with my Jetson Xavier NX board (Jetpack 4.6.1 L4T 32.6). My ultimate aim is to run HW accelerated April Tag detection code.

    ***When I run the script with sudo ./run_dev.sh, Docker builds successfully but I get the error below when it tries to create the group id.

    'gid '0' already exists'

    ***If I comment out this part (Line 188-192) from Dockerfile.aarch64.base and run docker without user "admin" (to get rid of above error), I am getting the long error below:

    docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #0:: error running hook: exit status 1, stdout: src: /usr/lib/aarch64-linux-gnu/libcudnn.so.8, src_lnk: libcudnn.so.8.2.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn.so.8, dst_lnk: libcudnn.so.8.2.1 src: /usr/lib/aarch64-linux-gnu/libcudnn.so, src_lnk: /etc/alternatives/libcudnn_so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn.so, dst_lnk: /etc/alternatives/libcudnn_so src: /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8, src_lnk: libcudnn_ops_infer.so.8.2.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8, dst_lnk: libcudnn_ops_infer.so.8.2.1 src: /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so, src_lnk: /etc/alternatives/libcudnn_ops_infer_so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so, dst_lnk: /etc/alternatives/libcudnn_ops_infer_so src: /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8, src_lnk: libcudnn_ops_train.so.8.2.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8, dst_lnk: libcudnn_ops_train.so.8.2.1 src: /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so, src_lnk: /etc/alternatives/libcudnn_ops_train_so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so, dst_lnk: /etc/alternatives/libcudnn_ops_train_so src: /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8, src_lnk: libcudnn_adv_infer.so.8.2.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8, dst_lnk: libcudnn_adv_infer.so.8.2.1 src: /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so, src_lnk: /etc/alternatives/libcudnn_adv_infer_so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so, dst_lnk: /etc/alternatives/libcudnn_adv_infer_so src: /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8, src_lnk: libcudnn_cnn_infer.so.8.2.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8, dst_lnk: libcudnn_cnn_infer.so.8.2.1 src: /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so, src_lnk: /etc/alternatives/libcudnn_cnn_infer_so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so, dst_lnk: /etc/alternatives/libcudnn_cnn_infer_so src: /usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8, src_lnk: libcudnn_adv_train.so.8.2.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8, dst_lnk: libcudnn_adv_train.so.8.2.1 src: /usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so, src_lnk: /etc/alternatives/libcudnn_adv_train_so, dst: 
/var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so, dst_lnk: /etc/alternatives/libcudnn_adv_train_so src: /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8, src_lnk: libcudnn_cnn_train.so.8.2.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8, dst_lnk: libcudnn_cnn_train.so.8.2.1 src: /usr/include/cudnn_adv_infer.h, src_lnk: /etc/alternatives/cudnn_adv_infer_h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/include/cudnn_adv_infer.h, dst_lnk: /etc/alternatives/cudnn_adv_infer_h src: /usr/include/cudnn_adv_train.h, src_lnk: /etc/alternatives/cudnn_adv_train_h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/include/cudnn_adv_train.h, dst_lnk: /etc/alternatives/cudnn_adv_train_h src: /usr/include/cudnn_backend.h, src_lnk: /etc/alternatives/cudnn_backend_h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/include/cudnn_backend.h, dst_lnk: /etc/alternatives/cudnn_backend_h src: /usr/include/cudnn_cnn_infer.h, src_lnk: /etc/alternatives/cudnn_cnn_infer_h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/include/cudnn_cnn_infer.h, dst_lnk: /etc/alternatives/cudnn_cnn_infer_h src: /usr/include/cudnn_cnn_train.h, src_lnk: /etc/alternatives/cudnn_cnn_train_h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/include/cudnn_cnn_train.h, dst_lnk: /etc/alternatives/cudnn_cnn_train_h src: /usr/include/cudnn.h, src_lnk: /etc/alternatives/libcudnn, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/include/cudnn.h, dst_lnk: /etc/alternatives/libcudnn src: /usr/include/cudnn_ops_infer.h, src_lnk: /etc/alternatives/cudnn_ops_infer_h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/include/cudnn_ops_infer.h, dst_lnk: /etc/alternatives/cudnn_ops_infer_h src: /usr/include/cudnn_ops_train.h, src_lnk: /etc/alternatives/cudnn_ops_train_h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/include/cudnn_ops_train.h, dst_lnk: /etc/alternatives/cudnn_ops_train_h src: /usr/include/cudnn_version.h, src_lnk: /etc/alternatives/cudnn_version_h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/include/cudnn_version.h, dst_lnk: /etc/alternatives/cudnn_version_h src: /etc/alternatives/libcudnn, src_lnk: /usr/include/aarch64-linux-gnu/cudnn_v8.h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/libcudnn, dst_lnk: /usr/include/aarch64-linux-gnu/cudnn_v8.h src: /etc/alternatives/libcudnn_adv_infer_so, src_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/libcudnn_adv_infer_so, dst_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8 src: /etc/alternatives/libcudnn_adv_train_so, src_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8, dst: 
/var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/libcudnn_adv_train_so, dst_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8 src: /etc/alternatives/libcudnn_cnn_infer_so, src_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/libcudnn_cnn_infer_so, dst_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8 src: /etc/alternatives/libcudnn_cnn_train_so, src_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/libcudnn_cnn_train_so, dst_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8 src: /etc/alternatives/libcudnn_ops_infer_so, src_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/libcudnn_ops_infer_so, dst_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8 src: /etc/alternatives/libcudnn_ops_train_so, src_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/libcudnn_ops_train_so, dst_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8 src: /etc/alternatives/libcudnn_so, src_lnk: /usr/lib/aarch64-linux-gnu/libcudnn.so.8, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/libcudnn_so, dst_lnk: /usr/lib/aarch64-linux-gnu/libcudnn.so.8 src: /etc/alternatives/cudnn_adv_infer_h, src_lnk: /usr/include/aarch64-linux-gnu/cudnn_adv_infer_v8.h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/cudnn_adv_infer_h, dst_lnk: /usr/include/aarch64-linux-gnu/cudnn_adv_infer_v8.h src: /etc/alternatives/cudnn_backend_h, src_lnk: /usr/include/aarch64-linux-gnu/cudnn_backend_v8.h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/cudnn_backend_h, dst_lnk: /usr/include/aarch64-linux-gnu/cudnn_backend_v8.h src: /etc/alternatives/cudnn_cnn_train_h, src_lnk: /usr/include/aarch64-linux-gnu/cudnn_cnn_train_v8.h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/cudnn_cnn_train_h, dst_lnk: /usr/include/aarch64-linux-gnu/cudnn_cnn_train_v8.h src: /etc/alternatives/cudnn_ops_train_h, src_lnk: /usr/include/aarch64-linux-gnu/cudnn_ops_train_v8.h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/cudnn_ops_train_h, dst_lnk: /usr/include/aarch64-linux-gnu/cudnn_ops_train_v8.h src: /etc/alternatives/cudnn_adv_train_h, src_lnk: /usr/include/aarch64-linux-gnu/cudnn_adv_train_v8.h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/cudnn_adv_train_h, dst_lnk: /usr/include/aarch64-linux-gnu/cudnn_adv_train_v8.h src: /etc/alternatives/cudnn_cnn_infer_h, src_lnk: /usr/include/aarch64-linux-gnu/cudnn_cnn_infer_v8.h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/cudnn_cnn_infer_h, dst_lnk: /usr/include/aarch64-linux-gnu/cudnn_cnn_infer_v8.h src: /etc/alternatives/cudnn_ops_infer_h, 
src_lnk: /usr/include/aarch64-linux-gnu/cudnn_ops_infer_v8.h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/cudnn_ops_infer_h, dst_lnk: /usr/include/aarch64-linux-gnu/cudnn_ops_infer_v8.h src: /etc/alternatives/cudnn_version_h, src_lnk: /usr/include/aarch64-linux-gnu/cudnn_version_v8.h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/cudnn_version_h, dst_lnk: /usr/include/aarch64-linux-gnu/cudnn_version_v8.h src: /usr/lib/aarch64-linux-gnu/libcudnn_static.a, src_lnk: /etc/alternatives/libcudnn_stlib, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_static.a, dst_lnk: /etc/alternatives/libcudnn_stlib src: /usr/lib/libvisionworks_sfm.so, src_lnk: libvisionworks_sfm.so.0.90, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/libvisionworks_sfm.so, dst_lnk: libvisionworks_sfm.so.0.90 src: /usr/lib/libvisionworks_sfm.so.0.90, src_lnk: libvisionworks_sfm.so.0.90.4, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/libvisionworks_sfm.so.0.90, dst_lnk: libvisionworks_sfm.so.0.90.4 src: /usr/lib/libvisionworks.so, src_lnk: libvisionworks.so.1.6, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/libvisionworks.so, dst_lnk: libvisionworks.so.1.6 src: /usr/lib/libvisionworks_tracking.so, src_lnk: libvisionworks_tracking.so.0.88, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/libvisionworks_tracking.so, dst_lnk: libvisionworks_tracking.so.0.88 src: /usr/lib/libvisionworks_tracking.so.0.88, src_lnk: libvisionworks_tracking.so.0.88.2, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/libvisionworks_tracking.so.0.88, dst_lnk: libvisionworks_tracking.so.0.88.2 src: /usr/lib/aarch64-linux-gnu/libnvinfer.so.8, src_lnk: libnvinfer.so.8.0.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libnvinfer.so.8, dst_lnk: libnvinfer.so.8.0.1 src: /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8, src_lnk: libnvinfer_plugin.so.8.0.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8, dst_lnk: libnvinfer_plugin.so.8.0.1 src: /usr/lib/aarch64-linux-gnu/libnvparsers.so.8, src_lnk: libnvparsers.so.8.0.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libnvparsers.so.8, dst_lnk: libnvparsers.so.8.0.1 src: /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.8, src_lnk: libnvonnxparser.so.8.0.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libnvonnxparser.so.8, dst_lnk: libnvonnxparser.so.8.0.1 src: /usr/lib/aarch64-linux-gnu/libnvinfer.so, src_lnk: libnvinfer.so.8.0.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libnvinfer.so, dst_lnk: libnvinfer.so.8.0.1 src: /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so, src_lnk: libnvinfer_plugin.so.8.0.1, dst: 
/var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so, dst_lnk: libnvinfer_plugin.so.8.0.1 src: /usr/lib/aarch64-linux-gnu/libnvparsers.so, src_lnk: libnvparsers.so.8.0.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libnvparsers.so, dst_lnk: libnvparsers.so.8.0.1 src: /usr/lib/aarch64-linux-gnu/libnvonnxparser.so, src_lnk: libnvonnxparser.so.8, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libnvonnxparser.so, dst_lnk: libnvonnxparser.so.8 src: /etc/vulkan/icd.d/nvidia_icd.json, src_lnk: /usr/lib/aarch64-linux-gnu/tegra/nvidia_icd.json, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/vulkan/icd.d/nvidia_icd.json, dst_lnk: /usr/lib/aarch64-linux-gnu/tegra/nvidia_icd.json src: /usr/lib/aarch64-linux-gnu/libcuda.so, src_lnk: tegra/libcuda.so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcuda.so, dst_lnk: tegra/libcuda.so src: /usr/lib/aarch64-linux-gnu/libdrm_nvdc.so, src_lnk: tegra/libdrm.so.2, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libdrm_nvdc.so, dst_lnk: tegra/libdrm.so.2 src: /usr/lib/aarch64-linux-gnu/libv4l2.so.0.0.999999, src_lnk: tegra/libnvv4l2.so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libv4l2.so.0.0.999999, dst_lnk: tegra/libnvv4l2.so src: /usr/lib/aarch64-linux-gnu/libv4lconvert.so.0.0.999999, src_lnk: tegra/libnvv4lconvert.so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libv4lconvert.so.0.0.999999, dst_lnk: tegra/libnvv4lconvert.so src: /usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvargus.so, src_lnk: ../../../tegra/libv4l2_nvargus.so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvargus.so, dst_lnk: ../../../tegra/libv4l2_nvargus.so src: /usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvcuvidvideocodec.so, src_lnk: ../../../tegra/libv4l2_nvcuvidvideocodec.so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvcuvidvideocodec.so, dst_lnk: ../../../tegra/libv4l2_nvcuvidvideocodec.so src: /usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvvidconv.so, src_lnk: ../../../tegra/libv4l2_nvvidconv.so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvvidconv.so, dst_lnk: ../../../tegra/libv4l2_nvvidconv.so src: /usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvvideocodec.so, src_lnk: ../../../tegra/libv4l2_nvvideocodec.so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvvideocodec.so, dst_lnk: ../../../tegra/libv4l2_nvvideocodec.so src: /usr/lib/aarch64-linux-gnu/libvulkan.so.1.2.141, src_lnk: tegra/libvulkan.so.1.2.141, dst: 
/var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libvulkan.so.1.2.141, dst_lnk: tegra/libvulkan.so.1.2.141 src: /usr/lib/aarch64-linux-gnu/tegra/libcuda.so, src_lnk: libcuda.so.1.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/tegra/libcuda.so, dst_lnk: libcuda.so.1.1 src: /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so, src_lnk: libnvbufsurface.so.1.0.0, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so, dst_lnk: libnvbufsurface.so.1.0.0 src: /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so, src_lnk: libnvbufsurftransform.so.1.0.0, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so, dst_lnk: libnvbufsurftransform.so.1.0.0 src: /usr/lib/aarch64-linux-gnu/tegra/libnvbuf_utils.so, src_lnk: libnvbuf_utils.so.1.0.0, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/tegra/libnvbuf_utils.so, dst_lnk: libnvbuf_utils.so.1.0.0 src: /usr/lib/aarch64-linux-gnu/tegra/libnvdsbufferpool.so, src_lnk: libnvdsbufferpool.so.1.0.0, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/tegra/libnvdsbufferpool.so, dst_lnk: libnvdsbufferpool.so.1.0.0 src: /usr/lib/aarch64-linux-gnu/tegra/libnvid_mapper.so, src_lnk: libnvid_mapper.so.1.0.0, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/tegra/libnvid_mapper.so, dst_lnk: libnvid_mapper.so.1.0.0 src: /usr/share/glvnd/egl_vendor.d/10_nvidia.json, src_lnk: ../../../lib/aarch64-linux-gnu/tegra-egl/nvidia.json, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/share/glvnd/egl_vendor.d/10_nvidia.json, dst_lnk: ../../../lib/aarch64-linux-gnu/tegra-egl/nvidia.json , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure [email protected]/sbin/ldconfig.real --device=all --compute --compat32 --graphics --utility --video --display --pid=10082 /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged] nvidia-container-cli: mount error: file creation failed: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/dev/nvhost-nvdla0: cannot allocate memory: unknown. ~/Documents/kagan/code/Docker/docker_ws/isaac_ros_common/scripts

    ***All my NVIDIA container libraries seem up to date (apt list --installed | grep nvidia), corresponding to JetPack 4.6:

    libnvidia-container-tools/stable,now 0.10.0+jetpack arm64 [installed] libnvidia-container0/stable,now 0.10.0+jetpack arm64 [installed] nvidia-container-csv-cuda/stable,now 10.2.460-1 arm64 [installed] nvidia-container-csv-cudnn/stable,now 8.2.1.32-1+cuda10.2 arm64 [installed] nvidia-container-csv-tensorrt/stable,now 8.0.1.6-1+cuda10.2 arm64 [installed] nvidia-container-csv-visionworks/stable,now 1.6.0.501 arm64 [installed] nvidia-container-runtime/stable,now 3.1.0-1 arm64 [installed] nvidia-container-toolkit/stable,now 1.0.1-1 arm64 [installed] nvidia-docker2/stable,now 2.2.0-1 all [installed] nvidia-l4t-3d-core/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-apt-source/stable,now 32.6.1-20210726122859 arm64 [installed] nvidia-l4t-bootloader/stable,now 32.6.1-20210726122859 arm64 [installed] nvidia-l4t-camera/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-configs/stable,now 32.6.1-20210726122859 arm64 [installed] nvidia-l4t-core/stable,now 32.6.1-20210726122859 arm64 [installed] nvidia-l4t-cuda/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-firmware/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-gputools/stable,now 32.6.1-20210726122859 arm64 [installed] nvidia-l4t-graphics-demos/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-gstreamer/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-init/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-initrd/stable,now 32.6.1-20210726122859 arm64 [installed] nvidia-l4t-jetson-io/stable,now 32.6.1-20210726122859 arm64 [installed] nvidia-l4t-jetson-multimedia-api/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-kernel/stable,now 4.9.253-tegra-32.6.1-20210726122859 arm64 [installed] nvidia-l4t-kernel-dtbs/stable,now 4.9.253-tegra-32.6.1-20210726122859 arm64 [installed] nvidia-l4t-kernel-headers/stable,now 4.9.253-tegra-32.6.1-20210726122859 arm64 [installed] nvidia-l4t-libvulkan/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-multimedia/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-multimedia-utils/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-oem-config/stable,now 32.6.1-20210726122859 arm64 [installed] nvidia-l4t-tools/stable,now 32.6.1-20210726122859 arm64 [installed] nvidia-l4t-wayland/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-weston/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-x11/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-xusb-firmware/stable,now 32.6.1-20210726122859 arm64 [installed]

    In general Docker (v19.03.15) runs correctly with 'sudo docker run hello-world'

    I am also able to run my first Nvidia container here without any problem https://developer.nvidia.com/embedded/learn/tutorials/jetson-container

    I am really stuck at this point and really appreciate any solution that you can provide.

    Best, Kagan

    opened by kaganGH 12
  • Docker error running run_dev.sh

    Trying to build the dev env, and when I run run_dev.sh I get this error message:

        failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: error adding seccomp filter rule for syscall clone3: permission denied: unknown ~/gem_ws/src/isaac_ros_common

    nvidia-docker is there, nvidia-runtime is enabled. This is on a Jetson Xavier.

    opened by griz1112 9
  • Error running run_dev.sh possible version dependency error caused by protobuf

    Hello,

    After upgrading to the latest JetPack with Ubuntu 20.04, running run_dev.sh ~/workspace causes the following error:

    Collecting protobuf>=3.12.2 (from onnx)
      Downloading https://files.pythonhosted.org/packages/6c/be/4e32d02bf08b8f76bf6e59f2a531690c1e4264530404501f3489ca975d9a/protobuf-4.21.0-py2.py3-none-any.whl (164kB)
    protobuf requires Python '>=3.7' but the running Python is 3.6.9
    The command '/bin/bash -c wget https://nvidia.box.com/shared/static/p57jwntv436lfrd78inwl7iml6p13fzh.whl -O torch-1.8.0-cp36-cp36m-linux_aarch64.whl && apt-get update && apt-get install -y libopenblas-base libopenmpi-dev && python3 -m pip install -U numpy torch-1.8.0-cp36-cp36m-linux_aarch64.whl onnx' returned a non-zero code: 1
    Running isaac_ros_dev-aarch64-container
    Unable to find image 'isaac_ros_dev-aarch64:latest' locally
    docker: Error response from daemon: pull access denied for isaac_ros_dev-aarch64, repository does not exist or may require 'docker login': denied: requested access to the resource is denied. See 'docker run --help'.
    ~/isaac_ros-dev/src/isaac_ros_common/scripts

    protobuf complains about the python version, and then the docker image does not get built.

    Any ideas/recommendations/help on this problem greatly appreciated.

    best regards, can

    opened by altineller 7
  • Question: How do I build isaac_ros_* packages inside docker images?

    Hi, Thank you for your great work on ROS2 nodes (with CUDA capabilities) for the Jetson boards. Reading the aarch64 and amd64 docker image files in the repository, they do not seem to compile either of the isaac_ros_* packages. If I understood correctly, those docker images provide a ROS2 installation with CUDA libraries installed. Both architectures, amd64 and aarch64, are possible.

    Should I use them as a base image and build isaac_ros_* packages on top of those images? If yes, they seem to have a lot of ROS2 packages/dependencies in them. Are they all needed?

    opened by maxpolzin 6
  • is there a way to take a snapshot of container and use it later on after running run_dev.sh

    Hello,

    ./scripts/run_dev.sh takes a long time to prepare the container for use, so what is the proper way of taking a snapshot of a running container and using it later on?

    In docker there are commits and checkpoints. Which one would you prefer?

    opened by altineller 6
  • Docker on release jetpack 5.0.2

    Hi,

    When using the prebuilt docker image provided in this repo on JetPack 5.0.2, building various packages (e.g. urg_node) leads to the error shown below:

    Failed to find exported target names in '/opt/ros/humble/install/share/urg_node_msgs/cmake/export_urg_node_msgs__rosidl_generator_cExport.cmake'
    

    Also, as pointed out in another issue, building the image from scratch leads to the following error:

    E: Unable to locate package tensorrt
    E: Unable to locate package vpi2-dev
    

    Do you know any way to solve above? Thanks

    opened by automech-rb 5
  • run_dev.sh fails with `failed to register layer: Error processing tar file(exit status 1): archive/tar: invalid tar header`

    Building /home/xxx/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/../docker/Dockerfile.x86_64.humble.nav2 as image: x86_64-humble-nav2-image with base: 
    Sending build context to Docker daemon   80.9kB
    Step 1/1 : FROM nvcr.io/nvidia/isaac/ros:x86_64-humble-nav2_5bd606e569673db8cb1f78f393a5c46b
    x86_64-humble-nav2_5bd606e569673db8cb1f78f393a5c46b: Pulling from nvidia/isaac/ros
    3b65ec22a9e9: Pulling fs layer 
    fd80d866e8b2: Pulling fs layer 
    a364ca75fd6d: Pulling fs layer 
    3d4731d03623: Waiting 
    53a5c2e0251f: Waiting 
    b00ff40d02d9: Waiting 
    3036e9b94123: Waiting 
    453fdcdda788: Waiting 
    35e12ec5e515: Waiting 
    11f61a475a23: Waiting 
    24280cf31c9a: Waiting 
    79007799e2ed: Waiting 
    03eb76abf1e5: Waiting 
    4f4fb700ef54: Waiting 
    8c80ae8980cd: Waiting 
    3918bec26dcf: Waiting 
    5988980892dd: Waiting 
    f4e902364789: Pulling fs layer 
    9e630d0e3180: Waiting 
    140ee16889c1: Waiting 
    da88e5d79c6f: Waiting 
    f61455589e43: Waiting 
    4363df7b85e8: Pulling fs layer 
    6e5e8533bd39: Pulling fs layer 
    f61455589e43: Download complete 
    0dc9580c2401: Download complete 
    15be8ac18616: Download complete 
    2d20876aec99: Download complete 
    b27a26b8bc54: Download complete 
    77e8e1895580: Download complete 
    58486e24d700: Download complete 
    ef9047d558d3: Download complete 
    2020f0da6e65: Download complete 
    e89c028f35a6: Download complete 
    57dc833364e9: Download complete 
    0e8234224d83: Download complete 
    ae4316a64573: Download complete 
    460a2092d34a: Download complete 
    25577f5f0c34: Download complete 
    470a4f1a3861: Download complete 
    c70aab8b800b: Download complete 
    b2c35c86bea9: Download complete 
    b2acdf4d640f: Download complete 
    627909d01b7c: Download complete 
    0a410ef4d0a2: Download complete 
    1e68f15edef6: Download complete 
    ab3a937b2ce7: Download complete 
    b707c805a446: Download complete 
    799b55167afb: Download complete 
    f85fee33b522: Download complete 
    4eac5839291b: Download complete 
    69b60e038821: Download complete 
    5376045668ac: Download complete 
    edff3e3858c0: Download complete 
    be17fae500ce: Download complete 
    b2b1d900c1aa: Download complete 
    82ebd5cb8096: Download complete 
    349cd3bf8b52: Download complete 
    failed to register layer: Error processing tar file(exit status 1): archive/tar: invalid tar header
    
    opened by haowu80s 4
  • Having problems pulling the images from the repository

    Hi, I'm getting this error when trying to run run_dev.sh script:

    docker: Error response from daemon: pull access denied for isaac_ros_dev-aarch64, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.

    Any idea how to fix this? I have tried pulling with git lfs, did not work.

    opened by omer-arad 4
  • Clarification on Dockers

    Hi,

    I'm a bit confused by the instructions.

    I've cloned the repo and git-lfs'ed, so there is ~/your_ws/src/isaac_ros_common now.

    If I go to ~/your_ws/src/isaac_ros_common and run ./scripts/run_dev.sh ~/your_ws, it builds isaac_ros_dev-aarch64 with base aarch64-humble-nav2-image.

    I see there's isaac_ros_common/docker/realsense-dockerfile-example/ with Dockerfile.realsense, and the two jetsonHacks scripts.

    From what I understand of the instructions, if I wanted to build realsense on top, I would put these lines in ~/your_ws/src/isaac_ros_common/scripts/.isaac_ros_common-config:

        CONFIG_IMAGE_KEY=humble.nav2.realsense
        CONFIG_DOCKER_SEARCH_DIRS=($HOME/src/isaac_ros_common/docker/)

    Then from ~/your_ws/src/isaac_ros_common, I run ./scripts/run_dev.sh ~/your_ws. I tried the external directory name too, in case it's meant to be that, and I tried with the realsense-dockerfile-example directory too.

    But it just builds using key aarch64.humble.nav2.user, and gives no indication that the config had any effect.

    Could someone please clarify the usage? Thanks

    opened by javadan 3
  • cuda / gpu not available (agx orin) in docker container

    Before I get to what I ran into let me say that this is going to be an awesome framework! Congrats and thank-you. I was able to install isaac_ros_common and additional docker container on my linux workstation and found it quite straightforward and had no issues.

    However, when I repeated the process on my agx orin (latest software), the base isaac_ros_common container does not seem to have access to the gpu/cuda.

    For example:

        [email protected]:/workspaces/isaac_ros-dev$ python3
        Python 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] on linux
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import torch
        >>> print(torch.__version__)
        1.12.0
        >>> torch.cuda.is_available()
        False


    I have another docker container that was previously built using nvcr.io/nvidia/l4t-pytorch:r34.1.0-pth1.12-py3, where CUDA and the GPU are available, and I didn't notice any significant difference in arguments compared to run_dev.sh, which suggests that everything needed to use the GPU in a container is set up correctly:

        Python 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] on linux
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import torch
        >>> print(torch.__version__)
        1.12.0a0+2c916ef.nv22.3
        >>> torch.cuda.is_available()
        True


    So I am at a loss to understand what the issue is. Here are things that I have checked:

    1. All software on the orin is up to date
    2. I followed the set-up instructions, so the NVIDIA Container Toolkit etc. are installed and at the correct versions.

    If there is additional information, I can provide, please let me know. Thanks for your help.

    bb

    opened by bblumberg 3
  • Build errors with galactic

    Hi, I am trying to use this repo and https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_image_pipeline to do hardware acceleration on a ROS2 (Galactic) based system that uses RealSense cameras. I am using an AGX with Ubuntu 18.04 and JetPack 4.6.1. I am running into install errors when building isaac_ros_common inside my base docker image. I noticed the README says it is designed to be compatible with ROS2 Foxy. Are there any plans to support ROS2 Galactic? Thanks.

    opened by benbarron 3
  • Example of isaac_ros packages layers on top of base image to setup the environment

    Hi, I need some guidance on adding layers (isaac_ros packages) on top of the base images provided in Isaac ROS to set up the environment. As an example, I have used the attached Dockerfile.nvblox so that nvblox is available in the image. Dockerfile.nvblox.txt

    I have followed the following steps, as mentioned in the README:

    1. Added the CONFIG_IMAGE_KEY="humble.nav2.nvblox"
    2. run_dev.sh

    The config image key was resolved as x86_64.humble.nav2.nvblox.user, which is expected. I think there are no root permissions to create a folder in /opt; therefore, the image build failed and shows the following error:

    Cloning into 'isaac_ros_nvblox'...
    Host key verification failed.
    fatal: Could not read from remote repository.
    
    Please make sure you have the correct access rights
    and the repository exists.
    

    My question is more about the best practices for creating images that already have the isaac_ros packages required for development. I would appreciate it if you could include an example of a ROS package that requires the CUDA architectures, such as Nvblox. Thanks in advance.

    opened by arainbilal 4
Releases: v0.20.0-dp

Owner: NVIDIA Isaac ROS (High-performance computing for robotics)