NVVL - A library that uses hardware acceleration to load sequences of video frames to facilitate machine learning training

Overview

NVVL is part of DALI!

DALI (NVIDIA Data Loading Library) incorporates NVVL functionality and offers much more, so it is recommended to switch to it. The DALI source code is also open source and available on GitHub. Up-to-date documentation can be found here. The NVVL project will remain available on GitHub, but it is no longer maintained. Please submit all future issues and requests in the DALI repository.

NVVL

NVVL (NVIDIA Video Loader) is a library to load random sequences of video frames from compressed video files to facilitate machine learning training. It uses FFmpeg's libraries to parse and read the compressed packets from video files and the video decoding hardware available on NVIDIA GPUs to off-load and accelerate the decoding of those packets, providing a ready-for-training tensor in GPU device memory. NVVL can additionally perform data augmentation while loading the frames. Frames can be scaled, cropped, and flipped horizontally using the GPU's dedicated texture mapping units. Output can be in RGB or YCbCr color space, normalized to [0, 1] or [0, 255], and in float, half, or uint8 tensors.

Note that, while we hope you find NVVL useful, it is example code from a research project performed by a small group of NVIDIA researchers. We will do our best to answer questions and fix small bugs as they come up, but it is not a supported NVIDIA product and is for the most part provided as-is.

Using compressed video files instead of individual frame image files significantly reduces the demands on the storage and I/O systems during training. Storing video datasets as video files consumes an order of magnitude less disk space, allowing larger datasets to fit both in system RAM and on local SSDs for fast access. During loading, fewer bytes must be read from disk. Fitting on smaller, faster storage and reading fewer bytes at load time alleviates the bottleneck of retrieving data from disk, which will only get worse as GPUs get faster. For the dataset used in our example project, H.264-compressed .mp4 files were nearly 40x smaller than storing frames as .png files.

Using the hardware decoder on NVIDIA GPUs to decode images significantly reduces the demands on the host CPU. This means fewer CPU cores need to be dedicated to data loading during training. This is especially important in servers with a large number of GPUs per CPU, such as the NVIDIA DGX-2, but it also provides benefits on other platforms. When training our example project on an NVIDIA DGX-1, the CPU load when using NVVL was 50-60% of the load seen when using a normal dataloader for .png files.

Measurements that quantify the performance advantages of using NVVL are detailed in our super resolution example project.

Most users will want to use the deep learning framework wrappers provided rather than using the library directly. Currently a wrapper for PyTorch is provided (PRs for other frameworks are welcome). See the PyTorch wrapper README for documentation on using the PyTorch wrapper. Note that it is not required to build or install the C++ library before building the PyTorch wrapper (its setup scripts will do so for you).

Building and Installing

NVVL depends on the following:

  • CUDA Toolkit. We have tested versions 8.0 and later, but earlier versions may work. NVVL will perform better with CUDA 9.0 or later[1].
  • FFmpeg's libavformat, libavcodec, libavfilter, and libavutil. These can be installed from source as in the example Dockerfiles or from the Ubuntu 16.04 packages libavcodec-dev libavfilter-dev libavformat-dev libavutil-dev. Other distributions should have similar packages.
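
On Ubuntu 16.04, for example, those FFmpeg development packages can be installed with the following command (a minimal sketch; adjust the package manager and package names for other distributions):

apt-get install libavcodec-dev libavfilter-dev libavformat-dev libavutil-dev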

Additionally, building from source requires CMake version 3.8 or above, and some of the examples optionally make use of OpenCV libraries if they are installed.

The docker directory contains Dockerfiles that can be used as a starting point for creating an image to build or use the NVVL library. The example project's docker directory contains a Dockerfile that actually builds and installs the NVVL library.

CMake 3.8 and above provides the builtin CUDA language support that NVVL's build system uses. Since CMake 3.8 is relatively new and not yet included in widely used Linux distributions, it may be necessary to install a newer version of CMake. The easiest way to do so is to use the package on PyPI:

pip install cmake

Alternatively, or if pip isn't available, you can install to /usr/local from a binary distribution:

wget https://cmake.org/files/v3.10/cmake-3.10.2-Linux-x86_64.sh
/bin/sh cmake-3.10.2-Linux-x86_64.sh --prefix=/usr/local

See https://cmake.org/download/ for more options.

Building and installing NVVL follows the typical CMake pattern:

mkdir build && cd build
cmake ..
make -j
sudo make install

This will install libnvvl.so and development headers into the appropriate subdirectories under /usr/local. CMake can be passed the following options using cmake .. -DOPTION=Value:

  • CUDA_ARCH - Names of the CUDA architectures to generate device code for, separated by semicolons. Valid options are Kepler, Maxwell, Pascal, and Volta. You can also use specific architecture names such as sm_61. Default is Maxwell;Pascal;Volta.

  • CMAKE_CUDA_FLAGS - A string of arguments to pass to nvcc. In particular, you can decide to link against the static or shared runtime library using -cudart shared or -cudart static. You can also use this for finer control of code generation than CUDA_ARCH, see the nvcc documentation. Default is -cudart shared.

  • WITH_OPENCV - Set this to 1 to build the examples with the optional OpenCV functionality.

  • CMAKE_INSTALL_PREFIX - Install directory. Default is /usr/local.

  • CMAKE_BUILD_TYPE - Debug or Release build.
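
For example, a Release build that generates device code only for Pascal and Volta and enables the optional OpenCV example support could be configured with (the particular option values here are only illustrative):

cmake .. -DCUDA_ARCH="Pascal;Volta" -DWITH_OPENCV=1 -DCMAKE_BUILD_TYPE=Release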

See the CMake documentation for more options.

The examples in doc/examples can be built using the examples target:

make examples

Finally, if Doxygen is installed, API documentation can be built using the doc target:

make doc

This will build html files in doc/html.

Preparing Data

NVVL supports the H.264 and HEVC (H.265) video codecs in any container format that FFmpeg is able to parse. Video codecs only store certain frames, called keyframes or intra-frames, as a complete image in the data stream. All other frames require data from other frames, either before or after them in time, to be decoded. In order to decode a sequence of frames, it is necessary to start decoding at the keyframe before the sequence and continue past the sequence to the next keyframe after it. This isn't a problem when streaming sequentially through a video; however, when decoding small sequences of frames randomly throughout the video, a large gap between keyframes results in reading and decoding a large number of frames that are never used.

Thus, to get good performance when randomly reading short sequences from a video file, it is necessary to encode the file with frequent keyframes. We've found that setting the keyframe interval to the length of the sequences you will be reading provides a good compromise between file size and loading performance. Also note that NVVL's seeking logic doesn't support open GOPs in HEVC streams. To set the keyframe interval to X when using ffmpeg:

  • For libx264 use -g X
  • For libx265 use -x265-params "keyint=X:no-open-gop=1"

The pixel format of the video must also be yuv420p to be supported by the hardware decoder. This is done by passing -pix_fmt yuv420p to ffmpeg. You should also remove any extra audio or video streams from the video file by passing -map v:0 to ffmpeg after the input but before the output.

For example to transcode to H.264:

ffmpeg -i original.mp4 -map v:0 -c:v libx264 -crf 18 -pix_fmt yuv420p -g 5 -profile:v high prepared.mp4
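
An equivalent HEVC (H.265) transcode using the libx265 options listed above might look like this (the keyframe interval and quality settings are only illustrative):

ffmpeg -i original.mp4 -map v:0 -c:v libx265 -crf 18 -x265-params "keyint=5:no-open-gop=1" -pix_fmt yuv420p prepared.mp4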

Basic Usage

This section describes the usage of the base C/C++ library; for usage of the PyTorch wrapper, see the README in the pytorch directory.

The library provides both a C++ and C interface. See the examples in doc/examples for brief example code on how to use the library. extract_frames.cpp demonstrates the C++ interface and extract_frames_c.c the C interface. The API documentation built with make doc is the canonical reference for the API.

The basic flow is to create a VideoLoader object, tell it which frame sequences to read, and then give it buffers in device memory to put the decoded sequences into. In C++, creating a video loader is straightforward:

auto loader = NVVL::VideoLoader{device_id};

You can then tell it which sequences to read via read_sequence:

loader.read_sequence(filename, frame_num, sequence_length);

To receive the frames from the decoder, it is necessary to create a PictureSequence to tell it how and where you want the decoded frames provided. First, create a PictureSequence, providing a count of the number of frames to receive from the decoder. Note that the count here does not need to be the same as the sequence_length provided to read_sequence; you can read a large sequence of frames and receive them as multiple tensors, or read multiple smaller sequences and receive them concatenated as a single tensor.

auto seq = PictureSequence{sequence_count};
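
As an illustrative sketch of that flexibility (the frame counts here are arbitrary), a single 10-frame read could be received as two 5-frame sequences, each of which would later be passed to receive_frames as described below:

loader.read_sequence(filename, frame_num, 10);
auto first_half = PictureSequence{5};   // to be filled with the first 5 decoded frames
auto second_half = PictureSequence{5};  // to be filled with the next 5 decoded frames

(The rest of this walkthrough continues with the single seq created above.)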

You now create "Layers" in the sequence to provide the destination for the frames. Each layer can be a different type, have different processing, and contain different frames from the received sequence. First, create a PictureSequence::Layer of the desired type:

auto pixels = PictureSequence::Layer<float>{};

Next, fill in the pointer to the data and other details. See the documentation in PictureSequence.h for a description of all the available options.

float* data = nullptr;
size_t pitch = 0;
cudaMallocPitch((void**)&data, &pitch,
                crop_width * sizeof(float),
                crop_height * sequence_count * 3);
pixels.data = data;
pixels.desc.count = sequence_count;
pixels.desc.channels = 3;
pixels.desc.width = crop_width;
pixels.desc.height = crop_height;
pixels.desc.scale_width = scale_width;
pixels.desc.scale_height = scale_height;
pixels.desc.horiz_flip = false;
pixels.desc.normalized = true;
pixels.desc.color_space = ColorSpace_RGB;
pixels.desc.stride.x = 1;
pixels.desc.stride.y = pitch / sizeof(float);
pixels.desc.stride.c = pixels.desc.stride.y * crop_height;
pixels.desc.stride.n = pixels.desc.stride.c * 3;

Note that here we have set the strides such that the dimensions are "nchw"; we could have produced "nhwc" or any other dimension order by setting the strides appropriately. Also note that the strides in the layer description are counted in elements, not bytes.
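
For example, an interleaved "nhwc" layout could be described with strides along these lines (a sketch only, assuming a densely packed allocation rather than the pitched one above):

pixels.desc.stride.c = 1;
pixels.desc.stride.x = 3;
pixels.desc.stride.y = crop_width * 3;
pixels.desc.stride.n = pixels.desc.stride.y * crop_height;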

We now add this layer to our PictureSequence, and send it to the loader:

seq.set_layer("pixels", pixels);
loader.receive_frames(seq);

This call to receive_frames will be asynchronous. receive_frames_sync can be used if synchronous reading is desired. When we are ready to use the frames, we can insert a wait event into the CUDA stream we are using for our computation:

seq.wait(stream);

This will insert a wait event into the stream stream, causing any further kernels launched on stream to wait until the data is ready.
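
As a minimal sketch of how this fits into a processing step (consume_frames here is a hypothetical kernel standing in for your own computation, with grid and block chosen by you):

seq.wait(stream);
consume_frames<<<grid, block, 0, stream>>>(pixels.data, sequence_count);  // runs only after the frames are ready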

The C interface follows a very similar pattern, see doc/examples/extract_frames_c.c for an example.

Reference

If you find this library useful in your work, please cite it in your publications using the following BibTeX entry:

@misc{nvvl,
  author = {Jared Casper and Jon Barker and Bryan Catanzaro},
  title = {NVVL: NVIDIA Video Loader},
  year = {2018},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/NVIDIA/nvvl}}
}

Footnotes

[1] Specifically, with NVIDIA kernel modules version 384 and later, which come with CUDA 9.0+, CUDA kernels launched by NVVL will run asynchronously on a separate stream. With earlier kernel modules, all CUDA kernels are launched on the default stream.

Comments
  • Added get_label function to pytorch bindings (nvvl.VideoDataset)

    Added an optional parameter to the pytorch nvvl.VideoDataset that is a callable which obtains the labels for a filename/frame.

    I changed the return for __next__ on VideoLoader to include the labels. I also updated every reference I could find to VideoLoader.next in the other code (it's entirely possible I missed one; I couldn't test the other code because I don't have access to the base docker image in //pytorch/test/docker/Dockerfile).

    I believe that labels are needed in essentially all use cases, so I think it is acceptable that they are always returned.

    I couldn't find any uses of VideoDataset.getitem in the code base.

    Please advise for any changes/tests you want.

    opened by Multihuntr 10
  • Made pytorch VideoLoader's buffer_length an __init__ parameter.

    I had a problem that the default value was using up too much memory (4K frames are big, you know). I figure this should be an init parameter.

    P.S. Sorry that there's a bunch of pull requests from me. There's just stuff I need NVVL to do. Hope that's cool (i.e. let me know if I should just stop)

    opened by Multihuntr 5
  • Fix invalid crop position for flipped frames

    There's a typo/bug when calculating horizontal cropping position.

    Example (only X coordinate is mentioned for simplicity):

    • input frame is 1000px
    • flip frame
    • scale to 500px (fx=2)
    • crop [50:350]

    Assuming dst_x == 280, src_x should be (500 - 50 - 280) * 2 = 340. With the current implementation the result is (300 - 50 - 280) * 2 = -60. As a result, the wrong memory is accessed and the image is not generated properly.

    opened by metopa 1
  • Avoid uninitialized vid_decoder_ in VideoLoader::impl::read_file()

    This simple program crashes.

    #include "VideoLoader.h"
    
    int main() {
        NVVL::VideoLoader(0);
    }
    

    This is because:

    • VideoLoader::~VideoLoader() is called
    • VideoLoader::pimpl::finish() is called and done_ is set to true.
    • VideoLoader::impl::read_file() breaks from its while loop
    • vid_decoder_ is uninitialized because nothing has pushed to send_queue

    This PR fixes the bug by simply checking whether vid_decoder_ is NULL.

    opened by keisukefukuda 1
  • Fix layer names in extract_frames with OpenCV

    doc/examples/extract_frames is broken when built with OpenCV. This PR fixes it.

    • "pixels" -> "data" in get_layer<T>
    • sequence.height -> pixels.desc.height (same for width, stride.y, normalized, and color_space)

    Tested locally against OpenCV 3.1, but the changes are only in NVVL, so it should work with other OpenCV versions.

    opened by keisukefukuda 1
  • Rescale PTS value to nvdecoder time base before submitting to NvDecoder

    Rescale the PTS value to the nvdecoder time base before submitting it to NvDecoder, as the stream time base might differ between clips. Different streams have different time bases; currently NvDecoder is initialized with the time base of the first stream that is decoded. If a subsequent stream's time base doesn't match, the frame number generated from the PTS by nvdecoder will be wrong.

    opened by swagat25 0
  • Make stream non-blocking

    The stream used for decoding is currently not created as non-blocking. This is fine if the client code doesn't run on the default stream, but since e.g. PyTorch uses the default stream for almost everything, the default stream causes implicit synchronization and the work is effectively serialized. This PR changes that by creating the stream with the cudaStreamCreateWithFlags API and the cudaStreamNonBlocking flag, rather than via the cudaStreamCreate API.

    opened by mkolod 0
  • Some small fixes and improvements to the example project

    See the commit messages, but these mostly fix the Dockerfile and some errors in variable names in the tools/ scripts that were introduced in last-minute changes. While I was in there I made a couple of other tweaks as well.

    opened by jaredcasper 0
  • Minimises number of frames sent to decoder by keeping track of which frames are actually needed

    Minimises the number of frames sent to the decoder by keeping track of which frames are actually needed, and halting once all required frames have been sent.

    I won't be offended if this is rejected: just thought I'd present it as a possible modification. I think this should be fine in any case I can think of (and it works for my videos), but maybe there's something I don't know?

    opened by Multihuntr 6