A C++ based, cross-platform ray tracing library



Getting Visionaray

The Visionaray git repository can be cloned using the following command:

git clone --recursive https://github.com/szellmann/visionaray.git

An existing working copy can be updated using the following commands:

git pull
git submodule sync
git submodule update --init --recursive

Build requirements

  • C++14 compliant compiler, tested with:
    - g++ 7.4.0 on Ubuntu 18.04 x86_64
    - clang 900.0.39.2 on Mac OS X 10.13
    - clang 1200.0.32.28 on macOS 11.0.1 arm64 (M1)
    - Microsoft Visual Studio 2015 (VC14) for x64

  • CMake version 3.1.3 or newer

  • OpenGL

  • GLEW

  • NVIDIA CUDA Toolkit version 7.0 or newer (optional)

  • Libraries need to be installed as developer packages containing C/C++ header files

  • The OpenGL and GLEW dependencies can optionally be relaxed by setting VSNRAY_GRAPHICS_API=None with CMake
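
For example, a headless build that drops the OpenGL/GLEW requirement could be configured like this (a sketch; paths and generator options depend on your setup):

```shell
cd visionaray
mkdir build && cd build
# Release build without any graphics API interop:
cmake .. -DCMAKE_BUILD_TYPE=Release -DVSNRAY_GRAPHICS_API=None
make
```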

In order to compile the viewer and the examples, additional third-party packages are needed or recommended (see the "Third-party libraries" section below).

Building the Visionaray library and viewer

Linux and Mac OS X

It is strongly recommended to build Visionaray in release mode, as the source code relies heavily on function inlining by the compiler, and executables may be extremely slow without that optimization. It is also recommended to supply an architecture flag that corresponds to the CPU architecture you are targeting.

cd visionaray
mkdir build
cd build

cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=native"
make install

The headers, libraries and viewer will then be located in the standard install path of your operating system (usually /usr/local).

See the Getting Started Guide and the Troubleshooting section in the Wiki for further information.

Mini Example

The following example illustrates how easy it is to write a simple ray tracing program with Visionaray.

#include <random>
#include <vector>
#include <visionaray/math/math.h>
#include <visionaray/bvh.h>
#include <visionaray/pinhole_camera.h>
#include <visionaray/scheduler.h>
#include <visionaray/simple_buffer_rt.h>
#include <visionaray/traverse.h>
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb/stb_image_write.h"

using namespace visionaray;

int main() {
    int w = 512, h = 512;

    aabb bbox{{-1.f,-1.f,-1.f},{1.f,1.f,1.f}};
    pinhole_camera cam;
    cam.set_viewport(0, 0, w, h);
    cam.perspective(45.f * constants::pi<float>() / 180.f, w / (float)h, .001f, 1000.f);
    cam.view_all(bbox); // position the camera so the whole scene is visible

    simple_buffer_rt<PF_RGBA8, PF_UNSPECIFIED> renderTarget;
    renderTarget.resize(w, h);

    int numThreads = 8;
    tiled_sched<ray> sched(numThreads);
    auto sparams = make_sched_params(cam, renderTarget);

    std::default_random_engine rand;
    std::uniform_real_distribution<float> c(-1.f, 1.f);
    std::uniform_real_distribution<float> r(.001f, .05f);
    std::vector<basic_sphere<float>> spheres(500);
    for (int i = 0; i < 500; ++i) {
        spheres[i].prim_id = i;
        spheres[i].center = vec3(c(rand), c(rand), c(rand));
        spheres[i].radius = r(rand);
    }

    lbvh_builder builder;
    auto bvh = builder.build(index_bvh<basic_sphere<float>>{}, spheres.data(), spheres.size());
    auto ref = bvh.ref();

    sched.frame([=](ray r) -> vec4 {
        auto hr = closest_hit(r, &ref, &ref + 1);
        if (hr.hit) return vec4(vec3(1.f, .9f, .4f) * spheres[hr.prim_id].radius * 20.f, 1.f);
        else return vec4(0.f, 0.f, 0.f, 1.f);
    }, sparams);

    // Save the rendered frame to a PNG file using stb_image_write.
    stbi_write_png("result.png", w, h, 4, renderTarget.color(), w * 4);
}


Build the program by putting it in a file called mini.cpp and compiling it as follows (this assumes that you have the STB headers (https://github.com/nothings/stb) in your include path):

c++ mini.cpp -std=c++14 -pthread -I/path/to/visionaray/include -o mini

Note how the "shader code" goes into the function call to sched.frame(): the lambda passed there is executed for each camera ray.

Linking with the Visionaray library isn't necessary in this case. Only if you use CUDA or OpenGL features from Visionaray do you have to link with the library. (You still want Visionaray installed so that the compiler can find its autogenerated config header!)

It's easy to adapt this to CUDA by using __device__ lambdas, different scheduler types, and a bit of Thrust for data management. The changes amount to 10-15 extra lines of code. Check out the examples under src/examples to see how this works.

This should generate the following image (subject to minor differences caused by your standard library's random number generator):
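
The variation stems from std::default_random_engine being implementation-defined. As a sketch (make_radii is a hypothetical helper, not part of the example above), pinning a concrete engine such as std::mt19937 makes the engine's sequence reproducible; note that distribution output may still differ slightly between standard library implementations:

```cpp
#include <random>
#include <vector>

// std::default_random_engine maps to different engines on libstdc++, libc++,
// and MSVC's STL, so the sphere layout differs between platforms. A concrete
// engine such as std::mt19937 produces the same raw sequence everywhere.
std::vector<float> make_radii(unsigned seed, int n)
{
    std::mt19937 rng(seed);                                // fixed, portable engine
    std::uniform_real_distribution<float> r(.001f, .05f);  // same range as the example
    std::vector<float> radii(n);
    for (auto& x : radii)
        x = r(rng);
    return radii;
}
```

Seeding deterministically also makes renders repeatable from run to run, which is handy when comparing images before and after a change.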


Visionaray Viewer

Visionaray comes with a viewer that supports a number of different 3D file formats. The viewer is primarily targeted at developers, as a tool for debugging and testing. After being installed, the viewer can be invoked using the following command:

vsnray-viewer <file>

where file is either a path to a wavefront .obj file, a .ply file, or a .pbrt file.


Documentation can be found in the Wiki.

Source Code Organization


Visionaray is a template library, so that most algorithms are implemented in header files located under include/visionaray.

Visionaray can optionally interoperate with graphics and GPGPU APIs. Interoperability with the respective libraries is compiled into the Visionaray library. When GPU interoperability isn't required, you typically don't need to link with Visionaray at all, but can use it as a header-only library.

Files in detail/ subfolders are not part of the public API. Code in namespace detail contains private implementation. Template class implementations go into files ending with .inl, which are included at the bottom of the public interface header file.
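
This header/.inl split can be sketched as follows (my_vec2 and the file names are hypothetical examples, not Visionaray types; the .inl content is inlined here for illustration):

```cpp
// my_vec2.h -- public interface header
template <typename T>
struct my_vec2
{
    T x, y;
    T length2() const; // declared here, defined in detail/my_vec2.inl
};

// #include "detail/my_vec2.inl" would appear at the bottom of my_vec2.h;
// its contents are inlined below:

// detail/my_vec2.inl -- template implementation, not part of the public API
template <typename T>
T my_vec2<T>::length2() const
{
    return x * x + y * y; // squared Euclidean length
}
```

Keeping definitions in a separate .inl file keeps the public header readable while still giving the compiler the full template definition it needs at the point of use.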


Visionaray comes with a viewer (see above) and a set of examples. Those can be found under src/viewer and src/examples, respectively.

Common library

The viewer and the examples link with the Visionaray-common library that provides functionality such as windowing classes or mouse interaction. The Visionaray-common library is not part of the public API and interfaces may change between releases.

  • src/common: private library used by the viewer and example applications

Third-party libraries

The viewer and the examples use the following third-party libraries (the Visionaray library can be built without these dependencies):

  • CmdLine library to handle command line arguments in the viewer and examples. (Archived, TODO: port to CmdLine2.)
  • dear imgui library for GUI elements in the viewer and examples.
  • PBRT-Parser library to load 3D models in pbrt format.
  • RapidJSON library for parsing JSON scene descriptions.
  • tinyply library to load Stanford PLY models.

Revision History

See the file CHANGELOG.md for updates on feature addition and removals, bug fixes and general changes.


If you use Visionaray or some of its code for your scientific project, it would be nice if you cited this paper:

@inproceedings{zellmann2017visionaray,
  author    = {Zellmann, Stefan and Wickeroth, Daniel and Lang, Ulrich},
  title     = {Visionaray: A Cross-Platform Ray Tracing Template Library},
  booktitle = {2017 IEEE 10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)},
  year      = {2017},
  publisher = {IEEE},
  pages     = {1--8},
}


Visionaray is licensed under the MIT License (MIT)

  • Handling Multiple BVHs

    Not really a bug or anything, but I was wondering what the best approach is for handling multiple BVHs. I'd like to continue using the built-in Whitted/path tracing kernels, if that's possible. Sifting through the code, it seems like you could treat each BVH as a whole primitive and write an intersector for it. I'm a little unsure, though, and was hoping to get a suggestion.

    In case it's relevant, the purpose of having multiple BVHs is to have one for static meshes and one for dynamic ones.

    opened by ghost 9
  • Single Header Generation

    It would be really convenient to have this library as a single header file. There are a couple of tools that can generate single header files from a list of sources. If you added a script for this, perhaps in a CI step or just by itself, it would make integrating this library into other projects very easy.

    The opinions about single header distributions are a mixed bag, but there's little doubt that they're very convenient.

    Here's a link to a python one, called Quom: https://github.com/Viatorus/quom

    opened by ghost 9
  • Visionaray: Is it a renderer or a ray tracing library?

    First of all, thanks for your amazing work. I am sad to have missed you at the LGM in London; I did not know about this meeting! I am really interested in your library for an experiment where I just want to launch a bunch of rays at an object and then find out where they exit its bounding box.

    Can Visionaray do this?

    Thanks, Stephane-lb

    opened by stephane-lb 9
  • Building without GLEW

    I'm trying to build the library on a headless server without any Xorg libraries installed. Unfortunately, CMake always complains about missing GLEW, even if I pass -DVSNRAY_ENABLE_VIEWER=OFF to disable the model viewer. As far as I understand it, the model viewer is the only component that uses GLEW, right? Is there something else I can disable to get around the GLEW requirement? I really don't want to install all the unnecessary Xorg libraries on the server just to do some offline rendering.

    Edit: The server is running Ubuntu 18.04.4.

    opened by jangxx 8
  • Loading mesh file into vsnray-viewer failed.

    Hi, Szellmann.

    Visionaray now compiles successfully with NVCC 9.2, but a new problem occurred.

    When I use vsnray-viewer to load a mesh file (an .obj file), it fails:

    Loading model...
    Creating BVH...
    Segmentation fault (core dumped)

    Could you please give some suggestions or clues about this, so that I can figure out the reason myself.

    Thanks again.

    opened by Hao-HUST 6
  • Building NVCC Error

    Thanks for your amazing work.

    But when I compile Visionaray, the build fails at the "Building NVCC" stage with the following errors:

    [ 84%] Linking CXX static library libvisionaray_common.a
    [ 84%] Built target visionaray_common
    [ 86%] Building NVCC (Device) object src/viewer/CMakeFiles/cuda_compile_4.dir/cuda_compile_4_generated_viewer.cu.o
    /root/Project/Massive_Point_Cloud_Rendering/visionaray/include/visionaray/math/aabb.h(28): warning: device annotation on a defaulted function("basic_aabb") is ignored
    /root/Project/Massive_Point_Cloud_Rendering/visionaray/include/visionaray/math/aabb.h(28): warning: host annotation on a defaulted function("basic_aabb") is ignored
    /root/Project/Massive_Point_Cloud_Rendering/visionaray/include/visionaray/math/ray.h(28): warning: device annotation on a defaulted function("basic_ray") is ignored
    /root/Project/Massive_Point_Cloud_Rendering/visionaray/include/visionaray/math/ray.h(28): warning: host annotation on a defaulted function("basic_ray") is ignored
    Segmentation fault
    CMake Error at cuda_compile_4_generated_viewer.cu.o.Release.cmake:282 (message):
      Error generating file /root/Project/Massive_Point_Cloud_Rendering/visionaray/build/src/viewer/CMakeFiles/cuda_compile_4.dir//./cuda_compile_4_generated_viewer.cu.o
    src/viewer/CMakeFiles/viewer.dir/build.make:84: recipe for target 'src/viewer/CMakeFiles/cuda_compile_4.dir/cuda_compile_4_generated_viewer.cu.o' failed
    make[2]: *** [src/viewer/CMakeFiles/cuda_compile_4.dir/cuda_compile_4_generated_viewer.cu.o] Error 1
    CMakeFiles/Makefile2:1061: recipe for target 'src/viewer/CMakeFiles/viewer.dir/all' failed
    make[1]: *** [src/viewer/CMakeFiles/viewer.dir/all] Error 2
    Makefile:140: recipe for target 'all' failed
    make: *** [all] Error 2

    Any idea what might be going on? Thanks, Hao

    opened by Hao-HUST 6
  • Weird aliasing issue

    I'm currently getting started on trying to build a serious project using Visionaray. To familiarize myself with the library, I started with a minimal example, which uses the following simple kernel:

    result_record<scalar_type> result;
    auto hit = intersect(ray, m_testBbox);
    auto hit_pos = ray.ori + hit.tnear * ray.dir;
    if ( any(hit.hit) ) {
    	result.color = color_type(1.0f, 0.0f, 0.0f, 1.0f); // hit color
    } else {
    	result.color = color_type(0.0f, 0.0f, 1.0f, 1.0f);
    }
    result.hit = hit.hit;
    return result;

    where m_testBbox is just a simple 2x2x2 cube. The resulting image looks weirdly aliased, however (see attached image).

    Each of the jaggies is two pixels large, which seems wrong to me, since I expect the scheduler to emit at least one ray per pixel, so the jaggies would only be one pixel in size.

    Is there any way to improve the quality without resorting to SSAA? Maybe increase the number of rays, or decrease the "size" of each ray?

    Thanks for your answer and the great library.

    opened by jangxx 4
  • PR: Changed some conditional preprocessor directives

    I'm now trying to build visionaray as a static library using MSVC. But MSVC does not support syntax like this:

    #if VSNRAY_HAVE_GLEW

    So I changed these cases to:

    #if defined(VSNRAY_HAVE_GLEW)

    It seems that you're preparing to support building with MSVC, so I'm opening a PR.

    Thanks for creating an amazing library.

    opened by wldhg 4
  • CMakeLists.txt: Propagating include directory.

    This just makes it easier when using the library as a subdirectory (via add_subdirectory()) so that you don't have to install it into a temporary directory.

    opened by ghost 2
  • pathtracing.inl: wrong shadow rays for delta lights

    When the lights are sampled in visionaray::pathtracing::kernel, the shadow ray is cast with a max length of ld at: https://github.com/szellmann/visionaray/blob/f557d7d206b0ff3431ba634be9d05619d541ad3f/include/visionaray/detail/pathtracing.inl#L155

    However, for delta lights ld is always set to one (presumably to handle the solid angle computations later) at: https://github.com/szellmann/visionaray/blob/f557d7d206b0ff3431ba634be9d05619d541ad3f/include/visionaray/detail/pathtracing.inl#L140

    Using the proper length for the shadow ray seems to fix the issue, e.g.:

    auto lhr = any_hit(shadow_ray, params.prims.begin, params.prims.end, length(ls.pos - hit_rec.isect_pos) - S(2.0f * params.epsilon), isect);
    opened by jampekka 2
  • Check why float8-fallback unittests fail

    .../visionaray/test/unittests/math/simd/simd.cpp:366: Failure
    Value of: all(saturate(fmin) == numeric_limits<float>::min())
      Actual: false
    Expected: true
    .../visionaray/test/unittests/math/simd/simd.cpp:367: Failure
    Value of: all(saturate(fmax) == 1.0f)
      Actual: false
    Expected: true
    .../visionaray/test/unittests/math/simd/simd.cpp:369: Failure
    Value of: all(saturate(fp) == 1.0f)
      Actual: false
    Expected: true
    .../visionaray/test/unittests/math/simd/simd.cpp:366: Failure
    Value of: all(saturate(fmin) == numeric_limits<float>::min())
      Actual: false
    Expected: true
    .../visionaray/test/unittests/math/simd/simd.cpp:367: Failure
    Value of: all(saturate(fmax) == 1.0f)
      Actual: false
    Expected: true
    .../visionaray/test/unittests/math/simd/simd.cpp:369: Failure
    Value of: all(saturate(fp) == 1.0f)
      Actual: false
    Expected: true
    [  FAILED  ] SIMD.Math (0 ms)
    opened by szellmann 1
  • Exception: bad color resource mapped

    I tried running the viewer with the path tracing algorithm, and an exception was thrown:

    Screenshot from 2021-11-01 08-15-11

    I made a very simple model in Blender to reproduce this with.

    simple model.zip

    I'm using GCC 9 and Nvidia's compiler, on Ubuntu and in release mode.

    opened by tay10r 4
  • Obtaining all hits

    Firstly this project is great, really excellent work!

    I have a problem I am trying to solve, and currently have a solution using POVRay, however, the solution is slow and bulky and I think visionaray could help speed it up but I'm a bit lost on the approach to take.

    Basically, I have a very low-resolution point cloud, made up of spheres, and I need to know which x-y image coordinates each sphere gets projected to, as well as whether the sphere is occluded by other parts of the scene. Is there a way to get the intersection with a sphere, but then allow the ray to pass through that sphere as well as reflect off of it, so I can determine which pixels in the image plane are combinations of which scene points?

    My current POVRay approach is to ray trace each sphere independently of the others and then apply a lot of post-processing and additional simulations to get all the information required. However, even when running on 40 cores, it can take weeks for the 3 million points I have.

    Thanks :)

    opened by system123 4
  • Precalculated normals and generic_primitive

    Consider generic_primitive<prim1_t, prim2_t>, where prim1_t has three precalculated normals per instance, and prim2_t has four precalculated normals per instance. Such a setup will fail with the current interface, where a single list containing normals for all primitives is passed to the built-in kernels' parameter list.

    opened by szellmann 0