An implementation of [Jimenez et al., 2016] Ground Truth Ambient Occlusion, MIT license

Overview

XeGTAO

Introduction

XeGTAO is an open source, MIT licensed, DirectX/HLSL implementation of Practical Realtime Strategies for Accurate Indirect Occlusion (GTAO) [Jimenez et al., 2016], suitable for use on a wide range of modern PC integrated and discrete GPUs. The main benefit of GTAO over other screen space algorithms is that it uses a radiometrically-correct ambient occlusion equation, providing a more physically correct AO term.

We have implemented and tested the core algorithm that computes and spatially filters the ambient occlusion integral. Implementing the Directional GTAO component (bent normals) is the next planned step.

Our implementation relies on an integrated spatial denoising filter and will leverage TAA for temporal accumulation when available. When used in conjunction with TAA it is faster, provides a more detailed effect on fine geometry features, and is more radiometrically correct than other common public SSAO implementations such as the closed-source [HBAO+] and the open-source [ASSAO].

The high quality preset, computed at full resolution, costs roughly 1.4ms at 3840x2160 on an RTX 3070, 0.56ms at 1920x1080 on an RTX 2060, and 2.39ms at 1920x1080 on 11th Gen Intel(R) Core(TM) i7-1195G7 integrated graphics. A faster but lower quality preset is also available.

This sample project (Vanilla.sln) was tested with Visual Studio 2019 16.10.3, a DirectX 12 capable GPU, and Windows version 10.0.19041.

XeGTAO OFF/ON comparison in Amazon Lumberyard Bistro; click on image to embiggen:
thumb1 thumb2 thumb3 thumb4

AO term only, left: ASSAO Medium (~0.72ms), right: XeGTAO High (~0.56ms), as measured on RTX 2060 at 1920x1080: ASSAO vs GTAO

Implementation and integration overview

We focus on simplicity and ease of integration, with all relevant code provided in a two-file, header-only-like format:

  • XeGTAO.h provides the glue between the user codebase and the effect; this is where macro definitions, settings, constant buffer updates and optional ImGui debug settings are handled.
  • XeGTAO.hlsli provides the core shader code for the effect.

These two files contain the minimum required to integrate the effect, with the amount of work depending on the specific platform and engine details. In an ideal case, the user codebase can include these two files with little or no modification, and provide the additional resources (working textures, a constant buffer, compute shaders) as well as codebase-specific shader code used to load screen space normals, etc., as shown in the usage example (see vaGTAO.h, vaGTAO.cpp, vaGTAO.hlsl).

The effect is usually computed just after the depth data becomes available (after the depth pre-pass or a g-buffer draw). It takes the depth buffer and (optional) screen space normals as inputs, produces a single channel AO buffer as the output, and consists of three separate compute shader passes:

  • PrefilterDepths pass: inputs the depth buffer; converts input depths to viewspace and generates a depth MIP chain; outputs an intermediary viewspace depth buffer with a MIP chain
  • MainPass: inputs the intermediary depth buffer and (optional) screen space normals; performs the core GTAO algorithm; outputs the unfiltered AO term and intermediary edge information used by the denoiser
  • Denoise: inputs the unfiltered AO term and intermediary edge information; applies the spatial denoise filter; outputs the final AO term

Implementation details

Following is a list of implementation details and differences from the original GTAO paper [Jimenez et al., 2016]:

Automatic heuristic tuning based on ray-traced reference

In order to best reproduce the work from the original paper and tune the heuristics, we built a simple ray tracer within the development codebase that can render an AO ground truth. The ray tracer uses cosine-weighted hemisphere Monte Carlo sampling to approximate Lambertian reflectance for a hemisphere defined by a given point and geometry normal, with the visibility ray length bounded by the near-field radius.
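The cosine-weighted hemisphere sampling above can be sketched as follows (Malley's method: uniformly sample a unit disk, then project up onto the hemisphere); this is an illustrative reimplementation, not the reference ray tracer's actual code:

```cpp
#include <cmath>
#include <array>
#include <algorithm>

// Illustrative cosine-weighted hemisphere sample around +Z via Malley's
// method. u1, u2 are uniform random numbers in [0,1); the resulting
// direction has pdf cos(theta)/PI, matching the Lambertian cosine term.
std::array<float, 3> CosineSampleHemisphere(float u1, float u2)
{
    const float PI  = 3.14159265358979323846f;
    const float r   = std::sqrt(u1);            // uniform disk radius
    const float phi = 2.0f * PI * u2;           // uniform disk angle
    const float x   = r * std::cos(phi);
    const float y   = r * std::sin(phi);
    // project up to the hemisphere; unit length by construction
    const float z   = std::sqrt(std::max(0.0f, 1.0f - u1));
    return { x, y, z };
}
```

In a ground-truth renderer, rays generated this way would then be traced with their length clamped to the near-field radius, and the hit/miss average gives the AO term.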

reference raytracer
left: Reference diffuse-only raytracer, 512spp; right: XeGTAO High preset

Using the ray-traced output as a ground truth, we then tune the XeGTAO heuristics for the best match across several scenes and locations and different near-field bound radii settings. We rely in large part on an automatic system informally called auto-tune, where selected settings (such as the thickness heuristic, radius multiplier, falloff range, etc.) can be automatically tuned together. Given min/max ranges for each setting, the auto-tune runs through all permutations across pre-defined scene locations, searching for the lowest overall average MSE between the XeGTAO and ray-traced ground truth outputs. For practical reasons we employ a multi-pass search that progressively narrows the setting ranges.

Denoising

The original GTAO implementation is described as using spatio-temporal sampling with temporal reprojection; our current approach uses only a 5x5 depth-aware spatial denoising filter and relies on TAA for the temporal component, when available. This is a compromise that allows the effect to still be used by codebases that do not employ TAA. When TAA is available, we indirectly leverage it by enabling temporal noise. The downside is that we must keep temporal variance low enough to avoid having TAA mischaracterize this noise as features, which limits the amount of temporal supersampling that we can leverage. Depending on user feedback and future experimentation we are likely to go with the combined spatio-temporal approach in the future.
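A minimal sketch of the depth-aware weighting idea behind such a spatial filter (XeGTAO's actual denoiser additionally uses the edge information produced by the main pass; the function and parameter names here are illustrative):

```cpp
#include <cmath>
#include <algorithm>

// Illustrative depth-aware weight for a spatial AO filter: a neighboring
// sample contributes fully only when its viewspace depth is close, relative
// to the center pixel's depth. Not XeGTAO's exact kernel, which instead
// uses edge information precomputed by the main pass.
float DepthAwareWeight(float centerDepth, float sampleDepth, float sigma)
{
    const float relDiff = std::fabs(sampleDepth - centerDepth)
                          / std::max(centerDepth, 1e-6f);
    // Gaussian falloff on the relative depth difference
    return std::exp(-(relDiff * relDiff) / (2.0f * sigma * sigma));
}
```

Summing `weight * AO` over the 5x5 neighborhood and normalizing by the weight sum gives the denoised term while preserving depth discontinuities.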

denoising
left: raw 3 slices 6 samples per pixel (18spp) XeGTAO output; middle: +5x5 spatial denoiser; right: +TAA and temporal noise

Resolution and sampling

The original GTAO implementation runs at half resolution with one slice per pixel and 12 samples per hemisphere slice (6 per side; the hemisphere slice term is defined in the original paper). We default to running at full resolution with 3 slices per pixel and 6 samples (3 per side) each, for a total of 18spp, and also provide a lower quality preset with 2 slices per pixel and 4 samples (2 per side) each, for a total of 8spp. The reasoning behind the reduction in samples per slice is described in the 'Thin occluder conundrum' section. This balance is likely to change if we move to a combined spatio-temporal approach in the future.

Sample distribution

In order to better capture thin crevices and similar small features, we use an x = pow( x, 2 ) distribution for samples along the slice direction, where x is the screen space distance from the evaluated pixel's center, normalized to the maximum distance (representing the worldspace 'Effect radius'). This is another setting where we used auto-tune to find the optimal value, which was around 2.1; we rounded it down to 2 for simplicity and performance reasons.
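The distribution can be written out as below, with x the normalized sample position in [0,1] (the function and parameter names are ours, for illustration):

```cpp
#include <cmath>

// Maps a uniformly spaced normalized sample position x in [0,1] along the
// slice direction to a screen space offset; squaring clumps samples near
// the evaluated pixel's center, better capturing thin crevices, while x = 1
// still reaches the full screen space radius of the worldspace effect radius.
float SampleScreenOffset(float x, float maxScreenRadius)
{
    return std::pow(x, 2.0f) * maxScreenRadius;
}
```

For example, the halfway sample lands at a quarter of the radius rather than half, doubling the sample density near the center.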

denoising
different sample power distribution settings; left: setting of 1.0; right: setting of 2.0, clumping more samples around the center

Near-field bounding

Like [Jimenez et al., 2016], we attenuate the effect of distant samples based on a near-field occlusion radius setting ('Effect radius') over a range ('Falloff range'). This provides stable and predictable results and is easier to use in conjunction with longer-range, lower-frequency GI. Unlike the original, we do not linearly interpolate the sample horizon angle cosine towards -1 but towards the hemisphere horizon, computed as cos(normal_angle+PI/2) in one direction and cos(normal_angle-PI/2) in the other.
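The two fallback horizon cosines can be written out as below (illustrative scalar code; normalAngle is the angle of the projected normal within the slice plane):

```cpp
#include <cmath>

// Hemisphere horizon cosines used as interpolation targets for attenuated
// samples, one per side of the slice; note cos(a + PI/2) == -sin(a) and
// cos(a - PI/2) == sin(a), so both fall out of the projected normal angle.
void HemisphereHorizonCosines(float normalAngle, float& cos0, float& cos1)
{
    const float PI = 3.14159265358979323846f;
    cos0 = std::cos(normalAngle + PI * 0.5f);
    cos1 = std::cos(normalAngle - PI * 0.5f);
}
```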

reference raytracer
out of bounds sample interpolation, left: towards -1; middle: ray traced ground truth; right: ours, towards the hemisphere horizon, resulting in less detail loss, noticeable around the window and curtain areas

This makes the attenuation function independent of the projected normal vector, avoiding haloing or loss of detail under certain view angles and providing results that are on average closer to the ground truth.

Thin occluder conundrum

The main difficulty of approximating AO from the depth buffer is that the depth buffer is effectively a viewspace heightmap and does not correctly represent the actual scene geometry. This leads to visual artifacts such as thin features at depth discontinuities casting too much occlusion (please see 'Height-field assumption considerations' in [Jimenez et al., 2016]) or haloing effects. The larger the near-field bounding radius setting, the worse the mismatch usually is. Conversely, with a radius that is small in proportion to geometry features, and using various heuristics to minimize the side-effects, a reasonably good approximation can be achieved. There are other approaches to improving the quality of the source geometry representation (such as Multi-view AO [Vardis et al. 2013], or the more recent Stochastic-Depth Ambient Occlusion [Vermeer et al., 2021]) which we did not pursue due to the complexity of the rendering pipeline changes they require, but which could certainly be adopted for use with XeGTAO.

The original paper describes a conservative thickness heuristic derived from the assumption that the thickness of an object is similar to its size in screen space; the end result is that "a single sample that is behind the horizon will not significantly decrease the computed horizon, but many of them (in e.g. a thin feature) will considerably attenuate it". In our experimentation we found that increasing the number of slices while undersampling the horizon search (using a lower number of samples per slice) achieves a very similar result with the same overall number of samples. This removes the need for the somewhat computationally expensive heuristic.

We also experimented with a different heuristic that biases the near-field bounding falloff along the view vector, in effect reducing the impact of samples that are in front of the evaluated pixel's depth (closer to the camera plane). This provided results closer to the ground truth than the heuristic from the original paper, and is now exposed through the 'Thin occluder compensation' setting. With 6 (3+3) samples per slice, the (auto-tuned) optimum setting value yields a relatively small improvement, so we disabled it by default for performance reasons. It can easily be enabled if higher quality is required.

reference raytracer
left: default 'Thin occluder compensation' of 0; middle: ray traced ground truth; right: 'Thin occluder compensation' of 0.7

The image above demonstrates two opposing scenarios: in the top row, even the default settings (left column) over-compensate for the thin occluder issue because the shelves are very deep, and increasing the thin occluder compensation setting (right column) only deviates further from the ground truth (middle column). In contrast, in the bottom row the pipe and chair legs are very thin, and a high occluder compensation setting (right column) matches the ground truth more closely.

Sampling noise

As with any technique based on Monte Carlo integration, a good sampling method can significantly reduce the number of samples needed for the same quality. The original GTAO paper describes a 4x4 tileable spatial noise with 6 different temporal rotations.

For stratified sampling we map screen coordinates to a Hilbert curve index and use it to drive Martin Roberts' R2 quasi-random sequence. This was inspired by an excellent shadertoy example by user 'paniq'; the only difference in our code is that we use an R2 sequence instead of R1, since we need two low-discrepancy values for choosing the slice angle and step offset. We use a 6-level Hilbert curve, providing a 64x64 repeating tile, and for the temporal component we add an offset of 288*(frameIndex%64), found empirically.
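A sketch of the two building blocks follows; the Hilbert index function mirrors the standard xy-to-index algorithm (the shape of the code in XeGTAO.h), and the R2 constants are the generalized golden ratio values from Roberts' article:

```cpp
#include <cstdint>
#include <cmath>

// Hilbert curve index within a 64x64 (6-level) tile -- the standard
// xy-to-index algorithm; each pixel gets a unique index in [0, 4096).
uint32_t HilbertIndex(uint32_t x, uint32_t y)
{
    const uint32_t width = 64;
    uint32_t index = 0;
    for (uint32_t level = width / 2; level > 0; level /= 2)
    {
        const uint32_t rx = (x & level) ? 1u : 0u;
        const uint32_t ry = (y & level) ? 1u : 0u;
        index += level * level * ((3u * rx) ^ ry);
        if (ry == 0)    // rotate/flip the quadrant
        {
            if (rx == 1) { x = width - 1 - x; y = width - 1 - y; }
            const uint32_t t = x; x = y; y = t;
        }
    }
    return index;
}

// Martin Roberts' R2 low-discrepancy sequence: two values in [0,1) per
// index, here driving the slice angle and the step offset.
void R2Sequence(uint32_t n, float& v0, float& v1)
{
    v0 = (float)std::fmod(0.5 + 0.75487766624669276 * n, 1.0);
    v1 = (float)std::fmod(0.5 + 0.56984029099805327 * n, 1.0);
}
```

Per pixel, `R2Sequence(HilbertIndex(x % 64, y % 64) + temporalOffset, …)` then yields well-distributed noise both spatially and across frames.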

Before we settled on the Hilbert curve + R2 sequence, we used a 2-channel 64x64 tileable blue noise from Christoph Peters's blog to drive slice rotations and individual sample noise offsets. This worked well for spatial-only noise, but adding temporal offsets/rotations caused overlaps which would often show as temporal artifacts. We then switched to a 3D noise (from the sequel blog post), which worked well with TAA but was fairly large in size and did not work well when using spatial-only filtering (to quote the blog, "Good 3D noise is a bad 2D blue noise").

Since computing the Hilbert curve index in the compute shader adds measurable cost (~7%), we optionally precompute it into a lookup texture, which reduces this overhead. C++/HLSL code to compute the Hilbert index is available in XeGTAO.h, and the user can choose between the (simpler) GPU arithmetic and (usually faster) LUT-based codepaths.

reference raytracer
5x5 spatial with 8 frame temporal filter, left: using hash-based pseudo-random noise; right: using Hilbert curve index driving R2 sequence

Memory bandwidth bottleneck

Most screen space effects are performance-limited by the available memory bandwidth and texture caching efficiency, and XeGTAO is no different.

The approach we rely on is a technique presented in Scalable Ambient Obscurance [McGuire et al, 2012] that pre-filters the depth buffer into a MIP hierarchy, allowing locations more distant from the evaluated pixel's center to be sampled using a lower detail MIP level. We follow the same approach as the paper, except for the choice of the depth MIP filter, for which we use a weighted average of samples, with each weight determined by whether the sample's depth difference from the most distant sample is within a predefined threshold (please refer to DepthMIPFilter in the code for details). Using the most distant sample introduces a natural thin occluder bias and is more stable under motion compared to the rotated grid subsampling from the SAO paper, while averaging yields the least precision error on most slopes.

reference raytracer
left: color-coded sample MIP levels; middle: example of detail loss with a too low 'Depth MIP sampling offset' of 2.0; right: depth MIP mapping disabled

The 'Depth MIP sampling offset' user setting controls the MIP level selection (mipLevel = max( 0, log2( sampleOffsetLength ) - DepthMIPSamplingOffset )). The lower the value, the lower detail the MIPs used, reducing memory bandwidth but also quality. It defaults to 3.15, which is the point below which there is no measurable performance increase on any of the tested hardware.
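The selection rule written out as scalar code (parameter naming is ours):

```cpp
#include <cmath>
#include <algorithm>

// MIP level selection for depth sampling; sampleOffsetLength is the sample's
// screen space distance in pixels from the evaluated pixel's center. With
// the default offset of 3.15, full-detail depth is used out to ~8.9 pixels.
float SelectDepthMIP(float sampleOffsetLength, float depthMIPSamplingOffset)
{
    return std::max(0.0f, std::log2(sampleOffsetLength) - depthMIPSamplingOffset);
}
```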

Another popular solution to this problem is presented in Deinterleaved Texturing for Cache-Efficient Interleaved Sampling [Bavoil, 2014] and involves a divide and conquer technique where the working dataset is subdivided into smaller parts that are processed in sequence, ensuring much better utilization of memory cache structures. The downside is that, by definition, the processing of one dataset part can only sample data from that part, which constrains the sampling pattern. This was a significant issue for GTAO with its specific sampling pattern (samples lie on a straight line, etc.), where constraining samples to a subset of the data significantly limited flexibility and consistency and amplified precision issues.

Misc

  • We added a global 'Final power' heuristic that modifies the visibility with a power function. While this has no basis in physical light transfer, we found that auto-tune can use it to achieve better ground truth match in combination with all other settings.
  • In order to minimize bandwidth use we rely on a 16-bit floating point buffer to store viewspace depth. This causes some minor precision issues but yields better performance on most hardware. It is, however, not compatible with the built-in screen space normal map generator.
  • It is always advisable to provide screen space normals to the effect, but in case that is not possible we provide a built-in depth to normal map generator.
  • We have enabled fp16 (half float) precision shader math in most places where the loss of precision was acceptable; this provides a 5-20% performance boost on the various hardware we have tested, but is entirely optional.
  • In the Bistro scene lighting we use the AO term to attenuate diffuse and specular irradiance from light probes using the multi-bounce diffuse and GTSO approaches detailed in the original GTAO work. We also slightly attenuate direct lighting using the micro-shadowing approximation from Material Advances in Call of Duty: WWII [Chan 2018] / SIGGRAPH 2016: Technical Art of Uncharted. Our current renderer's AO term usage is ad-hoc and has not been matched to ground truth, and is not meant as a reference.

ASSAO vs GTAO
left: ASSAO Medium (~0.72ms*), right: XeGTAO High (~0.56ms*) (*as measured on RTX 2060 at 1920x1080)

FAQ

  • Q: It is still too slow for our target platform, what are our options?
  • A: The "Medium" quality preset is roughly 2/3 of the cost of the "High" preset (for ex., 1.5ms vs 2.2ms at 1920x1080, GTX 1050), while the "Low" quality preset is roughly 2/3 of the cost of the "Medium" preset. For anything faster we advise further reducing sliceCount (in the call XeGTAO_MainPass) at the expense of more noise, or using lower resolution rendering (half by half or checkerboard) and upgrading the denoiser pass with a bilateral upsample.
  • Q: Why is there support for both half (fp16) and single (fp32) precision shader paths?
  • A: While the quality loss on the fp16 path is minimal, we found that some GPUs can suffer from unexpected performance regression on it, sometimes depending on the driver version. For that reason, while enabled by default, we leave it as an optional switch.
  • Q: Any plans for a Vulkan port?
  • A: Upgrades to other platforms/APIs will be added based on interest. Please feel free to submit an issue with a request.

Version log

1.0 - 2021-07-13

  • Initial release

Authors

XeGTAO was created by Filip Strugar and Steve Mccalla, feel free to send any feedback directly to [email protected] and [email protected].

Credits

Many thanks to Jorge Jimenez, Xian-Chun Wu, Angelo Pesce and Adrian Jarabo, authors of the original paper. This implementation would not be possible without their seminal work.

Thanks to Trapper McFerron for implementing the DoF effect and other things, Lukasz Migas for his excellent TAA implementation, Andrew Helmer (https://andrewhelmer.com/) for help with the Owen-Scrambled Sobol noise sequences, Adam Lake and David Bookout for reviews, bug reports and valuable suggestions!

Many thanks to: Amazon and Nvidia for providing the Amazon Lumberyard Bistro dataset through the Open Research Content Archive (ORCA): https://developer.nvidia.com/orca/amazon-lumberyard-bistro; author of the spaceship model available on Sketchfab; Khronos Group for providing the Flight Helmet model and other reference GLTF models.

Many thanks to the developers of the following open-source libraries or projects that make the Vanilla sample framework possible:

References

License

The sample and its code are provided under the MIT license; please see LICENSE. All third-party source code is provided under its own respective, MIT-compatible Open Source license.

Copyright (C) 2021, Intel Corporation
