Easy to integrate memory allocation library for Direct3D 12

Overview

D3D12 Memory Allocator

Documentation: Browse online: D3D12 Memory Allocator (generated from Doxygen-style comments in src/D3D12MemAlloc.h)

License: MIT. See LICENSE.txt

Changelog: See CHANGELOG.md

Product page: D3D12 Memory Allocator on GPUOpen


Problem

Memory allocation and resource (buffer and texture) creation in the new, explicit graphics APIs (Vulkan® and Direct3D 12) is more difficult compared with older graphics APIs like Direct3D 11 or OpenGL®, because it is recommended to allocate larger blocks of memory and assign parts of them to resources. Vulkan Memory Allocator is a library that implements this functionality for Vulkan. It has been available online since 2017 and is successfully used in many software projects, including some from AAA game studios. This is an equivalent library for D3D12.

Features

This library helps developers manage memory allocations and resource creation by offering the function Allocator::CreateResource, similar to the standard ID3D12Device::CreateCommittedResource. Internally, it:

  • Allocates and keeps track of larger memory heaps, tracks used and unused ranges inside them, and finds the best-matching unused ranges in which to create new resources as placed resources.
  • Automatically respects size and alignment requirements for created resources.
  • Automatically handles the resource heap tier - whether it's D3D12_RESOURCE_HEAP_TIER_1, which requires keeping certain classes of resources separate, or D3D12_RESOURCE_HEAP_TIER_2, which allows keeping them all together.

Additional features:

  • Support for resource aliasing (overlap).
  • Virtual allocator - the ability to use the core allocation algorithm without real GPU memory, to allocate your own data, e.g. to sub-allocate pieces of one large buffer.
  • Well-documented - description of all classes and functions provided, along with chapters that contain general description and example code.
  • Thread safety: The library is designed to be used in multithreaded code.
  • Configuration: Fill optional members of ALLOCATOR_DESC structure to provide custom CPU memory allocator and other parameters.
  • Customization: Predefine appropriate macros to provide your own implementation of external facilities used by the library, like assert, mutex, and atomic.
  • Statistics: Obtain detailed statistics about the amount of memory used, unused, number of allocated blocks, number of allocations etc. - globally and per memory heap type.
  • Debug annotations: Associate string name with every allocation.
  • JSON dump: Obtain a string in JSON format with detailed map of internal state, including list of allocations and gaps between them.

Prerequisites

  • Self-contained C++ library in a single pair of H + CPP files. No external dependencies other than the standard C and C++ libraries and the Windows SDK. Some C++14 features are used. STL containers, C++ exceptions, and RTTI are not used.
  • Object-oriented interface in a convention similar to D3D12.
  • Error handling implemented by returning HRESULT error codes - same way as in D3D12.
  • Interface documented using Doxygen-style comments.

Example

Basic usage of this library is very simple. Advanced features are optional. After you create the global Allocator object, the complete code needed to create a texture may look like this:

D3D12_RESOURCE_DESC resourceDesc = {};
resourceDesc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
resourceDesc.Alignment = 0;
resourceDesc.Width = 1024;
resourceDesc.Height = 1024;
resourceDesc.DepthOrArraySize = 1;
resourceDesc.MipLevels = 1;
resourceDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
resourceDesc.SampleDesc.Count = 1;
resourceDesc.SampleDesc.Quality = 0;
resourceDesc.Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN;
resourceDesc.Flags = D3D12_RESOURCE_FLAG_NONE;

D3D12MA::ALLOCATION_DESC allocationDesc = {};
allocationDesc.HeapType = D3D12_HEAP_TYPE_DEFAULT;

ID3D12Resource* resource;
D3D12MA::Allocation* allocation;
HRESULT hr = allocator->CreateResource(
    &allocationDesc,
    &resourceDesc,
    D3D12_RESOURCE_STATE_COPY_DEST,
    NULL,
    &allocation,
    IID_PPV_ARGS(&resource));

With this one function call:

  1. An ID3D12Heap memory block is allocated if needed.
  2. An unused region of the memory block is assigned.
  3. An ID3D12Resource is created as a placed resource, bound to this region.

Allocation is an object that represents memory assigned to this texture. It can be queried for parameters like offset and size.

Binaries

The release comes with a precompiled binary executable for the "D3D12Sample" application, which contains the test suite. It is compiled using Visual Studio 2019, so it requires the appropriate runtime libraries, including "MSVCP140.dll", "VCRUNTIME140.dll", and "VCRUNTIME140_1.dll". If launching it fails with an error message about those files missing, please download and install the Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019, "x64" version.

Copyright notice

This software package uses third-party software. For more information see NOTICES.txt.

Software using this library

  • The Forge - cross-platform rendering framework. Apache License 2.0.

It is also used by some other projects on GitHub and by some game development studios that use DX12 in their games.

Comments
  • Sporadic issues when copying between 3D textures with D3D12MA + WARP device

    Hi there! 👋 I've stumbled upon a weird issue while writing unit tests for my library ComputeSharp, which uses a 1:1 C# port of D3D12MA, and I'm investigating it to try to narrow down the root cause. I thought I'd also open an issue here for tracking as it seems to be related to D3D12MA (I've verified I cannot repro it if I just allocate my resources with ID3D12Device::CreateCommittedResource) in some way.

    I don't yet have a full minimal repro, but here's the general idea:

    • This issue only happens when using a WARP device.
    • This issue also only happens when using D3D12MA and a custom pool (here) for UMA devices.
    • This issue only happens when working with 3D textures and copying data between them.
    • I can only reproduce the issue when the test in question is running after another one with different parameters. If I run it on its own, it passes fine. This makes me think it might be somewhat related to how D3D12MA is reusing allocations.
    • I can repro this on 2 different machines with Windows 11. I've had another dev run the tests on his Windows 10 machine and they all passed. Another dev on another Windows 10 machine instead has even more tests failing for whatever reason.

    The repro steps I have so far:

    • Create a WARP device with D3D_FEATURE_LEVEL_11_0
    • Create a copy command queue, fence, etc. (the usual setup)
    • Create a D3D12MA allocator, and a pool with these parameters:
    D3D12MA_POOL_DESC poolDesc = default;
    poolDesc.HeapProperties.CreationNodeMask = 1;
    poolDesc.HeapProperties.VisibleNodeMask = 1;
    poolDesc.HeapProperties.Type = D3D12_HEAP_TYPE_CUSTOM;
    poolDesc.HeapProperties.CPUPageProperty = D3D12_CPU_PAGE_PROPERTY_WRITE_BACK;
    poolDesc.HeapProperties.MemoryPoolPreference = D3D12_MEMORY_POOL_L0;
    
    • Create a 3D texture with DXGI_FORMAT_R32_FLOAT format, D3D12_RESOURCE_FLAG_NONE flags, D3D12_RESOURCE_STATE_COMMON state, D3D12_HEAP_TYPE_DEFAULT heap, D3D12MA_ALLOCATION_FLAG_NONE D3D12MA flags, with the UMA pool.
    • Fill this texture with sample data, by creating an upload buffer with the allocator (D3D12_HEAP_TYPE_UPLOAD, D3D12_RESOURCE_FLAG_NONE, D3D12_RESOURCE_STATE_GENERIC_READ), using the right byte size depending on the copyable footprint retrieved from the previously allocated 3D texture. Map this resource, write the sample data to it, then create a copy command list and use ID3D12GraphicsCommandList::CopyTextureRegion to copy this data to the actual texture.
    • Create another 3D texture, leave this uninitialized (ie. use D3D12MA_ALLOCATION_FLAG_COMMITTED to force zeroing).
    • Copy a given volume from the first 3D texture to this new texture (ID3D12GraphicsCommandList::CopyTextureRegion).
    • Copy the entire second 3D texture back to be able to verify its contents after the previous copy. That is, allocate a readback buffer just like the upload buffer before, call ID3D12GraphicsCommandList::CopyTextureRegion to it, then map it and copy its contents somewhere you can easily read from (or alternatively just map the buffer and then read directly from there).

    Now, running these 2 tests one after the other causes the second one to fail:

    1. First test (passing ✅):
      • Source texture size: (512, 512, 3)
      • Destination texture size: (512, 512, 3)
      • Source copy offsets: (0, 0, 0)
      • Destination copy offsets: (0, 0, 1)
      • Copy volume size: (512, 512, 2)
    2. Second test (failing ❌):
      • Source texture size: (512, 512, 3)
      • Destination texture size: (512, 512, 4)
      • Source copy offsets: (0, 0, 1)
      • Destination copy offsets: (0, 0, 2)
      • Copy volume size: (512, 512, 2)

    By "test failing", I mean that when verifying the data read back from the second texture (the destination one), I get this:

    • The depth level 0 is correctly all zeroed ✅
    • The depth level 1 is not zeroed. It seems to still have the data from the previous test run, where items were copied to destination depth level 1 in the second texture. This time the destination depth level is 2 instead, so depth level 1 should all be 0. We're explicitly passing D3D12MA_ALLOCATION_FLAG_COMMITTED for the destination texture, so it should always be a new allocation with no previous data in it. ❌
    • Depth levels 2 and 3 are correctly holding the data copied from the source texture ✅

    Just a random thought, but if D3D12MA is in fact accidentally not using a committed allocation here and reusing a previous one, it might explain why the behavior is so inconsistent across different OS versions and machines, as the exact way previous allocations are reused and cleared by the OS/driver is undefined? Anyway, this is as detailed as I could be; let me know if there's anything else I can do! I can also try to actually come up with a minimal repro if it helps (though that'd be in C#).

    Thanks! 😄

    bug input needed 
    opened by Sergio0694 10
  • Add support for custom heaps (explicit D3D12_HEAP_PROPERTIES)

    Overview

    Let me start by saying that D3D12MA is a great project and it's super easy to integrate, which is awesome 😄

    There's one big limitation currently in the way D3D12MA can be used, which is that there is no support for custom heaps. Even when using Allocator::CreatePool to create a custom pool, the POOL_DESC.HeapType documentation specifically says not to use the CUSTOM heap type, and there is also no way to pass other heap properties for the pool. In general, it would be necessary for POOL_DESC to allow callers to pass a D3D12_HEAP_PROPERTIES value directly (eg. replacing that single HeapType field), so that consumers would have full control over the target heap used for allocations when using that pool.

    This would enable a number of scenarios not possible today when using D3D12MA, such as:

    • Properly supporting UMA architectures (with a custom heap that's CPU visible)
    • Using a custom DEFAULT heap that's also CPU visible (using POOL_L0 and PAGE_ACCESS_WRITE_BACK)
    • Using a custom READBACK heap for transfer buffers that can be both read to and written to by the GPU
    • Etc.

    Proposed solution

    The simplest solution I can think of would be to just change this field in POOL_DESC:

     struct POOL_DESC
     {
    -    D3D12_HEAP_TYPE HeapType;
    +    D3D12_HEAP_PROPERTIES HeapProperties;
     };
    

    That HeapProperties would then be passed internally just like HeapType is today, and then MemoryBlock::Init would use that HeapProperties value to set up the D3D12_HEAP_DESC value before calling ID3D12Device::CreateHeap, instead of just setting the heap type as it does today. As in, minor changes would be needed here:

    https://github.com/GPUOpen-LibrariesAndSDKs/D3D12MemoryAllocator/blob/e56c26d8105a679f22860b865f4c074d19006e88/src/D3D12MemAlloc.cpp#L3425-L3435

    This would give consumers of the library much greater flexibility, and it'd make the ability to create custom allocation pools much more worthwhile, as there could be way more customization done on the allocation parameters used by them.

    feature 
    opened by Sergio0694 10
  • Questions about Defragmentation Thread Safety

    Hello Adam! A small question regarding defragmentation.

    In the documentation it is written:

    What it means in practice is that you shouldn't free any allocations from the defragmented pool since the moment a call to BeginPass begins.

    One solution that you give is:

    A solution to freeing allocations during defragmentation is to find such allocation on the list pass.pMoves[i] and set its operation to D3D12MA::DEFRAGMENTATION_MOVE_OPERATION_DESTROY instead of calling allocation->Release()

    However, it is only possible to set the operation after obtaining the pass moves (after the call returns). Do I need to keep track of allocations that want to be released during computation time, or won't it be an issue if releasing occurs? Could it be simpler to just add a reference to allocations that end up being part of a move and remove the reference at the end of the pass?

    Hope those questions are not stupid :) Have a nice day!

    question 
    opened by rbertin-aso 7
  • ComPtr support

    Is it possible to let the D3D12MA::Allocator inherit from IUnknown and provide implementations for AddRef (this won't conflict with the current use of the member method, AddRef), Release, and QueryInterface, to be used with Microsoft's intrusive ComPtr? This would be consistent with the D3D12 types themselves and with third-party tools such as DirectXShaderCompiler, which also mimic this behavior.

    opened by matt77hias 7
  • Hey

    With the memory allocator there are some issues from the GPU-based validator.

    There is a UAV which runs fine on the GPU, however I get warnings in the validator.

    They do not happen with the CPU validator. If I use only committed memory, there are no such errors. I am looking into where this may come from.

    D3D12 ERROR: GPU-BASED VALIDATION: Dispatch, Incompatible resource state: Resource: 0x000002BA10D56500:'Unnamed ID3D12Resource Object', Subresource Index: [0], Root descriptor type: UAV, Resource State: D3D12_RESOURCE_STATE_NON_PIXEL_SHADER_RESOURCE(0x40) (Promoted from COMMON state), Shader Stage: COMPUTE, Root Parameter Index: [2], Dispatch Index: [21], Shader Code: <couldn't find file location in debug info>, Asm Instruction Range: [0xa-0xffffffff], Asm Operand Index: [0], Command List: 0x000002BA10951520:'CommandList 2, Allocator 0', SRV/UAV/CBV Descriptor Heap: 0x000002BA10693200:'StaticHeapDX12', Sampler Descriptor Heap: 0x000002BA10696300:'StaticHeapDX12', Pipeline State: 0x000002BA4D5C6A00:'TressFXSDFCollision.hlslCollideHairVerticesWithSdf_forward', [ EXECUTION ERROR #942: GPU_BASED_VALIDATION_INCOMPATIBLE_RESOURCE_STATE]

    bug 
    opened by kingofthebongo2008 6
  • Question about residency management.

    Hello, how do you recommend using this library with residency management? Microsoft has an example about this. Do you have any recommendations? Thank you

    question 
    opened by kingofthebongo2008 6
  • D3D12 Memory Alloc Question

    Hi, I am doing D3D12 programming. Few people do similar work, and it is very difficult to find someone to communicate with. I am trying to create a texture; my code looks like this:

    D3D12_HEAP_PROPERTIES HeapPro;
    BuildTextureHeapPro(HeapPro);
    D3D12_RESOURCE_DESC TextureDesc;
    BuildTextureDesc(TextureDesc);
    TextureDesc.Alignment = 65536;

    D3D12_RESOURCE_ALLOCATION_INFO Info;
    Info = Device->GetResourceAllocationInfo(0, 1, &TextureDesc);
    mTextureLen = (int)Info.SizeInBytes;

    It works OK most of the time. But when the texture width is 1108 and the height is 1440, with B8G8R8A8 format, Info.SizeInBytes returns an unexpected size: 6619136, while 1108 * 1440 * 4 = 6382080. I tried another way, "Device->GetCopyableFootprints(&TextureDesc, 0, 1, 0, &Layouts, nullptr, &RowSizeInBytes, &RequiredSize);", and RequiredSize is 6635344, which is also not the value I expect. I think the right value is 6635520, because the row pitch is 4608 and 4608 * 1440 = 6635520. What is the problem? I have googled for days and can't find a solution. Please help, thank you!

    opened by lygyue 5
  • TestStats and TestID3D12Device4 failing (master, e56c26d)

    Hello, I noticed that a couple of tests are currently failing, in particular TestStats and TestID3D12Device4. This happens both when running the pre-built executable, as well as when manually building and running the sample.

    I'm getting these two errors (the second is after commenting out the first and re-running the tests):

    Test stats Assertion failed: 0 && "C:\Users\Sergio\Documents\GitHub\D3D12MemoryAllocator\src\Tests.cpp" "(" "924" "): !( " "endStats.Total.AllocationCount == begStats.Total.AllocationCount + count" " )", file C:\Users\Sergio\Documents\GitHub\D3D12MemoryAllocator\src\Tests.cpp, line 924

    Test ID3D12Device4 Assertion failed: 0 && "C:\Users\Sergio\Documents\GitHub\D3D12MemoryAllocator\src\Tests.cpp" "(" "1372" "): FAILED( " "ctx.allocator->AllocateMemory1(&allocDesc, &heapAllocInfo, session, &alloc)" " )", file C:\Users\Sergio\Documents\GitHub\D3D12MemoryAllocator\src\Tests.cpp, line 1372

    System info:

    • Windows 10 Pro, x64 (build 19041.782)
    • GTX 1080 (driver 456.71)

    Hope this helps, keep up the good work! 🙌

    bug input needed 
    opened by Sergio0694 5
  • StrStrI identifier is undefined

    StrStrI is undefined no matter what I do.

    I've included it like this:

    #include <shlwapi.h>
    #inc....
    #pragma comment (lib, "Shlwapi.lib")

    Please help.

    bug input needed 
    opened by EducatedMF 4
  • Crash on systems older than Windows 10 Build 20348

    We upgraded D3D12MA and immediately started seeing an uptick in a crash in the wild. From what we can tell MemoryBlock::Init is blindly using GetDevice4() which may not be available in earlier versions of Windows 10 and code like this will GPF:

    #ifdef __ID3D12Device4_INTERFACE_DEFINED__
        HRESULT hr = m_Allocator->GetDevice4()->CreateHeap1(&heapDesc, pProtectedSession, D3D12MA_IID_PPV_ARGS(&m_Heap));
    #else
        D3D12MA_ASSERT(pProtectedSession == NULL);
        HRESULT hr = m_Allocator->GetDevice()->CreateHeap(&heapDesc, D3D12MA_IID_PPV_ARGS(&m_Heap));
    #endif
    
    bug next release compatibility 
    opened by shaggie76 3
  • ComPtr Issues.

    Hi, I'm currently trying to implement this library with com smart pointers, but I've run into a problem.

    When I use Microsoft::WRL::ComPtr<D3D12MA::Allocator> allocator, I get the following compiler error.

    C:\Program Files (x86)\Windows Kits\10\Include\10.0.22000.0\winrt\wrl/client.h(235,1): error C2440: '=': cannot convert from 'void' to 'unsigned long' [C:\Users\username\source\repos\D3D12\build\D3D12.vcxproj] [build] C:\Program Files (x86)\Windows Kits\10\Include\10.0.22000.0\winrt\wrl/client.h(235,32): message : Expressions of type void cannot be converted to other types [C:\Users\username\source\repos\D3D12\build\D3D12.vcxproj] [build] C:\Program Files (x86)\Windows Kits\10\Include\10.0.22000.0\winrt\wrl/client.h(228): message : while compiling class template member function 'unsigned long Microsoft::WRL::ComPtr<D3D12MA::Allocator>::InternalRelease(void) noexcept' [C:\Users\username\source\repos\D3D12\build\D3D12.vcxproj] [build] C:\Program Files (x86)\Windows Kits\10\Include\10.0.22000.0\winrt\wrl/client.h(290): message : see reference to function template instantiation 'unsigned long Microsoft::WRL::ComPtr<D3D12MA::Allocator>::InternalRelease(void) noexcept' being compiled [C:\Users\username\source\repos\D3D12\build\D3D12.vcxproj] [build] C:\Users\username\source\repos\D3D12\src\DXWindow.hpp(73): message : see reference to class template instantiation 'Microsoft::WRL::ComPtr<D3D12MA::Allocator>' being compiled [C:\Users\username\source\repos\D3D12\build\D3D12.vcxproj]

    However, if I use CComPtr<D3D12MA::Allocator> allocator, everything works fine.

    The GPUOpen docs say this library should work with WRL pointers.

    I don't know if I should be mixing both types of smart pointer, as my other COM objects are using WRL.

    Any help is appreciated, thanks.

    compatibility input needed 
    opened by ghost 3
  • Regarding D3D12_HEAP_FLAG_CREATE_NOT_ZEROED

    Hi Adam! Hope everything is OK!

    I wanted to know if something is a feature or more of a bug.

    I have a resource that is created with the allocator. By default, this resource (because it can also be created without the allocator) uses the D3D12_HEAP_FLAGS value D3D12_HEAP_FLAG_CREATE_NOT_ZEROED. Because of this extra flag, the function CalcDefaultPoolIndex considers that the resource should not be created in a default pool, returns -1, and the resource is created as committed.

    Is this the correct behaviour, or should it be created as placed (since a placed resource will never be zeroed)?

    bug next release 
    opened by rbertin-aso 1
  • AlignUp SizeInBytes for buffer memory

    https://github.com/GPUOpen-LibrariesAndSDKs/D3D12MemoryAllocator/blob/7597f717c7b32b74d263009ecc15985b517585c7/src/D3D12MemAlloc.cpp#L8023

    If you are using the same heap for buffers and textures, it would be possible for a "small" texture with 4kb alignment to be placed in that memory space after the buffer, if the size is not aligned up. Is there any other reason you align up the size when you allocate memory for a buffer?

    Or is the intention to always use separate heaps for buffers and textures?

    investigating optimization 
    opened by kruseborn 0
  • support for Xbox series S|X through XGDK

    Just a few modifications added for compilation using XGDK and the Xbox-specific DX12 headers. Also modified resource alignment allocation management to work with UMA architectures (DX12 on Xbox and consoles).

    investigating 
    opened by ozzyyzzo4096 1
Releases
  • v2.0.1(Apr 5, 2022)

    A maintenance release with some bug fixes and improvements. There are no changes in the library API.

    • Fixed an assert failing when detailed JSON dump was made while a custom pool was present with specified string name (#36, thanks @rbertin-aso).
    • Fixed image height calculation in JSON dump visualization tool "GpuMemDumpVis.py" (#37, thanks @rbertin-aso).
    • Added JSON Schema for JSON dump format - see file "tools\GpuMemDumpVis\GpuMemDump.schema.json".
    • Added documentation section "Resource reference counting".
  • v2.0.0(Mar 25, 2022)

    So much has changed since the first release that it doesn’t make much sense to compare the differences. Here are the most important features that the library now provides:

    • Powerful custom pools, which give an opportunity not only to keep certain resources together, reserve some minimum or limit the maximum amount of memory they can take, but also to pass additional allocation parameters unavailable to simple allocations. Among them, probably the most interesting is POOL_DESC::HeapProperties, which allows you to specify parameters of a custom memory type, which may be useful on UMA platforms. Committed allocations can now also be created in custom pools.
    • The API for statistics and budget has been redesigned - see structures Statistics, Budget, DetailedStatistics, TotalStatistics.
    • The library exposes its core allocation algorithm via the “virtual allocator” interface. This can be used to allocate pieces of custom memory or whatever you like, even something completely unrelated to graphics.
    • The allocation algorithm has been replaced with the new, more efficient TLSF.
    • Added support for defragmentation.
    • Objects of the library can be used with smart pointers designed for COM objects.
  • v1.0.0+vs2017(Jan 24, 2020)

  • v1.0.0(Sep 2, 2019)

Owner
GPUOpen Libraries & SDKs
Libraries and SDKs from the GPUOpen initiative