ADOP: Approximate Differentiable One-Pixel Point Rendering

Overview

ADOP: Approximate Differentiable One-Pixel Point Rendering

Darius Rückert, Linus Franke, Marc Stamminger

Abstract: We present a novel point-based, differentiable neural rendering pipeline for scene refinement and novel view synthesis. The input is an initial estimate of the point cloud and the camera parameters. The output is synthesized images from arbitrary camera poses. The point cloud rendering is performed by a differentiable renderer using multi-resolution one-pixel point rasterization. Spatial gradients of the discrete rasterization are approximated by the novel concept of ghost geometry. After rendering, the neural image pyramid is passed through a deep neural network for shading calculations and hole filling. A differentiable, physically-based tonemapper then converts the intermediate output to the target image. Since all stages of the pipeline are differentiable, we optimize all of the scene's parameters, i.e. camera model, camera pose, point position, point color, environment map, rendering network weights, vignetting, camera response function, per-image exposure, and per-image white balance. We show that our system is able to synthesize sharper and more consistent novel views than existing approaches because the initial reconstruction is refined during training. The efficient one-pixel point rasterization allows us to use arbitrary camera models and display scenes with well over 100M points in real time.

[Paper] [Youtube] [Supplementary Material]

Compile Instructions

  • ADOP is implemented in C++/CUDA using libTorch.
  • A Python wrapper for PyTorch is currently not available. Feel free to submit a pull request for this.
  • The detailed compile instructions can be found here: src/README.md
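
A rough sketch of the configure-and-build step, assuming a conda environment in ${CONDA} (the cmake invocation below is taken from a user report further down this page; the rest is an assumption, so defer to src/README.md):

cd ADOP
mkdir build && cd build
cmake -DCMAKE_PREFIX_PATH="${CONDA}/lib/python3.8/site-packages/torch/;${CONDA}" ..
cmake --build . -j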

Running ADOP on pretrained models

After a successful compilation, the best way to get started is to run adop_viewer on the Tanks and Temples scenes using our pretrained models. First, download the scenes and extract them into ADOP/scenes. Then, download the model checkpoints and extract them into ADOP/experiments. Your folder structure should look like this:

ADOP/
    build/
        ...
    scenes/
        tt_train/
        tt_playground/
        ...
    experiments/
        2021-10-15_08:26:49_multi_scene/
        ...

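A minimal setup sketch, assuming the downloads are the scenes.zip and experiments.zip archives described under Supplementary Material below (archive names taken from the Zenodo listing; adjust the download paths to your system):

cd ADOP
# Extract the preprocessed scenes and the pretrained models
unzip ~/Downloads/scenes.zip -d scenes/
unzip ~/Downloads/experiments.zip -d experiments/
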
ADOP Viewer

The adop_viewer can now be run by passing the path to a scene. It will automatically search for matching pretrained models in the experiments/ directory. For example:

cd ADOP
./build/bin/adop_viewer --scene_dir scenes/boat
  • The working directory of adop_viewer must be the ADOP root directory. This is required because the shaders and experiments are located via relative paths.
  • The most important keyboard shortcuts are:
    • F1: Switch to 3DView
    • F2: Switch to neural view
    • F3: Switch to split view (default)
    • WASD: Move camera
    • Middle Mouse + Drag: Rotate around camera center
    • Left Mouse + Drag: Rotate around world center
    • Right click in 3DView: Select camera
    • Q: Move camera to selected camera

ADOP VR Viewer

We have implemented experimental VR support using OpenVR/SteamVR. Check out src/README.md for the compilation requirements.

cd ADOP
./build/bin/adop_vr_viewer --scene_dir scenes/tt_playground
  • Tune the render_scale setting for a compromise between FPS and resolution
  • Requires a high-end GPU to run reasonably well
  • Hopefully will be optimized in the future :)

HDR Scenes

ADOP supports HDR scenes thanks to its physically-based tone mapper. The input images can therefore have different exposure settings. The dynamic range of a scene is the difference between the smallest and largest EV of all images. For example, our boat scene (see below) has a dynamic range of ~10 stops. If you want to fit ADOP to your own HDR scene, consider the following:

  • For small dynamic ranges (< 4 stops) you can use the default pipeline.
  • For scenes with a large dynamic range, switch to the log texture format and reduce the texture learning rate. Use the train config of our boat scene as a reference (see the sketch after this list).
  • Check if an initial EV guess is available. Many cameras store the exposure settings in the EXIF data.
  • Set the scene EV in the dataset.ini to the mean EV of all frames. This keeps the weights in a reasonable range.
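
A sketch of the last two points; the key names below are assumptions for illustration, so check your generated dataset.ini and configs/train_boat.ini for the authoritative spellings:

# dataset.ini (key name assumed): mean EV over all frames
scene_exposure_value = 13.0

# train config (key names assumed): log texture format, reduced texture learning rate
texture_color_format = log
lr_texture = 0.01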

When viewing HDR scenes in the adop_viewer you can press [F7] to open the tone mapper tab. Here you can change the exposure value of the virtual camera. In the render settings you will find an option to use OpenGL-based tone mapping instead of the learned one.

viewer_boat.mp4

Scene Description

  • ADOP uses a simple, text-based scene description format.
  • To run ADOP on your scenes you have to convert them into this format.
  • After that you run adop_scene_preprocess to precompute various parameters.
  • If you have created your scene with COLMAP (like us) you can use the colmap2adop converter (see the sketch below).
  • More info on this topic can be found here: scenes/README.md
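
A rough sketch of this workflow; the flag names are assumptions for illustration, so consult scenes/README.md and the executables' --help output for the real interface:

cd ADOP
# Convert a COLMAP reconstruction into the ADOP scene format (flag names assumed)
./build/bin/colmap2adop --sparse_dir colmap/sparse/0 --image_dir images/ \
    --point_cloud_file colmap/dense/fused.ply --output_path scenes/my_scene
# Precompute various parameters required for training
./build/bin/adop_scene_preprocess scenes/my_scene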

Training ADOP

The ADOP pipeline is fitted to your scenes by the adop_train executable. All training parameters are stored in a separate config file. The basic syntax is:

cd ADOP
./build/bin/adop_train --config configs/train_boat.ini

Again, make sure that the working directory is the ADOP root; otherwise, the loss models will not be found.

Parameters

In ADOP/configs/ you will find the training configuration files that we used to create the pretrained models. We recommend starting with one of these for your own scenes.

  • Choose configs/train_boat.ini as a starting point if your scene has been captured with a high variation of exposure values (> 5 stops).
  • Choose configs/train_tank_and_temples_multi.ini as a starting point for indoor scenes or if the images have a similar exposure value.

Memory Consumption

Both of our reference training config files were created for a 40GB A100 GPU. If you run these on a lower-end GPU you will most likely run out of memory. The important config params that control memory consumption are:

# Settings for 40GB A100
# Size in pixels of the random crop during training
train_crop_size = 512
# How many crops are taken per image
inner_batch_size = 4
# How many images are batched together. One batch will have inner_batch_size x batch_size = 16 crops!
batch_size = 4

# Settings for 12GB Titan V
train_crop_size = 256
inner_batch_size = 8
batch_size = 2

Additionally, you will find that the point cloud size also has a significant impact on memory consumption. On 12GB cards, we recommend processing point clouds of at most 100M points. Otherwise, the batch size will be too small for good results.

Duration

As you can see in the configs, we usually train for 400 epochs. This takes between 12 and 24 hours, depending on scene size and training hardware. However, after 100 epochs (3-6 h) the novel view synthesis already works very well. You can use this checkpoint in the adop_viewer to check that everything is working.

Camera Models

ADOP currently supports two camera models: the Pinhole/Distortion camera model and the Omnidirectional camera model.

Pinhole/Distortion Camera Model

  • The default model for photogrammetry software like COLMAP, Metashape, and RealityCapture.
  • Our implementation is found here: Pinhole Part and Distortion Part

Omnidirectional Camera Model

  • A fisheye camera model for extreme wide angles.
  • Our implementation is found here: Model

Extending ADOP with other Camera Models

  1. Implement the camera model and its derivative. The derivative should be returned as the Jacobian matrix (see the sketch below).
  2. Implement the forward and backward projection function here.
  3. Add a new type here, update the rasterization code and the wrapper code.
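
To make step 1 concrete, below is a minimal, hypothetical sketch of a projection function together with its analytic Jacobian for a plain pinhole model. The struct and method names are illustrative only and do not match the actual ADOP interface:

#include <Eigen/Core>

// Hypothetical camera model sketch (not the ADOP interface).
struct MyCameraModel
{
    double fx, fy, cx, cy;  // focal lengths and principal point

    // Project a camera-space point p = (X, Y, Z) to pixel coordinates:
    // u = fx * X / Z + cx,  v = fy * Y / Z + cy
    Eigen::Vector2d Project(const Eigen::Vector3d& p) const
    {
        return {fx * p.x() / p.z() + cx, fy * p.y() / p.z() + cy};
    }

    // Analytic derivative d(pixel)/d(point), required for the backward pass.
    Eigen::Matrix<double, 2, 3> ProjectJacobian(const Eigen::Vector3d& p) const
    {
        const double iz = 1.0 / p.z();
        Eigen::Matrix<double, 2, 3> J;
        J << fx * iz, 0.0, -fx * p.x() * iz * iz,
             0.0, fy * iz, -fy * p.y() * iz * iz;
        return J;
    }
};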

Supplementary Material

The supplementary material is published on Zenodo:

https://zenodo.org/record/5602606

This directory includes:

  • videos.zip
    • Additional separated video clips of the scenes.
    • Full-HD, 60 FPS
  • colmap.zip
    • The COLMAP reconstructions of our 5 scenes (boat + 4 Tanks and Temples scenes)
    • Includes triangle meshes for comparison with other approaches.
  • scenes.zip
    • The preprocessed scenes in our scene format
    • Required to run the pretrained models
  • experiments.zip
    • The pretrained models for all 5 scenes.
    • The 4 Tanks and Temples scenes were trained simultaneously and are therefore combined into a single experiment. They also share the same rendering network.

Preview Videos

tt_playground_composite_small.mp4
tt_train_composite_small.mp4
tt_m60_composite_small.mp4

Issues

  • Guide on exporting scene from COLMAP

    Hi! I have some trouble exporting a point cloud from COLMAP into a format compatible with ADOP. Here is what I did:

    1. Autoreconstruction in COLMAP (both sparse and dense)
    2. File > Export As > as .ply
    3. preprocessed it with colmap2adop tool
    4. started training
    5. got an error that my point cloud file does not contain normal data
    6. I checked that your .ply files have normal data in their headers, but I struggle to make COLMAP include it in mine
    opened by DenShlk 22
  • RAM usage when training dataset

    Through a lot of trial and error I managed to get both the ADOP viewer and trainer set up on Ubuntu 20.04.

    SETUP: I'm trying to run my own dataset through the trainer to see how it performs. I only have an Nvidia GTX 1070 card with 8GB of VRAM and 16GB of physical RAM, but plenty of hard disk space. I've gotten to the point that the trainer runs without crashing. My dataset is 175 images, downscaled to 1000x562, converted from a ~7M point-cloud project in COLMAP.

    PROBLEM: However, running the trainer seems to take up over 1GB of RAM per epoch on top of the base amount. This quickly sends my RAM usage to near full. Luckily I have a large swap partition, but that too fills up over hundreds of epochs. Is this expected behaviour or a memory leak? Or could it be because my install or conda environment might be faulty? Or is it my incapable hardware? In any case, once all my swap space is filled the program is unfortunately killed.

    If tweaking any of the training config or dataset parameters could potentially fix or mitigate this issue, please do let me know. Thank you for your work on this, I find this to be a very interesting project.

    opened by ShinyLuxray 19
  • How to optimize for rendering

    Hi, I have a question regarding the adop_viewer. I tried the adop_viewer with pretrained models on the Tanks and Temples dataset, and it works perfectly on a laptop with a 3080 GPU (8GB VRAM) on all scenes.

    I guess you used the 40GB A100 ini files for training these models, so I also trained my data on a GPU with similar VRAM using the exact settings from train_tank_and_temples_multi.ini. However, when I attempt to visualize my checkpoints on the 3080 GPU, it shows CUDA out of memory for all my scenes.

    My question is: for rendering, did you make some optimizations so the trained models fit in a GPU with less VRAM? Or are there certain configurations/arguments that allow the adop_viewer to take a model trained on high-end GPUs and render on low-end GPUs? Thanks in advance!

    opened by qiaosongwang 12
  • Issue with multi-camera COLMAP dataset

    I have successfully trained ADOP on scenes that use a single camera. I am having difficulty, however, with a larger dataset composed of 882 cameras, 2/3 of which are from drone footage and the other 1/3 from a hand-held DSLR.

    COLMAP successfully generates the camera registrations and point cloud, and running colmap2adop correctly creates all the necessary files for training. I am noticing, however, that COLMAP resizes every single image, effectively generating 882 individual camera profiles. I thought at first this wouldn't necessarily be an issue, as adop_train can parse multiple cameras. I am running into this error, though:

    Assertion 'cam.w == image_size_input(0)' failed!
      File: /home/visionarymind/Downloads/ADOP/src/lib/data/Dataset.cpp:49
      Function: SceneDataTrainSampler::SceneDataTrainSampler(std::shared_ptr<SceneData>, std::vector<int>, bool, Saiga::ivec2, int, bool)
    Aborted (core dumped)
    

    I have looked at each camera.ini file, and the dimensions match each image. Would you have any idea why this error is thrown and how to work around it? Technically, there were only two cameras, but COLMAP resizes the images during registration.

    opened by VisionaryMind 10
  • Something wrong in PointRenderer.cu & PointRenderer.h

    [screenshot] Cannot build CUDA object src/lib/CMakeFiles/NeuralPoints.dir/rendering/PointRenderer.cu.o

    /home/xgenie/projectfold/ADOP/src/lib/./config.h(25): error: inline specifier allowed on function declarations only
    /home/xgenie/projectfold/ADOP/src/lib/./config.h(25): error: inline specifier allowed on function declarations only
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.h(37): error: qualified name is not allowed
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.h(39): error: Function is not a template
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.h(39): error: not a class or struct name
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.h(42): error: identifier "variable_list" is undefined
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.h(42): error: identifier "AutogradContext" is undefined
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.h(42): error: identifier "Variable" is undefined
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.h(42): error: identifier "Variable" is undefined
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.h(42): error: identifier "Variable" is undefined
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.h(43): error: identifier "Variable" is undefined
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.h(43): error: identifier "Variable" is undefined
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.h(43): error: identifier "IValue" is undefined
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.h(45): error: identifier "variable_list" is undefined
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.h(45): error: identifier "AutogradContext" is undefined
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.h(45): error: identifier "variable_list" is undefined
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(422): error: expected an identifier
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(422): error: identifier "K" is undefined
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(422): error: expected a "]"
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(422): warning: nonstandard use of "auto" to both deduce the type from an initializer and to announce a trailing return type
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(422): error: cannot deduce "auto" type (initializer required)
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(422): error: expected a ";"
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(429): error: expected an identifier
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(429): error: identifier "aff" is undefined
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(429): error: expected a "]"
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(429): warning: nonstandard use of "auto" to both deduce the type from an initializer and to announce a trailing return type
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(429): error: cannot deduce "auto" type (initializer required)
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(429): error: expected a ";"
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(621): error: expected an identifier
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(621): error: identifier "K" is undefined
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(621): error: expected a "]"
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(621): warning: nonstandard use of "auto" to both deduce the type from an initializer and to announce a trailing return type
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(621): error: cannot deduce "auto" type (initializer required)
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(621): error: expected a ";"
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(622): error: expected an identifier
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(622): error: identifier "g_point" is undefined
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(622): error: expected a "]"
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(622): warning: nonstandard use of "auto" to both deduce the type from an initializer and to announce a trailing return type
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(622): error: cannot deduce "auto" type (initializer required)
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(622): error: expected a ";"
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(686): error: expected an identifier
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(686): error: identifier "aff" is undefined
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(686): error: expected a "]"
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(686): warning: nonstandard use of "auto" to both deduce the type from an initializer and to announce a trailing return type
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(686): error: cannot deduce "auto" type (initializer required)
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(686): error: expected a ";"
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(687): error: expected an identifier
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(687): error: identifier "g_point" is undefined
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(687): error: expected a "]"
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(687): warning: nonstandard use of "auto" to both deduce the type from an initializer and to announce a trailing return type
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(687): error: cannot deduce "auto" type (initializer required)
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(687): error: expected a ";"
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(527): warning: variable "iw" was declared but never referenced
    /home/xgenie/projectfold/ADOP/src/lib/rendering/PointRenderer.cu(615): warning: variable "cam2" was set but never used

    opened by XGenietony 10
  • Unable to get output in the neural view

    Hello, thanks for releasing your code!

    I tried running the program (the compilation was successful) but I am unable to obtain any output in the neural view.

    This is the output I get from the terminal. I am using Ubuntu 20.04.3 LTS.

    (base) [email protected]:~/Documents/ADOP$ ./build/bin/adop_viewer --scene_dir scenes/tt_lighthouse/
    register neural render info
    Ref. 
    =============== Saiga ===============
    | Saiga Version     1.3.2           |
    | Eigen Version     3.3.91          |
    | Compiler          GNU             |
    |   -> Version      9.3.0           |
    | Build Type        RelWithDebInfo  |
    | Debug             0               |
    | Eigen Debug       0               |
    | ASAN              0               |
    | Asserts           1               |
    | Optimizations     0               |
    =====================================
    Initializing GLFW.
    Initializing GLFW sucessfull!
    Creating GLFW Window. 1920x1080 Mode=3 Fullscreen=1 Borderless=0
    =========================== OpenGL ===========================
    | OpenGL Version    3.2.0 NVIDIA 495.29.05                   | 
    | GLSL Version      1.50 NVIDIA via Cg compiler              | 
    | Renderer          NVIDIA GeForce GTX 1660 SUPER/PCIe/SSE2  | 
    | Vendor            NVIDIA Corporation                       | 
    ==============================================================
    [Renderer] Target resized to 1848x1016
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/imgui_gl.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/tone_map.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/tone_map_linear.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/compute/bloom_extract_bright.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/compute/bloom_downsample.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/compute/bloom_upsample.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/compute/bloom_combine_simple.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/compute/copy_image.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/compute/compute_blur.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/compute/compute_blur.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/lighting/light_directional.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/lighting/light_directional.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/lighting/light_point.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/lighting/light_point.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/lighting/light_spot.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/lighting/light_spot.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/lighting/light_point.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/lighting/light_spot.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/lighting/stenciltest.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/post_processing/post_processing.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/lighting/blitDepth.glsl
    Deferred Renderer initialized. Render resolution: 1848x1016
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/post_processing/imagedisplay.glsl
    Program Initialized!
    Loading Scene scenes/tt_lighthouse/
    ====================================
    Scene Loaded
      Name       tt_lighthouse
      Path       /home/jeremy/Documents/ADOP/scenes/tt_lighthouse
      Image Size 2048x1080
      Aspect     1.8963
      K          1666.68 1661.18 1024    540     0      
      ocam       2048x1080 affine(1, 0, 0, 0, 0) cam2world() world2cam()
      ocam cut   1
      normalized center 0 0
      dist       -0.126731   0.0217503   0           0           0           0           0.000406465 0.000123456
      Points     12313620
      Colors     1
      Normals    1
      Num Images 309
      Num Cameras 1
    ====================================
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/asset/ColoredAsset.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/asset/ColoredAsset.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/asset/ColoredAsset.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/asset/LineVertexColoredAsset.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/asset/LineVertexColoredAsset.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/asset/LineVertexColoredAsset.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/tone_map.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/tone_map_linear.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/compute/bloom_extract_bright.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/compute/bloom_downsample.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/compute/bloom_upsample.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/compute/bloom_combine_simple.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/compute/copy_image.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/compute/compute_blur.glsl
    loading shader /home/jeremy/Documents/ADOP/External/saiga/shader/compute/compute_blur.glsl
    Found experiment experiments//2021-10-30_09:17:38_multi_scene_pose/ with 1 epochs
    Found experiment experiments//2021-10-30_09:14:22_boat/ with 1 epochs
    loading checkpoint 2021-10-30_09:17:38_multi_scene_pose -> ep0400
    Load Checkpoint render
    Total Model Params: 574651
    > Reset cuda cache
    GPU memory - Point Cloud: 295.527MB
    Pinhole Intrinsics:
    Tensor [1, 13] float cuda:0 Min/Max -0.126731 1666.68 Mean 376.289 Sum 4891.75 req-grad 1
    GPU memory - Texture: 197.018MB
    Load Checkpoint points 12313620 max uv: 12313619
    Load Checkpoint texture. Texels: 12313620 Channels: 4
    Load Checkpoint pose
    First pose before SE3(Quatwxyz(0.990142,-0.0887136,0.107975,0.00948205),Vec3(0.115149,0.198961,4.53537))
    First pose after SE3(Quatwxyz(0.990146,-0.0887041,0.107948,0.00948975),Vec3(0.114891,0.198873,4.53539))
    Load Checkpoint intrinsics
    Load Checkpoint environment_map
    Load Checkpoint vignette
    Load Checkpoint response
    Load Checkpoint exposures_values
    Using Adam texture optimzier
    optimizing texture with lr 0.08/0.004
    optimizing environment_map with lr 0.02
    optimizing response with lr 0.001
    optimizing exposure with lr 0.0005
    optimizing vignette with lr 5e-06
    optimizing poses with lr 0.005
    GPU memory - Texture: 197.018MB
    Current Best (img,cam) = (0,0) EV: -0.239136
    [Renderer] Target resized to 733x354
    [DeferredRenderer] Resize 733x354
    loading shader shader//point_render.glsl
    [Renderer] Target resized to 545x249
    [DeferredRenderer] Resize 545x249
    loading checkpoint 2021-10-30_09:17:38_multi_scene_pose -> ep0400
    Load Checkpoint render
    Total Model Params: 574651
    > Reset cuda cache
    GPU memory - Point Cloud: 295.527MB
    Pinhole Intrinsics:
    Tensor [1, 13] float cuda:0 Min/Max -0.126731 1666.68 Mean 376.289 Sum 4891.75 req-grad 1
    GPU memory - Texture: 197.018MB
    Load Checkpoint points 12313620 max uv: 12313619
    Load Checkpoint texture. Texels: 12313620 Channels: 4
    Load Checkpoint pose
    First pose before SE3(Quatwxyz(0.990146,-0.0887041,0.107948,0.00948975),Vec3(0.114891,0.198873,4.53539))
    First pose after SE3(Quatwxyz(0.990146,-0.0887041,0.107948,0.00948975),Vec3(0.114891,0.198873,4.53539))
    Load Checkpoint intrinsics
    Load Checkpoint environment_map
    Load Checkpoint vignette
    Load Checkpoint response
    Load Checkpoint exposures_values
    Using Adam texture optimzier
    optimizing texture with lr 0.08/0.004
    optimizing environment_map with lr 0.02
    optimizing response with lr 0.001
    optimizing exposure with lr 0.0005
    optimizing vignette with lr 5e-06
    optimizing poses with lr 0.005
    GPU memory - Texture: 197.018MB
    Current Best (img,cam) = (0,0) EV: -0.239136
    

    Any help would be greatly appreciated! Thank you!

    opened by Mickey1356 9
  • A question about CMakeLists.txt

    Hello, I am a beginner with neural networks. I want to run your program on my computer, but when compiling ADOP with the command "cmake -DCMAKE_PREFIX_PATH="${CONDA}/lib/python3.8/site-packages/torch/;${CONDA}" ..", I got the error "The source directory XXXX does not appear to contain CMakeLists.txt." How can I solve this problem? Should I write the CMakeLists myself? Is there something wrong in other parts? This may be easy for most people, but I sincerely hope you can answer me. Thank you!

    opened by Lu-pin-an 9
  • Getting "Loss not finite :("

    Hello, I'm preparing a dataset of the interior of an apartment. I checked, and COLMAP seems to work well despite a fair amount of white walls. After a random number of epochs, I'm getting the following (pasting only the tail of the output):

    === Epoch 81 ===
    
    Train 81 |   0% |                              |    0/1016 [00:00:0000] [0.00 e/s] 
    Train 81 |   0% |                              |    0/1016 [00:00:0000] [0.00 e/s] 
    Train 81 |   4% |#                             |   40/1016 [00:05:0000] [8.00 e/s]  Cur=212.233490 Avg=302.391327
    Train 81 |   9% |##                            |   88/1016 [00:10:0000] [8.80 e/s]  Cur=333.448730 Avg=287.600220
    Train 81 |  13% |####                          |  136/1016 [00:15:0000] [9.07 e/s]  Cur=441.840668 Avg=295.512512
    Train 81 |  18% |#####                         |  184/1016 [00:20:0000] [9.20 e/s]  Cur=379.657135 Avg=317.960571
    Train 81 |  22% |######                        |  224/1016 [00:25:0000] [8.96 e/s]  Cur=277.145172 Avg=296.512878
    Train 81 |  27% |########                      |  272/1016 [00:30:0000] [9.07 e/s]  Cur=310.707367 Avg=302.186951
    Train 81 |  31% |#########                     |  320/1016 [00:35:0000] [9.14 e/s]  Cur=34.233173 Avg=287.412201
    Train 81 |  36% |##########                    |  368/1016 [00:40:0001] [9.20 e/s]  Cur=33.885960 Avg=282.051544
    Train 81 |  41% |############                  |  416/1016 [00:45:0001] [9.24 e/s]  Cur=267.568176 Avg=272.566742
    Train 81 |  46% |#############                 |  464/1016 [00:50:0001] [9.28 e/s]  Cur=376.509857 Avg=253.497742
    Train 81 |  50% |###############               |  512/1016 [00:55:0001] [9.31 e/s]  Cur=347.692017 Avg=255.870377
    Train 81 |  55% |################              |  560/1016 [01:00:0001] [9.33 e/s]  Cur=222.076523 Avg=251.570724
    Train 81 |  60% |#################             |  608/1016 [01:05:0001] [9.35 e/s]  Cur=184.181763 Avg=259.685944
    Train 81 |  65% |###################           |  656/1016 [01:10:0002] [9.37 e/s]  Cur=30.161480 Avg=255.755127
    Train 81 |  69% |####################          |  704/1016 [01:15:0002] [9.39 e/s]  Cur=34.554646 Avg=257.362579
    Train 81 |  74% |######################        |  752/1016 [01:20:0002] [9.40 e/s]  Cur=239.453049 Avg=248.403091
    Train 81 |  78% |#######################       |  792/1016 [01:25:0002] [9.32 e/s]  Cur=33.262161 Avg=248.473404
    Train 81 |  83% |########################      |  840/1016 [01:30:0002] [9.33 e/s]  Cur=348.131836 Avg=255.551346
    Train 81 |  87% |##########################    |  888/1016 [01:35:0003] [9.35 e/s]  Cur=547.986633 Avg=261.241058
    Train 81 |  92% |###########################   |  936/1016 [01:40:0003] [9.36 e/s]  Cur=268.251221 Avg=261.505127
    Tensor [8, 4, 512, 512] float cuda:0 Min/Max 1.3125e-06 20599.7 Mean 0.520595 Sum 4.36707e+06 sdev 7.16297 req-grad 1
    Tensor [8, 4, 256, 256] float cuda:0 Min/Max 3.05836e-05 34.5411 Mean 0.535766 Sum 1.12358e+06 sdev 0.771836 req-grad 1
    Tensor [8, 4, 128, 128] float cuda:0 Min/Max 1.22896e-05 33.5968 Mean 0.552693 Sum 289770 sdev 0.658228 req-grad 1
    Tensor [8, 4, 64, 64] float cuda:0 Min/Max 0.000194362 36.972 Mean 0.557636 Sum 73090.4 sdev 0.61885 req-grad 1
    
    Tensor [8, 2, 512, 512] float cuda:0 Min/Max -1 0.999414 Mean 0.083806 Sum 351508 sdev 0.473772 req-grad 0
    Tensor [1] float cuda:0 Min/Max nan nan Mean nan Sum nan sdev nan req-grad 1
    Tensor [8, 3, 512, 512] float cuda:0 Min/Max nan nan Mean nan Sum nan sdev nan req-grad 1
    Tensor [1, 1, 512, 512] float cuda:0 Min/Max 0 1 Mean 0.878906 Sum 230400 sdev 0.326237 req-grad 0
    Tensor [8, 3, 512, 512] float cuda:0 Min/Max 0 0.956863 Mean 0.653581 Sum 4.11197e+06 sdev 0.274293 req-grad 0
    
    Scene:
    Scene Log - Texture: Tensor [4, 11655018] float cuda:0 Min/Max -22.6205 15.4725 Mean -0.8064 Sum -3.75944e+07 sdev 1.10892 req-grad 1
    Background Desc:  -0.970885 -0.863956 -0.900406 -0.882389 
    Environment map: Tensor [1, 4, 1024, 512] float cuda:0 Min/Max -6.2707 3.55126 Mean -0.982767 Sum -2.06101e+06 sdev 0.274843 req-grad 1
    Poses: Tensor [254, 8] double cuda:0 Min/Max -6.83462 8.02766 Mean 0.196939 Sum 400.181 sdev 1.62054 req-grad 0
    Vignette params:  -0.0149647 -0.00850714 -0.00492687 |   0.00124693 -0.000540384
    terminate called after throwing an instance of 'std::runtime_error'
      what():  Loss not finite :(
    

    Could it be due to the lack of enough points on the white walls? If so, how would you go about fixing it? I'm willing to provide any data required to better assess this.

    opened by cduguet 8
  • Something wrong with the neural view using my own data

    After using the ADOP viewer with 2 kinds of my trained data, I got the following results.

    [two screenshots]

    All of their neural views are clearly wrong. Has anyone else encountered the same problem?

    By the way, when I use the ADOP viewer with the "boat" scene, the neural view displays correctly in color.

    opened by HandsomeFYM 5
  • Strange random color when training

    Hi, I have some doubts about my training result on another dataset. It looks like this (left: training result, right: ground truth): [screenshot]

    I cannot figure out why this happens. Is it caused by the rasterization method?

    Below are some of the rasterization results (left: training render result, middle: RGB ground truth, right: rasterization mask): [screenshot]

    opened by Sylvia6 4
  • How to generate a video from a given trajectory?

    Hi, I saw you provided some videos for the trained models on the different scenes. I was wondering how you generated them? Is it possible to generate them with the viewer?

    opened by vahidEttehadi 4
  • Cannot train

    Hi guys,

    I have used my own dataset, run the necessary COLMAP commands, and used colmap2adop. But when I try to use adop_train with the train_boat.ini config, I get the following error:

    register neural render info
    Git ref: e844ad79dabdce185332111139e7ebfca488d663
    Train Config: scenes/test/train_boat.ini
    Using Random Seed: 3746934646
    torch::cuda::cudnn_is_available() 1
    Aborted
    

    To get this error, I run ./build/bin/adop_train --config scenes/test/train_boat.ini from ADOP/. I have tried moving the train_boat.ini file into the test folder and running it from there, but that does not seem to fix my issue. Thanks in advance for any help you can give!

    I am using ubuntu 20.04, cuda 11.7 on my GTX 1060 6GB.

    opened by MarvTheMarsian 0
  • Is it possible to provide the Office dataset?

    I'm really interested in ADOP with LiDAR point clouds. Is it possible to provide the Office dataset, or some other dataset containing 3D LiDAR points?

    opened by RobotBytedance 0
  • Multi-gpu support

    Hello, is there a way to run training on multiple GPUs? I have a machine with 4 GPUs and was wondering if there is a config option (or plans) to enable multi-GPU support.

    Thanks!

    opened by parrot1166 4
  • Error in adop_train

    after throwing an instance of std::runtime_error what(): nvrtc: error: failed to open libnvrtc-builtins.so.11.2. Make sure that libnvrtc-builtins.so.11.2 is installed correctly. nvrtc compilation failed

    After trying to set LD_LIBRARY_PATH as suggested, the error persists.

    opened by cwchenwang 1
  • build failures, undefined references when building, Docker

    I'm trying to build ADOP without Conda so I can run it on a remote machine - the only machine I have access to with a powerful enough GPU - for which I need to use a Docker container.

    I have managed to build on my local machine, but no matter what settings I use on my trivial test dataset it fails to allocate memory on that machine's "meagre" 8GB 1070.

    Following the same procedure that gave me success, I believe I've installed all relevant dependencies. The base container is a CUDA-enabled container based on Ubuntu 20.04, and I've installed CUDA, cuDNN 8, pre-compiled libTorch with the modern ABI (building Torch itself has too many headaches), MKL, libjpeg, libpng, protobuf, protobuf-compiler, python3-dev, ninja-build, and CMake 3.19.5. I've also enabled the headless build.

    When I used CUDA 11.3 (which would match the current libTorch release), ADOP failed to build - or rather, when compiling PointRenderer.cu it stalled and remained on that step for > 24 hours.

    When I use CUDA 11.2 or 11.4 I can get all the way through compilation, but the linking stage produces undefined references to functions in your Saiga library, despite including the Saiga libraries in the compile command.

    I've attached a file with the first linker error, and also my Dockerfile in case it can help. I suspect that I'm just missing some dependency, or have the wrong version of one, given that I have one machine that did manage to build, but I'm a bit stuck as to what it is, so any help is greatly appreciated.

    ADOP-link-error.txt Dockerfile-ADOP.txt

    opened by mureva 2