StyleMesh

Overview

This is the official repository containing the source code for StyleMesh.

[Arxiv] [Project Page] [Video]


If you find StyleMesh useful for your work, please cite:

@misc{höllein2021stylemesh,
      title={StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions}, 
      author={Lukas Höllein and Justin Johnson and Matthias Nießner},
      year={2021},
      eprint={2112.01530},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Preprocessing

The following steps are necessary to prepare all data.

Texture Optimization

The following steps allow you to optimize a texture for a specific scene/style. Select the scene/style by modifying the corresponding values in the scripts (--scene for ScanNet, plus --region for Matterport). You can also fine-tune the loss weights if you want to experiment with your own settings.
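As an illustration, an optimization run might be launched as below. The script name and the style flag are assumptions (check the run scripts in this repository for the exact entry point); --scene and --region are the flags described above.

```shell
# Hypothetical invocation for a ScanNet scene; script name and --style are assumed.
python train.py --scene scene0293_00 --style styles/starry_night.jpg

# For Matterport, a region is selected in addition to the scene.
python train.py --scene 17DRP5sb8fy --region region0 --style styles/starry_night.jpg
```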

All style images that are used in the main paper are listed in styles.

By default, run files (texture, hparams, logging) are saved in style-mesh/lightning_logs.

The suffix "with_angle_and_depth" is used for the comparisons in Fig. 4, 5, 6, 7, 8, 9, and 11. The suffixes "only2D" and "with_angle" are used for the ablation study in Fig. 7. The suffix "dip" is used for the DIP baseline in Fig. 4, 5, and 6.

Render optimized Texture

You can render images with mipmapping and shading using our OpenGL renderers for each dataset. Alternatively, you can use the texture files generated after each optimization together with the generated meshes and view the textured mesh in any mesh viewer (e.g., MeshLab or Blender).
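For example, the ScanNet renderer can be invoked with the scene mesh, pose directory, intrinsics file, and optimized texture. The argument order sketched below (output directory, start index, width, height between intrinsics and texture) is an assumption; verify it against the renderer's usage message.

```shell
./scannet_uv_renderer \
    ./scene0291_00/scene0291_00_vh_clean_decimate_500000_uvs_blender.ply \
    ./datasets/scannet/train/images/scene0291_00/pose \
    ./datasets/scannet/train/images/scene0291_00/scene0291_00.txt \
    . 0 320 240 \
    ./datasets/scannet/train/scans/scene0291_00/epoch_3_texture.jpg
```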

Evaluate Reprojection Error

We use the file scripts/eval/eval_image_folders.py to compute the reprojection error (Tab. 1).
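The reprojection error measures how consistent stylized views are when one view is warped into another. The script's exact interface is not shown here; the sketch below only illustrates the underlying idea, with a hypothetical per-pixel flow and an RMSE comparison:

```python
import numpy as np

def reprojection_error(img_a, img_b, flow):
    """RMSE between img_b and img_a warped into view B.

    `flow` gives, for every pixel of view B, the (dy, dx) offset of its
    corresponding pixel in view A (nearest-neighbour lookup for simplicity).
    """
    h, w = img_b.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 0]), 0, h - 1).astype(int)
    src_x = np.clip(np.round(xs + flow[..., 1]), 0, w - 1).astype(int)
    warped = img_a[src_y, src_x]
    return float(np.sqrt(np.mean((warped - img_b) ** 2)))

# Identical stylized views related by zero flow give zero error.
a = np.random.rand(4, 4, 3)
print(reprojection_error(a, a.copy(), np.zeros((4, 4, 2))))  # → 0.0
```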

Evaluate Circle Metric

We use the file scripts/eval/measure_circles.py to compute the circle metric (Tab. 2, Fig. 8).
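The exact circle metric is defined in the paper; purely for intuition, the sketch below shows one generic way to quantify how much a rendered circle pattern has been distorted, via the spread of radial distances around the contour centroid (this is an illustrative stand-in, not the repository's implementation):

```python
import numpy as np

def circularity_error(points):
    """Deviation of a closed contour from a perfect circle: the standard
    deviation of radial distances around the centroid, normalised by the
    mean radius. 0 for a perfect circle, larger for distorted contours."""
    pts = np.asarray(points, dtype=float)
    r = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    return float(r.std() / r.mean())

t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
ellipse = circle * np.array([2.0, 1.0])  # a stretched, distorted circle
print(circularity_error(circle))   # ~0.0
print(circularity_error(ellipse))  # clearly larger
```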

Owner

Lukas Hoellein, PhD Student @ TUM Visual Computing Group