An Efficient Implementation of Analytic Mesh Algorithm for 3D Iso-surface Extraction from Neural Networks

Overview

AnalyticMesh

Analytic Marching is an exact meshing solution from neural networks. Compared to standard sampling-based methods, it completely avoids the geometric and topological errors that result from insufficient sampling, by means of mathematically guaranteed analysis.

This repository provides an implementation of the Analytic Marching algorithm. The algorithm was initially proposed in our conference paper Analytic Marching: An Analytic Meshing Solution from Deep Implicit Surface Networks, and later improved in our journal paper Learning and Meshing from Deep Implicit Surface Networks Using an Efficient Implementation of Analytic Marching.

Our code provides web pages for manipulating your models via a graphical interface, and a backend that gives full control of the algorithm through Python code.

Installation

First please download our codes:

git clone https://github.com/Karbo123/AnalyticMesh.git --depth=1
cd AnalyticMesh
export AMROOT=`pwd`

Backend

The backend provides a Python binding of analytic marching. After compiling it, you can call the algorithm from simple Python code in your own project.

Our implementation supports PyTorch, and possibly other deep learning frameworks (e.g. TensorFlow), but we have not tested other frameworks yet.

Requirements:

Compilation:

cd $AMROOT/backend
mkdir build && cd build
cmake ..
make -j8
cd ..

If your PyTorch version is < 1.5.1, you may need to apply a fix for C++ extension compilation failures on some environments.

Make sure the compiled library passes the tests. Run:

CUDA_VISIBLE_DEVICES=0 PYTHONDONTWRITEBYTECODE=1 pytest -s -p no:warnings -p no:cacheprovider

It will generate some files under the folder $AMROOT/backend/tmp. The generated meshes (.ply) should generally be watertight; you can verify this with MeshLab.
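
If you prefer a scripted check, the following sketch (assuming the third-party trimesh package is installed; it is not a dependency of this repository) reports whether each generated mesh is watertight:

import glob
import trimesh  # pip install trimesh

# check every mesh generated by the tests under $AMROOT/backend/tmp
for path in sorted(glob.glob("tmp/*.ply")):  # run this from $AMROOT/backend
    mesh = trimesh.load_mesh(path)
    print(path, "is watertight:", mesh.is_watertight)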

If the library passes all the tests, link the repository into your site-packages directory so that Python can find it:

ln -s $AMROOT `python -c 'import site; print(site.getsitepackages()[0])'`
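
To confirm that the link works, a quick check from Python is:

# a minimal check: import the package and print where Python found it
import AnalyticMesh
print(AnalyticMesh.__file__)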

Frontend

We also provide an easy-to-use interactive interface that applies analytic marching to your input network model with just a few mouse clicks. To use the web interface, follow the installation steps below.

Requirement:

Before compiling, you may need to modify the server information given in frontend/pages/src/assets/index.js. Then compile the pages by running:

cd $AMROOT/frontend/pages
npm install
npm run build

The $AMROOT/frontend/pages/dist directory is then ready to be deployed. If you want to deploy the web pages to a server, please additionally follow these instructions.

To start the server, simply run:

cd $AMROOT/frontend && python server.py

You can open the interface either by opening the file $AMROOT/frontend/pages/dist/index.html on your local machine or by visiting the URL to which the page is deployed.

Demo

We provide some samples in $AMROOT/examples that you can try.

Here we show a simple example (which is from $AMROOT/examples/2_polytope.py):

import os
import torch
from AnalyticMesh import save_model, load_model, AnalyticMarching

class MLPPolytope(torch.nn.Module):
    def __init__(self):
        super(MLPPolytope, self).__init__()
        self.linear0 = torch.nn.Linear(3, 14)
        self.linear1 = torch.nn.Linear(14, 1)
        with torch.no_grad(): # here we give the weights explicitly since training takes time
            weight0 = torch.tensor([[ 1,  1,  1],
                                    [-1, -1, -1],
                                    [ 0,  1,  1],
                                    [ 0, -1, -1],
                                    [ 1,  0,  1],
                                    [-1,  0, -1],
                                    [ 1,  1,  0],
                                    [-1, -1,  0],
                                    [ 1,  0,  0],
                                    [-1,  0,  0],
                                    [ 0,  1,  0],
                                    [ 0, -1,  0],
                                    [ 0,  0,  1],
                                    [ 0,  0, -1]], dtype=torch.float32)
            bias0 = torch.zeros(14)
            weight1 = torch.ones([14], dtype=torch.float32).unsqueeze(0)
            bias1 = torch.tensor([-2], dtype=torch.float32)

            add_noise = lambda x: x + torch.randn_like(x) * (1e-7)
            self.linear0.weight.copy_(add_noise(weight0))
            self.linear0.bias.copy_(add_noise(bias0))
            self.linear1.weight.copy_(add_noise(weight1))
            self.linear1.bias.copy_(add_noise(bias1))

    def forward(self, x):
        return self.linear1(torch.relu(self.linear0(x)))


if __name__ == "__main__":
    #### save onnx
    DIR = os.path.dirname(os.path.abspath(__file__)) # the directory to save files
    onnx_path = os.path.join(DIR, "polytope.onnx")
    save_model(MLPPolytope(), onnx_path) # we save the model as onnx format
    print(f"we save onnx to: {onnx_path}")

    #### save ply
    ply_path = os.path.join(DIR, "polytope.ply")
    model = load_model(onnx_path) # load as a specific model
    AnalyticMarching(model, ply_path) # do analytic marching
    print(f"we save ply to: {ply_path}")

API

We mainly provide the following two ways to use analytic marching:

  • Web interface (provides an easy-to-use graphic interface)
  • Python API (gives more detailed control)

  1. Web interface

    You should compile both the backend and frontend to use this web interface. Its usage is detailed in the user guide on the web page.

  2. Python API

    It is very simple to use: just three lines of code.

    from AnalyticMesh import load_model, AnalyticMarching 
    model = load_model(load_onnx_path) 
    AnalyticMarching(model, save_ply_path)

    If the results are not satisfactory, you may need to change the default arguments of the AnalyticMarching function.

    To obtain an ONNX model file, you can simply use the save_model function we provide.

    from AnalyticMesh import save_model
    save_model(your_custom_nn_module, save_onnx_path)

Some tips:

  • It is highly recommended that you try dichotomy first as the initialization method.
  • If CUDA runs out of memory, try setting voxel_configs. It will partition the space into voxels and solve them serially.
  • More details are commented in our source code; the sketch below shows one way to inspect the available options.
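
The exact keyword arguments and their default values are defined in the source code; the sketch below (assuming AnalyticMarching is importable as shown in the demo) is one way to list them without reading the code:

import inspect
from AnalyticMesh import AnalyticMarching

# print the tunable parameters (e.g. the initialization method and voxel_configs)
# together with their default values and any documentation
print(inspect.signature(AnalyticMarching))
help(AnalyticMarching)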

Use Analytic Marching in your own project

There are generally three ways to use Analytic Marching.

  1. Directly representing a single shape by a multi-layer perceptron. For a single object, you can simply represent the shape as a single network. For example, you can directly fit a point cloud with a multi-layer perceptron. In this way, the weights of the network uniquely determine the shape.
  2. Generating the weights of the multi-layer perceptron from a hyper-network. To learn from multiple shapes, one can use a hyper-network to generate the weights of the multi-layer perceptron in a learnable manner.
  3. Re-parameterizing the latent code into the bias of the first layer. To learn from multiple shapes, we can condition the network with a latent code concatenated to the input of the first layer (e.g. 3+256 -> 512 -> 512 -> 1). Note that the concatenated latent code can be re-parameterized and folded into the bias of the first layer. More specifically, writing the first-layer weights as W = [W_x, W_z], the computation of the first layer can be re-parameterized as W [x; z] + b = W_x x + (W_z z + b), where the newly computed bias is b' = W_z z + b (see the sketch after this list).
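
As an illustration of the third way, the sketch below (a hypothetical helper, not part of this repository's API) folds a fixed latent code into the bias of the first linear layer, so that the conditioned network reduces to a plain coordinate-only MLP:

import torch

def fold_latent_into_bias(first_linear: torch.nn.Linear,
                          latent_code: torch.Tensor,
                          in_dim: int = 3) -> torch.nn.Linear:
    # hypothetical helper: turn a layer that takes [x; z] into one that takes only x
    W = first_linear.weight.data              # shape (out_dim, in_dim + latent_dim)
    b = first_linear.bias.data                # shape (out_dim,)
    W_x, W_z = W[:, :in_dim], W[:, in_dim:]   # split into coordinate and latent parts
    folded = torch.nn.Linear(in_dim, W.shape[0])
    with torch.no_grad():
        folded.weight.copy_(W_x)
        folded.bias.copy_(W_z @ latent_code + b)  # b' = W_z z + b
    return folded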

About

This repository is mainly maintained by Jiabao Lei (backend) and Yongyi Su (frontend). If you have any questions, feel free to create an issue on GitHub.

If you find our work useful, please consider citing our papers.

@inproceedings{
    Lei2020,
    title = {Analytic Marching: An Analytic Meshing Solution from Deep Implicit Surface Networks},
    author = {Jiabao Lei and Kui Jia},
    booktitle = {International Conference on Machine Learning 2020 {ICML-20}},
    year = {2020},
    month = {7}
}

@misc{
    Lei2021,
    title={Learning and Meshing from Deep Implicit Surface Networks Using an Efficient Implementation of Analytic Marching}, 
    author={Jiabao Lei and Kui Jia and Yi Ma},
    year={2021},
    eprint={2106.10031},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

Contact: [email protected]

Comments
  • reserved identifier violation

    I would like to point out that identifiers like “__MLP_H__” and “__POLYMESH_H__” do not fit the naming conventions of the C++ language standard. Would you like to adjust your selection of unique names?

    opened by elfring 3
  • What CUDA version do you use? error: calling a __host__ function("cudaMemcpyAsync") ... is not allowed.

    Hi! First of all, thanks for the impressive work! I'm trying to compile the backend. The cmake step passed successfully, but I ran into an error when running the make step.

    [ 33%] Building CUDA object CMakeFiles/cuam.dir/src/cuam_kernel.cu.o
    In file included from /home/PJLAB/guojianfei/ai_ws/shape/AnalyticMesh/backend/src/cuam_kernel.cu:12:0:
    /home/PJLAB/guojianfei/ai_ws/shape/AnalyticMesh/backend/inc/polymesh.h:342:2: warning: #warning It may contain degenerated faces (such as `A-B-C-C-D-A`) [-Wcpp]
     #warning It may contain degenerated faces (such as `A-B-C-C-D-A`)
      ^
    In file included from /home/PJLAB/guojianfei/ai_ws/shape/AnalyticMesh/backend/src/cuam_kernel.cu:13:0:
    /home/PJLAB/guojianfei/ai_ws/shape/AnalyticMesh/backend/inc/kernel.h:595:2: warning: #warning `UseCublas` branch of function `solve_3x3_Batched` is not implemented. [-Wcpp]
     #warning `UseCublas` branch of function `solve_3x3_Batched` is not implemented.
      ^
    /home/PJLAB/guojianfei/ai_ws/shape/AnalyticMesh/backend/inc/kernel.h(1469): error: calling a __host__ function("cudaMemcpyAsync") from a __global__ function("inferNewStates_kernel<double> ") is not allowed
    
    /home/PJLAB/guojianfei/ai_ws/shape/AnalyticMesh/backend/inc/kernel.h(1469): error: identifier "cudaMemcpyAsync" is undefined in device code
    
    /home/PJLAB/guojianfei/ai_ws/shape/AnalyticMesh/backend/inc/kernel.h(1469): error: calling a __host__ function("cudaMemcpyAsync") from a __global__ function("inferNewStates_kernel<float> ") is not allowed
    
    /home/PJLAB/guojianfei/ai_ws/shape/AnalyticMesh/backend/inc/kernel.h(1469): error: identifier "cudaMemcpyAsync" is undefined in device code
    
    4 errors detected in the compilation of "/tmp/tmpxft_00004759_00000000-6_cuam_kernel.cpp1.ii".
    CMakeFiles/cuam.dir/build.make:89: recipe for target 'CMakeFiles/cuam.dir/src/cuam_kernel.cu.o' failed
    make[2]: *** [CMakeFiles/cuam.dir/src/cuam_kernel.cu.o] Error 1
    CMakeFiles/Makefile2:99: recipe for target 'CMakeFiles/cuam.dir/all' failed
    make[1]: *** [CMakeFiles/cuam.dir/all] Error 2
    Makefile:90: recipe for target 'all' failed
    make: *** [all] Error 2
    

    I've tried googling the error message, but surprisingly there are no relevant results. Do you have any clue?

    OS: ubuntu16 Graphics card: GTX1660s CUDA: 10.0 cudnn: 7.6.5

    opened by ventusff 2