A GKR-based zero-knowledge proof protocol for CNN model inference.

zkCNN

Introduction

This is the implementation of this paper, a GKR-based zero-knowledge proof protocol for CNN inference. It supports some common CNN models such as LeNet5, VGG11, and VGG16.

Requirements

  • a compiler supporting C++14
  • cmake >= 3.10
  • the GMP library

Input Format

The input has two parts: the picture data and the model weights, both given in matrix form.

Data Part

This part is the picture data: a vector reshaped from its original ch × h × w matrix, where ch is the number of channels, h is the height, and w is the width.
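As a small sketch of the reshaping (assuming the standard row-major, channel-first flattening; the exact ordering is defined by the repository's data files), the entry at channel i, row j, column k of the image lands at index i·h·w + j·w + k of the input vector:

```shell
# Assumed row-major, channel-first flattening of a ch x h x w image.
ch=3; h=32; w=32            # e.g. a CIFAR-style input
i=2; j=31; k=31             # last channel, last row, last column
echo $(( i*h*w + j*w + k )) # prints 3071, i.e. ch*h*w - 1
```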

Weight Part

This part is the set of parameters in the neural network, which contains:

  • the convolution kernels, of size ch_out × ch_in × m × m,

    where ch_out and ch_in are the numbers of output and input channels, and m is the side length of the kernel (only square kernels are supported);

  • the convolution biases, of size ch_out;

  • the fully-connected kernels, of size ch_out × ch_in;

  • the fully-connected biases, of size ch_out.
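As a quick sanity check of these sizes, the snippet below counts the parameters of a hypothetical first convolution layer of LeNet5 (ch_out = 6, ch_in = 1, m = 5); the layer shape here is illustrative and not read from the repository's data files:

```shell
# Hypothetical example layer: 6 output channels, 1 input channel, 5x5 kernel.
ch_out=6; ch_in=1; m=5
echo $(( ch_out * ch_in * m * m ))  # kernel entries: 150
echo $(( ch_out ))                  # bias entries:   6
```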

Experiment Script

Clone the repo

To run the code, make sure you clone with

git clone --recurse-submodules git@github.com:TAMUCrypto/zkCNN.git

since the polynomial commitment is included as a submodule. (If you have already cloned without this flag, run git submodule update --init --recursive inside the repository.)

Run a demo of LeNet5

The following script runs the LeNet5 model (run it from the script/ directory):

./demo_lenet.sh
  • The input data is in data/lenet5.mnist.relu.max/.
  • The experiment evaluation is written to output/single/demo-result-lenet5.txt.
  • The inference result is written to output/single/lenet5.mnist.relu.max-1-infer.csv.

Run a demo of vgg11

The following script runs the VGG11 model (run it from the script/ directory):

./demo_vgg.sh
  • The input data is in data/vgg11/.
  • The experiment evaluation is written to output/single/demo-result.txt.
  • The inference result is written to output/single/vgg11.cifar.relu-1-infer.csv.

Polynomial Commitment

Here we implement a Hyrax polynomial commitment scheme over the BLS12-381 elliptic curve. It is included as a submodule; anyone interested can refer to the repo hyrax-bls12-381.
