Deep Learning API and server in C++11, with support for Caffe, Caffe2, PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost and T-SNE



Open Source Deep Learning Server & API


DeepDetect is a machine learning API and server written in C++11. It makes state-of-the-art machine learning easy to work with and to integrate into existing applications. It supports both training and inference, with automatic conversion to embedded platforms with TensorRT (NVidia GPU) and NCNN (ARM CPU).

It implements support for supervised and unsupervised deep learning on images, text, time series and other data, with a focus on simplicity and ease of use, testing, and connection into existing applications. It supports classification, object detection, segmentation, regression, autoencoders, and more.

It relies on external machine learning libraries through a very generic and flexible API; the libraries currently supported are listed below.

Please join the community on Gitter, where we help users with installation, the API, neural nets and connecting DeepDetect to external applications.

Docker image CPU
Docker image GPU
Docker image GPU+TORCH
Docker image GPU+TENSORRT

Main features

  • high-level API for machine learning and deep learning
  • support for Caffe, TensorFlow, XGBoost, T-SNE, Caffe2, NCNN, TensorRT, PyTorch
  • classification, regression, autoencoders, object detection, segmentation, time-series
  • JSON communication format
  • remote Python and JavaScript clients
  • dedicated server with support for asynchronous training calls
  • high performance, taking advantage of multicore CPUs and GPUs
  • built-in similarity search via neural embeddings
  • connector to handle large collections of images with on-the-fly data augmentation (e.g. rotations, mirroring)
  • connector to handle CSV files with preprocessing capabilities
  • connector to handle text files, sentences, and character-based models
  • connector to handle SVM file format for sparse data
  • range of built-in model assessment measures (e.g. F1, multiclass log loss, ...)
  • range of special losses (e.g. Dice, contour, ...)
  • no database dependency or synchronization: all information and model parameters are organized and available from the filesystem
  • flexible template output format to simplify connection to external applications
  • templates for the most useful neural architectures (e.g. Googlenet, Alexnet, ResNet, convnet, character-based convnet, mlp, logistic regression, SSD, DeepLab, PSPNet, U-Net, CRNN, ShuffleNet, SqueezeNet, MobileNet, RefineDet, VOVNet, ...)
  • support for sparse features and computations on both GPU and CPU
  • built-in similarity indexing and search of predicted features, images, objects and probability distributions
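The JSON API can be driven from any HTTP client. As a minimal sketch (not one of the official clients), the payloads used in the curl examples throughout this page can be built and sent from Python; the service name, template, repository path and image URL below are placeholders modeled on the examples further down:

```python
import json
import urllib.request

DD_URL = "http://localhost:8080"  # default dede server address

# Payload mirroring the documented service-creation call
# (GoogleNet template and repository path as in the examples below).
service = {
    "mllib": "caffe",
    "description": "image classification service",
    "type": "supervised",
    "parameters": {
        "input": {"connector": "image"},
        "mllib": {"nclasses": 1000, "template": "googlenet"},
    },
    "model": {"templates": "../templates/caffe/", "repository": "/opt/models/ggnet/"},
}

# Payload mirroring the documented prediction call ("imageserv" service).
predict = {
    "service": "imageserv",
    "parameters": {"input": {"width": 224, "height": 224}, "output": {"best": 3}},
    "data": ["http://example.com/image.jpg"],  # placeholder image URL
}

def call(method, path, payload):
    """Send one JSON API call; only works against a running dede server."""
    req = urllib.request.Request(DD_URL + path,
                                 data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"},
                                 method=method)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# call("PUT", "/services/imageserv", service)
# call("POST", "/predict", predict)
```

The two commented-out calls correspond to the PUT /services and POST /predict curl commands shown later on this page.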

Machine Learning functionalities per library

|                  | Caffe | Caffe2 | XGBoost | TensorRT | NCNN | Libtorch | Tensorflow | T-SNE | Dlib |
| ---------------- | ----- | ------ | ------- | -------- | ---- | -------- | ---------- | ----- | ---- |
| Training (CPU)   | Y | Y | Y | N/A | N/A | Y | N | Y | N |
| Training (GPU)   | Y | Y | Y | N/A | N/A | Y | N | Y | N |
| Inference (CPU)  | Y | Y | Y | N | Y | Y | Y | N/A | Y |
| Inference (GPU)  | Y | Y | Y | Y | N | Y | Y | N/A | Y |
| Classification   | Y | Y | Y | Y | Y | Y | Y | N/A | Y |
| Object Detection | Y | Y | N | Y | Y | N | N | N/A | Y |
| Segmentation     | Y | N | N | N | N | N | N | N/A | N |
| Regression       | Y | N | Y | N | N | Y | N | N/A | N |
| Autoencoder      | Y | N | N/A | N | N | N | N | N/A | N |
| OCR / Seq2Seq    | Y | N | N | N | Y | N | N | N | N |
| Time-Series      | Y | N | N | N | Y | Y | N | N | N |
| Text words       | Y | N | Y | N | N | N | N | N | N |
| Text characters  | Y | N | N | N | N | N | N | Y | N |
| Images           | Y | Y | N | Y | Y | Y | Y | Y | Y |
| Time-Series      | Y | N | N | N | Y | N | N | N | N |
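For scripting against the server, the support matrix above can be transcribed into a small lookup table; a sketch in Python (only three rows transcribed here, values copied from the table above; `supports` is a hypothetical helper, not part of DeepDetect):

```python
# Partial transcription of the per-library support matrix:
# True = Y, False = N, None = N/A in the table above.
SUPPORT = {
    "Training (GPU)": {"Caffe": True, "Caffe2": True, "XGBoost": True,
                       "TensorRT": None, "NCNN": None, "Libtorch": True,
                       "Tensorflow": False, "T-SNE": True, "Dlib": False},
    "Inference (GPU)": {"Caffe": True, "Caffe2": True, "XGBoost": True,
                        "TensorRT": True, "NCNN": False, "Libtorch": True,
                        "Tensorflow": True, "T-SNE": None, "Dlib": True},
    "Object Detection": {"Caffe": True, "Caffe2": True, "XGBoost": False,
                         "TensorRT": True, "NCNN": True, "Libtorch": False,
                         "Tensorflow": False, "T-SNE": None, "Dlib": True},
}

def supports(task, lib):
    """Return True/False, or None when the cell is N/A or not transcribed."""
    return SUPPORT.get(task, {}).get(lib)
```

For example, `supports("Object Detection", "NCNN")` is `True`, while `supports("Inference (GPU)", "NCNN")` is `False`, matching the table.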

Tools and Clients


| Model | Caffe | Tensorflow | Source | Top-1 Accuracy (ImageNet) |
| ----- | ----- | ---------- | ------ | ------------------------- |
| AlexNet | Y | N | BVLC | 57.1% |
| SqueezeNet | Y | N | DeepScale | 59.5% |
| Inception v1 / GoogleNet | Y | Y | BVLC / Google | 67.9% |
| Inception v2 | N | Y | Google | 72.2% |
| Inception v3 | N | Y | Google | 76.9% |
| Inception v4 | N | Y | Google | 80.2% |
| ResNet 50 | Y | Y | MSR | 75.3% |
| ResNet 101 | Y | Y | MSR | 76.4% |
| ResNet 152 | Y | Y | MSR | 77% |
| Inception-ResNet-v2 | N | Y | Google | 79.79% |
| VGG-16 | Y | Y | Oxford | 70.5% |
| VGG-19 | Y | Y | Oxford | 71.3% |
| ResNext 50 | Y | N | | 76.9% |
| ResNext 101 | Y | N | | 77.9% |
| ResNext 152 | Y | N | | 78.7% |
| DenseNet-121 | Y | N | | 74.9% |
| DenseNet-161 | Y | N | | 77.6% |
| DenseNet-169 | Y | N | | 76.1% |
| DenseNet-201 | Y | N | | 77.3% |
| SE-BN-Inception | Y | N | | 76.38% |
| SE-ResNet-50 | Y | N | | 77.63% |
| SE-ResNet-101 | Y | N | | 78.25% |
| SE-ResNet-152 | Y | N | | 78.66% |
| SE-ResNext-50 | Y | N | | 79.03% |
| SE-ResNext-101 | Y | N | | 80.19% |
| SENet | Y | N | | 81.32% |
| VOC0712 (object detection) | Y | N | | 71.2 mAP |
| InceptionBN-21k | Y | N | | 41.9% |
| Inception v3 5K | N | Y | | |
| 5-point Face Landmarking Model (face detection) | N | N | | |
| Front/Rear vehicle detection (object detection) | N | N | | |

More models:



DeepDetect is designed, implemented and supported by Jolibrain with the help of other contributors.

  • Support for TensorFlow


    See the short discussion here.

    In essence, the ability to save computation graph models as protocol buffers, then read them back and provide data at runtime, is not too different from how Caffe works.

    The API has been designed to absorb more libraries, and this is a nice test case that should allow expanding to more complex models, such as seq2seq, without breaking API compatibility.

    Additional relevant doc:

    type:enhancement build:compilation kind:neural net kind:API kind:cuda mllib:tensorflow 
    opened by beniz 70
  • Support for Deep Residual Net (ResNet) reference models for ILSVRC


    This adds support for the recently released state-of-the-art Deep Residual Nets. They are implemented as DeepDetect neural net templates: resnet_50, resnet_101 and resnet_152 are now available from the API.

    Note: training successfully tested on resnet_18 and resnet_50

    For using the nets in predict mode:

    • model repository preparation:
    • download models from
    • mkdir path/to/model
    • cp ResNet-50-model.caffemodel path/to/model/
    • cp ResNet_mean.binaryproto path/to/model/mean.binaryproto
    • service creation:
    curl -X PUT "http://localhost:8080/services/imageserv" -d "{\"mllib\":\"caffe\",\"description\":\"image classification service\",\"type\":\"supervised\",\"parameters\":{\"input\":{\"connector\":\"image\"},\"mllib\":{\"template\":\"resnet_50\",\"nclasses\":1000}},\"model\":{\"templates\":\"../templates/caffe/\",\"repository\":\"/path/to/model\"}}"

    Note that template is set to resnet_50

    • image classification:
    curl -X POST "http://localhost:8080/predict" -d "{\"service\":\"imageserv\",\"parameters\":{\"input\":{\"width\":224,\"height\":224},\"output\":{\"best\":5}},\"data\":[\"\"]}"
    type:enhancement kind:model kind:neural net kind:API mllib:caffe 
    opened by beniz 43
  • Amazon Machine Instance (AMI) on EC2


    ~~Providing AMI is a good idea to ease deployment for some users.~~

    ~~Related links of interest:~~ ~~Deprecated Caffe AMI~~ ~~Newly contributed Caffe AMI (that includes Torch)~~ ~~Another Caffe AMI and Docker~~ ~~An AMI for deep learning that contains Caffe~~

    After a long wait, the official AMIs with support for Caffe, XGBoost and Tensorflow backends are available for both GPU and CPU:

    See for thorough documentation.

    type:help wanted build:compilation kind:packaging 
    opened by beniz 36
  • Docker images optimization / CI/CD refactoring


    Docker images optimization:

    | Image | Uncompressed size | Image efficiency score | Layers | Optimized |
    | ------ | ------ | ------ | ------ | ------ |
    | jolibrain/deepdetect_cpu | 7.4 GB (new 919 MB) | 57% (new 95%) | 27 (new 18) | :+1: |
    | jolibrain/deepdetect_gpu | 12 GB (new 2.15 GB) | 65% (new 95%) | 33 (new 24) | :+1: |

    CPU and GPU images contain build dependencies, which increase the Docker image size. To prevent this, we use the multi-stage build feature to remove all build dependencies.

    CI/CD refactoring:

    The Dockerfile duplication has been removed, and a build argument has been added to select the cmake profile.

    The build documentation has been updated in the docker folder.

    build:compilation kind:docker 
    opened by quadeare 30
  • Install deepdetect on CentOS


    Hi, I tried to install the dependencies on CentOS with a similar script, however it did not work:

    yum install build-essential libgoogle-glog-dev libgflags-dev libeigen3-dev libopencv-dev libcppnetlib-dev libboost-dev libcurlpp-dev libcurl4-openssl-dev protobuf-compiler libopenblas-dev libhdf5-dev libprotobuf-dev libleveldb-dev libsnappy-dev liblmdb-dev libutfcpp-dev

    Loaded plugins: security
    Setting up Install Process
    No package build-essential available.
    No package libgoogle-glog-dev available.
    No package libgflags-dev available.
    No package libeigen3-dev available.
    No package libopencv-dev available.
    No package libcppnetlib-dev available.
    No package libboost-dev available.
    No package libcurlpp-dev available.
    No package libcurl4-openssl-dev available.
    No package libopenblas-dev available.
    No package libhdf5-dev available.
    No package libprotobuf-dev available.

    Does it mean that I have to compile all of those packages from source one by one?

    build:compilation kind:packaging 
    opened by anguoyang 30
  • Error creating network


    I'm trying to use 'Supervised Semantics-preserving Deep Hashing' (48-bit SSDH) model as provided here.

    The following files are used for network initialization (provided by the above repo): men_ssdh.prototxt:

    name: "CaffeNet"
    input: "data"
    input_shape { dim: 10 dim: 3 dim: 227 dim: 227 }
    layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" convolution_param { num_output: 96 kernel_size: 11 stride: 4 } }
    layer { name: "relu1" type: "ReLU" bottom: "conv1" top: "conv1" }
    layer { name: "pool1" type: "Pooling" bottom: "conv1" top: "pool1" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
    layer { name: "norm1" type: "LRN" bottom: "pool1" top: "norm1" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } }
    layer { name: "conv2" type: "Convolution" bottom: "norm1" top: "conv2" convolution_param { num_output: 256 pad: 2 kernel_size: 5 group: 2 } }
    layer { name: "relu2" type: "ReLU" bottom: "conv2" top: "conv2" }
    layer { name: "pool2" type: "Pooling" bottom: "conv2" top: "pool2" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
    layer { name: "norm2" type: "LRN" bottom: "pool2" top: "norm2" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } }
    layer { name: "conv3" type: "Convolution" bottom: "norm2" top: "conv3" convolution_param { num_output: 384 pad: 1 kernel_size: 3 } }
    layer { name: "relu3" type: "ReLU" bottom: "conv3" top: "conv3" }
    layer { name: "conv4" type: "Convolution" bottom: "conv3" top: "conv4" convolution_param { num_output: 384 pad: 1 kernel_size: 3 group: 2 } }
    layer { name: "relu4" type: "ReLU" bottom: "conv4" top: "conv4" }
    layer { name: "conv5" type: "Convolution" bottom: "conv4" top: "conv5" convolution_param { num_output: 256 pad: 1 kernel_size: 3 group: 2 } }
    layer { name: "relu5" type: "ReLU" bottom: "conv5" top: "conv5" }
    layer { name: "pool5" type: "Pooling" bottom: "conv5" top: "pool5" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
    layer { name: "fc6" type: "InnerProduct" bottom: "pool5" top: "fc6" inner_product_param { num_output: 4096 } }
    layer { name: "relu6" type: "ReLU" bottom: "fc6" top: "fc6" }
    layer { name: "drop6" type: "Dropout" bottom: "fc6" top: "fc6" dropout_param { dropout_ratio: 0.5 } }
    layer { name: "fc7" type: "InnerProduct" bottom: "fc6" top: "fc7" inner_product_param { num_output: 4096 } }
    layer { name: "relu7" type: "ReLU" bottom: "fc7" top: "fc7" }
    layer { name: "latent_layer" type: "InnerProduct" bottom: "fc7" top: "latent_layer" inner_product_param { num_output: 48 } }
    layer { name: "encode_neuron" type: "Sigmoid" bottom: "latent_layer" top: "encode_neuron" }


    net: "men_ssdh.prototxt"
    test_iter: 100
    test_interval: 100
    base_lr: 0.001
    lr_policy: "step"
    gamma: 0.1
    stepsize: 25000
    display: 20
    max_iter: 50000
    momentum: 0.9
    weight_decay: 0.0005
    snapshot: 10000
    snapshot_prefix: "men_ssdh"

    I'm trying to create the service with this call:

    curl -X PUT "http://localhost:8090/services/men_ssdh_orig" -d "{\"mllib\":\"caffe\",\"description\":\"image classification men finetuned\",\"type\":\"supervised\",\"parameters\":{\"input\":{\"connector\":\"image\",\"width\":227,\"height\":227},\"mllib\":{\"nclasses\":10}},\"model\":{\"repository\":\"/home/ubuntu/code/deepdetect/models/men_ssdh\"}}" 

    This fails with

    INFO - 11:50:20 - Initializing net from parameters: 
    E0720 11:50:20.006518 12587] Error creating network
    ERROR - 11:50:20 - service creation call failed

    Any clue about what is wrong with the above initialization?

    opened by neo01124 28
  • Service creation Error


    Hi @beniz, I tried to create the age classification service with:

    curl -X PUT "http://localhost:8080/services/ageserv" -d "{\"mllib\":\"caffe\",\"description\":\"age classification service\",\"type\":\"supervised\",\"parameters\":{\"input\":{\"connector\":\"image\"},\"mllib\":{\"nclasses\":2,\"template\":\"age_model\"}},\"model\":{\"templates\":\"../templates/caffe/\",\"repository\":\"/opt/models/age_model/\"}}"

    and got these errors: {"status":{"code":400,"msg":"BadRequest","dd_code":1006,"dd_msg":"Service Bad Request Error"}}


    opened by anguoyang 27
  • CentOS 7 cannot run image prediction example


    Hi Beniz,

    Thanks for doing this great work of making a docker file.

    I am new to this deepdetect package. I am facing a problem running this docker image under CentOS 7, specifically kernel 3.10.0-327.el7.x86_64.

    I successfully ran the docker image "deepdetect_cpu" in Ubuntu 14.04, and had no problem with the following example lines.

    docker run -d -p 8080:8080 beniz/deepdetect_cpu

    curl http://localhost:8080/info

    curl -X PUT "http://localhost:8080/services/imageserv" -d "{\"mllib\":\"caffe\",\"description\":\"image classification service\",\"type\":\"supervised\",\"parameters\":{\"input\":{\"connector\":\"image\"},\"mllib\":{\"nclasses\":1000,\"template\":\"googlenet\"}},\"model\":{\"templates\":\"../templates/caffe/\",\"repository\":\"/opt/models/ggnet/\"}}"

    curl -X POST "http://localhost:8080/predict" -d "{\"service\":\"imageserv\",\"parameters\":{\"input\":{\"width\":224,\"height\":224},\"output\":{\"best\":3},\"mllib\":{\"gpu\":false}},\"data\":[\"\"]}"

    But in CentOS 7, the service is created successfully. It returns the following as in the example.


    But if I go on to run the prediction call, I get

    {"status":{"code":400,"msg":"BadRequest","dd_code":1006,"dd_msg":"Service Bad Request Error"}}

    Could you take a look at this? It seems I am missing something obvious, but I cannot tell where.
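    Errors like the one above come back in DeepDetect's structured JSON envelope; when debugging such calls, a small helper that surfaces the dd_code and dd_msg fields can help (a sketch, assuming the envelope format shown in this thread; `explain_error` is a hypothetical helper, not part of DeepDetect):

```python
import json

def explain_error(body):
    """Pull code/msg/dd_code/dd_msg out of a DeepDetect error response body."""
    status = json.loads(body).get("status", {})
    return (f"HTTP {status.get('code')} ({status.get('msg')}), "
            f"dd_code {status.get('dd_code')}: {status.get('dd_msg')}")

# The exact error body reported above:
example = '{"status":{"code":400,"msg":"BadRequest","dd_code":1006,"dd_msg":"Service Bad Request Error"}}'
```

    With the response above, `explain_error(example)` yields "HTTP 400 (BadRequest), dd_code 1006: Service Bad Request Error".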


    type:question kind:docker 
    opened by chaos2008 26
  • GPU Docker image fails on prediction call (GTX1080)


    The problem:

    On my Ubuntu 16.04 system, which uses a GTX 1080 graphics card, I am facing issues performing prediction calls with the DeepDetect GPU image. This happens even when I use the built-in imageserv model. The error Check failed (custom): (error) == (cudaSuccess) usually points to a memory error, but I have more than 6 GB of free video memory according to nvidia-smi, which should be more than enough for the imageserv model. Therefore, I am wondering whether this might be a DeepDetect bug. When I set gpu to false, the call succeeds, albeit after quite some processing time. I am running the image in NVIDIA-Docker.

    Thanks for all of your hard work; DeepDetect is an amazing piece of software. :)


    • DeepDetect GPU Docker image: beniz/deepdetect_gpu
    • DeepDetect Commit: 11adce0366d367fd16c5ac5233c8a6118a24f8ad
    • GPU: GeForce GTX 1080
    • NVIDIA driver version: 375.39
    • Linux kernel version: 4.8.0-46-generic
    • Docker version: 17.03.1-ce, build c6d412e

    Steps taken:

    • Spin up DeepDetect Docker container:

    sudo nvidia-docker run -d -p 8080:8080 beniz/deepdetect_gpu

    • API PUT call:

    curl -X PUT "http://localhost:8080/services/imageserv" -d "{\"mllib\":\"caffe\",\"description\":\"image classification service\",\"type\":\"supervised\",\"parameters\":{\"input\":{\"connector\":\"image\"},\"mllib\":{\"nclasses\":1000}},\"model\":{\"repository\":\"/opt/models/ggnet/\"}}"


    • API PRED call:

    curl -X POST "http://localhost:8080/predict" -d "{\"service\":\"imageserv\",\"parameters\":{\"input\":{\"width\":224,\"height\":224},\"output\":{\"best\":3},\"mllib\":{\"gpu\":true}},\"data\":[\"\"]}"

    {"status":{"code":500,"msg":"InternalError","dd_code":1007,"dd_msg":"src/caffe/util/ / Check failed (custom): (error) == (cudaSuccess)"}}

    • Server log output:

    INFO - 11:42:41 - Device id: 0
    INFO - 11:42:41 - Major revision number: 6
    INFO - 11:42:41 - Minor revision number: 1
    INFO - 11:42:41 - Name: GeForce GTX 1080
    INFO - 11:42:41 - Total global memory: 8491368448
    INFO - 11:42:41 - Total shared memory per block: 49152
    INFO - 11:42:41 - Total registers per block: 65536
    INFO - 11:42:41 - Warp size: 32
    INFO - 11:42:41 - Maximum memory pitch: 2147483647
    INFO - 11:42:41 - Maximum threads per block: 1024
    INFO - 11:42:41 - Maximum dimension of block: 1024, 1024, 64
    INFO - 11:42:41 - Maximum dimension of grid: 2147483647, 65535, 65535
    INFO - 11:42:41 - Clock rate: 1771000
    INFO - 11:42:41 - Total constant memory: 65536
    INFO - 11:42:41 - Texture alignment: 512
    INFO - 11:42:41 - Concurrent copy and execution: Yes
    INFO - 11:42:41 - Number of multiprocessors: 20
    INFO - 11:42:41 - Kernel execution timeout: Yes
    [11:42:42] /opt/deepdetect/src/ Error while proceeding with prediction forward pass, not enough memory?

    ERROR - 11:42:42 - service imageserv prediction call failed

    ERROR - 11:42:42 - Tue Apr 18 11:42:42 2017 UTC - "POST /predict" 500 437

    type:bug build:compilation kind:GPU kind:packaging 
    opened by BasVanBoven 22
  • Error while running GET service info #2


    Hello, I'm using the following command to create the service:

    [email protected]:~/dev/deepdetect/build# curl -X PUT "http://localhost:8080/services/p" -d "{\"mllib\":\"caffe\",\"description\":\"p classification service\",\"type\":\"supervised\",\"parameters\":{\"input\":{\"connector\":\"image\",\"width\":224,\"height\":224},\"mllib\":{\"template\":\"mlp\",\"nclasses\":5,\"layers\":[512,512,512],\"activation\":\"prelu\"}},\"model\":{\"templates\":\"../templates/caffe/\",\"repository\":\"../../../test_images\"}}"

    I'm using the following command to train the service:

    curl -X POST "http://localhost:8080/train" -d "{\"service\":\"p\",\"async\":true,\"parameters\":{\"mllib\":{\"gpu\":false,\"net\":{\"batch_size\":32},\"solver\":{\"test_interval\":500,\"iterations\":30000,\"base_lr\":0.001,\"stepsize\":1000,\"gamma\":0.9}},\"input\":{\"connector\":\"image\",\"test_split\":0.1,\"shuffle\":true,\"width\":224,\"height\":224},\"output\":{\"measure\":[\"acc\",\"mcll\",\"f1\"]}},\"data\":[\"../../../test_images\"]}"

    In the dede logs, I can see that it started working on the images directory (attached is the dede log from the console output): dede-errorlog.txt

    And then, if I run the GET training status call, it crashes the training job with the error:

    curl -X GET "http://localhost:8080/train?service=p&job=1"
    INFO - 20:11:47 - Solver scaffolding done.
    INFO - 20:11:47 - Ignoring source layer inputl
    INFO - 20:11:47 - Ignoring source layer loss
    INFO - 20:11:47 - Opened lmdb ../../../test_images/test.lmdb
    > ERROR - 20:12:32 - service p training status call failed
    > ERROR - 20:12:32 - {"code":500,"msg":"InternalError","dd_code":1007,"dd_msg":"src/caffe/data_transformer.cpp:177 / Check failed (custom): (datum_height) == (height)"}

    If I don't run the training status call, the dede console output is stuck at the following lines:

    INFO - 20:12:59 - act0 does not need backward computation.
    INFO - 20:12:59 - ip0 does not need backward computation.
    INFO - 20:12:59 - inputlt does not need backward computation.
    INFO - 20:12:59 - This network produces output label
    INFO - 20:12:59 - This network produces output losst
    INFO - 20:12:59 - Network initialization done.
    INFO - 20:12:59 - Solver scaffolding done.
    INFO - 20:12:59 - Ignoring source layer inputl
    INFO - 20:12:59 - Ignoring source layer loss

    Does it mean that it is still running and that, in the meantime, I should not run any status call? Please assist, thanks.
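    Asynchronous training jobs are normally monitored by repeatedly calling GET /train as above; a polling sketch in Python (`poll_training` and `training_status` are hypothetical helpers, assuming the response carries the job status under head.status as in the DeepDetect API, and a server on localhost:8080):

```python
import json
import time
import urllib.request

def training_status(response):
    """Extract the job status (e.g. 'running') from a GET /train response dict."""
    return response.get("head", {}).get("status")

def poll_training(service, job=1, host="http://localhost:8080", interval=10):
    """Poll GET /train until the job leaves the 'running' state.

    Needs a live dede server; returns the final response body.
    """
    while True:
        url = f"{host}/train?service={service}&job={job}"
        with urllib.request.urlopen(url) as resp:
            body = json.loads(resp.read())
        if training_status(body) != "running":
            return body
        time.sleep(interval)

# poll_training("p")  # would mirror: curl -X GET ".../train?service=p&job=1"
```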

    type:bug datatype:images mllib:caffe 
    opened by pranky89 22
  • Torch v1.12 requires libcupti* but nvidia/cuda:11.6.0-cudnn8-runtime-ubuntu20.04 doesn't include it


    Creating this to track the problem described here:

    In short, Torch v1.12 requires libcupti* libs but the nvidia/cuda:11.6.0-cudnn8-runtime-ubuntu20.04 docker image doesn't include them (although the nvidia/cuda:11.6.0-cudnn8-devel-ubuntu20.04 docker image does).

    Potentially related issues:

    opened by cchadowitz-pf 0
  • DeepDetect full rewrite in Pure Java


    Hi, as you all know, Java is underrated, and it is now, in 2022, a reality that C++ is doomed to fail. Therefore I strongly suggest giving up on the current project and investing in a full rewrite of DeepDetect in the absolute purest Java form.

    For this purpose, I propose using the GitHub Copilot AI and fixing the code by hand afterward.

    May the beauty of Java bring real-time to your hearts and souls, my brothers and sisters! Thus having freed your minds from C++, your asses will naturally follow, or the opposite.


    Glory to Java!

    opened by roubignolles31 0
  • getting error while training, .solverstate




    • Version of DeepDetect:
      • [ ] Locally compiled on:
        • [ ] Ubuntu 18.04 LTS
        • [ ] Other:
      • [x] Docker CPU
      • [ ] Docker GPU
      • [ ] Amazon AMI
    • Commit (shown by the server when starting):

    Your question / the problem you're facing:

    When I try to train a simple classification model (the dogs_cats example), I get the following error:

    resuming a model requires a .solverstate file in model repository

    Error message (if any) / steps to reproduce the problem:

    • [x] list of API calls: as shown in the sample.

    • [x] Server log output:

    4c01f52c6171_cpu_deepdetect_1 | [2021-09-26 13:15:41.201] [dogs_cats] [info] selected solver: SGD
    4c01f52c6171_cpu_deepdetect_1 | [2021-09-26 13:15:41.201] [dogs_cats] [info] solver flavor : rectified
    4c01f52c6171_cpu_deepdetect_1 | [2021-09-26 13:15:41.201] [dogs_cats] [info] detected network type is classification
    4c01f52c6171_cpu_deepdetect_1 | [2021-09-26 13:15:41.203] [dogs_cats] [error] resuming a model requires a .solverstate file in model repository
    4c01f52c6171_cpu_deepdetect_1 | [2021-09-26 13:15:41.219] [dogs_cats] [error] training status call failed: Dynamic exception type: dd::MLLibBadParamException
    4c01f52c6171_cpu_deepdetect_1 | std::exception::what: resuming a model requires a .solverstate file in model repository
    4c01f52c6171_cpu_deepdetect_1 |
    4c01f52c6171_cpu_deepdetect_1 | [2021-09-26 13:15:41.220] [api] [error] {"code":400,"msg":"BadRequest","dd_code":1006,"dd_msg":"Service Bad Request Error: resuming a model requires a .solverstate file in model repository"}
    opened by mostafa8026 23
  • Different prediction with tensorrt on refinedet model for the version v0.18.0



    • Version of DeepDetect:
      • [ ] Locally compiled on:
        • [ ] Ubuntu 18.04 LTS
        • [ ] Other:
      • [ ] Docker CPU
      • [X] Docker GPU
      • [ ] Amazon AMI
    • Commit (shown by the server when starting): 23bd913ac180b56eddbf90c71d1f2e8bc2310c54

    Your question / the problem you're facing:

    I am observing weird predictions (with tensorrt and a refinedet model) with the latest version of DeepDetect. The predictions seem really off.

    I have created a script to replicate. It launches predictions on dd versions from v0.15.0 to v0.18.0, with and without tensorrt. It then dumps the predictions, and a hash is computed on each prediction file (keeping only the predictions list). We observe that the v0.18.0 trt output is not consistent with its caffe counterpart or with the previous trt versions.

    Please fill in the following env variables in the script, and make sure that you have a GPU available for testing: BASE_PATH=TODO LOGGING_FOLDER=TODO

    and then simply launch the script


    You should get the following output at the end (not all of the docker logs are shown here):

    Here we compute the sha256sum of the predictions obtained.
    For the caffe models nothing changes however we observe differences for the trt model of the last version of dd v0.18.0.
    Compare deepdetect_gpu
    PATH_LOGS/prediction_deepdetect_gpu_v0.15.0.json: 9e056b235be08f7245bdd324ac8ca756c41353771fcb3004df2f6b6347326d63  -
    PATH_LOGS/prediction_deepdetect_gpu_v0.16.0.json: 9e056b235be08f7245bdd324ac8ca756c41353771fcb3004df2f6b6347326d63  -
    PATH_LOGS/prediction_deepdetect_gpu_v0.17.0.json: 9e056b235be08f7245bdd324ac8ca756c41353771fcb3004df2f6b6347326d63  -
    PATH_LOGS/prediction_deepdetect_gpu_v0.18.0.json: 9e056b235be08f7245bdd324ac8ca756c41353771fcb3004df2f6b6347326d63  -
    Compare deepdetect_gpu_tensorrt
    PATH_LOGS/prediction_deepdetect_gpu_tensorrt_v0.15.0.json: 51767470062ecba3d77e765c34bed6000cf175400d5ff59dda9b4727356f49b5  -
    PATH_LOGS/prediction_deepdetect_gpu_tensorrt_v0.16.0.json: 51767470062ecba3d77e765c34bed6000cf175400d5ff59dda9b4727356f49b5  -
    PATH_LOGS/prediction_deepdetect_gpu_tensorrt_v0.17.0.json: 51767470062ecba3d77e765c34bed6000cf175400d5ff59dda9b4727356f49b5  -
    PATH_LOGS/prediction_deepdetect_gpu_tensorrt_v0.18.0.json: 1508b68447819ff281231ad5c757e88f4a651f50570115565438ac9fee88d566  -
    Expected predictions
        "classes": [
            "last": true,
            "bbox": {
              "ymax": 350.2694091796875,
              "xmax": 745.9049682617188,
              "ymin": 108.38544464111328,
              "xmin": 528.0482788085938
            "prob": 0.9999849796295166,
            "cat": "1"
        "uri": ""
    Abnormal predictions for trt v0.18.0
        "classes": [
            "last": true,
            "bbox": {
              "ymax": 239.68505859375,
              "xmax": 425.599365234375,
              "ymin": 0,
              "xmin": 211.946044921875
            "prob": 1,
            "cat": "1"
        "uri": ""
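    The comparison described above, hashing only the predictions of each dumped response, can be sketched as follows (`prediction_hash` is a hypothetical helper, assuming predictions live under body.predictions as in the outputs shown):

```python
import hashlib
import json

def prediction_hash(body):
    """sha256 over the per-image 'classes' lists only.

    Ignores everything else in the response, so two dumps with identical
    predictions but different status fields hash identically.
    """
    response = json.loads(body)
    preds = [p.get("classes")
             for p in response.get("body", {}).get("predictions", [])]
    canonical = json.dumps(preds, sort_keys=True)  # deterministic serialization
    return hashlib.sha256(canonical.encode()).hexdigest()
```

    Comparing `prediction_hash` across the v0.15.0 to v0.18.0 dumps would then surface the same kind of mismatch as the sha256sum listing above.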
    opened by YaYaB 3
  • v0.23.1(Oct 14, 2022)


    Features

    • chain: crop with minimum dims, force square (a41ca51)

    Bug Fixes

    • torch: class_weights with multigpu (9c1ed4c)
    • torch: metrics naming for multiple test sets (17b8cbb)

    Docker images:

    • CPU version: docker pull jolibrain/deepdetect_cpu:v0.23.1
    • GPU (CUDA only): docker pull jolibrain/deepdetect_gpu:v0.23.1
    • GPU (CUDA and TensorRT): docker pull jolibrain/deepdetect_cpu_tensorrt:v0.23.1
    • GPU with torch backend: docker pull jolibrain/deepdetect_gpu_torch:v0.23.1
    • All images available on
    Source code(tar.gz)
    Source code(zip)
  • v0.23.0(Sep 29, 2022)


    Features

    • add crnn resnet native template (ec1f8ad)
    • add deepdetect version to config variables for external projects (be79e54)
    • dlib: update dlib backend (12d181f)
    • torch: add multilabel classification (90d536e)
    • torch: allow multigpu for traced models (6b3b9c0)
    • torch: best model is computed over all the test sets (fbedf80)
    • torch: update torch to 1.12 (7172314)
    • yolox: export directly from trained dd repo to onnx (a612539)

    Bug Fixes

    • adamw default weight decay with torch backend (eb0cf83)
    • add missing headers in predict_out.hpp (b23298f)
    • docker: add libcupti to gpu_torch docker (1a5cd09)
    • enable caffe chain with DTO & custom actions (d3e722e)
    • exported yolox have the correct number of classes (4dac269)
    • missing ifdef (e8a70cf)
    • missing path to cub headers in tensorrt-oss build for jetson nano (00df9fd)
    • oatpp: oatpp-zlib memory leak (fccd9a6)
    • prevent a buggy optimization in traced fasterrcnn (dab88ca)
    • reload best metric correctly after resume (c15c502)
    • torch: OCR predict with native model (24aa37c)

    Docker images:

    • CPU version: docker pull jolibrain/deepdetect_cpu:v0.23.0
    • GPU (CUDA only): docker pull jolibrain/deepdetect_gpu:v0.23.0
    • GPU (CUDA and TensorRT): docker pull jolibrain/deepdetect_cpu_tensorrt:v0.23.0
    • GPU with torch backend: docker pull jolibrain/deepdetect_gpu_torch:v0.23.0
    • All images available on
    Source code(tar.gz)
    Source code(zip)
  • v0.22.1(May 28, 2022)

    DeepDetect: Open Source Deep Learning Server & API (Changelog)

    0.22.1 (2022-05-28)

    Bug Fixes

    • caffe build can use custom opencv (fde90cd)
    • wrong cuda runtime in docker images (8ca5acf)

    Docker images:

    • CPU version: docker pull jolibrain/deepdetect_cpu:v0.22.1
    • GPU (CUDA only): docker pull jolibrain/deepdetect_gpu:v0.22.1
    • GPU (CUDA and TensorRT): docker pull jolibrain/deepdetect_cpu_tensorrt:v0.22.1
    • GPU with torch backend: docker pull jolibrain/deepdetect_gpu_torch:v0.22.1
    • All images available on
    Source code(tar.gz)
    Source code(zip)
  • v0.22.0(May 23, 2022)


    Features

    • cpp: torch predict to DTO (b88f22a)
    • sliding object detection script (0e3df67)
    • tensorrt object detector top_k control (655aa48)
    • torch: bump to torch 1.11 and torchvision 0.12 (5d312d0)
    • torch: ocr model training and inference (3fc2e27)
    • trt: update tensorrt to 22.03 (c03aa9d)

    Bug Fixes

    • cropped model input size when publishing torch models + tests (2dabd89)
    • cutout and crops in data augmentation of torch models (1ef2796)
    • docker: fix libraries not found in trt docker (86f3924)
    • remove semantic commit check (5d0f0c7)
    • seeded random crops at test time (92feae3)
    • torch best model better or equal (4d50c8e)
    • torch model publish crash and repository (6a89b83)
    • torch: Fix update metrics and solver options when resuming (9b0019f)

    Docker images:

    • CPU version: docker pull jolibrain/deepdetect_cpu:v0.22.0
    • GPU (CUDA only): docker pull jolibrain/deepdetect_gpu:v0.22.0
    • GPU (CUDA and TensorRT): docker pull jolibrain/deepdetect_cpu_tensorrt:v0.22.0
    • GPU with torch backend: docker pull jolibrain/deepdetect_gpu_torch:v0.22.0
    • All images available on
    Source code(tar.gz)
    Source code(zip)
  • v0.21.0(Feb 22, 2022)


    Features

    • add predict from video (02872eb)
    • add video input connector and streaming endpoints (07644b4)
    • allow pure negative samples for training object detectors with torch (cd23bad)
    • bench: add monitoring of transform time (3f77d42)
    • chain: add action to draw bboxes as trailing action (ae0a05f)
    • chain: allow user to add their own custom actions (a470c7b)
    • ml: added support for segformer with torch backend (ab03d1d)
    • ml: random cropping for training segmentation models with torch (ac7ce0f)
    • random crops for object detector training with torch backend (385122d)
    • segmentation of large images with sliding window, example Python script (8528e9a)

    Bug Fixes

    • bbox clamping in torch inference (2d6efd3)
    • caffe object detector training requires test set (2e4db7e)
    • dataset output dimension after crop augmentation (636d455)
    • detection/torch: correctly normalize MAP wrt torchlib outputs (b12d188)
    • model.json file saving (809f00a)
    • segmentation with torch backend + full cropping support (e14c3f2)
    • torch MaP with bboxes (9bc840f)
    • torch model published config file (b0d4e04)

    Docker images:

    • CPU version: docker pull jolibrain/deepdetect_cpu:v0.21.0
    • GPU (CUDA only): docker pull jolibrain/deepdetect_gpu:v0.21.0
    • GPU (CUDA and TensorRT): docker pull jolibrain/deepdetect_cpu_tensorrt:v0.21.0
    • GPU with torch backend: docker pull jolibrain/deepdetect_gpu_torch:v0.21.0
    • All images available on Docker Hub.
  • v0.20.0 (Dec 17, 2021)

    Features

    • feat: add elapsed time to training metrics (fe5fc41)
    • feat: add onnx export for torchvision models (07f69b1)
    • feat: add yolox export script for training and inference (0b2f20b)
    • feat: add yolox onnx export and trt support (80b7e6a)
    • api: chain uses dto end to end (5efbf28)
    • ml: data augmentation for training segmentation models with torch backend (b55c218)
    • ml: DETR export and inference with torch backend (1e4ea4e)
    • feat: full cuda pipeline for tensorrt (93815d7)
    • ml: noise image data augmentation for training with torch backend (2d9757d)
    • ml: training segmentation models with torch backend (1e3ff16)
    • ml: activate cutout for object detector training with torch backend (8a34aa1)
    • ml: distortion noise for image training with torch backend (35a16df)
    • ml: dice loss (542bcb4)
    • ml: manage models with multiple losses (bea7cb4)

    Bug Fixes

    • cpu: cudnn is now on by default, auto switch it to off in case of cpu_only (3770baf)
    • tensorrt: read onnx model to find topk (5cce134)
    • simsearch ivf index craft after reload, disabling mmap (8a2e665)
    • tensorrt: yolox postprocessing in C++ (1d781d2)
    • torch: add include sometimes needed (74487dc)
    • add mltype in metrics.json even if training is not over (9bda7f7)
    • clang formatting of mlmodel (130626b)
    • torch: avoid crashes caused by an exception in the training loop (667b264)
    • torch: bad bbox rescaling on multiple uris (05451ed)
    • torch: correct output name for onnx classification model (a03eb87)
    • torch: prevent crash during training if an exception is thrown (4ce7802)

    Docker images:

    • CPU version: docker pull jolibrain/deepdetect_cpu:v0.20.0
    • GPU (CUDA only): docker pull jolibrain/deepdetect_gpu:v0.20.0
    • GPU (CUDA and TensorRT): docker pull jolibrain/deepdetect_cpu_tensorrt:v0.20.0
    • GPU with torch backend: docker pull jolibrain/deepdetect_gpu_torch:v0.20.0
    • All images available on Docker Hub.
  • v0.19.0 (Sep 6, 2021)

    Features

    • add DTO schemas to swagger automatic doc (9180ff4)
    • add z-normalisation option (82d7cc5)
    • dto: add custom dto vector type (01222db)
    • torch: add ADAMP variant of adam in RANGER (2006.08217) (e26ed77)
    • trt: add return cv::Mat instead of vector for GAN output (4990e7b)
    • torch segmentation model prediction (d72a138)

    Bug Fixes

    • always depend on oatpp (f262114)
    • test: tar archive was decompressed at each cmake call (910a0ee)
    • torch: predictions handled correctly when data count > 1 (5a95c29)
    • trt: detect architecture and rebuild model if necessary (5c9ff89)
    • TRT: fix build wrt new external build script (7121dfe)
    • TRT: make refinedet great again, also upgrades to TRT8.0.0/TRT-OSS21.08 (bdff2ae)
    • CI on Jetson nano with lighter classification model (1673a99)
    • don't rebuild torchvision every time (4f17897)
    • remove linking errors on oatpp access_log (ed276b3)

    Docker images:

    • CPU version: docker pull jolibrain/deepdetect_cpu:v0.19.0
    • GPU (CUDA only): docker pull jolibrain/deepdetect_gpu:v0.19.0
    • GPU (CUDA and TensorRT): docker pull jolibrain/deepdetect_cpu_tensorrt:v0.19.0
    • GPU with torch backend: docker pull jolibrain/deepdetect_gpu_torch:v0.19.0
    • All images available on Docker Hub.
  • v0.18.0 (Jun 11, 2021)

    Features

    • build: CMake config file to link with dede (dd71a35)
    • ml: add multigpu support for external native models (90dcadd)
    • ml: inference for GAN generators with TensorRT backend (c93188c)
    • ml: python script to trace timm vision models (055fdfe)
    • predict: add best_bbox for torch, trt, caffe, ncnn backend (7890401)
    • torch: add dataloader_threads in API (74a036d)
    • torch: add multigpu for torch models (447dd53)
    • torch: support detection models in chains (7bb9705)
    • TRT: port to TensorRT 21.04/7.2.3 (4377451)

    Bug Fixes

    • moving back to FAISS master (916338b)
    • build: add required definitions and include directory for building external dd api (a059428)
    • build: do not patch/rebuild tensorrt if not needed (bfd29ec)
    • build: torch 1.8 with cuda 11.3 string_view patch (5002308)
    • chain: fixed_size crops now work at the edges of images (8e38e35)
    • dto: allow scale input param to be either bool for csv/csvts or float for img (168fc7c)
    • log: typo in ncnn model log (0163b02)
    • ncnn: fix ncnnapi deserialization error (089aacd)
    • ncnn: fix typo in ut (893217b)

    Docker images:

    • CPU version: docker pull jolibrain/deepdetect_cpu:v0.18.0
    • GPU (CUDA only): docker pull jolibrain/deepdetect_gpu:v0.18.0
    • GPU (CUDA and TensorRT): docker pull jolibrain/deepdetect_cpu_tensorrt:v0.18.0
    • GPU with torch backend: docker pull jolibrain/deepdetect_gpu_torch:v0.18.0
    • All images available on Docker Hub.
  • v0.17.0 (May 10, 2021)

    Features

    • ml: data augmentation for object detection with torch backend (95942b9)
    • ml: Visformer architecture with torch backend (40ec03f)
    • torch: add batch size > 1 for detection models (91bde66)
    • torch: image data augmentation with random geometric perspectives (d163fd8)
    • api: introduce predict output parameter (c9ee71a)
    • api: use DTO for NCNN init parameters (2ee11f0)

    Bug Fixes

    • build: docker builds with tcmalloc (6b8411a)
    • doc: api traced models list (342b909)
    • graph: loading weights from previous model does not fail (5e7c8f6)
    • torch: fix faster rcnn model export for training (cbbbd99)
    • torch: retinanet now trains correctly (351d6c6)

    Docker images:

    • CPU version: docker pull jolibrain/deepdetect_cpu:v0.17.0
    • GPU (CUDA only): docker pull jolibrain/deepdetect_gpu:v0.17.0
    • GPU (CUDA and TensorRT): docker pull jolibrain/deepdetect_cpu_tensorrt:v0.17.0
    • GPU with torch backend: docker pull jolibrain/deepdetect_gpu_torch:v0.17.0
    • All images available on Docker Hub.
  • v0.16.0 (Apr 23, 2021)

    Features

    • torch: add confidence threshold for classification (0e75d88)
    • torch: add more backbones to traced detection models (f4d05e1)
    • torch: allow FP16 inference on GPU (705d3d7)
    • torch: madgrad optimizer (0657d82)
    • torch: training of detection models on backend torch (b920999)

    Bug Fixes

    • torch: default gradient clipping to true when using madgrad (5979019)
    • remove dirty git flag on builds (6daa4f5)
    • service names were not always case insensitive (bee3183)
    • chains: cloning of image crops in chains (2e62b7e)
    • ml: refinedet image dimensions configuration via API (20d56e4)
    • TensorRT: fix some memory allocation weirdness in trt backend (4f952c3)
    • timeseries: throw if no data found (a95e7f9)
    • torch: allow partial or mismatching weights loading only if finetuning (23666ea)
    • torch: Fix underflow in CSVTS::serialize_bounds (c8b11b6)
    • torch: fix very long ETA with iter_size != 1 (0c716a6)
    • torch: parameters are added only once to solver during traced model training (86cbcf5)

    Docker images:

    • CPU version: docker pull jolibrain/deepdetect_cpu:v0.16.0
    • GPU (CUDA only): docker pull jolibrain/deepdetect_gpu:v0.16.0
    • GPU (CUDA and TensorRT): docker pull jolibrain/deepdetect_cpu_tensorrt:v0.16.0
    • GPU with torch backend: docker pull jolibrain/deepdetect_gpu_torch:v0.16.0
    • All images available on Docker Hub.
  • v0.15.0 (Mar 26, 2021)

    Features

    • nbeats: default backcast loss coeff to zero, allows very short forecast length to learn smoothly (db17a41)
    • timeseries: add MAE and MSE metrics (847830d)
    • timeseries: do not output per-series metrics by default, add prefix _all for displaying all metrics (5b6bc4e)
    • torch: model publishing with the platform (da14d33)
    • torch: save last model at training service interruption (b346923)
    • torch: SWA for RANGER/torch (74cf54c)
    • torch/csvts: create db incrementally (4336e89)

    Bug Fixes

    • caffe/detection: fix rare spurious detection decoding, see bug 1190 (94935b5)
    • chore: add opencv imgcodecs explicit link (8ff5851)
    • compile flags typo (8f0c947)
    • docker cpu link in readme (1541dcc)
    • tensorrt tests on Jetson nano (25b12f5)
    • nbeats: make seasonality block work (d035c79)
    • torch: display msg if resume fails, also fail if no best_model.txt file (d8c5418)

    Docker images:

    • CPU version: docker pull jolibrain/deepdetect_cpu:v0.15.0
    • GPU (CUDA only): docker pull jolibrain/deepdetect_gpu:v0.15.0
    • GPU (CUDA and TensorRT): docker pull jolibrain/deepdetect_cpu_tensorrt:v0.15.0
    • GPU with torch backend: docker pull jolibrain/deepdetect_gpu_torch:v0.15.0
    • All images available on Docker Hub.
  • v0.14.0 (Mar 5, 2021)

    Features

    • bench: Add parameters for torch image backend (5d24f3d)
    • ml: ViT support for Realformer (5312de7)
    • nbeats: add parameter coefficient to backcast loss (35b3c31)
    • torch: add inference for torch detection models (516eeb6)
    • torch: Sharpness Aware Minimization (2010.01412) (45a8408)
    • torch: support for multiple test sets (c0dcec9)
    • torch: temporal transformers (encoder only) (non autoreg) (3538eb7)
    • CSV parser support for quotes and string labels (efa4c79)
    • new cropping action parameters in chains (6597b53)
    • running custom methods from jit models (73d1eef)
    • torch/txt: display msg if vocab not found (31837ec)
    • SSD MAP-x threshold control (acd252a)
    • use oatpp::DTO to parse img-input-connector APIData (33aee72)

    Bug Fixes

    • build: pytorch with custom spdlog (1fb19a0)
    • caffe/cudnn: force default engine option in case of cudnn not compiled in (b6dec4e)
    • chore: typo when trying to use syslog (374e6c4)
    • client: Change python package name to dd_client (b96b0fa)
    • csvts: read from memory (6d1dba8)
    • csvts: throw proper error when a csv file is passed at training time (90aab20)
    • docker: ensure pip3 is working on all images (a374a58)
    • ncnn: update innerproduct so that it does not pack data (9d88187)
    • torch: add error message when repository contains multiple models (a08285f)
    • -Werror=deprecated-copy gcc 9.3 (0371cfa)
    • action cv macros with opencv >= 3 (37d2926)
    • caffe build spdlog dependency (62e781a)
    • docker /opt/models permissions (82e2695)
    • prevent softmax after layer extraction (cbee659)
    • tag syntax for github releases (4de3807)
    • torch backend CPU build and tests (44343f6)
    • typo in oatpp chain HTTP endpoint (955b178)
    • torch: gather torchscript model parameters correctly (99e4dbe)
    • torch: set seed of torchdataset during training (d02404a)
    • torch/ranger: allow not to use lookahead (d428d08)
    • torch/timeseries: in case of db, correctly finalize db (aabedbd)

    Docker images:

    • CPU version: docker pull jolibrain/deepdetect_cpu:v0.14.0
    • GPU (CUDA only): docker pull jolibrain/deepdetect_gpu:v0.14.0
    • GPU (CUDA and TensorRT): docker pull jolibrain/deepdetect_cpu_tensorrt:v0.14.0
    • GPU with torch backend: docker pull jolibrain/deepdetect_gpu_torch:v0.14.0
    • All images available on Docker Hub.
  • v0.13.0 (Jan 22, 2021)

    Features

    • support for batches for NCNN image models (b85d79e)
    • ml: retain_graph control via API for torch autograd (d109558)
    • ml: torch image basic data augmentation (b9f8525)
    • ncnn: use master from tencent/ncnn (044e181)
    • upgrade oatpp to pre-1.2.5 (596f6f4)

    Bug Fixes

    • torch: csvts forecast mode needs sequence of length backcast during predict (4c89a1c)
    • add missing spdlog patch (4d0a4fa)
    • caffe linkage with our spdlog (967fdef)
    • copy .git in docker image builder (570323d)
    • deactivate the csvts NCNN test when caffe is not built (5a5c8f1)
    • missing support for parent_id in chains with Python client (a5fad50)
    • NCNN chain with images and actions (38b1d07)
    • throw if hard image read error in caffe classification input (f1c0d09)
    • doc: similarity search_nn number of results API (5eaf343)
    • torch: remove potential segfault in csvts connector (ba96b4e)

    Docker images:

    • CPU version: docker pull jolibrain/deepdetect_cpu:v0.13.0
    • GPU (CUDA only): docker pull jolibrain/deepdetect_gpu:v0.13.0
    • GPU (CUDA and TensorRT): docker pull jolibrain/deepdetect_cpu_tensorrt:v0.13.0
    • GPU with torch backend: docker pull jolibrain/deepdetect_gpu_torch:v0.13.0
    • All images available on Docker Hub.
  • v0.12.0 (Jan 8, 2021)


    • Vision Transformer (ViT) image classification models support with libtorch
    • Support for native Torch vision classification models
    • Improved N-BEATS for multivariate time-series
    • OATPP new webserver interface

    Features

    • switch back to cppnet-lib by default (ebe3b15)
    • torch: native models can load weights from any jit file (69af7f4)
    • torch: update libtorch to 1.7.1 (41d5375)
    • add access log for oat++ (4291bf8)
    • add cudnn cmake find package (5983ffd)
    • add some more error messages to log (e4ec772)
    • enable backtrace on segfault for oatpp (96b2184)
    • enhance cppnetlib req timing logs (6fc3e76)
    • also gives per-target error and not only global euclidean error when selecting measure: eucll (dd2fc79)
    • introduce oatpp interface for deepdetect (04b79f4)
    • print stacktrace on segfault (11ab359)
    • provide predict/transform duration in ms (0197991)
    • service stats provide predict and transform duration total (9a24125)
    • track oatpp request timing (68749d3)
    • ml: image regression model training with libtorch (968c551)
    • tools: trace_torchvision can trace models for loading weights with dd native models (c11b551)
    • torch: Add multigpu support for native models training (33cd1df)
    • torch: Add native resnet support (0a01e57)
    • torch: add wide resnet and resnext to the vision models (aba6efb)
    • use jolibrain fork of faiss (8eb6e53)
    • use oatpp by default (c1d6620)
    • vision transformer (ViT) multi-gpu with torch (88b65c2)
    • graph: correct data size computation if different outputs of an op have different sizes (288dd5b)
    • ml: added vision transformer (ViT) as torch native template (72c0269)
    • ml: torch db stores encoded images instead of tensors (e7f3c19)
    • ml: torch regression and classification models training from list of files without db (e049caa)
    • torch: clip gradient options for all optimizers (c2ddee5)
    • torch: implement resume mllib option for torchlib: if true, reuse previous solver state (02e3177)
    • torch/nbeats: allow different sizes for backcast and forecast, also implements a minimal change in csvtstorchinputconn in order to forecast signals instead of predicting labels (d4e27f3)
    • torch/timeseries: add (p)ReLU layers in recurrent template, allowing to compute mlp-like embeddings before LSTM layers (930bee2)
    • torch/timeseries: log much more info on data (17d9a49)
    • allow to disable warning on build (75b2928)
    • one protobuf to rule them all (37a0867)

    Bug Fixes

    • allows mean+std image transform with NCNN inference (c038f47)
    • benchmark tool to pass input size on every predict call (997023a)
    • bounds on two ml tests with non deterministic outputs (eeb783a)
    • broken API doc formatting (8b0ab32)
    • caffe backend internal exception if bbox rescaling fails (47d589f)
    • copy oatpp static files in docker images (d6568d8)
    • copy service name between input and output rapidjson document (46456dd)
    • ddimg logger ptr (4e9e871)
    • do not display all euclidean metrics for autoencoders (8e09e48)
    • ensure redownloaded test archives are extracted (d09eb2a)
    • fix compilation w/o caffe (0e693c5)
    • forward our cuda to xgboost (021b5a8)
    • init XXX_total_duration_ms to 0ms (292d891)
    • missing libboost-stacktrace-dev dep in docs (37e6008)
    • models flops and number of parameters in API (b856534)
    • NCNN backend using the common protobuf (a8dc531)
    • NCNN bbox loop (8d029c9)
    • NCNN best parameter for classification models (a7ac187)
    • ONNX tensorrt engine with correct enqueueV2 (1aede85)
    • pass CUDA_ARCH correctly to caffe (3b9f5a1)
    • raise error if jsonapi is invalid (0e0a892)
    • refinedet vovnet deploy parameter setup (d7ff1e6)
    • reraise same signal on abort (391568d)
    • scale image input with NCNN (49cddfe)
    • setuptools drop support of python27 (71cd789)
    • simsearch build with annoy (39a9cda)
    • some HTTP return codes (703553f)
    • permissions (66b49c4)
    • tensorrt input size for caffe source models (5488c99)
    • tensorrt max workspace size overflow (0358c4a)
    • tentative torch faster tests (cbdefa7)
    • torch image input connector mean and std scaling (13374dd)
    • wrong NCNN bbox output scaling (113c7d2)
    • wrong template_params with torch from DD API (bee39c4)
    • build: allow cmake 3.10 (Ubuntu 18.04) for builds with -DUSE_CPU_ONLY=ON (d6fff8d)
    • build: fix tensorrt-oss include (bdf4ec2)
    • caffe: check model destination permissions (913489e)
    • csv: remove trailing \n from header if there are any, robustify in case of data passed from mem (400b53a)
    • csvts: clarify error msg on seq lengths (ad99049)
    • csvts: fix typo in logged msg (a81b291)
    • nbeats: fix for multi gpu (627c8db)
    • nbeats: fix loss for backcast part (986728c)
    • oatpp: use parent _logger object (88fc246)
    • timeseries: throw exception if recurrent template w/o layers specification (efc49de)
    • timeseries: throw if something is wrong with scaling values (bf316b9)
    • torch: also remove not best models in case of native model (45819df)
    • torch: correctly implements range state comparison (c58a0df)
    • torch: Display correct remaining time and iteration duration (ec7c031)
    • torch: do not add linear head if native model (f81b997)
    • torch: loosen ut test (30659c8)
    • torch: loosen ut test 2 (ad6da05)
    • torch: when resume, override solver parameters after solver reload (d08695d)
    • torch/timeseries: restore previous behavior wrt timesteps API param : timestep not used at predict time (a343d4b)
    • ViT blocks modification for importing pre-trained models (f7f66a4)
    • torch/timeseries: check if number of labels is not larger than number of columns (294a5ed)
    • ViT: typo that changed computations a lot (95acf05)
    • tensorrt refinedet CI and linking to nvcaffeparsers with tensorrt OSS (846d6f5)
    • torch/native: do not fail if trying to load weights before (late) allocation of native model (e0e1d9b)

    Docker images:

    • CPU version: docker pull jolibrain/deepdetect_cpu:v0.12.0
    • GPU (CUDA only): docker pull jolibrain/deepdetect_gpu:v0.12.0
    • GPU (CUDA and TensorRT): docker pull jolibrain/deepdetect_cpu_tensorrt:v0.12.0
    • GPU with torch backend: docker pull jolibrain/deepdetect_gpu_torch:v0.12.0
    • All images available on Docker Hub.
  • v0.11.0 (Nov 10, 2020)

    Features

    • bench: support for regression model benchmarking (a385292)
    • make python client an install package (ec2f5e2)
    • one protobuf to rule them all (77912fe)
    • api: add versions and compile flags to /info (67b1d99), closes #897
    • caffe: add new optimizers flavors to API (d534a16)
    • ml: tensorrt support for regression models (77a016b)
    • tensorrt: Add support for onnx image classification models (a8b81f2)
    • torch: ranger optimizer (i.e. rectified ADAM + lookahead) (a3004f0)

    Bug Fixes

    • torch: best model was never saved on last iteration (6d1aa4d)
    • torch: clip gradient in rectified adam as stated in annex B of original paper (1561269)
    • torch: Raise an exception if gpu is not available (1f0887a)
    • add pytorch fatbin patch (43a698c)
    • add tool to generate debian buster image with the workaround (5570db4)
    • building documentation up to date for 18.04, tensorrt and tests (18ba916)
    • docker adds missing pytorch deps (314160c)
    • docker build readme link from doc (c6682bf)
    • handle int64 in conversion from json to APIData (863e697)
    • ignore JSON conversion throw in partial chains output (742c1c7)
    • missing main in (8b8b196)
    • proper cleanup of tensorrt models and services (d6749d0)
    • put useful information in case of unexpected exception (5ab90c7)
    • readme table of backends, models and data formats (f606aa8)
    • regression benchmark tool parameter (3840218)
    • tensorrt output layer lookup now throws when layer does not exist (ba7c839)
    • csvts/torch: allow to read csv timeserie directly from query (76023db)
    • doc: update to neural network templates and output connector (2916daf)
    • docker: don't share apt cache between arch build (75dc9e9)
    • graph: correctly discard dropout (16409a6)
    • stats: measure of inference count (b517910)
    • timeseries: do not segfault if no valid files in train/test dir (1977bba)
    • torch: add missing header needed in case of building w/o caffe backend (2563b74)
    • torch: load weights only once (0052a03)
    • torch: reload solver params on API device (30fa16f)
    • tensorrt fp16 and int8 selector (36c7488)
    • torch/native: prevent loading weights before instantiating native model (b15d767)
    • torch/timeseries: do not double read query data (d54f60d)

    Docker images:

    • CPU version: docker pull jolibrain/deepdetect_cpu:v0.11.0
    • GPU (CUDA only): docker pull jolibrain/deepdetect_gpu:v0.11.0
    • GPU (CUDA and TensorRT): docker pull jolibrain/deepdetect_cpu_tensorrt:v0.11.0
    • GPU with torch backend: docker pull jolibrain/deepdetect_gpu_torch:v0.11.0
    • All images available on Docker Hub.
  • v0.10.1 (Oct 9, 2020)

    Features

    • timeseries: MAPE, sMAPE, MASE, OWA metrics (c1f4ef9)
    • automatically push image build for master (19e9674)
    • build: add script to create cppnet-lib debian package (28247b4)
    • build: allow to change CUDA_ARCH (67ad43e)
    • dede: Training for image classification with torch (6e81915)
    • docker: publish image as soon as ready (957e07c)
    • docker: publish image as soon as ready (5f7013d)
    • docker: rework Dockerfile (8bc9ddf)
    • docker: use prebuild cppnet-lib (c929773)
    • graph: lstm autoencoder (038a74c)
    • nbeats: expose hidden size param in API (d7e5515)
    • add auto release tools (98b41b0)
    • imginputfile: histogram equalization of input image (2f0061c), closes #778
    • imginputfile: histogram equalization of input image (576f2d8), closes #778
    • stats: added service statistics mechanism (1839e4a)
    • torch: in case of timeseries, warn if files do not contain enough timesteps (1a5f905)
    • torch: nbeats (f288665)
    • torch: upgrade to torch 1.6 (f8f7dbb)
    • torch,native: extract_layer (d37e182)
    • add json output to (874fc01)
    • added bw image input support to dd_bench (6e558d6)
    • trains-status: add tflops to body.measures (af31c8b), closes #785
    • Docker images optimization (fba637a)
    • format the code with clang-format (07d6bdc)
    • LSTM over torch , preliminary internal graph representation (25faa8b)
    • update all docker images to ubuntu 18.04 (eaf0421)

    Bug Fixes

    • fix split_data in csvts connector (8f554b5)
    • build: CUDA_ARCH not escaped correctly (696087f)
    • build: ensure all xgboost submodules are checkouted (12aaa1a)
    • clang-format: signed/unsigned comparison (af8e144)
    • clang-format: signed/unsigned comparison (0ccabb6)
    • clang-format: typo in dataset tarball command (04ddad7)
    • csvts: correctly store and print test file names (12d4639)
    • dede: Remove unnecessary caffe include that prevents build with torch only (a471b82)
    • dede: support all version of spdlog while building with syslog (81f47c9)
    • docker: add missing .so at runtime (4cc24ce)
    • docker: add missing gpu_tensorrt.Dockerfile (97ff2b3)
    • docker: add some missing runtime deps (0883a33)
    • docker: add some missing runtime deps (a91f35f)
    • docker: fixup base runtime image (6238dd4)
    • docker: install rapidjson-dev package (30fb2ca)
    • native: do not raise exception if no template_param is given (d0705ab)
    • nbeats: correctly setup trend and seasonality models (implement paper version and not code version) (75accc6)
    • nbeats: much lower memory use in case of large dim signals (639e222)
    • tests: inc iteration of torchapi.service_train_image test (4c93ace)
    • torch: Fix conditions to add classification head. (f46a710)
    • torch/timeseries: unscale prediction output if needed (aa30e88)
    • /api/ alias when deployed on (4736893)
    • add support and automated processing of categorical variables in timeseries data (1a9af3e)
    • allow serialization/deserialization of Inf/-Inf/NaN (976c892)
    • allows to specify size and color/bw with segmentation models (58ecb4a)
    • build with -DUSE_TENSORRT_OSS=ON (39bd675)
    • convolution layer initialization of SE-ResNeXt network templates (69ff0fb)
    • in tensorrt builds, remove forced cuda version and unused lib output + force-select tensorrt when tensorrt_oss is selected (9430fb4)
    • input image transforms in API doc (f513f17)
    • install cmake version 3.10 (10666b8)
    • missing variant package in docker files (dcf738b)
    • race condition in xgboost|dede build (fd32eae)
    • remove unecessary limit setting call to protobuf codedstream (ae26f59)
    • replace "db":true by "db":false in json files when copying models (06ac6df)
    • set caffe smooth l1 loss threshold to 1 (0e329f0)
    • ssd_300_res_128 deploy file is missing a quote (4e52a0e)
    • svm prediction with all db combinations (d8e2961)
    • svm with db training (6e925f2)
    • tensorrt does not support blank_label (7916500)
    • typo in docker image builder (cb5ae19)
    • unusual builds (i.e. w/o torch, or with tsne only) lead to build errors (241bf6b)
    • update caffe cudnn engine without template (ca58c51)
    • torch: handle case where sequence data is < wanted timestep (b6d394a)
    • TRT: refinedet (b6152b6)

    Docker images:

    • CPU version: docker pull jolibrain/deepdetect_cpu:v0.10.1
    • GPU (CUDA only): docker pull jolibrain/deepdetect_gpu:v0.10.1
    • GPU (CUDA and TensorRT): docker pull jolibrain/deepdetect_cpu_tensorrt:v0.10.1
    • All images available on Docker Hub.
  • v0.9.7 (Apr 23, 2020)

    This release updates to C++ torch 1.4, improves speed and accuracy when training object detection models, and fixes various issues.

    Features & Updates

    • Support for Torch 1.4 with BERT and image classification models update #698
    • New CUDNN convolution backend for Caffe saves a lot of memory for ResNext architectures and grouped convolutions
    • Geometry transforms for object detection training, #702
    • Much faster training of object detectors
    • Added unit tests for TensorRT backend #697
    • Segmentation benchmark script #711
    • added RefineDet VoVNet39 512x512 architecture for SotA object detection #700

    API changes

    • fine-grained CUDNN engine selection #696

    Bug fixes

    • Fixed Dlib logger in chains #699
    • Fixes to torch API unit tests #704
    • Fixed TensorRT classification best API keyword behavior #720
    • Fixed logging from within output connectors #705
    • Fixed rare case in metrics #708
    • Fixed CUDA arch for FAISS builds #710
    • Fix to rotate actions in chains #713
    • Fix of Torch backend build along with Caffe #714
    • Fixed error handling on model parser failure #712
  • v0.9.6 (Oct 5, 2020)

    Features & updates

    • BERT + GPT2 training / finetuning / inference with Torch C++ API #673
    • TensorRT 5, 6 & 7 OSS #683
    • lr dropout #668
    • confidence on OCR/CTC output prediction #694
    • support for logger inside chain actions #693
    • added support for OCR models (CRNN) Squeeze-Excitation ResNet-50 and ResNeXt-50 #685

    API changes

    • mllib.lr_dropout #668
    • image rotation action in chains #677
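    The API changes above can be sketched as a call fragment; this is a minimal illustration only, assuming a service named imgserv, and the 0.5 value for lr_dropout is a placeholder rather than a documented default.

```python
import json

# Hypothetical fragment passing the new mllib.lr_dropout parameter (#668).
# The service name and the 0.5 value are illustrative assumptions.
call = {
    "service": "imgserv",
    "parameters": {
        "mllib": {"lr_dropout": 0.5}
    }
}
print(json.dumps(call, indent=2))
```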

    Bug fixes

    • inverted ymin and ymax in detection model output, #690
    • chain action crop failing when bbox is not within the image #693
    • fixed bug in raw measure data output for model calibration #679
    • fixed error message on wrong TensorRT model filename #681
    • fixed potentially empty chain output #692
  • v0.9.5 (Dec 31, 2019)

    This release brings improvements and fixes across training and inference backends, as summarized below.

    Features & updates:

    • Update to Dlib 19.18, #653
    • Update to Tensorflow 1.13.1, #658
    • Support for Dlib face feature extraction model and landmark shape predictor action via chain, #657
    • chain now avoids image serialization in between multiple services and actions, #660
    • Added support for GPU selection via gpuid with TensorRT backend, #676
    • Added a per-service mutex to TensorRT prediction calls, ref #659

    API changes:

    • scale allows scaling input image values from the API (see #661)
    • std now accepts a vector of floats (see #661)
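    A minimal sketch of a predict payload using these two input parameters; the service name, image path and numeric values are illustrative assumptions, not values from the release notes.

```python
import json

# Hypothetical image predict payload using the #661 input parameters:
# "scale" (a float multiplier) and "std" (now a per-channel vector).
payload = {
    "service": "imgserv",
    "parameters": {
        "input": {
            "scale": 0.0039,           # multiply pixel values, here ~1/255
            "std": [58.4, 57.1, 57.4]  # per-channel standard deviations
        }
    },
    "data": ["face.jpg"]
}
print(json.dumps(payload, indent=2))
```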

    Bug fixes

    • Pytorch inference fixes, #661
    • Fixed similarity search backend selection at build time, #666
    • Fix of raw measures output for object detection, #679
    • Compiler warnings reduction at build time, #664
  • v0.9.4 (Oct 22, 2019)

    This release brings two important features:

    • PyTorch model inference for image classification and BERT text classification #611 #616. A new mllib.pytorch backend is introduced. Training PyTorch models is not yet released, but is available from a PR #637

    • Incremental indexing & similarity search with FAISS #641. This is an alternative to the existing support for annoy. It allows incremental indexing as well as low-level index compression. Early results show slightly better similarity search metrics than annoy, along with more options for improvements. Build with -DUSE_FAISS=ON with cmake. See API additions in #641

    API changes:

    • Input image interpolation method selection via API #640. The interp API parameter takes values from "linear", "nearest", "cubic", "area", "lanczos4". The fastest methods are "nearest" then "linear"; the default remains "cubic".

    • OpenCV GPU support with CUDA for image input resizing #642. The input.cuda boolean resizes input images on GPU; use -DUSE_CUDA_CV=ON passed to cmake. Requires OpenCV >= 3

    • No hardcoded limit on tensorrt compilation maxBatchSize #639

    • Measures update: measures:["raw"] returns full 'raw' measures for classification and detection tasks. This is useful for performing metrics analyses outside of DeepDetect. #648 #652

    • Added input connector timeout control at service creation and/or every predict call level #650

    • Additional low-level controls over SSD object detection models via API #651. Includes ssd_expand_ratio, ssd_mining_type, ...
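    Some of these API additions can be combined in a single predict call; a minimal sketch, assuming a service named detserv, a placeholder image URL, and a timeout expressed in seconds (the unit is an assumption, not stated in the notes):

```python
import json

# Hypothetical /predict payload combining two v0.9.4 API additions:
# the "interp" interpolation choice (#640) and the per-call input
# connector "timeout" (#650). Names and values are illustrative.
payload = {
    "service": "detserv",
    "parameters": {
        "input": {
            "interp": "nearest",  # fastest method; default remains "cubic"
            "timeout": 30         # input connector timeout (assumed seconds)
        }
    },
    "data": ["http://example.com/street.jpg"]
}
print(json.dumps(payload, indent=2))
```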

    Bug fixes:

    • Fixed tensorrt max batch size reading from filename when loading model #631
    • Fixes to /chain error handling #633
    • Fix to caffe2ncnn automatic conversion of models #634
    • Fix to main CMakeLists so that make -j$N is supported for main build #643
    • Fixed ambiguous var with TF input connector #646
    Source code(tar.gz)
    Source code(zip)
  • v0.9.3.1(Sep 2, 2019)

    This is a small release that fixes the decoupled weight decay (W) optimizers and adds support for trees in multi-model inference via /chain (#629).

    API changes:

    • Support for multiple models inference organized as a tree (#629). This basically allows running inference of more than one model on the output of a parent model. E.g. detect vehicles in images, then run license plate detection and car color model on the vehicle crops, then OCR on the license plates only, in a single API call.
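The vehicle example above can be sketched as a /chain call body. The schema below is a hypothetical illustration only: the service names and the exact shape of the calls/action entries are assumptions, not confirmed by the release notes.

```python
import json

# Hypothetical /chain body: detect vehicles, crop each detection, then
# run OCR on the crops, all in a single API call.
chain = {
    "chain": {
        "name": "vehicle_ocr",  # illustrative chain name
        "calls": [
            {"service": "vehicles",   # parent detection model
             "parameters": {"output": {"confidence_threshold": 0.3}}},
            {"action": {"type": "crop"}},  # crop each detected vehicle
            {"service": "ocr",        # child model run on the crops
             "parameters": {"input": {"width": 220, "height": 136}}},
        ],
    }
}
print(json.dumps(chain, indent=2))
```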

    Bug fixes:

    • Decoupled weight decay is now fixed, and implements the default scheduler
    Source code(tar.gz)
    Source code(zip)
  • v0.9.3(Aug 28, 2019)

    This release mainly adds new optimizers to the Caffe backend, along with an important bug fix to the optimizer selection when training of models from scratch (not affecting transfer learning).

    Main changes:

    • Lookahead Optimizer, see #621
    • Rectified Adam, see #628
    • Decoupled Weight Decay Regularization, see #627
    • Training learning rate warmup, see #623

    Other changes:

    • Improved dede server command line model start list behavior, see #620
    • Learning rate value now returned on training status call and plotted on platform, see #624

    Bug fixes:

    • Fixed training optimizer selection when training models from scratch, see #626

    API changes:

    The new optimizers include improvements from papers released in the summer of 2019. The main new training API parameters in the solver object are:

    • parameters.solver.lookahead: true/false, triggers the lookahead optimizer
    • parameters.solver.lookahead_steps: defaults to 6
    • parameters.solver.lookahead_alpha: defaults to 0.5
    • parameters.mllib.solver.warmup_lr: initial learning rate, linearly increased to base_lr over parameters.mllib.solver.warmup_iter steps
    • parameters.mllib.solver.warmup_iter: number of warmup steps
    • ADAMW, SGDW and AMSGRADW optimizers implement decoupled weight decay regularization
    • parameters.mllib.solver.rectified: activates the rectified optimization scheme
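As a sketch, a training call enabling these options might carry a solver object like the following. Placing every key under parameters.mllib.solver is an assumption (the notes list the lookahead keys under parameters.solver), and the solver_type key name and warmup values are illustrative:

```python
import json

# Hypothetical solver section of a training call combining the new
# v0.9.3 options; key placement and names are assumptions where noted.
solver = {
    "solver_type": "ADAMW",  # decoupled weight decay variant (assumed key name)
    "rectified": True,       # rectified (Rectified Adam style) scheme
    "lookahead": True,       # wrap the base optimizer with lookahead
    "lookahead_steps": 6,    # default from the notes
    "lookahead_alpha": 0.5,  # default from the notes
    "warmup_lr": 1e-5,       # illustrative initial warmup learning rate
    "warmup_iter": 500,      # illustrative number of warmup steps
}
train_body = {"parameters": {"mllib": {"solver": solver}}}
print(json.dumps(train_body, indent=2))
```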
    Source code(tar.gz)
    Source code(zip)
  • v0.9.2(Aug 2, 2019)

    This release mainly adds the new /chain API that allows the execution of chains of models and actions. Typical usage is one or more object detectors on images, cropping and OCR.

    The /chain API is inference-only at the moment, with a limited set of actions that is expected to grow. Implemented actions are image crops and class filters, see #605

    Main changes:

    • Multi-model /chain API, see #605
    • Dlib backend update to 19.17, see #601
    • XFS compatibility for all disk operations, see #602 (solves #50)
    • NCNN model generation from Caffe model is now automated, see #612
    • Update of Mapbox's C++ variant to v1.1.3, see #604
    • NCNN input/output blob configuration from API, see #588

    Bug fixes:

    • Fix to bounding boxes with TensorRT, #609
    • Fix crash with NCNN when the model directory is empty, see #613
    • Prevent automatic detection of URLs when URL is within a text sample, see #615
    • Fix builds on Raspberry 3 & 4, see #618
    Source code(tar.gz)
    Source code(zip)
  • v0.9.1(Jun 30, 2019)

    This release mainly adds support for TensorRT, including DLA acceleration on NVidia Xavier boards.

    For a full report on performance for a variety of image models (classification and object detection) on various embedded boards (Jetson Nano, TX2 & Xavier), see Jolibrain's report.

    Main changes:

    • Support for TensorRT 5.x with "mllib":"tensorrt" API option (#594)
      • Automated compilation from Caffe models to TensorRT accelerated models
      • Support for DLA on Xavier board
      • Benchmark tool update for tensorRT (#589)
    • Fix to object detection bounding boxes boundaries (#594)
    • Support for residual networks with LSTMs for time-series (#583)
    • Performance fixes for time-series (#583)
    Source code(tar.gz)
    Source code(zip)
  • v0.9(May 9, 2019)

    While DeepDetect has been developed over the past 3 years, we've kept using the master branch as a single stable and continuous release.

    DeepDetect v0.9 is the first versioned release, and can thus accommodate customers and clients who need longer-term releases as well as release notes to decide when to update/upgrade.

    From this point on, DeepDetect releases will list bugfixes, changes and new features.

    Note that the platform is bound to follow the DeepDetect server (this repository) versioning to ensure compatibility.

    Docker images with tag v0.9 are available from the command line, e.g. docker pull jolibrain/deepdetect_cpu:v0.9

    Source code(tar.gz)
    Source code(zip)