RapidOCR - A cross-platform OCR library based on PaddleOCR & ONNXRuntime

Overview

RapidOCR (捷智OCR)

Simplified Chinese | English

Open in Colab

Table of Contents

Introduction

  • 💖 To our knowledge, the fastest and most broadly supported fully open-source, free, multi-platform, multi-language OCR SDK with offline deployment

  • For Chinese users: you are welcome to join our QQ group (887298230) to download models and test programs

  • Origin: Baidu's PaddlePaddle is not well suited to production deployment, so to make OCR inference easy on all kinds of devices we converted the models to ONNX format and ported them to each platform using Python/C++/Java/Swift/C#.

  • Name: light, fast, economical, and smart. Deep-learning-based OCR that plays to AI's strengths with small models, taking speed as its mission and accuracy as its guide.

  • Based on Baidu's open-source PaddleOCR models and training pipeline. Anyone may use this inference library, or further optimize the models with Baidu's PaddlePaddle framework to fit their own needs.

Recent updates

🎄 2021-12-18 update

2021-11-28 update

  • Updated the ocrweb module
    • Added display of processing time for each stage
    • Updated the documentation
    • Switched the text detection model to ch_PP-OCRv2_det_infer.onnx for faster, more accurate inference

2021-11-13 update

  • Added tunable hyperparameters for text detection and recognition in the Python version, mainly box_thresh | unclip_ratio | text_score; see Parameter tuning for details
  • Exposed the recognition dictionary location as a parameter for flexible configuration; see keys_path for details
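As a rough illustration of what one of these knobs does (the helper name below is illustrative, not this repo's API; the real logic lives in the python/ directory), text_score acts as a confidence floor on recognition results:

```python
# Hypothetical sketch of the text_score hyperparameter: recognition
# results whose confidence falls below the threshold are dropped.
# filter_by_text_score is an illustrative name, not the repo's API.
def filter_by_text_score(rec_results, text_score=0.5):
    """rec_results: list of (text, confidence) pairs from the recognizer."""
    return [(text, score) for text, score in rec_results if score >= text_score]

results = [("RapidOCR", 0.93), ("n0ise", 0.22), ("PaddleOCR", 0.81)]
print(filter_by_text_score(results))  # the low-confidence pair is removed
```

Raising text_score trades recall for precision: noisy crops are suppressed, but faint genuine text may be dropped with them.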

2021-10-27 update

  • Added code for inference with the onnxruntime-gpu package (note: the GPU build of onnxruntime is awkward to use; even when configured per the official guide, it did not appear to actually engage the GPU)
  • For setup steps, see: onnxruntime-gpu inference configuration
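A minimal sketch of the provider selection involved (the provider strings are onnxruntime's real identifiers; the "model.onnx" path is a placeholder):

```python
# Since onnxruntime 1.9 the providers argument is mandatory when creating
# an InferenceSession. Requesting CUDA first with a CPU fallback mirrors
# the GPU configuration described above.
CUDA_PROVIDERS = ["CUDAExecutionProvider", "CPUExecutionProvider"]

# With onnxruntime-gpu installed this would be used as (not executed here):
#   import onnxruntime as ort
#   sess = ort.InferenceSession("model.onnx", providers=CUDA_PROVIDERS)
#   print(sess.get_providers())  # if CUDA is absent here, the GPU was not engaged
print(CUDA_PROVIDERS)
```

Checking sess.get_providers() after session creation is a quick way to confirm whether the GPU was actually picked up, which is the symptom described in this update.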

2021-09-13 update

  • Added a Python whl file for easier installation; see release/python_sdk for details

2021-09-11 update

  • Added ONNX versions of the new PP-OCRv2 models
    • Usage is unchanged: simply replace the corresponding model files; the inference code stays the same.
  • Evaluation on our own test set:
    • The PP-OCRv2 detection model is markedly more accurate, with no change in model size.
    • The PP-OCRv2 recognition model shows no clear improvement, and its size grew by 3.58M.
  • Models uploaded to Baidu netdisk, extraction code: 30jv

Earlier updates

Click to expand

2021-08-07 update

  • In progress

    • PP-Structure table structure and cell coordinate prediction (being organized)
  • Started earlier but unfinished; PRs welcome

    • Building a Docker image
    • Trying onnxruntime-gpu inference
2021-07-17 update
  • Improved the README
  • Added English and number recognition ONNX models; see python/en_number_ppocr_mobile_v2_rec, used the same way as the others
  • Tidied up the model-to-ONNX conversion notes
2021-07-04 update
  • The Python code in this repository now runs successfully on a Raspberry Pi 4B; for details, join the QQ group and ask the group owner
  • Updated the overall architecture diagram to add Raspberry Pi support
2021-06-20 update
  • Improved the display of recognition results in ocrweb and added an animated recognition demo
  • Updated the datasets directory with links to some commonly used datasets
  • Updated the FAQ
2021-06-10 update
2021-06-08 update
  • Tidied the repository and unified the model download paths
  • Improved the related documentation
2021-03-24 update
  • The new models are fully compatible with ONNXRuntime 1.7 and later. Special thanks to @Channingss
  • The new onnxruntime outperforms 1.6.0 by more than 40%.

Overall architecture

FAQ

SDK build status

Since Ubuntu users are generally commercial users with the ability to build from source, no prebuilt package is provided for now; you can build it yourself.

Platform           Build status             Availability
Windows x86/x64    CMake-windows-x86-x64    Download link
Linux x64          CMake-linux              Not provided for now; build it yourself

Online demo

  • Note: the online demo does not store any of the images you upload for testing
  • Model combination used by the demo: ch_PP-OCRv2 det + mobile cls + mobile rec
  • Host configuration: 4-core AMD EPYC 7K62 48-Core Processor
  • Example image:

Project structure

(click to expand)
RapidOCR
├── android             # Android project directory
├── api4cpp             # Cross-platform C API library source; built directly with the root CMakeLists.txt
├── assets              # Images used for demos; not a test set
├── commonlib           # Common library
├── cpp                 # C++ project directory
├── datasets            # Collection of common OCR-related datasets
├── dotnet              # .NET project directory
├── FAQ.md              # Collected questions and answers
├── images              # Test images: two typical cases, one natural scene and one long text
├── include             # Header files for building the C API library
├── ios                 # iOS project directory
├── jvm                 # Java-based project directory
├── lib                 # Library files for building the C API library; binaries are not committed by default
├── ocrweb              # Web demo based on Python and Flask
├── python              # Python inference code
├── release             # Released SDKs
└── tools               # Assorted conversion scripts

Current progress

  • C++ demo (Windows/Linux/macOS): demo
  • JVM demo (Java/Kotlin): demo
  • .NET demo (C#): demo
  • Android demo: demo
  • Python demo: demo
  • iOS demo: waiting for a contributor
  • Rewrote the C++ inference code to follow the Python version, improving inference quality and adding support for gif/tga/webp images

Models

Model name                             Description                               Size     Notes
ch_PP-OCRv2_det_infer.onnx             Lightweight text detection model          2.23M    Large accuracy gain over the v1 lightweight detector
ch_PP-OCRv2_rec_infer.onnx             Lightweight text recognition model        7.79M
ch_ppocr_mobile_v2.0_det_infer.onnx    Lightweight text detection model          2.23M    PP-OCRv1
ch_ppocr_mobile_v2.0_cls_infer.onnx    Lightweight text direction classifier     571KB    PP-OCRv1
ch_ppocr_mobile_v2.0_rec_infer.onnx    Lightweight text recognition model        4.21M    PP-OCRv1
ch_ppocr_server_v2.0_det_infer.onnx    Server-grade text detection model         46.60M   PP-OCRv1
ch_ppocr_server_v2.0_rec_infer.onnx    Server-grade text recognition model       106M     PP-OCRv1
japan_rec_crnn.onnx                    Lightweight Japanese recognition model    3.38M    PP-OCRv1
en_number_mobile_v2.0_rec_infer.onnx   Lightweight English/number recognition    1.79M    PP-OCRv1

Converting models to ONNX

Original initiators and founding authors

Copyright notice

  • If your product uses all or part of the code, text, or materials in this repository,
  • please credit the source and include our GitHub URL: https://github.com/RapidAI/RapidOCR

License

  • The OCR models are copyright Baidu; all other project code is copyright the owners of this repository.
  • This software is licensed under the LGPL. Contributions are welcome: code, issues, and even PRs.

Contact us

  • You can reach us via QQ group: 887298230

  • If the group number cannot be found by search, click this link directly to find us

  • Or scan the QR code below with QQ:

Example images

C++/JVM example images

.NET example images

Multi-language example images

Issues
  • Error when converting ONNX to OpenVINO

    The ONNX model used is ch_ppocr_mobile_v2.0_rec_infer.onnx, downloaded from the netdisk you provide. The conversion command was:

        python "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\model_optimizer\mo.py" --input_model=ch_ppocr_mobile_v2.0_rec_infer.onnx --output_dir=. --model_name=model_rec --data_type=FP32

    which produced the error shown in the attached image. What does this error mean, and how can it be resolved?

    opened by Dandelion111 13
  • Trouble with installation

        pip install https://github.com/RapidAI/RapidOCR/raw/main/release/python_sdk/sdk_rapidocr_v1.0.0/rapidocr-1.0.0-py3-none-any.whl -i https://pypi.douban.com/simple/
        Looking in indexes: https://pypi.douban.com/simple/
        Collecting rapidocr==1.0.0
          Using cached https://github.com/RapidAI/RapidOCR/raw/main/release/python_sdk/sdk_rapidocr_v1.0.0/rapidocr-1.0.0-py3-none-any.whl (18 kB)
        Collecting six>=1.15.0
          Downloading https://pypi.doubanio.com/packages/d9/5a/e7c31adbe875f2abbb91bd84cf2dc52d792b5a01506781dbcf25c91daf11/six-1.16.0-py2.py3-none-any.whl (11 kB)
        Requirement already satisfied: numpy>=1.19.3 in /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages (from rapidocr==1.0.0) (1.21.4)
        Collecting pyclipper>=1.2.1
          Downloading https://pypi.doubanio.com/packages/24/6e/b7b4d05383cb654560d63247ddeaf8b4847b69b68d8bc6c832cd7678dab1/pyclipper-1.3.0.zip (142 kB)
             |████████████████████████████████| 142 kB 2.7 MB/s
          Installing build dependencies ... done
          Getting requirements to build wheel ... done
          Preparing metadata (pyproject.toml) ... done
        Requirement already satisfied: Shapely>=1.7.1 in /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages (from rapidocr==1.0.0) (1.8.0)
        ERROR: Could not find a version that satisfies the requirement onnxruntime>=1.7.0 (from rapidocr) (from versions: none)
        ERROR: No matching distribution found for onnxruntime>=1.7.0

    But I do have onnxruntime on my Mac.

    🍺 /opt/homebrew/Cellar/onnxruntime/1.9.1: 77 files, 11.9MB

    opened by sxflynn 9
  • E:onnxruntime:, sequential_executor.cc:339 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running ScatterND node. Name:'ScatterND@1'

    Environment: Windows, Anaconda3-2020.11-Windows-x86_64. In Anaconda: conda create -n base37 python=3.7, then installed requirements.txt inside base37, then ran rapidOCR.py on Windows with that environment.

    Error:

        C:\ProgramData\Anaconda3\python.exe E:/comm_Item/Item_doing/ocr_recog_py/RapidOCR/python/rapidOCR.py
        dt_boxes num : 17, elapse : 0.11702466011047363
        cls num : 17, elapse : 0.016003131866455078
        2021-06-06 17:06:33.2157753 [E:onnxruntime:, sequential_executor.cc:339 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running ScatterND node. Name:'ScatterND@1' Status Message: updates tensor should have shape equal to indices.shape[:-1] + data.shape[indices.shape[-1]:]. updates shape: {1}, indices shape: {1}, data shape: {3}
        Traceback (most recent call last):
          File "E:/comm_Item/Item_doing/ocr_recog_py/RapidOCR/python/rapidOCR.py", line 271, in <module>
            dt_boxes, rec_res = text_sys(args.image_path)
          File "E:/comm_Item/Item_doing/ocr_recog_py/RapidOCR/python/rapidOCR.py", line 195, in __call__
            rec_res, elapse = self.text_recognizer(img_crop_list)
          File "E:\comm_Item\Item_doing\ocr_recog_py\RapidOCR\python\ch_ppocr_mobile_v2_rec\text_recognize.py", line 115, in __call__
            preds = self.session.run(None, onnx_inputs)[0]
          File "C:\ProgramData\Anaconda3\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 188, in run
            return self._sess.run(output_names, input_feed, run_options)
        onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running ScatterND node. Name:'ScatterND@1' Status Message: updates tensor should have shape equal to indices.shape[:-1] + data.shape[indices.shape[-1]:]. updates shape: {1}, indices shape: {1}, data shape: {3}

    Process finished with exit code 1

    opened by xinsuinizhuan 8
  • Question about python+onnx+onnxruntime inference time

    Hello, while testing I found that python+onnx+onnxruntime inference is slower than python+paddle+mkl. Is there some setting I have not enabled? I unified the preprocessing parameters of the two code paths. My CPU is an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00GHz 1.19 GHz.

    opened by Gmgge 4
  • dotnet test: InitModel fails immediately

        public void InitModel(string path, int numThread)
        {
            try
            {
                SessionOptions op = new SessionOptions();  // this line
                op.GraphOptimizationLevel = GraphOptimizationLevel.ORT_ENABLE_EXTENDED;
                op.InterOpNumThreads = numThread;
                op.IntraOpNumThreads = numThread;
                dbNet = new InferenceSession(path, op);
                inputNames = dbNet.InputMetadata.Keys.ToList();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message + ex.StackTrace);
                throw ex;
            }
        }

    Execution reaches:

        public SessionOptions() : base(IntPtr.Zero, true)
        {
            NativeApiStatus.VerifySuccess(NativeMethods.OrtCreateSessionOptions(out handle));
        }

    and fails with:

        at OcrLiteLib.DbNet.InitModel(String path, Int32 numThread) in F:\RapidOCR\dotnet\BaiPiaoOcrOnnxCs\OcrLib\DbNet.cs: line 45
        at OcrLiteLib.OcrLite.InitModels(String detPath, String clsPath, String recPath, String keysPath, Int32 numThread) in F:\RapidOCR\dotnet\BaiPiaoOcrOnnxCs\OcrLib\OcrLite.cs: line 29

    On Windows 7 the Python version tests fine, and detection speed is good too. Could someone take a look? The referenced packages have not been upgraded; I notice the dotnet code has not been updated for half a year.

    opened by zhuewizz 2
  • Python recognition inference result is weird

    When performing inference on an English-language screenshot from Wikipedia, the OCR results are... weird. Some lines are perfectly recognised while others are completely wrong. Is there a parameter I need to change?

    Original image: wiki

    Inference result: infer_wiki

    opened by samayala22 2
  • onnxruntime error on arm64

    1. Environment: hardware RK3399; onnxruntime built from the latest GitHub source (verified working with chineseocr_lite).

    2. Execution (the same logic runs fine on a PC):

        [email protected]:~/RapidOCR/python$ sh rapidOCR.sh
        dt_boxes num : 17, elapse : 1.2671310901641846
        cls num : 17, elapse : 0.2634892463684082
        2021-02-25 03:23:09.794914083 [E:onnxruntime:, sequential_executor.cc:339 Execute] Non-zero status code returned while running ScatterND node. Name:'ScatterND@1' Status Message: updates tensor should have shape equal to indices.shape[:-1] + data.shape[indices.shape[-1]:]. updates shape: {1}, indices shape: {1}, data shape: {3}
        Traceback (most recent call last):
          File "RapidOCR.py", line 272, in <module>
            dt_boxes, rec_res = text_sys(args.image_path)
          File "RapidOCR.py", line 196, in __call__
            rec_res, elapse = self.text_recognizer(img_crop_list)
          File "/home/linaro/RapidOCR/python/ch_ppocr_mobile_v2_rec/text_recognize.py", line 119, in __call__
            preds = self.session.run(None, onnx_inputs)[0]
          File "/usr/local/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 188, in run
            return self._sess.run(output_names, input_feed, run_options)
        onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running ScatterND node. Name:'ScatterND@1' Status Message: updates tensor should have shape equal to indices.shape[:-1] + data.shape[indices.shape[-1]:]. updates shape: {1}, indices shape: {1}, data shape: {3}

    opened by zsun14 2
  • ImportError: cannot import name 'escape' from 'jinja2'

    • working with RapidOCR/ocrweb
    • https://github.com/RapidAI/RapidOCR/tree/main/ocrweb
      • requirements.txt
        • Flask==1.1.2
          • Jinja2==3.1.2

    $ python main.py
    Traceback (most recent call last):
      File "main.py", line 9, in <module>
        from flask import Flask, render_template, request
      File "/Users/XXXXXX/anaconda3/envs/rapid/lib/python3.7/site-packages/flask/__init__.py", line 14, in <module>
        from jinja2 import escape
    ImportError: cannot import name 'escape' from 'jinja2' (/Users/XXXXXX/anaconda3/envs/rapid/lib/python3.7/site-packages/jinja2/__init__.py)
    

    • Fixed by upgrading flask to Version: 2.1.2
    • ref: https://stackoverflow.com/questions/71718167/importerror-cannot-import-name-escape-from-jinja2
    opened by lukstc 1
  • Problem using CUDA with the Python version

    Environment: onnxruntime-gpu==1.10, cuda==11.40, cudnn==8.2.4, ubuntu 1804. In the detection model-loading stage, CUDA is requested as follows:

        .....
        self.preprocess_op = create_operators(pre_process_list)
        self.postprocess_op = DBPostProcess(thresh=0.3, box_thresh=0.5, max_candidates=1000,
                                            unclip_ratio=1.6, use_dilation=True)
        providers = [
            ('CUDAExecutionProvider', {
                'device_id': 0,
                'arena_extend_strategy': 'kNextPowerOfTwo',
                'gpu_mem_limit': 2 * 1024 * 1024 * 1024,
                'cudnn_conv_algo_search': 'EXHAUSTIVE',
                'do_copy_in_default_stream': True,
            }),
            'CPUExecutionProvider',
        ]
        self.session = onnxruntime.InferenceSession(det_model_path, providers=providers)
        ......

    The error is:

        ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

    How can this be resolved? Thanks.

    opened by Gitlixiangdong 1
  • Suggestion: in the Python version, automatically decide between the CPU and GPU onnxruntime code paths

    The README shows the code for switching to GPU as in the image below: image

    Similar code that needs modifying is spread across several files; asking users to edit every location is error-prone and hard to get right. Please handle this in the code.

    Since only one of the CPU and GPU builds of onnxruntime can be installed in a given environment (never both), letting the user choose GPU or CPU at runtime is pointless: the choice is already made when the runtime library is installed. onnxruntime can report at runtime which build is in use, as shown below: image

    So, by branching on the result of ort.get_device(), the code could handle this itself and users would no longer need to edit source files per the README. One remaining wrinkle: on a system that has a GPU, a user might deliberately want to run on CPU, which would probably require an extra parameter. Alternatively, since most parameters already live in config.yaml, a CPU/GPU selection parameter could be added there too, with the relevant code keyed off it; users would then only need to change one place in config.yaml. This would also spare users from modifying code and make future RapidOCR upgrades easier.

    opened by zhsunlight 1
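    The auto-selection proposed in this issue could be sketched roughly as follows (use_cuda is the hypothetical config.yaml flag suggested above; onnxruntime.get_device() is the runtime's actual API):

```python
# Resolve execution providers from one config flag plus the installed
# onnxruntime build, so users never edit inference code by hand.
def resolve_providers(use_cuda: bool, device: str):
    """use_cuda: hypothetical config.yaml switch; device: onnxruntime.get_device()."""
    if use_cuda and device == "GPU":
        return ["CUDAExecutionProvider", "CPUExecutionProvider"]
    # Covers CPU-only builds, and GPU builds where the user forces CPU.
    return ["CPUExecutionProvider"]

print(resolve_providers(use_cuda=True, device="GPU"))
print(resolve_providers(use_cuda=True, device="CPU"))
```

    Keeping the decision in one function (and one config key) addresses both concerns in the issue: the installed build is detected automatically, and a GPU machine can still be forced onto the CPU path.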
  • No results from inference when using onnxruntime with TensorRT

    I built onnxruntime with TensorRT to see if there could be any performance improvements with RapidOCR but unfortunately, the inference returned an empty array. Here's the log:

    C:\Users\samay\Documents\RapidOCR\python>python rapidOCR.py
    2021-07-08 00:09:15.7074765 [E:onnxruntime:Default, tensorrt_execution_provider.h:51 onnxruntime::TensorrtLogger::log] [2021-07-08 05:09:15   ERROR] Parameter check failed at: engine.cpp::nvinfer1::rt::ExecutionContext::setBindingDimensions::1136, condition: profileMaxDims.d[i] >= dimensions.d[i]
    Traceback (most recent call last):
      File "C:\Users\samay\Documents\RapidOCR\python\rapidOCR.py", line 257, in <module>
        dt_boxes, rec_res = text_sys(args.image_path)
      File "C:\Users\samay\Documents\RapidOCR\python\rapidOCR.py", line 177, in __call__
        dt_boxes, elapse = self.text_detector(img)
      File "C:\Users\samay\Documents\RapidOCR\python\ch_ppocr_mobile_v2_det\text_detect.py", line 136, in __call__
        dt_boxes = post_result[0]['points']
    IndexError: list index out of range
    

    I'm by no means an expert in model conversion, so I'm guessing TensorRT simply doesn't support the converted ONNX model? Is there a way to make it work?

    opened by samayala22 6
Releases: V_20210608_1623115119