Caffe2 is a lightweight, modular, and scalable deep learning framework.

Overview
Issues
  • Builder scripts for Docker containers

    This includes a build script for Docker containers to run builds and tests in, as well as a build-and-test script that is run to build and test Caffe2 itself. These scripts are used directly by Jenkins.

    CLA Signed 
    opened by pietern 72
  • cmake: python packages now install to the canonical directory

    Addresses issue #1676

    Now when make install is run, the caffe2 (and caffe) python modules will be installed into the correct site-packages directory (relative to the prefix) instead of directly in the prefix.
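
    For reference, the canonical per-prefix site-packages path can be queried from Python itself; a small sketch (the prefix below is illustrative):

    ```shell
    # Print the site-packages directory that corresponds to a given install
    # prefix; this is where the caffe2 python module should end up.
    PREFIX=/usr/local   # illustrative prefix
    python -c "import sysconfig; print(sysconfig.get_path('purelib', vars={'base': '$PREFIX'}))"
    ```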

    CLA Signed 
    opened by Erotemic 51
  • ../lib/libcaffe2.so: undefined reference to `google::protobuf::internal::AssignDescriptors(std::__cxx11::basic_string

    Building caffe2 failed following "Custom Anaconda Install".

    1. conda create -n caffe2 && source activate caffe2
    2. conda install -y protobuf (3.4) or conda install -y -c conda-forge protobuf (3.5.1)
    3. git clone --recursive ...
    4. mkdir build && cd build
    5. cmake -DUSE_CUDA=ON -DUSE_LEVELDB=ON -DCMAKE_PREFIX_PATH=~/Prog/anaconda2/envs/caffe2 -DCMAKE_INSTALL_PREFIX=~/Prog/anaconda2/envs/caffe2 ..
    6. make install
    7. compile fails at [ 75%] with: undefined reference to ‘google::protobuf::internal::fixed_address_empty_string[abi:cxx11]’

    System information

    • Operating system: Ubuntu 16.04
    • Compiler version: gcc 5.4.0
    • CMake version: cmake 3.5.1
    • CMake arguments: cmake -DUSE_CUDA=ON -DUSE_LEVELDB=ON -DCMAKE_PREFIX_PATH=~/Prog/anaconda2/envs/caffe2 -DCMAKE_INSTALL_PREFIX=~/Prog/anaconda2/envs/caffe2 ..
    • Relevant libraries/versions (e.g. CUDA): cuda 8.0

    CMake summary output

    ******** Summary ********
    <please paste summary output here>
    

    [ 75%] Building CXX object caffe2/CMakeFiles/reshape_op_gpu_test.dir/operators/reshape_op_gpu_test.cc.o
    [ 75%] Linking CXX executable ../bin/reshape_op_gpu_test
    CMakeFiles/reshape_op_gpu_test.dir/operators/reshape_op_gpu_test.cc.o: in function ‘caffe2::ReshapeOpGPUTest_testReshapeWithScalar_Test::TestBody()’:
    reshape_op_gpu_test.cc:(.text+0x1725): undefined reference to ‘google::protobuf::internal::fixed_address_empty_string[abi:cxx11]’
    ../lib/libcaffe2.so: undefined reference to ‘google::protobuf::internal::WireFormatLite::WriteBytes(int, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, google::protobuf::io::CodedOutputStream*)’
    ../lib/libcaffe2_gpu.so: undefined reference to ‘google::protobuf::MessageLite::SerializeAsString[abi:cxx11]() const’
    ../lib/libcaffe2.so: undefined reference to ‘google::SetUsageMessage(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&)’
    ../lib/libcaffe2.so: undefined reference to ‘google::protobuf::Message::DebugString[abi:cxx11]() const’
    ../lib/libcaffe2.so: undefined reference to ‘google::protobuf::MessageFactory::InternalRegisterGeneratedFile(char const*, void (*)(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&))’
    ../lib/libcaffe2_gpu.so: undefined reference to ‘google::protobuf::Message::ShortDebugString[abi:cxx11]() const’
    ../lib/libcaffe2.so: undefined reference to ‘google::protobuf::internal::WireFormatLite::WriteStringMaybeAliased(int, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, google::protobuf::io::CodedOutputStream*)’
    ../lib/libcaffe2_gpu.so: undefined reference to ‘google::protobuf::internal::ParseNamedEnum(google::protobuf::EnumDescriptor const*, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, int*)’
    ../lib/libcaffe2.so: undefined reference to ‘google::protobuf::internal::WireFormatLite::ReadBytes(google::protobuf::io::CodedInputStream*, std::__cxx11::basic_string<char, std::char_traits, std::allocator >*)’
    ../lib/libcaffe2_gpu.so: undefined reference to ‘google::protobuf::MessageLite::ParseFromString(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&)’
    ../lib/libcaffe2.so: undefined reference to ‘google::protobuf::Message::GetTypeName[abi:cxx11]() const’
    ../lib/libcaffe2.so: undefined reference to ‘google::protobuf::internal::OnShutdownDestroyString(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const*)’
    ../lib/libcaffe2.so: undefined reference to ‘google::protobuf::io::CodedOutputStream::WriteStringWithSizeToArray(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, unsigned char*)’
    ../lib/libcaffe2_gpu.so: undefined reference to ‘google::FlagRegisterer::FlagRegisterer<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >(char const*, char const*, char const*, std::__cxx11::basic_string<char, std::char_traits, std::allocator >*, std::__cxx11::basic_string<char, std::char_traits, std::allocator >*)’
    ../lib/libcaffe2.so: undefined reference to ‘google::protobuf::Message::InitializationErrorString[abi:cxx11]() const’
    ../lib/libcaffe2_gpu.so: undefined reference to ‘google::base::CheckOpMessageBuilder::NewString[abi:cxx11]()’
    ../lib/libcaffe2.so: undefined reference to ‘google::protobuf::internal::WireFormatLite::WriteBytesMaybeAliased(int, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, google::protobuf::io::CodedOutputStream*)’
    ../lib/libcaffe2.so: undefined reference to ‘google::protobuf::internal::AssignDescriptors(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, google::protobuf::internal::MigrationSchema const*, google::protobuf::Message const* const*, unsigned int const*, google::protobuf::MessageFactory*, google::protobuf::Metadata*, google::protobuf::EnumDescriptor const**, google::protobuf::ServiceDescriptor const**)’
    ../lib/libcaffe2.so: undefined reference to ‘google::protobuf::internal::WireFormatLite::WriteString(int, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, google::protobuf::io::CodedOutputStream*)’
    ../lib/libcaffe2.so: undefined reference to ‘google::protobuf::TextFormat::ParseFromString(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, google::protobuf::Message*)’
    ../lib/libcaffe2.so: undefined reference to ‘google::protobuf::MessageLite::SerializeToString(std::__cxx11::basic_string<char, std::char_traits, std::allocator >*) const’
    collect2: error: ld returned 1 exit status
    caffe2/CMakeFiles/reshape_op_gpu_test.dir/build.make:126: recipe for target 'bin/reshape_op_gpu_test' failed
    make[2]: *** [bin/reshape_op_gpu_test] Error 1
    CMakeFiles/Makefile2:1341: recipe for target 'caffe2/CMakeFiles/reshape_op_gpu_test.dir/all' failed
    make[1]: *** [caffe2/CMakeFiles/reshape_op_gpu_test.dir/all] Error 2
    Makefile:138: recipe for target 'all' failed
    make: *** [all] Error 2
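
    The std::__cxx11 symbols in these errors usually mean the protobuf library being linked was built against a different C++ ABI (or version) than Caffe2 itself, which easily happens with a mix of conda and system protobuf. A diagnostic sketch (the paths and the nm/grep pattern are illustrative):

    ```shell
    # Which protoc does the build see, and which protobuf does conda provide?
    which protoc && protoc --version
    conda list protobuf 2>/dev/null
    # Demangled dynamic symbols mentioning __cxx11 indicate the library was
    # built with the GCC 5 C++11 ABI; the protobuf you link must match.
    nm -D -C ../lib/libcaffe2.so 2>/dev/null | grep -m1 '__cxx11' || true
    ```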

    build Protobuf 
    opened by 7oud 37
  • TravisCI Overhaul

    Uncached build: https://travis-ci.org/lukeyeager/caffe2/builds/239677224
    Cached build: https://travis-ci.org/lukeyeager/caffe2/builds/239686725

    Improvements:

    • Parallel builds everywhere
    • All builds use CCache for quick build times (help from https://github.com/pytorch/pytorch/pull/614, https://github.com/ccache/ccache/pull/145)
    • Run ctests when available (continuation of https://github.com/caffe2/caffe2/pull/550)
    • Upgraded from cuDNN v5 to v6
    • Fixed MKL build (by updating pkg version)
    • Fixed android builds (https://github.com/caffe2/caffe2/commit/b6f905a67b8cdc301203c08d5a598bb1ed6d1873#commitcomment-22404119)
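
    The CCache setup can be sketched roughly like this (assuming ccache is installed; the shim directory shown is the usual Ubuntu location):

    ```shell
    # Put ccache's compiler shims ahead of the real compilers on PATH so that
    # cmake/make pick them up transparently.
    export PATH="/usr/lib/ccache:$PATH"
    ccache -M 5G   # cap the cache size
    ccache -z      # zero the statistics before a build
    # ... run the build ...
    ccache -s      # show hit/miss statistics afterwards
    ```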

    Things that are broken:

    • ~~Building NNPACK fails with no discernible error message (currently disabled entirely)~~
    • ~~Android builds continue to fail with existing error:~~
    • ~~OSX builds time-out:~~

    Summary

    | Before | After | Changes |
    | --- | --- | --- |
    | COMPILER=g++ | linux | without CUDA |
    | COMPILER=g++-5 | linux-gcc5 | without CUDA |
    | COMPILER=g++ | linux-cuda | updated to cuDNN v6 |
    | BLAS=MKL | linux-mkl | updated pkg version |
    | BUILD_TARGET=android | linux-android | |
    | COMPILER=clang++ | osx | |
    | BUILD_TARGET=ios | osx-ios | |
    | BUILD_TARGET=android | osx-android | |
    | QUICKTEST | GONE | |
    | COMPILER=g++-4.8 | GONE | |
    | COMPILER=g++-4.9 | GONE | |

    CLA Signed 
    opened by lukeyeager 30
  • make_mnist_db doesn't generate db files

    When running the tutorial MNIST.ipynb, the function GenerateDB() runs fine and no error is reported, but it does not generate the files mnist-train-nchw-leveldb or mnist-test-nchw-leveldb.

    While calling make_mnist_db directly from command line, it reported an error:

    • Caffe2 flag error: Cannot convert argument to bool: --db
      Note that if you are passing in a bool flag, you need to explicitly specify it, like --arg=True or --arg True. Otherwise, the next argument may be inadvertently used as the argument, causing the above error.
      Caffe2 flag: illegal argument: --channel_first

    Any thoughts how to fix it?

    I am using Caffe2 on Windows Server 2016 with Python 2.7. Running the tutorial Basics.ipynb works.
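
    As the error message itself suggests, one workaround sketch is to pass every flag in --flag=value form so that a bool flag cannot swallow the following argument (flag names other than --db and --channel_first below are assumptions about make_mnist_db's interface):

    ```shell
    # Pass bool flags explicitly as --flag=value so the parser cannot mistake
    # the next argument for the flag's value.
    make_mnist_db --image_file=train-images-idx3-ubyte \
                  --label_file=train-labels-idx1-ubyte \
                  --output_file=mnist-train-nchw-leveldb \
                  --db=leveldb \
                  --channel_first=True
    ```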

    build 
    opened by yiwsun 29
  • onnx_onnx_c2.proto:383:5: Expected "required", "optional", or "repeated".

    If this is a build issue, please fill out the template below.

    System information

    • Operating system: Ubuntu 14.04
    • Compiler version: GCC 4.8.4
    • CMake version: 3.11.2
    • CMake arguments: No args
    • Relevant libraries/versions : CUDA 8.0 CuDNN v6.0.21

    CMake summary output

    ******** Summary ********
    -- Does not need to define long separately.
    -- std::exception_ptr is supported.
    -- NUMA is not available
    -- Turning off deprecation warning due to glog.
    -- Current compiler supports avx2 extention. Will build perfkernels.
    -- Caffe2: Found protobuf with new-style protobuf targets.
    -- Caffe2 protobuf include directory: /usr/include
    -- The BLAS backend of choice: Eigen
    -- Could NOT find NNPACK (missing: NNPACK_INCLUDE_DIR NNPACK_LIBRARY PTHREADPOOL_LIBRARY CPUINFO_LIBRARY) 
    -- Brace yourself, we are building NNPACK
    -- Found PythonInterp: /usr/bin/python (found version "2.7.6") 
    -- Caffe2: Cannot find gflags automatically. Using legacy find.
    -- Caffe2: Found gflags  (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libgflags.so)
    -- Caffe2: Cannot find glog automatically. Using legacy find.
    -- Caffe2: Found glog (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libglog.so)
    -- git Version: v0.0.0
    -- Version: 0.0.0
    -- Performing Test HAVE_STD_REGEX
    -- Performing Test HAVE_STD_REGEX -- compiled but failed to run
    -- Performing Test HAVE_GNU_POSIX_REGEX
    -- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
    -- Performing Test HAVE_POSIX_REGEX
    -- Performing Test HAVE_POSIX_REGEX -- success
    -- Performing Test HAVE_STEADY_CLOCK
    -- Performing Test HAVE_STEADY_CLOCK -- success
    -- Found lmdb    (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/liblmdb.so)
    -- Found LevelDB (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libleveldb.so)
    -- Found Snappy  (include: /usr/include, library: /usr/lib/libsnappy.so)
    -- Could NOT find Numa (missing: Numa_INCLUDE_DIR Numa_LIBRARIES) 
    CMake Warning at cmake/Dependencies.cmake:205 (message):
      Not compiling with NUMA.  Suppress this warning with -DUSE_NUMA=OFF
    Call Stack (most recent call first):
      CMakeLists.txt:101 (include)
    
    
    -- Found CUDA: /usr/local/cuda-8.0 (found suitable exact version "8.0") 
    -- OpenCV found (/usr/local/share/OpenCV)
    CMake Warning at cmake/Dependencies.cmake:270 (find_package):
      By not providing "FindEigen3.cmake" in CMAKE_MODULE_PATH this project has
      asked CMake to find a package configuration file provided by "Eigen3", but
      CMake did not find one.
    
      Could not find a package configuration file provided by "Eigen3" with any
      of the following names:
    
        Eigen3Config.cmake
        eigen3-config.cmake
    
      Add the installation prefix of "Eigen3" to CMAKE_PREFIX_PATH or set
      "Eigen3_DIR" to a directory containing one of the above files.  If "Eigen3"
      provides a separate development package or SDK, be sure it has been
      installed.
    Call Stack (most recent call first):
      CMakeLists.txt:101 (include)
    
    
    -- Did not find system Eigen. Using third party subdirectory.
    -- Found PythonInterp: /usr/bin/python (found suitable version "2.7.6", minimum required is "2.7") 
    -- NumPy ver. 1.14.0 found (include: /usr/local/lib/python2.7/dist-packages/numpy/core/include)
    -- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR) 
    -- MPI support found
    -- MPI compile flags: -pthread
    -- MPI include path: /usr/lib/openmpi/include/openmpi;/usr/lib/openmpi/include
    -- MPI LINK flags path: -L/usr/lib/openmpi/lib -pthread
    -- MPI libraries: /usr/lib/libmpi_cxx.so;/usr/lib/libmpi.so;/usr/lib/x86_64-linux-gnu/libdl.so;/usr/lib/x86_64-linux-gnu/libhwloc.so
    CMake Warning at cmake/Dependencies.cmake:324 (message):
      OpenMPI found, but it is not built with CUDA support.
    Call Stack (most recent call first):
      CMakeLists.txt:101 (include)
    
    
    -- Found CUDA: /usr/local/cuda-8.0 (found suitable version "8.0", minimum required is "7.0") 
    -- Caffe2: CUDA detected: 8.0
    -- Found cuDNN: v6.0.21  (include: /usr/local/cuda-8.0/include, library: /usr/local/cuda-8.0/lib64/libcudnn.so)
    -- Automatic GPU detection returned 6.1.
    -- Added CUDA NVCC flags for: sm_61
    -- Could NOT find NCCL (missing: NCCL_INCLUDE_DIRS NCCL_LIBRARIES) 
    -- Could NOT find CUB (missing: CUB_INCLUDE_DIR) 
    -- Could NOT find Gloo (missing: Gloo_INCLUDE_DIR Gloo_LIBRARY) 
    -- MPI include path: /usr/lib/openmpi/include/openmpi;/usr/lib/openmpi/include
    -- MPI libraries: /usr/lib/libmpi_cxx.so;/usr/lib/libmpi.so;/usr/lib/x86_64-linux-gnu/libdl.so;/usr/lib/x86_64-linux-gnu/libhwloc.so
    -- CUDA detected: 8.0
    -- Found libcuda: /usr/lib/x86_64-linux-gnu/libcuda.so
    -- Found libnvrtc: /usr/local/cuda-8.0/lib64/libnvrtc.so
    CMake Warning at cmake/Dependencies.cmake:457 (message):
      mobile opengl is only used in android or ios builds.
    Call Stack (most recent call first):
      CMakeLists.txt:101 (include)
    
    
    CMake Warning at cmake/Dependencies.cmake:533 (message):
      Metal is only used in ios builds.
    Call Stack (most recent call first):
      CMakeLists.txt:101 (include)
    
    
    -- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR) 
    -- GCC 4.8.4: Adding gcc and gcc_s libs to link line
    -- Include NCCL operators
    -- Including image processing operators
    -- Excluding video processing operators due to no opencv
    -- Excluding mkl operators as we are not using mkl
    -- Include Observer library
    -- Using lib/python2.7/dist-packages as python relative installation path
    -- Automatically generating missing __init__.py files.
    -- 
    -- ******** Summary ********
    -- General:
    --   CMake version         : 3.11.20180308
    --   CMake command         : /usr/local/bin/cmake
    --   Git version           : v0.8.1-1319-g2900223
    --   System                : Linux
    --   C++ compiler          : /usr/bin/c++
    --   C++ compiler version  : 4.8.4
    --   Protobuf compiler     : /usr/bin/protoc
    --   Protobuf include path : /usr/include
    --   Protobuf libraries    : /usr/lib/x86_64-linux-gnu/libprotobuf.so;-pthread
    --   BLAS                  : Eigen
    --   CXX flags             :  -Wno-deprecated -DONNX_NAMESPACE=onnx_c2 -O2 -fPIC -Wno-narrowing -Wno-invalid-partial-specialization
    --   Build type            : Release
    --   Compile definitions   : 
    -- 
    --   BUILD_BINARY          : ON
    --   BUILD_DOCS            : OFF
    --   BUILD_PYTHON          : ON
    --     Python version      : 2.7.6
    --     Python includes     : /usr/include/python2.7
    --   BUILD_SHARED_LIBS     : ON
    --   BUILD_TEST            : ON
    --   USE_ATEN              : OFF
    --   USE_ASAN              : OFF
    --   USE_CUDA              : ON
    --     CUDA version        : 8.0
    --     CuDNN version       : 6.0.21
    --     CUDA root directory : /usr/local/cuda-8.0
    --     CUDA library        : /usr/lib/x86_64-linux-gnu/libcuda.so
    --     CUDA NVRTC library  : /usr/local/cuda-8.0/lib64/libnvrtc.so
    --     CUDA runtime library: /usr/local/cuda-8.0/lib64/libcudart.so
    --     CUDA include path   : /usr/local/cuda-8.0/include
    --     NVCC executable     : /usr/local/cuda-8.0/bin/nvcc
    --     CUDA host compiler  : /usr/bin/cc
    --   USE_EIGEN_FOR_BLAS    : 1
    --   USE_FFMPEG            : OFF
    --   USE_GFLAGS            : ON
    --   USE_GLOG              : ON
    --   USE_GLOO              : ON
    --   USE_LEVELDB           : ON
    --     LevelDB version     : 1.15
    --     Snappy version      : 1.1.0
    --   USE_LITE_PROTO        : OFF
    --   USE_LMDB              : ON
    --     LMDB version        : 0.9.10
    --   USE_METAL             : OFF
    --   USE_MKL               : 
    --   USE_MOBILE_OPENGL     : OFF
    --   USE_MPI               : ON
    --   USE_NCCL              : ON
    --   USE_NERVANA_GPU       : OFF
    --   USE_NNPACK            : ON
    --   USE_OBSERVERS         : ON
    --   USE_OPENCV            : ON
    --     OpenCV version      : 3.3.0
    --   USE_OPENMP            : OFF
    --   USE_PROF              : OFF
    --   USE_REDIS             : OFF
    --   USE_ROCKSDB           : OFF
    --   USE_THREADS           : ON
    --   USE_ZMQ               : OFF
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /home/eslam/caffe2/build
    

    Now, when executing sudo make install, I get the following errors:

    [ 15%] Running C++ protocol buffer compiler on /home/eslam/caffe2/build/third_party/onnx/onnx/onnx_onnx_c2.proto
    onnx_onnx_c2.proto:383:5: Expected "required", "optional", or "repeated".
    onnx_onnx_c2.proto:383:17: Missing field number.
    onnx_onnx_c2.proto:402:3: Expected "required", "optional", or "repeated".
    onnx_onnx_c2.proto:402:15: Missing field number.
    make[2]: *** [third_party/onnx/onnx/onnx_onnx_c2.pb.cc] Error 1
    make[1]: *** [third_party/onnx/CMakeFiles/onnx_proto.dir/all] Error 2
    make: *** [all] Error 2
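
    This particular parse error ("Expected \"required\", \"optional\", or \"repeated\".") is what a pre-3.0 protoc prints when fed a proto3 file, since proto3 fields carry no label. A quick check (the protoc path comes from the summary above; the version test is a sketch):

    ```shell
    # onnx_onnx_c2.proto uses proto3 syntax, so the protoc that CMake found
    # must be 3.0 or newer.
    /usr/bin/protoc --version
    ver=$(/usr/bin/protoc --version | awk '{print $2}')
    case "$ver" in
      [3-9].*) echo "protoc $ver can compile proto3" ;;
      *)       echo "protoc $ver is too old for proto3" ;;
    esac
    ```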
    
    build CUDA Protobuf 
    opened by BerserkerTiger 27
  • [Windows] Bug fixes for MSVC

    • file_store_handler.cc: mkdir only accepts one argument and requires inclusion of <direct.h>
    • math.h: macro workaround does not work for integerIsPowerOf2 when prefixed with the math namespace.
    • GpuBitonicSort.cuh: use std::integral_constant since nvcc ignores constexpr with MSVC (fixes #997)
    • pool_op_cudnn.cu: undefine IN and OUT macros defined in minwindef.h
    • logging.cc: Prefix glog logging levels with their full names, since MSVC cannot use the abbreviated macros

    CLA Signed 
    opened by willyd 27
  • cmake: stop including files from the install directory

    Here is the buggy behavior which this change fixes:

    • On the first configure with CMake, a system-wide benchmark installation is not found, so we use the version in third_party/ (see here)

    • On installation, the benchmark sub-project installs its headers to CMAKE_INSTALL_PREFIX (see here)

    • On a rebuild, CMake searches the system again for a benchmark installation (see https://github.com/caffe2/caffe2/issues/916 for details on why the first search is not cached)

    • CMake includes CMAKE_INSTALL_PREFIX when searching the system (docs)

    • Voila, a "system" installation of benchmark is found at CMAKE_INSTALL_PREFIX

    • On a rebuild, -isystem $CMAKE_INSTALL_PREFIX/include is added to every build target (see here). e.g:

      cd /caffe2/build/caffe2/binaries && ccache /usr/bin/c++    -I/caffe2/build -isystem /caffe2/third_party/googletest/googletest/include -isystem /caffe2/install/include -isystem /usr/include/opencv -isystem /caffe2/third_party/eigen -isystem /usr/include/python2.7 -isystem /usr/lib/python2.7/dist-packages/numpy/core/include -isystem /caffe2/third_party/pybind11/include -isystem /usr/local/cuda/include -isystem /caffe2/third_party/cub -I/caffe2 -I/caffe2/build_host_protoc/include  -fopenmp -std=c++11 -O2 -fPIC -Wno-narrowing -O3 -DNDEBUG   -o CMakeFiles/split_db.dir/split_db.cc.o -c /caffe2/caffe2/binaries/split_db.cc
      

    This causes two issues:

    1. Since the headers and libraries at CMAKE_INSTALL_PREFIX have a later timestamp than the built files, an unnecessary rebuild is triggered
    2. Outdated headers from the install directory are used during compilation, which can lead to strange build errors (which can usually be fixed by rm -rf'ing the install directory)

    Possible solutions:

    • Stop searching the system for an install of benchmark, and always use the version in third_party/
    • Cache the initial result of the system-wide search for benchmark, so we don't accidentally pick up the installed version later
    • Hack CMake to stop looking for headers and libraries in the installation directory

    This PR is an implementation of the first solution. Feel free to close this and fix the issue in another way if you like.
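
    For anyone hitting this before the fix lands, the third workaround can be approximated with stock CMake options (a sketch; the install path is illustrative):

    ```shell
    # CMAKE_FIND_NO_INSTALL_PREFIX tells find_package/find_path not to search
    # under CMAKE_INSTALL_PREFIX, so a previous install can't shadow third_party/.
    cmake -DCMAKE_FIND_NO_INSTALL_PREFIX=ON ..
    # Or simply clear the stale install tree before reconfiguring:
    rm -rf /caffe2/install && cmake ..
    ```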

    CLA Signed build 
    opened by lukeyeager 24
  • [wip] Fix public protobuf interface - wip

    This is an ongoing fix to the protobuf issue, mainly to address three things:

    (1) we have random protobuf fixes trying to patch an already flaky system (for example, dual install from anaconda and brew). This diff aims to basically use standard packages as much as possible (protobuf cmake config files, or FindProtoBuf.cmake) and then enforce the build script to explicitly set paths.

    (2) We need protobuf to be in the public interface of Caffe2. This PR adds it.

    (3) We will most likely need a protobuf diagnostic tool / script. TBD.

    Firing a PR so that we can launch build tests.

    CLA Signed 
    opened by Yangqing 23
  • Check system dependencies first

    This PR changes the CMake configuration of Caffe2 to look for system dependencies before resorting to the submodules in third_party. Only googletest should logically be in third_party; the other libraries should ideally be installed as system dependencies by the user. This PR adds system-dependency checks for Gloo, CUB, pybind11, Eigen and benchmark, as these were missing from the CMake files.

    In addition, it removes the execution of git submodule update --init in CMake. This seems like bad behavior to me; it should be up to the user to download submodules and manage the git repository.
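
    With this change, the user is expected to provide the dependencies; on Ubuntu that might look like the sketch below (package names are assumptions and vary by release):

    ```shell
    # Install commonly packaged Caffe2 dependencies so CMake's system-wide
    # search finds them instead of falling back to third_party/.
    sudo apt-get update
    sudo apt-get install -y --no-install-recommends \
          libeigen3-dev \
          pybind11-dev \
          libbenchmark-dev
    # Submodules are now the user's responsibility as well:
    git submodule update --init
    ```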

    CLA Signed 
    opened by hgaiser 23
  • mpi_test.cc.o: undefined reference to symbol '_ZN3MPI8Datatype4FreeEv

    Hi,

    I am getting an error while running make in Caffe2. This is what it says:

    /usr/bin/ld: CMakeFiles/mpi_test.dir/mpi/mpi_test.cc.o: undefined reference to symbol '_ZN3MPI8Datatype4FreeEv'
    /usr/lib/libmpi_cxx.so.1: error adding symbols: DSO missing from command line
    collect2: error: ld returned 1 exit status
    caffe2/CMakeFiles/mpi_test.dir/build.make:100: recipe for target 'bin/mpi_test' failed
    make[2]: *** [bin/mpi_test] Error 1
    CMakeFiles/Makefile2:2518: recipe for target 'caffe2/CMakeFiles/mpi_test.dir/all' failed
    make[1]: *** [caffe2/CMakeFiles/mpi_test.dir/all] Error 2
    Makefile:138: recipe for target 'all' failed
    make: *** [all] Error 2

    Any idea how I can fix it? Any help will be appreciated.

    Thanks!!

    System information

    • Operating system: Ubuntu 16.04
    • Compiler version: gcc 5.4.0 20160609
    • CMake version: 3.5.1
    • CMake arguments:
    • CUDA 9.1
    • CuDNN 7.0.5

    CMake summary output

    ******** Summary ********
    -- General:
    --   CMake version         : 3.5.1
    --   CMake command         : /usr/bin/cmake
    --   Git version           : v0.8.1-1240-g8f41717
    --   System                : Linux
    --   C++ compiler          : /usr/bin/c++
    --   C++ compiler version  : 5.4.0
    --   Protobuf compiler     : /usr/bin/protoc
    --   Protobuf include path : /usr/include
    --   Protobuf libraries    : optimized;/usr/lib/x86_64-linux-gnu/libprotobuf.so;debug;/usr/lib/x86_64-linux-gnu/libprotobuf.so;-pthread
    --   BLAS                  : Eigen
    --   CXX flags             : -std=c++11 -O2 -fPIC -Wno-narrowing -Wno-invalid-partial-specialization
    --   Build type            : Release
    --   Compile definitions   : 
    -- 
    --   BUILD_BINARY          : ON
    --   BUILD_DOCS            : OFF
    --   BUILD_PYTHON          : ON
    --     Python version      : 2.7.12
    --     Python library      : /usr/lib/x86_64-linux-gnu/libpython2.7.so
    --   BUILD_SHARED_LIBS     : ON
    --   BUILD_TEST            : ON
    --   USE_ATEN              : OFF
    --   USE_ASAN              : OFF
    --   USE_CUDA              : OFF
    --   USE_EIGEN_FOR_BLAS    : 1
    --   USE_FFMPEG            : OFF
    --   USE_GFLAGS            : ON
    --   USE_GLOG              : ON
    --   USE_GLOO              : ON
    --   USE_LEVELDB           : ON
    --     LevelDB version     : 1.18
    --     Snappy version      : 1.1.3
    --   USE_LITE_PROTO        : OFF
    --   USE_LMDB              : ON
    --     LMDB version        : 0.9.17
    --   USE_METAL             : OFF
    --   USE_MKL               : 
    --   USE_MOBILE_OPENGL     : OFF
    --   USE_MPI               : ON
    --   USE_NCCL              : OFF
    --   USE_NERVANA_GPU       : OFF
    --   USE_NNPACK            : ON
    --   USE_OBSERVERS         : ON
    --   USE_OPENCV            : ON
    --     OpenCV version      : 2.4.9.1
    --   USE_OPENMP            : OFF
    --   USE_PROF              : OFF
    --   USE_REDIS             : OFF
    --   USE_ROCKSDB           : OFF
    --   USE_THREADS           : ON
    --   USE_ZMQ               : OFF
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /home/ubuntu/caffe2/build
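
    "DSO missing from command line" means the linker needed libmpi_cxx.so, but it was only an indirect dependency and newer ld defaults to --as-needed behavior. One workaround sketch (the linker flag is standard GNU ld; whether it is the right fix for this particular build is an assumption):

    ```shell
    # Let the linker follow DT_NEEDED entries of libraries already on the
    # command line, which resolves the MPI C++ bindings indirectly.
    cmake -DCMAKE_EXE_LINKER_FLAGS="-Wl,--copy-dt-needed-entries" ..
    make
    ```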

    
    
    build CUDA 
    opened by skhushu 22
  • nvcc fatal   : Unsupported gpu architecture 'compute_75'

    System information

    • Operating system: Ubuntu 16.04
    • CMake version: 3.11.0
    • Relevant libraries/versions (e.g. CUDA): CUDA 9.0, cuDNN 7.1.3
    • I compiled Caffe2 from source


    CMake summary output

    [ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/address.cc.o
    [ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/buffer.cc.o
    [ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/context.cc.o
    [ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/device.cc.o
    [ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/pair.cc.o
    [ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/unbound_buffer.cc.o
    [ 23%] Linking CXX static library ../../../lib/libgloo.a
    [ 23%] Built target gloo
    [ 23%] Building NVCC (Device) object third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/gloo_cuda_generated_nccl.cu.o
    nvcc fatal : Unsupported gpu architecture 'compute_75'
    CMake Error at gloo_cuda_generated_nccl.cu.o.Release.cmake:215 (message):
      Error generating /home/lyl/pytorch/build/third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/./gloo_cuda_generated_nccl.cu.o

    third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/build.make:77: recipe for target 'third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/gloo_cuda_generated_nccl.cu.o' failed
    make[2]: *** [third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/gloo_cuda_generated_nccl.cu.o] Error 1
    CMakeFiles/Makefile2:951: recipe for target 'third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/all' failed
    make[1]: *** [third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/all] Error 2
    Makefile:140: recipe for target 'all' failed
    make: *** [all] Error 2
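
    compute_75 is the Turing architecture, which CUDA 9.0 cannot target (Turing support arrived with CUDA 10). A workaround sketch is to pin the architecture list to your actual GPU before rebuilding (TORCH_CUDA_ARCH_LIST is honored by the PyTorch/Caffe2 build; "6.1" is an example value):

    ```shell
    # Restrict code generation to architectures your CUDA toolkit supports,
    # e.g. 6.1 for a Pascal GTX 10xx card; check yours with nvidia-smi.
    export TORCH_CUDA_ARCH_LIST="6.1"
    # then rebuild from a clean build directory
    rm -rf build && python setup.py install
    ```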

    opened by NIEYALI 0
  • Caffe2 python Conv op can not specify engine

    I am testing Conv with the 'depthwise_3x3' engine in Caffe2. My Caffe2 is installed from source. I constructed a one-layer network which contains only one group convolution layer with input size (1,100,600,600) and kernel size (100,1,3,3), group=100. However, when I specify the engine to be 'depthwise_3x3', the speed is the same as with the 'cudnn' engine (or an empty string, or anything else). It seems that the argument 'engine=' does not work.

    opened by Guodonggogo 1
  • new instance caffe2::Predictor gets stuck

    I'm running the AICamera demo, and I changed the pb model to a ShuffleNet pb model, which had been tested with Caffe2 Python. However, when I run the ShuffleNet pb model in the AICamera Android app, it always gets stuck at _predictor = new caffe2::Predictor(_initNet, _predictNet); Can anyone help me?

    opened by cswwp 1
  • Trojan horse: Fuerboos.C!cl

    Hi there,

    when building the latest release:

    I get the following warning from Windows: Trojan:Win32/Fuerboos.C!cl (severity: Severe), flagged on caffe-master\build\CMakeFiles\3.12.3\CompilerIdC\a.exe

    Does anybody else experience this?

    opened by Franzisdrak 0
  • Error argument in predict

    Hi guys, Caffe2 is new for me, and I have this error while trying to run an example from the tutorial web:

    Caffe2 has been moved to https://github.com/pytorch/pytorch . Please post your issue at https://github.com/pytorch/pytorch/issues and include [Caffe2] in the beginning of your issue title.

    opened by jjoss 0
Releases(v0.8.1)
  • v0.8.1(Aug 8, 2017)

  • v0.8.0(Jul 21, 2017)

  • v0.7.0(Apr 18, 2017)

    Caffe2 v0.7.0 Release Notes

    Installation

    This build is confirmed for:

    • Ubuntu 14.04
    • Ubuntu 16.04

    Required Dependencies

    sudo apt-get update
    sudo apt-get install -y --no-install-recommends \
          build-essential \
          cmake \
          git \
          libgoogle-glog-dev \
          libprotobuf-dev \
          protobuf-compiler \
          python-dev \
          python-pip                          
    sudo pip install numpy protobuf
    

    Optional GPU Support

    If you plan to use a GPU instead of CPU only, then you should install NVIDIA CUDA and cuDNN, a GPU-accelerated library of primitives for deep neural networks. Follow NVIDIA's detailed instructions, or if you're feeling lucky, try the quick install set of commands below.

    Update your graphics card drivers first! Otherwise you may suffer from a wide range of difficult-to-diagnose errors.

    For Ubuntu 14.04

    sudo apt-get update && sudo apt-get install wget -y --no-install-recommends
    wget "http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_8.0.61-1_amd64.deb"
    sudo dpkg -i cuda-repo-ubuntu1404_8.0.61-1_amd64.deb
    sudo apt-get update
    sudo apt-get install cuda
    

    For Ubuntu 16.04

    sudo apt-get update && sudo apt-get install wget -y --no-install-recommends
    wget "http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb"
    sudo dpkg -i cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
    sudo apt-get update
    sudo apt-get install cuda
    

    Install cuDNN (all Ubuntu versions)

    CUDNN_URL="http://developer.download.nvidia.com/compute/redist/cudnn/v5.1/cudnn-8.0-linux-x64-v5.1.tgz"
    wget ${CUDNN_URL}
    sudo tar -xzf cudnn-8.0-linux-x64-v5.1.tgz -C /usr/local
    rm cudnn-8.0-linux-x64-v5.1.tgz && sudo ldconfig
    

    Optional Dependencies

    Note libgflags2 is for Ubuntu 14.04. libgflags-dev is for Ubuntu 16.04.

    # for Ubuntu 14.04
    sudo apt-get install -y --no-install-recommends libgflags2
    
    # for Ubuntu 16.04
    sudo apt-get install -y --no-install-recommends libgflags-dev
    
    # for both Ubuntu 14.04 and 16.04
    sudo apt-get install -y --no-install-recommends \
          libgtest-dev \
          libiomp-dev \
          libleveldb-dev \
          liblmdb-dev \
          libopencv-dev \
          libopenmpi-dev \
          libsnappy-dev \
          openmpi-bin \
          openmpi-doc \
          python-pydot
    sudo pip install \
          flask \
          graphviz \
          hypothesis \
          jupyter \
          matplotlib \
          pydot python-nvd3 \
          pyyaml \
          requests \
          scikit-image \
          scipy \
          setuptools \
          tornado
    

    Clone & Build

    git clone --recursive https://github.com/caffe2/caffe2.git && cd caffe2
    make && cd build && sudo make install
    python -c 'from caffe2.python import core' 2>/dev/null && echo "Success" || echo "Failure"
    

    Run the command below to test whether your GPU build succeeded. You will get test output either way, but the top of the output will warn you if the CPU was used instead, along with other errors such as missing libraries.

    python -m caffe2.python.operator_test.relu_op_test
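Conceptually, the operator test exercises ReLU, which just clamps negative inputs to zero. A pure-Python sketch of the function it verifies (illustrative only, not Caffe2 code):

```python
def relu(xs):
    """Elementwise ReLU: max(0, x) for each input value."""
    return [max(0.0, x) for x in xs]

print(relu([-2.0, -0.5, 0.0, 1.5, 3.0]))  # negatives become 0.0
```

The real test runs this computation through the Caffe2 operator on your available device and compares against a reference like the above.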
    

    Environment Variables

    These environment variables may help, depending on your configuration. When using the install instructions above on the AWS Deep Learning AMI you don't need to set them. However, our Docker scripts built on Ubuntu 14.04 or NVIDIA's CUDA images seem to benefit from having them set. If you ran into problems with the build tests above, these are good things to check: echo them first to see what you have, then append or replace with the directories below. Also see the troubleshooting section below.

    echo $PYTHONPATH
    # export PYTHONPATH=/usr/local:$PYTHONPATH
    # export PYTHONPATH=$PYTHONPATH:/home/ubuntu/caffe2/build
    echo $LD_LIBRARY_PATH
    # export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
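To see at a glance which of the suggested entries are already present, here is a small stdlib-only checker. The directory names are the examples above; adjust them to your own layout (the helper name is illustrative):

```python
import os

def missing_entries(var, wanted):
    """Return the entries in `wanted` that are absent from the environment variable `var`."""
    current = os.environ.get(var, "").split(os.pathsep)
    return [w for w in wanted if w not in current]

print(missing_entries("PYTHONPATH", ["/usr/local", "/home/ubuntu/caffe2/build"]))
print(missing_entries("LD_LIBRARY_PATH", ["/usr/local/lib"]))
```

Anything printed is a candidate for the export commands above.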
    

    Setting Up Tutorials & Jupyter Server

    If you're running this all on a cloud computer, you probably won't have a UI or a way to view the IPython notebooks by default. Typically, you would launch them locally with ipython notebook and a localhost:8888 page would open showing the directory of running notebooks. The following example shows how to launch the Jupyter server and connect to it remotely via an SSH tunnel.

    First, configure your cloud server to accept connections on port 8889 (or whatever port you prefer, adjusting the commands below to match). On AWS you do this by adding a rule to your server's security group allowing inbound TCP on port 8889; otherwise, adjust iptables accordingly.

    Next, launch the Jupyter server.

    jupyter notebook --no-browser --port=8889
    

    Then create the SSH tunnel. This forwards the cloud server's Jupyter instance to your local port 8888 for you to use locally. The example below is templated after how you would connect to AWS, where your-public-cert.pem is your own certificate and [email protected] is your login to your cloud server. On AWS you can grab this by going to Instances > Connect, copying the part after ssh, and swapping it into the command below.

    ssh -N -f -L localhost:8888:localhost:8889 -i "your-public-cert.pem" [email protected]
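Once the tunnel is up, localhost:8888 should accept connections on your local machine. A quick stdlib-only check (the port number matches the command above; the helper is illustrative):

```python
import socket

def port_open(host="localhost", port=8888, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("tunnel up" if port_open() else "tunnel not reachable on localhost:8888")
```

If the tunnel is not reachable, check that the ssh command is still running (it backgrounds itself with -f) and that the security-group rule is in place.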
    

    Troubleshooting

    | Python errors | |
    |----|-----|
    | Python version | Python is core to running Caffe2. We currently require Python 2.7. Ubuntu 14.04 and greater have Python built in by default, and that can be used to run Caffe2. To check your version: python --version |
    | Solution | If you want the developer version of Python, install the dev package: sudo apt-get install python-dev |
    | Python environment | You may have another version of Python installed, or need to support Python 3 for other projects. |
    | Solution | Try virtualenv or Anaconda. The Anaconda platform provides a single script to install many of the necessary packages for Caffe2, including Python. Using Anaconda is outside the scope of these instructions, but if you are interested, it may work well for you. |
    | pip version | If you plan to use Python with Caffe2, you need pip. |
    | Solution | sudo apt-get install python-pip, and also try using pip2 instead of pip. |
    | "AttributeError: 'module' object has no attribute 'MakeArgument'" | Occurs when calling core.CreateOperator |
    | Solution | Check your install directory (/usr/local/) and remove the folder caffe2/python/utils |

    | Building from source | |
    |----|-----|
    | OS version | Caffe2 requires Ubuntu 14.04 or greater. |
    | git | While you can download the Caffe2 source code and submodules directly from GitHub as a zip, using git makes it much easier. |
    | Solution | sudo apt-get install git |
    | protobuf | You may experience an error related to protobuf during the make step. |
    | Solution | Make sure you've installed protobuf in both of these two ways: sudo apt-get install libprotobuf-dev protobuf-compiler && sudo pip install protobuf |
    | libgflags2 error | This optional dependency is for Ubuntu 14.04. |
    | Solution | Use apt-get install libgflags-dev for Ubuntu 16.04. |

    | GPU Support | |
    |----|-----|
    | GPU errors | Unsupported GPU or wrong version |
    | Solution | You need the specific deb for your version of Linux: sudo dpkg -i cuda-repo-<distro>_<version>_<architecture>.deb. Refer to NVIDIA's installation guide. |
    | Build issues | Be warned that installing CUDA and cuDNN will increase the size of your build by about 4GB, so plan to have at least 12GB for your Ubuntu disk size. |

    Source code(tar.gz)
    Source code(zip)
  • v0.6.0(Apr 3, 2017)

    Caffe2 v0.6.0 Release Notes

    Installation

    Note: the release archive does not include the third_party submodules, which you will need in order to build Caffe2. We've uploaded an archive with the source and the submodules and attached it to this release; see the bottom of this page for the link!

    This build is confirmed for:

    • Ubuntu 14.04
    • Ubuntu 16.04

    Required Dependencies

    sudo apt-get update
    sudo apt-get install python-dev python-pip git build-essential cmake libprotobuf-dev protobuf-compiler libgoogle-glog-dev
    sudo pip install numpy protobuf
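After the pip install, you can confirm the two Python packages are importable without touching Caffe2 itself. A stdlib-only sketch (note that the protobuf package imports as google.protobuf; the helper name is illustrative):

```python
import importlib.util

def installed(*names):
    """Map each module name to True/False depending on whether it is importable."""
    out = {}
    for name in names:
        try:
            out[name] = importlib.util.find_spec(name) is not None
        except ModuleNotFoundError:
            out[name] = False
    return out

print(installed("numpy", "google.protobuf"))
```

Any False entry means the corresponding pip install above needs to be re-run.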
    

    Optional GPU Support

    If you plan to use a GPU instead of CPU only, install NVIDIA CUDA and cuDNN, a GPU-accelerated library of primitives for deep neural networks. Follow NVIDIA's detailed instructions, or if you're feeling lucky, try the quick-install commands below.

    Update your graphics card drivers first! Otherwise you may suffer from a wide range of difficult-to-diagnose errors.

    For Ubuntu 14.04

    sudo apt-get update && sudo apt-get install wget -y --no-install-recommends
    wget "http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_8.0.61-1_amd64.deb"
    sudo dpkg -i cuda-repo-ubuntu1404_8.0.61-1_amd64.deb
    sudo apt-get update
    sudo apt-get install cuda
    

    For Ubuntu 16.04

    sudo apt-get update && sudo apt-get install wget -y --no-install-recommends
    wget "http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb"
    sudo dpkg -i cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
    sudo apt-get update
    sudo apt-get install cuda
    

    Install cuDNN (all Ubuntu versions)

    CUDNN_URL="http://developer.download.nvidia.com/compute/redist/cudnn/v5.1/cudnn-8.0-linux-x64-v5.1.tgz"
    wget ${CUDNN_URL}
    sudo tar -xzf cudnn-8.0-linux-x64-v5.1.tgz -C /usr/local
    rm cudnn-8.0-linux-x64-v5.1.tgz && sudo ldconfig
    

    Optional Dependencies

    sudo apt-get install libgtest-dev libgflags2 libgflags-dev liblmdb-dev libleveldb-dev libsnappy-dev libopencv-dev libiomp-dev openmpi-bin openmpi-doc libopenmpi-dev python-pydot
    sudo pip install flask graphviz hypothesis jupyter matplotlib pydot python-nvd3 pyyaml requests scikit-image scipy setuptools tornado
    
    • Note: for Ubuntu 16.04, libgflags2 should be replaced with libgflags-dev.

    Clone & Build

    git clone --recursive https://github.com/caffe2/caffe2.git && cd caffe2
    make && cd build && sudo make install
    python -c 'from caffe2.python import core' 2>/dev/null && echo "Success" || echo "Failure"
    

    Run the command below to test whether your GPU build succeeded. You will get test output either way, but the top of the output will warn you if the CPU was used instead, along with other errors such as missing libraries.

    python -m caffe2.python.operator_test.relu_op_test
    

    Environment Variables

    These environment variables may help, depending on your configuration. When using the install instructions above on the AWS Deep Learning AMI you don't need to set them. However, our Docker scripts built on Ubuntu 14.04 or NVIDIA's CUDA images seem to benefit from having them set. If you ran into problems with the build tests above, these are good things to check: echo them first to see what you have, then append or replace with the directories below. Also see the troubleshooting section below.

    echo $PYTHONPATH
    # export PYTHONPATH=/usr/local:$PYTHONPATH
    # export PYTHONPATH=$PYTHONPATH:/home/ubuntu/caffe2/build
    echo $LD_LIBRARY_PATH
    # export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
    

    Setting Up Tutorials & Jupyter Server

    If you're running this all on a cloud computer, you probably won't have a UI or a way to view the IPython notebooks by default. Typically, you would launch them locally with ipython notebook and a localhost:8888 page would open showing the directory of running notebooks. The following example shows how to launch the Jupyter server and connect to it remotely via an SSH tunnel.

    First, configure your cloud server to accept connections on port 8889 (or whatever port you prefer, adjusting the commands below to match). On AWS you do this by adding a rule to your server's security group allowing inbound TCP on port 8889; otherwise, adjust iptables accordingly.

    Next, launch the Jupyter server.

    jupyter notebook --no-browser --port=8889
    

    Then create the SSH tunnel. This forwards the cloud server's Jupyter instance to your local port 8888 for you to use locally. The example below is templated after how you would connect to AWS, where your-public-cert.pem is your own certificate and [email protected] is your login to your cloud server. On AWS you can grab this by going to Instances > Connect, copying the part after ssh, and swapping it into the command below.

    ssh -N -f -L localhost:8888:localhost:8889 -i "your-public-cert.pem" [email protected]
    

    Troubleshooting

    | Python errors | |
    |----|-----|
    | Python version | Python is core to running Caffe2. We currently require Python 2.7. Ubuntu 14.04 and greater have Python built in by default, and that can be used to run Caffe2. To check your version: python --version |
    | Solution | If you want the developer version of Python, install the dev package: sudo apt-get install python-dev |
    | Python environment | You may have another version of Python installed, or need to support Python 3 for other projects. |
    | Solution | Try virtualenv or Anaconda. The Anaconda platform provides a single script to install many of the necessary packages for Caffe2, including Python. Using Anaconda is outside the scope of these instructions, but if you are interested, it may work well for you. |
    | pip version | If you plan to use Python with Caffe2, you need pip. |
    | Solution | sudo apt-get install python-pip, and also try using pip2 instead of pip. |

    | Building from source | |
    |----|-----|
    | OS version | Caffe2 requires Ubuntu 14.04 or greater. |
    | git | While you can download the Caffe2 source code and submodules directly from GitHub as a zip, using git makes it much easier. |
    | Solution | sudo apt-get install git |
    | protobuf | You may experience an error related to protobuf during the make step. |
    | Solution | Make sure you've installed protobuf in both of these two ways: sudo apt-get install libprotobuf-dev protobuf-compiler && sudo pip install protobuf |
    | libgflags2 error | This optional dependency is for Ubuntu 14.04. |
    | Solution | Use apt-get install libgflags-dev for Ubuntu 16.04. |

    | GPU Support | |
    |----|-----|
    | GPU errors | Unsupported GPU or wrong version |
    | Solution | You need the specific deb for your version of Linux: sudo dpkg -i cuda-repo-<distro>_<version>_<architecture>.deb. Refer to NVIDIA's installation guide. |
    | Build issues | Be warned that installing CUDA and cuDNN will increase the size of your build by about 4GB, so plan to have at least 12GB for your Ubuntu disk size. |

    Source code(tar.gz)
    Source code(zip)
    caffe2-0.6.0-full.tar.gz(29.19 MB)
Owner
Meta Archive
These projects have been archived and are generally unsupported, but are still available to view and use.