MLPerf™ v1.0 results

Overview

MLPerf™ Inference v1.0

GitHub Submission HOWTO

Clone the MLPerf Inference v1.0 submission tree

Clone the submission tree e.g. under your home directory:

$ export SUBMISSION_ROOT=$HOME/submissions_inference_1_0
$ git clone [email protected]:mlcommons/submissions_inference_1_0.git $SUBMISSION_ROOT
$ cd $SUBMISSION_ROOT

Create a branch

We recommend creating a new branch for every logically connected group of results, e.g. all results from your System-Under-Test (SUT) or only those relating to a particular benchmark. Prefix your branch name with your organization's name. Feel free to include the SUT name, implementation name, benchmark name, etc.

For example:

$ git checkout master && git pull
$ git checkout -b dividiti-closed-aws-g4dn.4xlarge-openvino

Populate your branch according to the submission rules.
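The top-level layout the checker expects (visible in the git status output for the dividiti example) can be scaffolded as follows; the division "closed" and the submitter name "dividiti" are illustrative:

```shell
# Scaffold the per-division/per-submitter directory skeleton
# (run inside a scratch directory here; in practice, inside $SUBMISSION_ROOT).
cd "$(mktemp -d)"
ORG=dividiti   # illustrative submitter name
for d in code compliance measurements results systems; do
  mkdir -p "closed/$ORG/$d"
done
ls -d closed/$ORG/*
```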

You can inspect your changes:

..." to include in what will be committed) closed/dividiti/code/ closed/dividiti/compliance/ closed/dividiti/measurements/ closed/dividiti/results/ closed/dividiti/systems/ nothing added to commit but untracked files present (use "git add" to track) ">
$ git status
On branch dividiti-closed-aws-g4dn.4xlarge-openvino
Untracked files:
  (use "git add ..." to include in what will be committed)
        closed/dividiti/code/
        closed/dividiti/compliance/
        closed/dividiti/measurements/
        closed/dividiti/results/
        closed/dividiti/systems/

nothing added to commit but untracked files present (use "git add" to track)

and make intermediate commits as usual:

$ git add closed/dividiti
$ git commit -m "Dump repo:mlperf-closed-aws-g4dn.4xlarge-openvino."

Run the submission checker

Once you are happy with the tree structure, truncate the accuracy logs and run the submission checker, culminating in e.g.:

      INFO:main:Results=2, NoResults=0
      INFO:main:SUMMARY: submission looks OK
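The truncate-and-check step above can be sketched as follows, assuming the mlcommons/inference tools checkout lives under $MLPERF_INFERENCE; the helper scripts truncate_accuracy_log.py and submission_checker.py ship in its tools/submission directory, but their names and flags should be verified against your checkout:

```shell
# Sketch of truncating accuracy logs and running the submission checker;
# paths and the submitter name "dividiti" are illustrative.
MLPERF_INFERENCE=${MLPERF_INFERENCE:-$HOME/inference}
SUBMISSION_ROOT=${SUBMISSION_ROOT:-$HOME/submissions_inference_1_0}
TOOLS="$MLPERF_INFERENCE/tools/submission"

for script in truncate_accuracy_log.py submission_checker.py; do
  if [ -f "$TOOLS/$script" ]; then
    python3 "$TOOLS/$script" --input "$SUBMISSION_ROOT" --submitter dividiti
  else
    echo "missing $TOOLS/$script -- clone mlcommons/inference first"
  fi
done
```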

Push the changes

Once you and the submission checker are happy with the tree structure, you can push the changes:

$ git push

fatal: The current branch dividiti-closed-aws-g4dn.4xlarge-openvino has no upstream branch.
To push the current branch and set the remote as upstream, use

    git push --set-upstream origin dividiti-closed-aws-g4dn.4xlarge-openvino

Do exactly as suggested:

$ git push --set-upstream origin dividiti-closed-aws-g4dn.4xlarge-openvino

Create a pull request

If you now go to https://github.com/mlcommons/submissions_inference_1_0/, you should see a notification about your branch being recently pushed and can immediately create a pull request (PR). You can also select your branch from the dropdown menu under <> Code. (Aren't you happy you prefixed your branch's name with the submitter's name?)

As usual, you can continue committing to the branch until the PR is merged, with any changes being reflected in the PR.

Comments
  • [TensorRT] INTERNAL ERROR: Assertion failed: status==STATUS_SUCCESS

    Code dir: closed/NVIDIA. Test benchmark: ssd-resnet34. Scenario: Server. When I run make run RUN_ARGS="--benchmarks=ssd-resnet34 --scenarios=Server --test_mode=PerformanceOnly" and make run RUN_ARGS="--benchmarks=ssd-resnet34 --scenarios=Server --config_ver=triton --test_mode=PerformanceOnly", both fail with this error:

    [2021-07-01 10:26:57,901 builder.py:149 INFO] Building ./build/engines/Tesla P4x1/ssd-resnet34/Server/ssd-resnet34-Server-gpu-b2-int8.default.plan
    [TensorRT] WARNING: Calibration Profile is not defined. Runing calibration with Profile 0
    [TensorRT] INFO: Detected 1 inputs and 1 output network tensors.
    [TensorRT] INFO: Starting Calibration.
    Calibrating with batch 0
    [TensorRT] INTERNAL ERROR: Assertion failed: status == STATUS_SUCCESS
    /work/code/plugin/NMSOptPlugin/src/nmsPluginOpt.cpp:183
    Aborting...
    
    Traceback (most recent call last):
      File "code/main.py", line 703, in <module>
        main(main_args, system)
      File "code/main.py", line 634, in main
        launch_handle_generate_engine(*_gen_args, **_gen_kwargs)
      File "code/main.py", line 62, in launch_handle_generate_engine
        raise RuntimeError("Building engines failed!")
    RuntimeError: Building engines failed!
    Makefile:606: recipe for target 'generate_engines' failed
    make[1]: *** [generate_engines] Error 1
    make[1]: Leaving directory '/work'
    Makefile:600: recipe for target 'run' failed
    make: *** [run] Error 2
    

    I built the test docker successfully by running make prebuild, and built the required libraries and TensorRT plugins successfully by running make build. My GPU is a Tesla P4. Here is my config: in code/common/system_list.py I added P4=SystemClass("Tesla P4",["Tesla P4"],["1BB3"],Architecture.Unknow,[1]), and in configs/ssd-resnet34/Server/config.json:

    "Tesla P4": {
            "config_ver": {
                "triton": {
                    "instance_group_count": 4,
                    "server_target_qps": 110,
                    "use_triton": true
                }
            },
            "deque_timeout_usec": 2000,
            "gpu_batch_size": 2,
            "gpu_inference_streams": 4,
            "server_target_qps": 110,
            "use_cuda_thread_per_device": false
        },
    

    So where is the problem? Is something wrong with the config file, the inference data, or the model?
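Before digging further into the config, it may be worth confirming the exact GPU name and PCI device id the harness detects, since system detection matches them against the system_list.py entry; this is the same nvidia-smi query the NVIDIA code runs in its logs:

```shell
# Print the GPU name and PCI device id that system detection matches against
# the SystemClass entry; falls back to a message when nvidia-smi is absent.
if command -v nvidia-smi >/dev/null 2>&1; then
  GPU_INFO=$(nvidia-smi --query-gpu=gpu_name,pci.device_id --format=csv)
else
  GPU_INFO="nvidia-smi not found"
fi
echo "$GPU_INFO"
```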

    opened by TianningWang 5
  • Unable to link to libtensorflow_cc.so

    I'm following the instructions in "Instructions for building TensorFlow and MLPerf loadgen integration" posted here: https://github.com/mlcommons/inference_results_v1.0/tree/master/closed/Intel/code/resnet50/tensorflow

    During the "Build Tensorflow Backend" phase, while running the Makefile in loadrun I was encountered with the following error:

    $ g++ -fopenmp loadrun.cc -O3 -fpic -Wall -std=gnu++14 -g -I/usr/include -I/home/serena/loadgen -I/home/serena/deps-installations2/tf-cc/include -I/usr/include/opencv4 -L/usr/lib -L/home/serena/loadgen/build -L/home/serena/deps-installations2/tf-cc/lib -L/usr/lib -L/home/serena/sf_Intel/code/resnet50/tensorflow/loadrun/../backend -o loadrun -lpthread -lrt -lmlperf_loadgen -ltensorflow_cc -ltensorflow_backend -lboost_filesystem -lboost_system -lopencv_core -lopencv_highgui -lopencv_imgproc -lopencv_videoio -lopencv_imgcodecs
    /usr/bin/ld: cannot find -ltensorflow_cc
    collect2: error: ld returned 1 exit status
    make: *** [Makefile:20: loadrun] Error 1
    

    I have built libtensorflow_cc.so locally and placed it in the LDFLAGS path (/home/serena/deps-installations2/tf-cc/lib). However, I still get the above error. Any help or pointers would be appreciated.

    $ ls -al
    total 828456
    drwxrwxr-x 2 serena serena      4096 Jun 27 16:41 .
    drwxrwxr-x 4 serena serena      4096 Jun 29 18:04 ..
    -rwxr-xr-x 1 serena serena   2226679 Jun  9 10:18 libiomp5.so
    -rwxr-xr-x 1 serena serena 130121414 Jun  9 10:18 libmklml_intel.so
    lrwxrwxrwx 1 serena serena        29 Jun 27 16:41 libtensorflow_cc.so -> ./lib/libtensorflow_cc.so.2.0
    lrwxrwxrwx 1 serena serena        31 Jun 27 16:41 libtensorflow_cc.so.2 -> ./lib/libtensorflow_cc.so.2.5.0
    -r-xr-xr-x 1 serena serena 391381888 Jun 27 16:39 libtensorflow_cc.so.2.5.0
    -r-xr-xr-x 1 serena serena 324587936 Jun 20 12:03 libtensorflow.so
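One thing stands out in the listing: libtensorflow_cc.so points at ./lib/libtensorflow_cc.so.2.0, a path relative to the directory the link itself lives in (so it resolves to a non-existent lib/lib/... file), and there is no .so.2.0 at all, only .so.2.5.0. A dangling symlink makes ld report "cannot find -ltensorflow_cc" even though the real library is present. A minimal sketch of the diagnosis and fix, using stand-in files in a temp directory:

```shell
# Reproduce the dangling-symlink situation from the listing (stand-in files),
# then repoint the link at the library that actually exists.
cd "$(mktemp -d)"
touch libtensorflow_cc.so.2.5.0                          # the real library
ln -s ./lib/libtensorflow_cc.so.2.0 libtensorflow_cc.so  # broken, as in the listing
[ -e libtensorflow_cc.so ] || echo "libtensorflow_cc.so is dangling"
ln -sf libtensorflow_cc.so.2.5.0 libtensorflow_cc.so     # fix: link to the real file
[ -e libtensorflow_cc.so ] && echo "libtensorflow_cc.so resolves"
```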
    
    opened by serenagomez1304 3
  • [Fix][NVIDIA] fix MultiThreadedAugmenter issue

    Fixes an issue where MultiThreadedAugmenter is no longer accessible, making BraTS preprocessing fail for the 3d-unet benchmark.

    https://github.com/mlcommons/inference_results_v1.0/issues/11

    opened by jqueguiner 2
  • Getting python error with NVIDIA Generate TensorRT engines

    I'm following the instructions to duplicate Nvidia inference workload posted here: https://github.com/mlcommons/inference_results_v1.0/tree/master/closed/NVIDIA#readme

    I encountered an error while generating the TensorRT engine. This is inside the container launched by the make prebuild script.

    The make build command ran successfully.

    [email protected]:/work# make generate_engines RUN_ARGS="--benchmarks=bert --scenarios=Server --config_ver=default,high_accuracy" | tee generate.log
    [2021-05-20 16:31:39,869 __init__.py:255 INFO] Running command: CUDA_VISIBILE_ORDER=PCI_BUS_ID nvidia-smi --query-gpu=gpu_name,pci.device_id,uuid --format=csv
    [2021-05-20 16:31:39,882 main.py:701 INFO] Detected System ID: T4x1
    [2021-05-20 16:31:39,908 main.py:529 INFO] Using config files: configs/bert/Server/config.json
    [2021-05-20 16:31:39,908 __init__.py:341 INFO] Parsing config file configs/bert/Server/config.json ...
    [2021-05-20 16:31:39,916 main.py:542 INFO] Processing config "T4x1_bert_Server"
    [2021-05-20 16:31:40,012 main.py:82 INFO] Building engines for bert benchmark in Server scenario...
    [2021-05-20 16:31:40,013 main.py:102 INFO] Building GPU engine for T4x1_bert_Server
    [2021-05-20 16:31:44,273 bert_var_seqlen.py:63 INFO] Using workspace size: 5,368,709,120
    [2021-05-20 16:32:02,717 __init__.py:255 INFO] Running command: CUDA_VISIBILE_ORDER=PCI_BUS_ID nvidia-smi --query-gpu=gpu_name,pci.device_id,uuid --format=csv
    Process Process-1:
    Traceback (most recent call last):
      File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
        self.run()
      File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
        self._target(*self._args, **self._kwargs)
      File "/work/code/main.py", line 108, in handle_generate_engine
        b.build_engines()
      File "/work/code/bert/tensorrt/bert_var_seqlen.py", line 129, in build_engines
        weights_dict = get_onnx_fake_quant_weights(self.model_path)
      File "/work/code/bert/tensorrt/builder_utils.py", line 77, in get_onnx_fake_quant_weights
        model = onnx.load(path)
      File "/usr/local/lib/python3.6/dist-packages/onnx/__init__.py", line 115, in load_model
        model = load_model_from_string(s, format=format)
      File "/usr/local/lib/python3.6/dist-packages/onnx/__init__.py", line 152, in load_model_from_string
        return _deserialize(s, ModelProto())
      File "/usr/local/lib/python3.6/dist-packages/onnx/__init__.py", line 95, in _deserialize
        decoded = cast(Optional[int], proto.ParseFromString(s))
    google.protobuf.message.DecodeError: Error parsing message
    Traceback (most recent call last):
      File "code/main.py", line 703, in <module>
        main(main_args, system)
      File "code/main.py", line 634, in main
        launch_handle_generate_engine(*_gen_args, **_gen_kwargs)
      File "code/main.py", line 62, in launch_handle_generate_engine
        raise RuntimeError("Building engines failed!")
    RuntimeError: Building engines failed!
    Makefile:599: recipe for target 'generate_engines' failed
    make: *** [generate_engines] Error 1

    opened by quic-nmorillo 2
  • Objects of target "proto-library" referenced but no such target exists.

    This CMake error happened while I was running the MLPerf benchmark suite on an NVIDIA GPU-based platform, during the make build command described at https://github.com/mlcommons/inference_results_v1.0/tree/master/closed/NVIDIA#build

    The error message is shown below:

    CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
    Please set them or make sure they are set and tested correctly in the CMake files:
    NVINFER_LIBRARY
        linked by target "tritonserver" in directory /home/test/inference_results_v1.0/closed/NVIDIA/build/triton-inference-server/src/servers
    NVINFER_PLUGIN_LIBRARY
        linked by target "tritonserver" in directory /home/test/inference_results_v1.0/closed/NVIDIA/build/triton-inference-server/src/servers
    
    CMake Error at /home/test/inference_results_v1.0/closed/NVIDIA/build/triton-inference-server/src/servers/CMakeLists.txt:377 (add_executable):
      Error evaluating generator expression:
    
        $<TARGET_OBJECTS:proto-library>
    
      Objects of target "proto-library" referenced but no such target exists.
    
    
    CMake Error at /home/test/inference_results_v1.0/closed/NVIDIA/build/triton-inference-server/src/servers/CMakeLists.txt:275 (add_executable):
      Error evaluating generator expression:
    
        $<TARGET_OBJECTS:proto-library>
    
      Objects of target "proto-library" referenced but no such target exists.
    

    This is the part of the CMakeLists.txt files that references $<TARGET_OBJECTS:proto-library>. File paths: /home/test/inference_results_v1.0/closed/NVIDIA/build/triton-inference-server/src/server/CMakeList.txt and /home/test/inference_results_v1.0/closed/NVIDIA/build/triton-inference-server/src/test/CMakeList.txt

    add_executable(
      simple
      ${SIMPLE_SRCS}
      ${SIMPLE_HDRS}
      $<TARGET_OBJECTS:proto-library>
    
    )
    
    set(
        HTTP_ENDPOINT_OBJECTS
        $<TARGET_OBJECTS:http-endpoint-library>
        $<TARGET_OBJECTS:proto-library>
        $<TARGET_OBJECTS:model-config-library>
      )
    

    The protobuf source code compiled successfully; how do I get CMake to find its libraries?
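Setting CMAKE_PREFIX_PATH (or a package's `<Package>_DIR` variable) is the usual way to make find_package locate a locally built library. A sketch, assuming protobuf was installed under /opt/protobuf (an illustrative prefix, not taken from the issue):

```shell
# Hint CMake at a locally installed protobuf; /opt/protobuf is illustrative.
export CMAKE_PREFIX_PATH="/opt/protobuf${CMAKE_PREFIX_PATH:+:$CMAKE_PREFIX_PATH}"
# Equivalent per-invocation form (shown, not executed, since it needs the
# actual build tree):
CMAKE_CMD='cmake -DCMAKE_PREFIX_PATH=/opt/protobuf ..'
echo "$CMAKE_CMD"
```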

    opened by TianningWang 1
  • How to generate .onnx file having custom layer?

    @tjablin I have a PyTorch model that uses the spconv library. My task is to convert the PyTorch model to ONNX, and then from ONNX to TensorRT.

    Your work is quite similar to what I want to do. Could you please share the code you used to generate libautosiniancnnplugin_ampere.so and ofa_autosinian_is176.onnx?

    Does anyone have an idea how to generate a .trt engine from a PyTorch model (with the spconv library)? Suggestions welcome.

    opened by HaribHLK 0
  • Error executing tensorflow resnet50

    I'm trying to execute this benchmark: https://github.com/mlcommons/inference_results_v1.0/tree/master/closed/Intel/code/resnet50/tensorflow but when running it I get an error like:

    2022-02-01 02:43:47.633895: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:581] model_pruner failed: Invalid argument: Graph does not contain terminal node ArgMax.
    2022-02-01 02:43:47.634168: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:581] model_pruner failed: Invalid argument: Graph does not contain terminal node ArgMax.
    2022-02-01 02:43:47.652754: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:581] model_pruner failed: Invalid argument: Graph does not contain terminal node ArgMax.
    Running inference failed
    Invalid argument: Tensor ArgMax:0, specified in either feed_devices or fetch_devices was not found in the Graph
    Running inference failed
    

    I thought it was something related to the calibration process, since the process was looking for a "conf.yaml" (https://github.com/mlcommons/inference_results_v1.0/blob/master/closed/Intel/calibration/TensorFlow/mlperf.patch#L148) which is not specified. So I looked a bit into the calibration tool repo and found a possible example, which I used: https://github.com/intel/neural-compressor/blob/mlperf_v0.7/examples/tensorflow/image_recognition/resnet50_v1_5.yaml The configuration uses ArgMax:0 as the output node (https://github.com/mlcommons/inference_results_v1.0/blob/master/closed/Intel/code/resnet50/tensorflow/backend/net_config.h#L64), but it seems that node is not found in the model.

    Does anyone have an idea what could be causing this error?
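One way to test the missing-ArgMax hypothesis directly is to dump the terminal node names of the frozen graph and see what the real output node is called. A sketch (the model filename resnet50_v1.pb is illustrative, and the graph inspection needs tensorflow installed):

```shell
# List the last few node names of a frozen GraphDef; if ArgMax is absent,
# the output-node name in net_config.h does not match this model.
NODES=$(python3 - <<'EOF'
import os, sys
try:
    import tensorflow as tf
except ImportError:
    print("tensorflow not installed"); sys.exit(0)
path = "resnet50_v1.pb"  # illustrative: use your frozen model's path
if not os.path.exists(path):
    print("model file not found:", path); sys.exit(0)
gd = tf.compat.v1.GraphDef()
with open(path, "rb") as f:
    gd.ParseFromString(f.read())
print([n.name for n in gd.node[-5:]])  # terminal nodes; look for ArgMax here
EOF
)
echo "$NODES"
```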

    opened by fenz 1
  • gits@xcdl190260:vitis/mlperf-vitis-benchmark-app.git  Can it be transferred to the public GitHub?

    opened by jzymessi 0
  • batch generator broken for brats dataset preprocessing

    See this for more info: https://githubmemory.com/repo/MIC-DKFZ/nnUNet/issues/752

    Running pip install batchgenerators==0.21 solves the issue, but seems to break CUDA support for torch.

    Alternatively, use --ipc=host.
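For context, --ipc=host is a docker run flag: PyTorch data-loader workers exchange batches through shared memory, and a container's /dev/shm defaults to only 64 MB, a common cause of augmenter/worker crashes. A sketch of how it would be passed (the image name mlperf-nvidia is illustrative):

```shell
# --ipc=host shares the host IPC namespace (and its /dev/shm) with the
# container, avoiding the default 64 MB shared-memory cap.
DOCKER_ARGS="--gpus=all --ipc=host"
# Illustrative invocation, shown rather than executed:
echo "docker run $DOCKER_ARGS -it mlperf-nvidia"
```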

    opened by jqueguiner 0
  • Link error while building tensorflow backend

    @tjablin I'm trying to build the TensorFlow backend. I see the following errors after the last make command:

    g++ -fopenmp loadrun.cc -O3 -fpic -Wall -std=gnu++14 -g -I/usr/include -I/home/serena/mlperf_inference/loadgen -I/home/serena/deps-installations2/tf-cc/include -I/usr/include/opencv4 -L/usr/lib -L/home/serena/mlperf_inference/loadgen/build -L/home/serena/deps-installations2/tf-cc/lib/lib -L/home/serena/deps-installations2/tf-cc/lib -L/usr/lib -L/../backend -L/usr/lib/x86_64-linux-gnu -o loadrun -lpthread -lrt -lmlperf_loadgen -ltensorflow_cc -ltensorflow_backend -lboost_filesystem -lboost_system -lopencv_core -lopencv_highgui -lopencv_imgproc -lopencv_videoio -lopencv_imgcodecs
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::TTypes<long long, 1ul, long>::Tensor tensorflow::Tensor::tensor<long long, 1ul>()':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/framework/tensor.h:727: undefined reference to `tensorflow::Tensor::CheckTypeAndIsAligned(tensorflow::DataType) const'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `Eigen::DSizes<long, 1> tensorflow::TensorShape::AsEigenDSizes<1, long>() const':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/framework/tensor_shape.h:485: undefined reference to `tensorflow::TensorShape::CheckDimsEqual(int) const'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `Eigen::DSizes<long, 1> tensorflow::TensorShape::AsEigenDSizesWithPadding<1, long>() const':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/framework/tensor_shape.h:491: undefined reference to `tensorflow::TensorShape::CheckDimsAtLeast(int) const'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/framework/tensor_shape.h:495: undefined reference to `tensorflow::TensorShapeBase<tensorflow::TensorShape>::dim_size(int) const'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/framework/tensor_shape.h:495: undefined reference to `tensorflow::TensorShapeBase<tensorflow::TensorShape>::dim_size(int) const'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::get_net_conf(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/net_config.h:163: undefined reference to `tensorflow::internal::LogMessageFatal::LogMessageFatal(char const*, int)'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/net_config.h:163: undefined reference to `tensorflow::internal::LogMessageFatal::~LogMessageFatal()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/net_config.h:163: undefined reference to `tensorflow::internal::LogMessageFatal::~LogMessageFatal()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::Classifier<float>::AccuracyCompute()':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:272: undefined reference to `tensorflow::TensorShapeBase<tensorflow::TensorShape>::dim_size(int) const'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:272: undefined reference to `tensorflow::TensorShapeBase<tensorflow::TensorShape>::dim_size(int) const'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `void std::_Destroy<tensorflow::Tensor>(tensorflow::Tensor*)':
    /usr/include/c++/9/bits/stl_construct.h:98: undefined reference to `tensorflow::Tensor::~Tensor()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::Classifier<float>::~Classifier()':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:145: undefined reference to `tensorflow::Tensor::~Tensor()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tensorflow::Tensor>::~pair()':
    /usr/include/c++/9/bits/stl_pair.h:208: undefined reference to `tensorflow::Tensor::~Tensor()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >* tensorflow::internal::MakeCheckOpString<unsigned long, unsigned long>(unsigned long const&, unsigned long const&, char const*)':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/default/logging.h:337: undefined reference to `tensorflow::internal::CheckOpMessageBuilder::CheckOpMessageBuilder(char const*)'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/default/logging.h:339: undefined reference to `tensorflow::internal::CheckOpMessageBuilder::ForVar2()'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/default/logging.h:340: undefined reference to `tensorflow::internal::CheckOpMessageBuilder::NewString[abi:cxx11]()'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/default/logging.h:337: undefined reference to `tensorflow::internal::CheckOpMessageBuilder::~CheckOpMessageBuilder()'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/default/logging.h:337: undefined reference to `tensorflow::internal::CheckOpMessageBuilder::~CheckOpMessageBuilder()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >* tensorflow::internal::MakeCheckOpString<long, int>(long const&, int const&, char const*)':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/default/logging.h:337: undefined reference to `tensorflow::internal::CheckOpMessageBuilder::CheckOpMessageBuilder(char const*)'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/default/logging.h:339: undefined reference to `tensorflow::internal::CheckOpMessageBuilder::ForVar2()'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/default/logging.h:340: undefined reference to `tensorflow::internal::CheckOpMessageBuilder::NewString[abi:cxx11]()'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/default/logging.h:337: undefined reference to `tensorflow::internal::CheckOpMessageBuilder::~CheckOpMessageBuilder()'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/default/logging.h:337: undefined reference to `tensorflow::internal::CheckOpMessageBuilder::~CheckOpMessageBuilder()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >* tensorflow::internal::MakeCheckOpString<long long, long long>(long long const&, long long const&, char const*)':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/default/logging.h:337: undefined reference to `tensorflow::internal::CheckOpMessageBuilder::CheckOpMessageBuilder(char const*)'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/default/logging.h:339: undefined reference to `tensorflow::internal::CheckOpMessageBuilder::ForVar2()'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/default/logging.h:340: undefined reference to `tensorflow::internal::CheckOpMessageBuilder::NewString[abi:cxx11]()'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/default/logging.h:337: undefined reference to `tensorflow::internal::CheckOpMessageBuilder::~CheckOpMessageBuilder()'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/default/logging.h:337: undefined reference to `tensorflow::internal::CheckOpMessageBuilder::~CheckOpMessageBuilder()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `void tensorflow::Tensor::FillDimsAndValidateCompatibleShape<1ul>(absl::lts_2020_02_25::Span<long long const>, std::array<long, 1ul>*) const':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/framework/tensor.h:788: undefined reference to `tensorflow::internal::LogMessageFatal::LogMessageFatal(char const*, int)'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/framework/tensor.h:794: undefined reference to `tensorflow::internal::LogMessageFatal::LogMessageFatal(char const*, int)'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/framework/tensor.h:794: undefined reference to `tensorflow::internal::LogMessageFatal::~LogMessageFatal()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::TTypes<float, 1ul, long>::Tensor tensorflow::Tensor::shaped<float, 1ul>(absl::lts_2020_02_25::Span<long long const>)':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/framework/tensor.h:824: undefined reference to `tensorflow::Tensor::CheckTypeAndIsAligned(tensorflow::DataType) const'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::core::RefCounted::~RefCounted()':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/refcount.h:90: undefined reference to `tensorflow::internal::LogMessageFatal::LogMessageFatal(char const*, int)'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/refcount.h:90: undefined reference to `tensorflow::internal::LogMessageFatal::~LogMessageFatal()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tensorflow::Tensor>::~pair()':
    /usr/include/c++/9/bits/stl_pair.h:208: undefined reference to `tensorflow::Tensor::~Tensor()'
    /usr/bin/ld: /usr/include/c++/9/bits/stl_pair.h:208: undefined reference to `tensorflow::Tensor::~Tensor()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::Classifier<float>::run()':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:261: undefined reference to `tensorflow::operator<<(std::ostream&, tensorflow::Status const&)'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::TensorShapeRep::TensorShapeRep(tensorflow::TensorShapeRep const&)':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/framework/tensor_shape.h:516: undefined reference to `tensorflow::TensorShapeRep::SlowCopyFrom(tensorflow::TensorShapeRep const&)'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::core::RefCounted::Ref() const':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/refcount.h:93: undefined reference to `tensorflow::internal::LogMessageFatal::LogMessageFatal(char const*, int)'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/refcount.h:93: undefined reference to `tensorflow::internal::LogMessageFatal::~LogMessageFatal()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::TensorShapeRep::TensorShapeRep(tensorflow::TensorShapeRep const&)':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/framework/tensor_shape.h:516: undefined reference to `tensorflow::TensorShapeRep::SlowCopyFrom(tensorflow::TensorShapeRep const&)'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::core::RefCounted::Ref() const':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/platform/refcount.h:93: undefined reference to `tensorflow::internal::LogMessageFatal::LogMessageFatal(char const*, int)'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::TensorShapeRep::~TensorShapeRep()':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/framework/tensor_shape.h:531: undefined reference to `tensorflow::TensorShapeRep::DestructorOutOfLine()'
    /usr/bin/ld: /home/serena/deps-installations2/tf-cc/include/tensorflow/core/framework/tensor_shape.h:531: undefined reference to `tensorflow::TensorShapeRep::DestructorOutOfLine()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tensorflow::Tensor>::~pair()':
    /usr/include/c++/9/bits/stl_pair.h:208: undefined reference to `tensorflow::Tensor::~Tensor()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::DataProvider<float>::load_sample(unsigned long*, unsigned long)':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:130: undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:130: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:130: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::DataProvider<float>::ParseImageLabel(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, bool)':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:466: undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:466: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:418: undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:418: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:452: undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:452: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:454: undefined reference to `tensorflow::internal::LogMessageFatal::LogMessageFatal(char const*, int)'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:454: undefined reference to `tensorflow::internal::LogMessageFatal::~LogMessageFatal()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:449: undefined reference to `tensorflow::internal::LogMessageFatal::LogMessageFatal(char const*, int)'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:449: undefined reference to `tensorflow::internal::LogMessageFatal::~LogMessageFatal()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:466: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:418: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::DataProvider<float>::Preprocess(bool, float*, boost::container::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, void, void> const&)':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:367: undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:367: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:367: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::DataProvider<float>::DirectUseSharedMemory(bool)':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:195: undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:195: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:186: undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:186: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:195: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:186: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::DataProvider<float>::WrapSHMInput(bool)':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:207: undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:207: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:207: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::TensorShapeBase<tensorflow::TensorShape>::TensorShapeBase(std::initializer_list<long long>)':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/framework/tensor_shape.h:170: undefined reference to `tensorflow::TensorShapeBase<tensorflow::TensorShape>::TensorShapeBase(absl::lts_2020_02_25::Span<long long const>)'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::Classifier<float>::create_new_tensor()':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:185: undefined reference to `tensorflow::Tensor::Tensor(tensorflow::DataType, tensorflow::TensorShape const&)'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::Tensor::operator=(tensorflow::Tensor const&)':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/framework/tensor.h:303: undefined reference to `tensorflow::Tensor::CopyFromInternal(tensorflow::Tensor const&, tensorflow::TensorShape const&)'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::Classifier<float>::create_new_tensor()':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:185: undefined reference to `tensorflow::Tensor::~Tensor()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:181: undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:181: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::TensorShapeRep::~TensorShapeRep()':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/framework/tensor_shape.h:531: undefined reference to `tensorflow::TensorShapeRep::DestructorOutOfLine()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::Classifier<float>::create_new_tensor()':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:181: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:185: undefined reference to `tensorflow::Tensor::~Tensor()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::TensorShapeRep::~TensorShapeRep()':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/framework/tensor_shape.h:531: undefined reference to `tensorflow::TensorShapeRep::DestructorOutOfLine()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::Classifier<float>::warmup(int, int, bool)':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:296: undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:296: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::TTypes<float, 1ul, long>::Tensor tensorflow::Tensor::shaped<float, 1ul>(absl::lts_2020_02_25::Span<long long const>)':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/framework/tensor.h:824: undefined reference to `tensorflow::Tensor::CheckTypeAndIsAligned(tensorflow::DataType) const'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::Classifier<float>::warmup(int, int, bool)':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:296: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::DataProvider<float>::WrapLocalInput(bool)':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:218: undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:218: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::DataProvider<float>::Preprocess(bool, float*, boost::container::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, void, void> const&)':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:367: undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:367: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::DataProvider<float>::WrapLocalInput(bool)':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/data_provider.h:218: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::Classifier<float>::load_model(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:149: undefined reference to `tensorflow::GraphDef::GraphDef()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:150: undefined reference to `tensorflow::Env::Default()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:150: undefined reference to `tensorflow::ReadBinaryProto(tensorflow::Env*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, google::protobuf::MessageLite*)'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:155: undefined reference to `tensorflow::SessionOptions::SessionOptions()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:165: undefined reference to `tensorflow::NewSession(tensorflow::SessionOptions const&)'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::SessionOptions::~SessionOptions()':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/public/session_options.h:28: undefined reference to `tensorflow::ConfigProto::~ConfigProto()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::Classifier<float>::load_model(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:149: undefined reference to `tensorflow::GraphDef::~GraphDef()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::SessionOptions::~SessionOptions()':
    /home/serena/deps-installations2/tf-cc/include/tensorflow/core/public/session_options.h:28: undefined reference to `tensorflow::ConfigProto::~ConfigProto()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::Classifier<float>::load_model(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:149: undefined reference to `tensorflow::GraphDef::~GraphDef()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `tensorflow::Classifier<float>::Classifier(float*, boost::container::vector<int, void, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool, int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool, int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, int, int, bool)':
    /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:124: undefined reference to `tensorflow::Tensor::Tensor()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:130: undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:130: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:130: undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
    /usr/bin/ld: /home/serena/sf_Intel/code/resnet50/tensorflow/backend/classifier.h:124: undefined reference to `tensorflow::Tensor::~Tensor()'
    /usr/bin/ld: /usr/local/lib/libtensorflow_backend.a(tensorflow_backend.o): in function `void std::_Destroy<tensorflow::Tensor>(tensorflow::Tensor*)':
    /usr/include/c++/9/bits/stl_construct.h:98: undefined reference to `tensorflow::Tensor::~Tensor()'
    collect2: error: ld returned 1 exit status
    make: *** [Makefile:20: loadrun] Error 1
    

    Can you please help resolve this linking problem?
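All of the undefined references above are TensorFlow C++ runtime symbols (`LogMessage`, `Tensor`, `GraphDef`, `Session`, etc.), which typically means the final link line pulls in `libtensorflow_backend.a` but not the TensorFlow C++ shared libraries that define those symbols. A hedged sketch of the kind of Makefile change that usually resolves this — the `tf-cc` path is taken from the log above, but the library names and target are illustrative assumptions, not from the issue:

```make
# Illustrative Makefile fragment only -- paths and target names are assumptions.
# Link order matters with static archives: libtensorflow_backend.a must appear
# before the TensorFlow shared libraries that satisfy its undefined symbols.
TF_CC_ROOT := $(HOME)/deps-installations2/tf-cc

CXXFLAGS += -I$(TF_CC_ROOT)/include
LDFLAGS  += -L$(TF_CC_ROOT)/lib -Wl,-rpath,$(TF_CC_ROOT)/lib
LDLIBS   += -ltensorflow_cc -ltensorflow_framework

loadrun: loadrun.o
	$(CXX) $^ /usr/local/lib/libtensorflow_backend.a $(LDFLAGS) $(LDLIBS) -o $@
```

If the `tf-cc` install only ships headers and no `libtensorflow_cc`, the TensorFlow C++ libraries need to be built (or obtained) separately before this can link.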

    opened by serenagomez1304 2
  • Wide variation in test performance

    I have run the MLPerf inference results NVIDIA code successfully, but I found that the performance of my server was much worse than the config parameters suggest. My GPU is a single Tesla T4. The config parameters in ./configs/ssd-resnet34/Server/config.json are as below: https://github.com/mlcommons/inference_results_v1.0/blob/master/closed/NVIDIA/configs/ssd-resnet34/Server/config.json#L267

     "T4x1": {
            "config_ver": {
                "triton": {
                    "instance_group_count": 4,
                    "server_target_qps": 110,
                    "use_triton": true
                }
            },
            "deque_timeout_usec": 2000,
            "gpu_batch_size": 2,
            "gpu_inference_streams": 4,
            "server_target_qps": 110,
            "use_cuda_thread_per_device": false
        },
    

    The target QPS is 110, but when I used this parameter most of the queries timed out, and the actual latency was four orders of magnitude higher than the target latency. Why is this? Here is my result:

    SUT name : LWIS_Server
    Scenario : Server
    Mode     : PerformanceOnly
    Scheduled samples per second : 110.25
    Result is : INVALID
      Performance constraints satisfied : NO
      Min duration satisfied : Yes
      Min queries satisfied : Yes
    Recommendations:
     * Reduce target QPS to improve latency.
    
    ================================================
    Additional Stats
    ================================================
    Completed samples per second    : 23.95
    
    Min latency (ns)                : 11086820
    Max latency (ns)                : 8836518915455
    Mean latency (ns)               : 4472554615854
    50.00 percentile latency (ns)   : 4491175020756
    90.00 percentile latency (ns)   : 7986897806512
    95.00 percentile latency (ns)   : 8395938850230
    97.00 percentile latency (ns)   : 8573065539070
    99.00 percentile latency (ns)   : 8748750463855
    99.90 percentile latency (ns)   : 8828108821166
    
    ================================================
    Test Parameters Used
    ================================================
    samples_per_query : 1
    target_qps : 110
    target_latency (ns): 100000000
    max_async_queries : 0
    min_duration (ms): 600000
    max_duration (ms): 0
    min_query_count : 270336
    max_query_count : 0
    qsl_rng_seed : 7322528924094909334
    sample_index_rng_seed : 1570999273408051088
    schedule_rng_seed : 3507442325620259414
    accuracy_log_rng_seed : 0
    accuracy_log_probability : 0
    accuracy_log_sampling_target : 0
    print_timestamps : 0
    performance_issue_unique : 0
    performance_issue_same : 0
    performance_issue_same_index : 0
    performance_sample_count : 64
    
    No warnings encountered during test.
    
    No errors encountered during test.
    Finished running actual test.
    Device Device:0 processed:
      10 batches of size 1
      135163 batches of size 2
      Memcpy Calls: 0
      PerSampleCudaMemcpy Calls: 266236
      BatchedCudaMemcpy Calls: 2055
    &&&& PASSED Default_Harness # ./build/bin/harness_default
    [2021-07-07 13:40:27,307 main.py:280 INFO] Result: result_scheduled_samples_per_sec: 110.252, Result is INVALID
    

    Does hardware other than the GPU have a significant impact on the test results? If so, please let me know. Thanks.
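For context, LoadGen marks a Server-scenario run VALID only if the tail latency stays within the benchmark's latency bound; in the log above the 99th-percentile latency (~8749 s) is vastly over the 100 ms target, so the result is INVALID even though samples were scheduled at 110 QPS — the SUT simply cannot drain the queue at that rate, and latency grows without bound. A minimal sketch of that check (simplified; not actual LoadGen code), using the numbers from the log above:

```python
def server_run_valid(p99_latency_ns: int, target_latency_ns: int) -> bool:
    """Simplified Server-scenario validity check: the 99th-percentile
    latency must meet the latency bound. Real LoadGen additionally
    checks min duration and min query count, both satisfied above."""
    return p99_latency_ns <= target_latency_ns

# Numbers taken from the INVALID run above (nanoseconds):
target_latency_ns = 100_000_000        # 100 ms SSD-ResNet34 Server bound
p99_latency_ns = 8_748_750_463_855     # ~8749 s -- queries are queueing up

print(server_run_valid(p99_latency_ns, target_latency_ns))  # False
```

The practical takeaway is that completed samples per second (23.95 here) is the sustainable throughput of the whole system, so the effective target QPS has to be lowered until the tail latency fits under the bound — which is why a CPU, memory, or storage bottleneck outside the GPU can indeed invalidate a Server run.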

    opened by TianningWang 0
Owner
MLCommons