Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Can I use AVX2 (not AVX-512) in OpenVINO?

Hap_Zhang
Beginner

Hi all,

I tested the inference time of my NER model with OpenVINO (Docker Hub image: openvino/ubuntu18_runtime). OpenVINO uses MKL-DNN by default and selects AVX-512. However, when I export MKL_CBWR=AVX2 and run without OpenVINO, inference is faster than with OpenVINO. Can I use AVX2 in OpenVINO, and if so, how?

My CPU: Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz

MKL with AVX-512: inference time is 503 ms
MKL with AVX2: inference time is 320 ms
OpenVINO (MKL-DNN with AVX-512): inference time is 400 ms
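For reference, this is roughly how I forced MKL onto AVX2 for the non-OpenVINO run (a minimal sketch; the inference script name is illustrative, while MKL_CBWR is Intel MKL's documented code-path control):

```shell
# Force Intel MKL to dispatch its AVX2 code paths instead of AVX-512.
# MKL_CBWR is Intel MKL's Conditional Numerical Reproducibility control.
export MKL_CBWR=AVX2

# Then run the non-OpenVINO inference as usual, e.g.:
# python infer_ner.py   # hypothetical script name
echo "MKL_CBWR=$MKL_CBWR"
```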

Wan_Intel
Moderator

Hi Hap_Zhang,

Thank you for reaching out to us.


The OpenVINO toolkit CPU plugin supports inference on Intel® Xeon® processors with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16; Intel® Core™ processors with Intel® AVX2; and Intel Atom® processors with Intel® Streaming SIMD Extensions (Intel® SSE).


If you would like to disable AVX-512 on the CPU for testing purposes, you may refer to this thread.


Otherwise, you can try setting -DENABLE_AVX512F=OFF when building a custom OpenVINO runtime from the open-source version.
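A rough sketch of such a custom build, assuming a fresh clone of the open-source repository (the -DENABLE_AVX512F=OFF flag comes from the reply above; directory layout and job count are illustrative):

```shell
# Clone the open-source OpenVINO repository with its submodules
git clone --recursive https://github.com/openvinotoolkit/openvino.git
cd openvino
mkdir build && cd build

# Configure a Release build with AVX-512 optimizations disabled
cmake -DCMAKE_BUILD_TYPE=Release -DENABLE_AVX512F=OFF ..

# Compile (adjust the job count to your machine)
make -j8
```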


Steps to build the open-source OpenVINO toolkit for Linux from source are available at the following page:

https://github.com/openvinotoolkit/openvino/wiki/BuildingForLinux


CMake options for custom compilation are available at the following page:

https://github.com/openvinotoolkit/openvino/wiki/CMakeOptionsForCustomCompilation


On another note, you can use the -pc flag of the Benchmark C++ Tool to see which configuration is used by each layer. This flag shows per-layer execution statistics: layer name, layer type, execution status, execution time, and the type of the execution primitive.


For example, you may execute the following command:

./benchmark_app -m="<path_to_model>" -pc


Regards,

Wan


Wan_Intel
Moderator

Hi Hap_Zhang,

Thank you for your question.


If you need any additional information from Intel, please submit a new question as this thread is no longer being monitored.


Regards,

Wan

