Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Why does the BERT model run on only one CPU when there are eight CPUs?

冯__帅
Beginner
218 Views
The latest version of OpenVINO supports the BERT model, but inference runs on only one CPU. I load the CPU extension with:

plugin.add_cpu_extension('/opt/intel/openvino_2019.2.242/deployment_tools/inference_engine/lib/intel64/libcpu_extension_avx2.so')
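One quick way to check whether the problem is the benchmark settings rather than the model is to build a benchmark_app invocation that requests one stream, thread, and inference request per logical CPU. This is a minimal sketch; the binary name, model/input file names, and the iteration count are illustrative assumptions, not prescriptions:

```python
import os
import shlex

def build_benchmark_cmd(model_xml, input_bin, binary="benchmark_app"):
    """Build a benchmark_app command line that asks for one inference
    stream per logical CPU so all cores get exercised.
    (File names and flag values here are illustrative.)"""
    n = os.cpu_count() or 1
    args = [
        binary,
        "-m", model_xml,
        "-i", input_bin,
        "-niter", "100",
        "-nthreads", str(n),   # worker threads = logical CPUs
        "-nstreams", str(n),   # parallel CPU execution streams
        "-nireq", str(n),      # in-flight inference requests
    ]
    return " ".join(shlex.quote(a) for a in args)

print(build_benchmark_cmd("bert_model.ckpt.xml", "bert_input.bin"))
```

On an 8-CPU machine this prints a command with `-nthreads 8 -nstreams 8 -nireq 8`, which is the throughput-oriented configuration the reply below demonstrates on a 72-processor host.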
4 Replies
HemanthKum_G_Intel

Hi,

Try benchmark_app to experiment with core utilization. I used the following command on a machine with 18 cores per socket and 2 sockets, i.e. 72 logical processors with Hyper-Threading, and all of them reached 100% utilization at the peak of loading the model.

numactl -l ~/inference_engine_samples_build/intel64/Release/benchmark_app -i bert_input.bin -m bert_model.ckpt.xml -niter 100 -nthreads 72 -nstreams 72 -nireq 72

Output:

Count:      144 iterations
Duration:   1921.27 ms
Latency:    833.773 ms
Throughput: 74.9504 FPS
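For reference, the reported numbers are internally consistent: throughput is simply completed iterations divided by total wall-clock duration. Per-request latency is much larger than 1/throughput because 72 requests run concurrently:

```python
# Sanity-check the benchmark_app summary above: throughput is
# completed iterations divided by the total run duration.
iterations = 144
duration_ms = 1921.27

throughput_fps = iterations / (duration_ms / 1000.0)
print(f"{throughput_fps:.4f} FPS")  # matches the reported 74.9504 FPS
```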

John_H_20
Beginner

Hello,

Is there somewhere I can download the input data file bert_input.bin? I tried running benchmark_app without the -i flag (since it is marked as optional), but got this error:

[ ERROR ] Input Placeholder cannot be filled: please provide input binary files!

Is there a tutorial for converting a dataset (perhaps SQuAD) into the .bin format that benchmark_app will accept?

冯__帅
Beginner
Hi, how can I get bert_input.bin? I can't run benchmark_app without it.
HemanthKum_G_Intel

Hi,

For unit testing, I simply supplied a file of the required number of bytes saved with a .bin extension in order to answer the query here. I recommend exploring Google's BERT repositories to understand the structured input.
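A quick way to reproduce that kind of placeholder input is to write a zero-filled binary file of the byte size the model's input layer expects. The shape and dtype below (a batch of 128 int32 token IDs) are purely illustrative assumptions; check the Placeholder shape in your converted model's IR .xml for the real dimensions:

```python
import numpy as np

# Hypothetical input shape: adjust to the Placeholder shape reported
# in the IR .xml for your converted BERT model.
seq_len = 128
dummy_ids = np.zeros((1, seq_len), dtype=np.int32)  # zero token IDs

# benchmark_app -i consumes raw bytes, so dump the array as-is.
dummy_ids.tofile("bert_input.bin")

print(f"wrote {dummy_ids.nbytes} bytes to bert_input.bin")
```

A zero-filled file only exercises the compute path; for meaningful outputs you would tokenize real text (e.g. from SQuAD) with BERT's vocabulary and dump those IDs instead.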
