Going through the OpenVINO™ Integration with TensorFlow* tutorial playbook, you get the following warning:
It would be great if the TensorFlow binary on DevCloud were built to take full advantage of IA.
The image is a little small; this is the text:
2022-07-18 10:35:16.720091: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
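For reference, that message is informational (log level `I`), not an error, and it is printed once when TensorFlow is imported. A minimal sketch of controlling it; the `TF_CPP_MIN_LOG_LEVEL` variable is standard TensorFlow behavior, but the surrounding setup here is an assumption:

```python
import os

# TF_CPP_MIN_LOG_LEVEL controls TensorFlow's C++ startup logs and must
# be set before `import tensorflow` to take effect:
#   '0' shows all messages (default), '1' hides INFO-level messages
#   such as the cpu_feature_guard notice above.
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '0'
# import tensorflow as tf  # the cpu_feature_guard message appears here
```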
Hi JohnWestlund,
The warning simply means that the processor can use AVX2, AVX512F, and FMA instructions to speed up inference performance.
In the OpenVINO™ Integration with TensorFlow* tutorial, just use the default processor unless you submit the job to specific hardware for execution.
Hope this information helps.
Thank you
Thanks for the response, Hari.
I understand what the warning is saying, but I was wondering why Intel's cloud is not running fully optimized binaries.
I appreciate your efforts to look into this.
Hi JohnWestlund,
After verifying with the respective team: the warning appears because the kernel is enabled with TensorFlow 2.8, where the oneDNN optimizations are still behind an experimental flag. That is the reason for the warning.
From TensorFlow 2.9, this is enabled by default, so there is no need to enable it explicitly.
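That version cut-off can be sketched as a small guard; the helper name and version string below are illustrative, not from the tutorial:

```python
import os

# Request oneDNN optimizations explicitly only on TensorFlow < 2.9,
# where they are opt-in behind TF_ENABLE_ONEDNN_OPTS; from 2.9 onward
# they are enabled by default. Note the variable must be set before
# `import tensorflow` to have any effect.
def enable_onednn_if_needed(tf_version: str) -> None:
    major, minor = (int(part) for part in tf_version.split('.')[:2])
    if (major, minor) < (2, 9):
        os.environ['TF_ENABLE_ONEDNN_OPTS'] = '1'

enable_onednn_if_needed('2.8.0')  # a 2.8 runtime: the flag gets set
```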
In the elif statement below, it is OVTF + oneDNN, so you are getting the best of both OpenVINO™ integration with TensorFlow and oneDNN:
elif flag_enable == "openvino":
    print('Openvino Integration With Tensorflow')
    print('Available Backends:')
    backends_list = ovtf.list_backends()
    for backend in backends_list:
        print(backend)
    os.environ['TF_ENABLE_ONEDNN_OPTS'] = '1'
    ovtf.set_backend(backend_name)
Hope this information helps.
Thank you
Hi JohnWestlund,
This thread will no longer be monitored since we have provided a solution. Please submit a new question if you need any additional information from Intel.
Thank you
