Intel® C++ Compiler
Support and discussions for creating C++ code that runs on platforms based on Intel® processors.

Vectorization with Intel C++ 2022, VS 2022

wmeier
Novice
549 Views

Vectorization using VS 2022

Under Properties, Configuration Properties, C/C++

Code Generation, Enable Enhanced Instruction Set Intel(R) Advanced Vector Extensions 512 (/arch:AVX512)
Code Generation [Intel C++], Intel Processor-Specific Optimization None

This apparently uses the clang vectorizer, which provides optimization diagnostics in the Compiler Optimization Report tab and as Optimization Notes in the text editor.

 

When using a processor-specific option in Code Generation [Intel C++]:

Code Generation [Intel C++], Intel Processor-Specific Optimization Intel(R) Advanced Vector Extensions 512 (Intel(R) AVX-512) for Intel(R) Xeon(R) processors (/QxCORE-AVX512)


This overrides the setting in Code Generation, Enable Enhanced Instruction Set, and apparently uses a different vectorizer than clang.

Using this setting, I no longer get optimization diagnostics in the Compiler Optimization Report tab or as Optimization Notes in the text editor.

Instead, the compiler is generating YAML files. The compiler manual describes a way to view the YAML files using opt-viewer.py, but I cannot find this file in my installation.

 

What is the best way to get feedback from the compiler regarding vectorization results when using VS 2022 with Code Generation [Intel C++], Intel Processor-Specific Optimization set to Intel(R) Advanced Vector Extensions 512 (Intel(R) AVX-512) for Intel(R) Xeon(R) processors (/QxCORE-AVX512)?

 

9 Replies
Abhishek81
Black Belt
526 Views
wmeier
Novice
490 Views

Thanks for the reply.

 

1) The first PDF link addresses vectorization with the icl compiler, and has a copyright notice from 2010.  Is there a similar document that addresses vectorization with the icx compiler?

 

2) Yes, I have enabled the reports using Optimization Diagnostic Level set to Level 3 (/Qopt-report:3).

 

3) The third link is specific to the Fortran compiler; I am using C++.  The icx compiler does not generate a ".optrpt" file, it generates a ".opt.yaml" file.  Is there any way to review these YAML files on Windows?  I have not found opt-viewer.py in my oneAPI Windows installation.

 

4) I do not have the "Guided Auto Parallelism" option under Tools in VS 2022.

 

 

Abhishek81
Black Belt
485 Views
Let me check on this and then I will update accordingly.
HemanthCH_Intel
Moderator
433 Views

Hi,


We haven't heard back from you. Could you please let us know if the above links resolve your issue?


Thanks & Regards,

Hemanth


wmeier
Novice
408 Views

No, the links did not resolve my issue.

 

Please see my original post.  I am looking for a way to get vectorization optimization diagnostics/reporting from icx when using "Code Generation [Intel C++], Intel Processor-Specific Optimization Intel(R) Advanced Vector Extensions 512 (Intel(R) AVX-512) for Intel(R) Xeon(R) processors (/QxCORE-AVX512)".

 

The compiler creates YAML files, but I have no way of viewing them on Windows (opt-viewer.py is not included in the oneAPI directories).

 

 

HemanthCH_Intel
Moderator
365 Views

Hi,


>>"but I have no way of viewing those on Windows (opt-viewer.py is not included in the oneAPI directories)."

opt-viewer.py is not part of the oneAPI installation. You need to download it separately from the LLVM source tree (llvm/tools/opt-viewer). For details on generating an optimization report and using opt-viewer.py, please refer to the link below:

https://www.intel.com/content/www/us/en/develop/documentation/oneapi-dpcpp-cpp-compiler-dev-guide-an...
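Until you have opt-viewer.py set up, the records can also be inspected directly with a short script. A minimal sketch, assuming icx emits standard LLVM optimization-record YAML (the field names follow LLVM's serialization format; the sample remark text below is fabricated for illustration, not actual icx output):

```python
# Minimal sketch: scan LLVM-style .opt.yaml remark records for
# loop-vectorize entries without needing opt-viewer.py or PyYAML.
# SAMPLE mimics the layout icx is assumed to emit; replace it with
# the contents of your own .opt.yaml file.

import re

SAMPLE = """\
--- !Missed
Pass:            loop-vectorize
Name:            MissedDetails
DebugLoc:        { File: main.cpp, Line: 12, Column: 3 }
Function:        main
...
--- !Passed
Pass:            loop-vectorize
Name:            Vectorized
DebugLoc:        { File: main.cpp, Line: 27, Column: 3 }
Function:        main
...
"""

def vector_remarks(text):
    """Yield (kind, file, line) for each loop-vectorize remark."""
    for doc in text.split('--- '):
        if 'Pass:' not in doc:
            continue
        # The YAML tag on the first line (!Passed / !Missed) gives the kind.
        kind = doc.split('\n', 1)[0].lstrip('!').strip()
        if re.search(r'Pass:\s*loop-vectorize', doc):
            m = re.search(r'File:\s*([^,]+),\s*Line:\s*(\d+)', doc)
            if m:
                yield kind, m.group(1).strip(), int(m.group(2))

for kind, fname, line in vector_remarks(SAMPLE):
    print(f"{kind}: {fname}:{line}")
```

This only extracts pass/missed status and source location; opt-viewer.py additionally renders the remark arguments inline against your source, so it remains the better long-term option.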


Thanks & Regards,

Hemanth.


HemanthCH_Intel
Moderator
318 Views

Hi,


We haven't heard back from you. Could you please provide an update on your issue?


Thanks & Regards,

Hemanth


HemanthCH_Intel
Moderator
282 Views

Hi,


We assume that your issue is resolved. If you need any additional information, please post a new question as this thread will no longer be monitored by Intel.


Thanks & Regards,

Hemanth

