Intel® oneAPI Math Kernel Library

The relationship between MKL_CBWR and MKL_ENABLE_INSTRUCTIONS

Bob2023
Beginner

I'm using an Intel(R) Xeon(R) Gold 6242R; the MKL version is "MKL 2020.0 Product build 20191122".

 

I set MKL_VERBOSE=1 and tried different MKL_CBWR and MKL_ENABLE_INSTRUCTIONS settings to check the dispatched code path.

The results:

1. MKL_ENABLE_INSTRUCTIONS=AVX512 and MKL_CBWR=AVX512, code path=AVX512
2. MKL_ENABLE_INSTRUCTIONS=AVX512 and MKL_CBWR=AVX2, code path=AVX512
3. MKL_ENABLE_INSTRUCTIONS=AVX512 and MKL_CBWR=AVX, code path=AVX
4. MKL_ENABLE_INSTRUCTIONS=AVX2 and MKL_CBWR=AVX512, code path=AVX2
5. MKL_ENABLE_INSTRUCTIONS=AVX2 and MKL_CBWR=AVX2, code path=AVX2
6. MKL_ENABLE_INSTRUCTIONS=AVX2 and MKL_CBWR=AVX, code path=AVX
7. MKL_ENABLE_INSTRUCTIONS=AVX and MKL_CBWR=AVX512, code path=AVX
8. MKL_ENABLE_INSTRUCTIONS=AVX and MKL_CBWR=AVX2, code path=AVX
9. MKL_ENABLE_INSTRUCTIONS=AVX and MKL_CBWR=AVX, code path=AVX

 

Of the above results, the 2nd is not what I expected:

"2. MKL_ENABLE_INSTRUCTIONS=AVX512 and MKL_CBWR=AVX2, code path=AVX512"

Since MKL_CBWR=AVX2, why is the code path AVX512 and not AVX2?

 

The "mkl-enable-instructions" documentation mentions: "Settings specified by the mkl_enable_instructions function set an upper limit to settings specified by the mkl_cbwr_set function."

 

I want to know the relationship between MKL_CBWR, MKL_ENABLE_INSTRUCTIONS, and the dispatched code path.

 

ShanmukhS_Intel
Moderator

Hi Bob Kim,

 

Thanks for posting in Intel Communities.

 

I want to know the relationship between MKL_CBWR, MKL_ENABLE_INSTRUCTIONS, and the dispatched code path.

>> Please find the details below, based on the latest version of oneAPI.

 

MKL_CBWR

Intel® oneAPI Math Kernel Library provides conditional numerical reproducibility (CNR) functionality that enables you to obtain reproducible results from oneMKL routines. When you enable CNR, you choose a specific code branch of Intel® oneAPI Math Kernel Library that corresponds to the instruction set architecture (ISA) that you target. You can specify the code branch and other CNR options using the MKL_CBWR environment variable.

 

For Example:

MKL_CBWR="<branch>[,STRICT]" or

MKL_CBWR="BRANCH=<branch>[,STRICT]"

 

Please go through the link below for more details on using MKL_CBWR.

https://www.intel.com/content/www/us/en/docs/onemkl/developer-guide-windows/2023-0/specifying-code-branches.html

 

MKL_ENABLE_INSTRUCTIONS

Intel® oneAPI Math Kernel Library automatically queries and then dispatches the code path supported on your Intel® processor to the optimal instruction set architecture (ISA) by default.

 

The MKL_ENABLE_INSTRUCTIONS environment variable or the mkl_enable_instructions support function enables you to dispatch to an ISA-specific code path of your choice.

For Example: you can run the Intel® Advanced Vector Extensions (Intel® AVX) code path on an Intel processor based on Intel® Advanced Vector Extensions 2 (Intel® AVX2), or you can run the Intel® Streaming SIMD Extensions 4.2 (Intel® SSE4.2) code path on an Intel AVX-enabled Intel processor. This feature is not available on non-Intel processors.

 

More details regarding MKL_ENABLE_INSTRUCTIONS are available at the link below.

https://www.intel.com/content/www/us/en/docs/onemkl/developer-guide-windows/2023-0/instruction-set-specific-dispatch-on-intel-archs.html

 

Dispatched code path

The code path in Intel MKL refers to the specific implementation of the library's functions that a program uses. The code path can have a significant impact on the performance of the application being run.

 

The Intel MKL library contains multiple code paths. Each code path is optimized for different processor architectures and configurations.

For example, the library includes code paths that are optimized for Intel processors with different levels of support for features as per the program. When a program uses Intel MKL functions, the library automatically selects the code path that is best suited for the processor and configuration on which the program is running.

 

In addition to providing optimized code paths for different processor architectures and configurations, Intel MKL also includes features such as thread-level parallelism and memory management optimizations that can further improve the performance of the library's functions. By taking advantage of these features and selecting the appropriate code path, you can create programs that perform complex operations more quickly and efficiently on Intel processors.

 

Best Regards,

Shanmukh.SS

 

ShanmukhS_Intel
Moderator

Hi Bob,


A gentle reminder:

Has the information provided helped? Could you please let us know if we can close this case at our end, or if you need any further information?


Best Regards,

Shanmukh.SS


ShanmukhS_Intel
Moderator

Hi Bob,

 

As a new thread (linked below) was opened to continue the current case, we are closing this case to avoid duplication. Any further interaction will continue in the community URL below.

 

https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-CBWR-and-MKL-ENALBE-INSTRUCTIONS/m-p/1478254#M34483

 

Best Regards,

Shanmukh.SS

 
