Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

R on Xeon Phi (on Windows)

Kelkar__Keyur
Beginner

Hello there,

I was wondering whether you might be able to share detailed instructions on how to compile R to run on the Xeon Phi coprocessor, specifically tailored for Windows 10?

I have read in detail the following links:

...but as all of these are written assuming the user is running a Linux OS, I am struggling to make any progress whatsoever (commands like ./configure don't work on Windows). I have also searched high and low on the internet but have not been able to find anything.

My goal is to rebuild R 3.4.3 to run on the Xeon Phi 3120A in the Windows 10 environment. I have installed the coprocessor drivers and the unit is showing up correctly.

Many thanks in advance,

Keyur

 

Ying_H_Intel
Employee

Hi Keyur,

I'm sorry to say that support for the Xeon Phi coprocessor was deprecated a while ago; this is stated, for example, in the MKL 2018 release notes:
https://software.intel.com/en-us/articles/intel-math-kernel-library-release-notes-and-new-features

If you have to, here is what I can suggest:

1. Install or build R under Windows.

This step is not MKL related; please refer to the R documentation, e.g. https://cran.r-project.org/ => Manuals => R Installation and Administration => section 3, "Installing R under Windows", and the Windows binaries at:

https://cran.r-project.org/bin/windows/base/

2. Once you complete the R for Windows installation, locate R's BLAS dynamic libraries.

MKL integrates with R as a BLAS library, so you can simply replace the R BLAS and LAPACK libraries with mkl_rt.dll. For a quick test you can rename the DLLs directly, as described in https://software.intel.com/en-us/articles/quick-linking-intel-mkl-blas-lapack-to-r
If everything is OK, you are done with MKL and R under Windows.
If there is no *blas* library at all, then your R build does not use a swappable BLAS and you need not consider MKL integration any further.
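
For example, a minimal sketch in R of locating the libraries to back up and replace (an assumption on my part: a default 64-bit R for Windows installation, where R.home("bin") resolves to R_HOME\bin\x64):

    # Locate the BLAS/LAPACK DLLs that ship with R on Windows.
    blas   <- file.path(R.home("bin"), "Rblas.dll")
    lapack <- file.path(R.home("bin"), "Rlapack.dll")
    file.exists(c(blas, lapack))  # both TRUE: this build uses swappable DLLs
    # Back up both files, then, with R closed, copy mkl_rt.dll (plus the MKL
    # DLLs it depends on) into this directory under these two names, as in
    # the Intel article linked above.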

3. About running on the coprocessor.

Set MKL_MIC_ENABLE=1. This enables Automatic Offload for your BLAS-based R scripts: a matrix multiply (gemm), for example, will automatically be offloaded to the Xeon Phi.
How to run the coprocessor under Windows in general is not MKL related, but you can refer to the MKL user guide for the several models of running the coprocessor under Windows.
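
As an illustration, a minimal R sketch of such a BLAS-bound test (the matrix size is my own choice; note that MKL_MIC_ENABLE is normally set in the shell before R starts, and Sys.setenv can only take effect if MKL has not yet been initialized in the session):

    Sys.setenv(MKL_MIC_ENABLE = "1")  # documented way: set before launching R
    n <- 8192                         # large enough that offload can pay off
    a <- matrix(runif(n * n), n, n)
    b <- matrix(runif(n * n), n, n)
    system.time(d <- a %*% b)         # %*% calls BLAS dgemm under MKL
    # While this runs, coprocessor activity should be visible in micsmc.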

Best Regards,
Ying

Here is the relevant section of the MKL 2017 user guide for your reference:
Using Intel® Math Kernel Library on Intel® Xeon Phi™ Coprocessors

Intel® Math Kernel Library (Intel® MKL) offers two sets of libraries to support Intel® Many Integrated Core (Intel® MIC) Architecture:

  • For the host computer based on Intel® 64 or compatible architecture and running a Windows* operating system
  • For Intel® Xeon Phi™ coprocessors

You can control how Intel MKL offloads computations to Intel® Xeon Phi™ coprocessors. Either you can offload computations automatically or use Compiler Assisted Offload:

  • Automatic Offload.

    On Windows* OS running on Intel® 64 or compatible architecture systems, Automatic Offload automatically detects the presence of coprocessors based on Intel MIC Architecture and automatically offloads computations that may benefit from additional computational resources available. This usage model enables you to call Intel MKL routines as you would normally do with minimal changes to your program. The only change needed to enable Automatic Offload is either the setting of an environment variable or a single function call. For details see Automatic Offload.

  • Compiler Assisted Offload.

    This usage model enables you to use the Intel compiler and its offload pragma support to manage the functions and data offloaded to a coprocessor. Within an offload region, you should specify both the input and output data for the Intel MKL functions to be offloaded. After linking with the Intel MKL libraries for Intel MIC Architecture, the compiler provided run-time libraries transfer the functions along with their data to a coprocessor to carry out the computations. For details see Compiler Assisted Offload.

In addition to offloading computations to coprocessors, you can call Intel MKL functions from an application that runs natively on a coprocessor. Native execution occurs when an application runs entirely on Intel MIC Architecture. Native mode is a fast way to make an existing application run on Intel MIC Architecture with minimal changes to the source code. For more information, see Running Intel MKL on an Intel Xeon Phi Coprocessor in Native Mode.

 

kelkar__Keyur1
Beginner

Many thanks Ying, but I am unable to get auto-offloading to work.

I have set up my Xeon Phi 3120A in Windows 10 Pro, with MPSS 3.8.4 and Parallel Studio XE 2017 (initial release). I chose this version of Parallel Studio XE because it was the last one to support the x100 series. I have installed the MKL version that is packaged with it.

What I have done / set up:

After setting up MPSS 3.8.4 and following the steps such as flashing and pinging, I have checked that micctrl -s shows "mic0 ready" (with the Linux image containing the appropriate KNC name), miccheck produces all "passes", and micinfo gives me readings for all the key stats the coprocessor provides.

Hence it looks to me like the coprocessor is correctly installed and recognised by my computer. I can also see that mic0 is up and running in the micsmc GUI.

I have then set up my environment variables to enable automatic offload, namely: MKL_MIC_ENABLE=1, OFFLOAD_DEVICES=0, MKL_MIC_MAX_MEMORY=2GB, MIC_ENV_PREFIX=MIC, MIC_OMP_NUM_THREADS=228, MIC_KMP_AFFINITY=balanced.
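
As a sanity check, the variables can be read back from inside R to confirm they actually reached the R process (a session started before the variables were set will not see them); the names below are just the ones listed above:

    Sys.getenv(c("MKL_MIC_ENABLE", "OFFLOAD_DEVICES", "MKL_MIC_MAX_MEMORY",
                 "MIC_ENV_PREFIX", "MIC_OMP_NUM_THREADS", "MIC_KMP_AFFINITY"))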

The Problem - Auto Offload is not working

When I run some simple code in R-3.4.3 (linked below, designed specifically for automatic offload), it keeps running the code on my host computer rather than running anything through the Xeon Phi.

To support this, I cannot see any activity on the Xeon Phi when I look at the micsmc GUI.

Hence, auto offload is not working.

 
The R code is as per: https://software.intel.com/en-us/forums/intel-many-integrated-core/topic/538102

Other steps I have tried:

I then proceeded to set the MKL_MIC_DISABLE_HOST_FALLBACK=1 environment variable and, as expected, when I ran the above code, R terminated.

In this Intel link: https://software.intel.com/sites/default/files/11MIC42_How_to_Use_MKL_Automatic_Offload_0.pdf it says that if the HOST_FALLBACK flag is active and offload is attempted but fails (because the "offload runtime cannot find a coprocessor or cannot initialize it properly"), the program will be terminated – and R is indeed terminating completely. For completeness, this problem also occurs on R-3.5.1, Microsoft R Open 3.5.0 and R-3.2.1.
 

So my questions are:

  1. What am I missing to make the R code run on the Xeon Phi in Windows 10? Can you please advise me on what I need to do to make this work?
  2. (Linked to 1.) Is there a way to check whether the MKL offload runtime can see the Xeon Phi, whether it is correctly set up, or what (if any) problem MKL is having initialising the Xeon Phi?

I will sincerely appreciate it if you can help me – I believe I am missing a fundamental/simple step, and I have been tearing my hair out trying to make this work.

Cheers,

Keyur

 

Ying_H_Intel
Employee

Hi Keyur,

I noticed you submitted another issue, which reminded me of this problem. Before doing anything R related, could you try the C/Fortran sample code provided with MKL by default, follow the documented steps, and see whether AO works; then consider R?

If you have installed MKL:

1. Please go to the MKL examples folder:

C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2016\windows\mkl\examples\

There is an example_mic.zip there; copy it to your desktop, unzip it, and go into \mic_ao\blasc.

2. Open an MSVC x64 command window, then use > nmake libintel64 and observe whether the MIC is used or not.

Please let us know whether the sample works.
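
Also, for diagnostics from within R itself, a hedged sketch (an assumption on my part: that the OFFLOAD_REPORT environment variable documented for MKL Automatic Offload is honored by your MKL build) that asks the offload runtime to report each offload attempt, which should show whether it can see and initialize the coprocessor:

    Sys.setenv(OFFLOAD_REPORT = "2")  # 1 or 2; ideally set in the shell first
    # ...then re-run the BLAS-bound test from earlier in the thread; the
    # runtime's per-call offload report, if any, prints to the console.
    n <- 4096
    a <- matrix(runif(n * n), n, n)
    invisible(a %*% a)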

Best Regards,
Ying