Software Archive
Read-only legacy content

multiple asynchronous computational kernel offload launches

Eduardo_D_1
Beginner

In NVIDIA CUDA, a Kepler GPU can support concurrent execution of 16 kernels. One can use multiple streams to feed multiple kernels to the GPU and let the hardware schedule the work, and one can also use the grid and block dimensions in the kernel launch to roughly influence the amount of computing resources used. For example, one can launch a [1x1x1] grid of [512x1x1] threads so that the kernel executes on a single SMX unit.

 I am wondering whether or how one might achieve something similar for the MIC.

Can multiple threads on the host issue separate non-blocking asynchronous offload computation commands, each with its own signal, to run code or call MKL BLAS? (A sketch of the kind of launch I have in mind follows below.)
Is there a way to influence the amount of computing resources used by an offloaded MKL BLAS operation? For simplicity, one may assume the data is already on the MIC.
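To make the first question concrete, here is the rough kind of launch I have in mind. This is an untested sketch: it assumes the Intel compiler's offload pragmas with signal/wait clauses and MKL, and the clause details (length/alloc_if/free_if, signal tags) should be checked against the compiler documentation.

/* Rough sketch, untested: two non-blocking compute offloads to card 0,
 * each tagged with its own signal variable and calling MKL dgemm on data
 * that is transferred once and kept resident on the card.
 * Build with the Intel compiler (offload extensions) and link MKL. */
#include <mkl.h>

void async_gemm_pair(double *A, double *B, double *C,
                     double *D, double *E, double *F, int n)
{
    char sig0, sig1;                 /* addresses serve as signal tags */
    long len = (long)n * n;

    /* One-time transfer so the compute offloads below move no data. */
    #pragma offload_transfer target(mic:0) \
            in(A, B, C, D, E, F : length(len) alloc_if(1) free_if(0))

    /* Two asynchronous compute offloads; both return to the host at once. */
    #pragma offload target(mic:0) signal(&sig0) \
            nocopy(A, B, C : length(len) alloc_if(0) free_if(0))
    {
        cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, A, n, B, n, 0.0, C, n);
    }

    #pragma offload target(mic:0) signal(&sig1) \
            nocopy(D, E, F : length(len) alloc_if(0) free_if(0))
    {
        cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, D, n, E, n, 0.0, F, n);
    }

    /* The host can do other work here, then block on both signals. */
    #pragma offload_wait target(mic:0) wait(&sig0)
    #pragma offload_wait target(mic:0) wait(&sig1)

    /* Bring the results back and free the card-side buffers. */
    #pragma offload_transfer target(mic:0) \
            out(C, F : length(len) alloc_if(0) free_if(1)) \
            nocopy(A, B, D, E : length(len) alloc_if(0) free_if(1))
}

The question is whether the two dgemm calls can actually execute concurrently on the card, and how their threads get placed.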


If each offload command already uses all 240 virtual cores, then there may not be much performance gain from issuing multiple asynchronous offload computation commands.
 
The idea is that there may be many matrix operations, but each matrix block is not large enough to saturate the MIC. If we can issue multiple concurrent non-blocking offload operations, this may be a way to make effective use of the MIC.

This approach might also be relevant to mapping computations and dependencies described as a Directed Acyclic Graph (DAG) onto the MIC.

For example, launch 12 concurrent threads on the host, and have each thread perform asynchronous offload launches of MKL BLAS with a different signal, each using 20 (separate) hyper-thread cores on the MIC.

Certainly we do NOT want all 12 concurrent threads to use the SAME 20 hyper-thread cores.
 
Perhaps the runtime system on the MIC will "do the right thing" and schedule the work from separate signalled asynchronous offload launches onto idle or available cores?

 

In the forum there is an example of using multiple "-env MIC_KMP_AFFINITY" options with the mpiexec command to associate or "pin" different MPI tasks to cores on the MIC, but it is not clear to me how to achieve something similar with host threads issuing offloads.

 

https://software.intel.com/en-us/forums/topic/360754

TimP
Honored Contributor III

I'm not clear on which comments of yours distinguish your question from previous discussions on this forum.  I don't know why you would use offload mode if "data is/are already on MIC," but maybe I don't understand your meaning.

If you are running multiple simultaneous offload jobs, I would think you would use MIC_KMP_PLACE_THREADS to reserve a separate group of cores for each, and also to distribute work evenly across cores in case fewer than 4 threads per core is optimum. If the individual jobs aren't big enough to benefit from using all the cores, you might expect excessive overhead in launching them (with data on the host), but maybe that's another matter. In the case of MPI, the MIC clearly supports at least 6 individual threaded processes pinned to distinct groups of cores efficiently.

If you set each offload to a reasonable number of threads but don't set affinities for them, the scheduler will distribute them somewhat randomly, neither taking advantage of cache locality nor spreading work evenly across cores when not using 4 threads per core, if that's important.

In the case of parallel instances of MKL BLAS, 4 threads per core are likely to be OK, but performance measured e.g. in Gflops isn't likely to approach what you could get with a single large problem using all of the MIC's RAM. There is an MPI example like that in the Jeffers and Reinders book, but it was written up prior to the release of KMP_PLACE_THREADS, which makes this a bit easier.
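As a rough, untested sketch of reserving a separate slice of the card for each of several offload jobs: the "<cores>c,<threads>t,<offset>o" form of KMP_PLACE_THREADS and the use of setenv() before the first offload are assumptions to check against your compiler documentation; exporting the variables in the shell before launching each job is the safer route.

/* Rough sketch, untested: launch N identical offload jobs (separate host
 * processes), each reserving its own slice of the coprocessor's cores.
 * Verify the KMP_PLACE_THREADS offset syntax for your compiler release. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    /* Which instance am I (0..njobs-1), and how many instances in total?
     * Both are passed on the command line in this sketch. */
    int rank  = atoi(argv[1]);
    int njobs = atoi(argv[2]);
    int cores_per_job = 60 / njobs;      /* e.g. 6 jobs on a 60-core card */

    char place[64];
    snprintf(place, sizeof place, "%dc,4t,%do",
             cores_per_job, rank * cores_per_job);

    /* With MIC_ENV_PREFIX=MIC, MIC_* variables are forwarded to the card
     * (with the prefix stripped) when the offload runtime starts up. */
    setenv("MIC_ENV_PREFIX", "MIC", 1);
    setenv("MIC_KMP_PLACE_THREADS", place, 1);
    setenv("MIC_KMP_AFFINITY", "compact", 1);

    /* ... issue #pragma offload work here; each job should now see only
     *     its own cores_per_job cores, starting at rank*cores_per_job ... */
    return 0;
}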

Eduardo_D_1
Beginner

Dear Tim,

My goal is concurrent asynchronous COMPUTATION offloads for overlapping multiple computations, not overlapping data transfer with offload computation; hence my comment that we may assume all data is already on the MIC.

In CUDA, it is also possible to launch 16 concurrent kernels for computation on the GPU device.

OpenACC supports asynchronous kernel launches with a signal tag.

In cuBLAS, there is cublasDgetrfBatched for computing the LU factorization of many matrices and cublasDgetriBatched for computing their inverses from the LU factors; say the matrix size is 1024 and there are 200 real*8 matrices. Each matrix of size 1024 is not sufficiently large to make efficient use of all 236 virtual HT cores on the same MIC. Computing LU for multiple small matrices is just an example; there may be other concurrent computations to build the matrices.
 

For the Intel MIC, one may consider performing, say, 16 asynchronous offload computation launches (with signal tags, from an OpenMP loop) that call MKL, and setting, say, MIC_MKL_NUM_THREADS=12 so that each offload uses only 12 virtual HT cores on the MIC, for a total of 16*12 = 192 virtual HT cores.
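A rough, untested sketch of that plan follows. Instead of the MIC_MKL_NUM_THREADS environment variable it calls mkl_set_num_threads_local() inside each offload region, so that concurrent regions do not fight over a process-global setting, and it assumes the matrices and pivot arrays were placed on the card earlier with alloc_if(1) free_if(0).

/* Rough sketch, untested: 16 LU factorizations of 1024x1024 matrices issued
 * as separate asynchronous offloads, each limited to about 12 MKL threads. */
#include <mkl.h>
#include <mkl_lapacke.h>

#define NMAT 16
#define N    1024

void batched_lu_offload(double *A[NMAT], lapack_int *ipiv[NMAT])
{
    char sig[NMAT];

    for (int i = 0; i < NMAT; ++i) {
        double     *a = A[i];
        lapack_int *p = ipiv[i];

        /* Data assumed already resident on the card from an earlier
         * offload_transfer with alloc_if(1) free_if(0). */
        #pragma offload target(mic:0) signal(&sig[i]) \
                nocopy(a : length(N * N) alloc_if(0) free_if(0)) \
                nocopy(p : length(N)     alloc_if(0) free_if(0))
        {
            mkl_set_num_threads_local(12);        /* ~12 threads per job */
            LAPACKE_dgetrf(LAPACK_COL_MAJOR, N, N, a, N, p);
        }
    }

    /* Wait for all 16 asynchronous factorizations to complete. */
    for (int i = 0; i < NMAT; ++i) {
        #pragma offload_wait target(mic:0) wait(&sig[i])
    }
}

Whether the card actually overlaps the 16 factorizations, and on which cores each one runs, is exactly what I am asking about.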

I would rather not run multiple MPI tasks on the single MIC.

Is such use of the MIC, with multiple asynchronous concurrent offload computations, possible, and what would be reasonable settings of the AFFINITY and PLACEMENT environment variables? If this is a known feature or use case, would you kindly point me to the appropriate documentation or an example of multiple asynchronous offload computations on the same MIC?