Linchuan_Li
Beginner
42 Views

Any solution for "dynamic scheduling" in MKL?

I wonder if there is some way to make an MKL routine behave like #pragma omp for schedule(dynamic) in OpenMP.

I'm using Sparse BLAS to do SpMV, and for my own reasons I decided to reorder the rows of the sparse matrix by length. This leaves the workload across MKL's internal threads unbalanced, since SpMV in MKL is parallelized by row and is probably statically scheduled.

If there is some function controlling the scheduling strategy of the internal threads that I don't know about, that would be great.

(MKL_DYNAMIC is intended for dynamically adjusting the number of threads, not for workload balancing.)

4 Replies
Linchuan_Li
Beginner

Maybe a schedule(static, chunk_size)-like mechanism would be enough for my situation, but I cannot find any way to set the chunk size of MKL's internal OpenMP loops, nor any dynamic-scheduling support in MKL.
TimP
Black Belt

One of my accounts has been comparing open-source SpMV and SpMM code against MKL, where the open-source code is built with omp schedule(runtime). In our cases it performs best with OMP_SCHEDULE=guided. As you say, MKL doesn't honor OMP_SCHEDULE. We heard from the MKL team that they are working on a solution.
jimdempseyatthecove
Black Belt

Until the default schedule reset becomes available (or KMP_SCHEDULE=...), you might see if you can partition your matrix operation. An enhancement of this (TimP may be able to confirm) is to use an outer parallel region with a single section that partitions the matrix operation, launching tasks. Something like:

!$omp parallel
!$omp single
do i=1,nPartitions
!$omp task
call YourPartitioning(i,A,B,C) ! partition number + array args
!$omp end task
end do
!$omp end single
!$omp end parallel

Jim Dempsey
Linchuan_Li
Beginner

jimdempseyatthecove wrote:

Until the default schedule reset becomes available or KMP_SCHEDULE=... you might see if you can partition your matrix operation.
An enhancement of this (TimP may be able to confirm) is to use an outer parallel region with a single section that partitions the matrix operation, launching tasks. Something like:

!$omp parallel
!$omp single
do i=1,nPartitions
!$omp task
call YourPartitioning(i,A,B,C) ! partition number + array args
!$omp end task
end do
!$omp end single
!$omp end parallel

Jim Dempsey

Finally I got rid of this problem by writing my own SpMV code, and it works well. But thank you and Tim anyway~