
Intel Community › Software Development SDKs and Libraries › Intel® oneAPI Math Kernel Library & Intel® Math Kernel Library › Multiple Core Usage Issues


Multiple Core Usage Issues

Niraj_A_ (Beginner) · 03-13-2013 11:40 PM

I recompiled my installation of NumPy and SciPy against Intel MKL. I am trying to speed up a script that fits tensors for DT-MRI. The script bottlenecks during the SVD operation; in particular, the call to numpy.linalg.lapack_lite.dgesdd inside the svd function is very slow. Slow in the sense that I previously ran this calculation using the default ATLAS, and now with MKL the speedup is negligible. What I noticed is that, like ATLAS, Intel MKL only uses one core for the bulk of the SVD calculation.

I found this topic

http://software.intel.com/en-us/forums/topic/282166

and the answer there says that because SVD is largely BLAS level 2, it is usually single-threaded but will be multithreaded on newer processors. I have an Intel i7-2630QM (Sandy Bridge) processor and am running the latest Intel MKL build (11). Should I be seeing multiple threads in use, and if so, how can I achieve that? I'm not sure what other information would be helpful to provide, but I can provide whatever you need to help me. Thanks in advance!
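For what it's worth, one quick way to see whether a threaded LAPACK is actually engaged is to time numpy.linalg.svd directly on a random matrix while watching core usage in a system monitor. A minimal sketch (the matrix size here is purely illustrative):

```python
import time
import numpy as np

# Illustrative size only; large enough that a threaded LAPACK
# would normally spread the work across cores.
rng = np.random.RandomState(0)
a = rng.rand(1000, 1000)

start = time.time()
u, s, vt = np.linalg.svd(a, full_matrices=False)
elapsed = time.time() - start
print("SVD of 1000x1000 took %.2f s" % elapsed)

# Sanity check: the factors should reconstruct the matrix.
err = np.abs(np.dot(u * s, vt) - a).max()
print("max reconstruction error: %g" % err)
```

If the build is threaded, a run like this should briefly load all cores rather than pinning one at 100%.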

9 Replies

Gennady_F_Intel (Moderator) · 03-14-2013 02:52 PM

Which exact singular value decomposition routine are you using? Some of them are threaded and some are not. It would also be interesting to know the typical problem size you are dealing with.

Niraj_A_ (Beginner) · 03-14-2013 03:29 PM

Do you mean the SVD routine inside MKL? I mentioned that the function being called in NumPy is numpy.linalg.lapack_lite.dgesdd. I am not quite sure which function this links to in Intel MKL, but I think dgesdd is a standard LAPACK routine; or is that incorrect?

As for the problem size, one example matrix was 1,977,359 by 56.
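As an aside, for a matrix that tall and skinny the reduced (economy) SVD is the one that matters. A sketch with a smaller stand-in size, assuming the full square U is never needed:

```python
import numpy as np

# Smaller stand-in for the ~1,977,359 x 56 case discussed above.
rng = np.random.RandomState(42)
m, n = 100000, 56
a = rng.rand(m, n)

# full_matrices=False gives U of shape (m, n) rather than (m, m);
# for a tall-skinny matrix the full m x m U would be enormous.
u, s, vt = np.linalg.svd(a, full_matrices=False)
print(u.shape, s.shape, vt.shape)
```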

Zhang_Z_Intel (Employee) · 03-14-2013 03:54 PM

Which version of NumPy are you using? Why didn't you call numpy.linalg.svd? I don't know where numpy.linalg.lapack_lite.dgesdd came from; it's not found in the latest NumPy documentation. Please try numpy.linalg.svd.

In our own NumPy benchmarking, we see at least a 1.9x speedup for SVD when replacing ATLAS with MKL in NumPy 1.6.2, on a quad-core Ivy Bridge CPU. Your mileage may vary, but you should see a noticeable performance improvement. You can grab our test code and give it a try: http://software.intel.com/file/41177

Niraj_A_ (Beginner) · 03-14-2013 04:34 PM

I'm using NumPy 1.7.

The call to dgesdd occurs during my call to the DT-MRI library (dipy). It calls numpy.linalg.pinv to calculate a pseudoinverse, and from there it does the SVD computations. I looked at the source code, and I am calling numpy.linalg.svd, but it in turn calls dgesdd (which looks like a low-level C subroutine):

https://github.com/numpy/numpy/blob/master/numpy/linalg/linalg.py

At line 1315, it shows the branch where the assignment and subsequent call are made.

I'm going to try the benchmarks, and also downgrade to NumPy 1.6 to see if that causes any changes.
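The pinv-to-SVD chain described above is easy to check directly. A small sketch, assuming no singular values fall below pinv's cutoff (true for a well-conditioned random matrix):

```python
import numpy as np

rng = np.random.RandomState(0)
a = rng.rand(200, 56)

# numpy.linalg.pinv builds the Moore-Penrose pseudoinverse from the SVD:
# pinv(A) = V * diag(1/s) * U^T (with near-zero singular values dropped).
u, s, vt = np.linalg.svd(a, full_matrices=False)
pinv_manual = np.dot(vt.T / s, u.T)  # vt.T / s == V * diag(1/s)

print(np.allclose(pinv_manual, np.linalg.pinv(a)))
```

So any speedup (or lack of it) in pinv comes almost entirely from the underlying SVD call.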

Niraj_A_ (Beginner) · 03-14-2013 05:01 PM

I'm using NumPy 1.7.

I looked at the code, and dgesdd is a low-level subroutine called inside numpy.linalg.svd:

https://github.com/numpy/numpy/blob/master/numpy/linalg/linalg.py

If you go to line 1315, you can see the branch where dgesdd is used to calculate the SVD inside the numpy.linalg.svd function.

I'm making a call to a DT-MRI library (dipy), which calls numpy.linalg.pinv (pseudoinverse), which in turn calculates the SVD. So I am using the function you mentioned. Does your benchmark make use of multiple cores? If not, then I guess the mileage I get out of it is just negligible.

But I'm going to try NumPy 1.6 and see what happens. I've asked a friend to try on a more powerful machine too.

Zhang_Z_Intel (Employee) · 03-14-2013 09:47 PM

So we are calling into the same SVD function.

Unless you linked with the sequential MKL when you built your NumPy, or you specified single-threaded execution at run time, by default it does use multiple cores. See the attached picture for our SVD performance chart. By the way, how did you link with MKL? Did you do anything to the same effect as OMP_NUM_THREADS=1 at run time?

NumPy 1.7 and 1.6.2 should not differ much in SVD performance. You should first run the benchmark I gave you on your NumPy 1.7 and see what happens. Note that our test used square matrices; your matrix is very tall and skinny, which may make a difference. I'll be very interested in knowing.
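To rule out an accidental single-thread setting, the thread controls can be pinned explicitly. A sketch (the thread count of 4 is an arbitrary example, and these variables must be set before NumPy, and hence MKL, is first loaded in the process):

```python
import os

# These must be set before NumPy (and hence MKL) is first imported;
# changing them later in the same process generally has no effect.
os.environ.setdefault("MKL_NUM_THREADS", "4")  # MKL-specific control
os.environ.setdefault("OMP_NUM_THREADS", "4")  # generic OpenMP control

import numpy as np

a = np.random.RandomState(0).rand(500, 500)
u, s, vt = np.linalg.svd(a, full_matrices=False)
print("largest singular value: %.2f" % s[0])
```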

Niraj_A_ (Beginner) · 03-14-2013 10:13 PM

I linked to MKL using the instructions from this page: http://software.intel.com/en-us/articles/numpyscipy-with-intel-mkl

I'm certain it linked correctly, because when I run numpy.show_config() it shows mkl_rt as the library, along with pthread. I've checked OMP_NUM_THREADS and the other environment variable; both are blank, which I believe means it will use the default number of physical cores, according to what I've found. I'll run your benchmarks on both the MKL and the ATLAS configurations and report the difference. I didn't realize the benchmarks you gave were for square matrices, so that may very well be the issue.
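For reference, the build check mentioned above is just this (the exact library names printed depend on how NumPy was built):

```python
import numpy as np

# Prints the BLAS/LAPACK configuration NumPy was built against.
# An MKL build lists libraries such as 'mkl_rt'; an ATLAS build
# lists 'atlas' / 'lapack_atlas' instead.
np.show_config()
```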

Niraj_A_ (Beginner) · 03-14-2013 10:22 PM

One more thing I should mention: I don't think I was clear about this, but I'm profiling the code, and it runs across multiple cores until it reaches the SVD calculation. Then it loads one core to 100% and sits there until the end of the program (the rest finishes very quickly).

Niraj_A_ (Beginner) · 04-04-2013 07:25 PM

I figured out the problem. It is somewhat specific to my situation and my own mistake, but I'll post it anyway in case somebody else has the same problem. It turns out that while my overall matrix is very large, the library I'm using, dipy, actually breaks the matrix up in Python code and runs the calculations in a for loop. Since this is done in Python it is very slow, and what I was actually running Intel MKL on was over 2 million separate 56 by 7 matrices, so the negligible speedup and lack of proper core usage make perfect sense (I think the marginal improvement I did see probably says something in favor of Intel MKL). I'm going to try to go a few levels up, pass the entire matrix at once, and see what the results are. I'll post back when I do.
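The situation described above (millions of tiny SVDs driven from a Python loop) can be sketched as follows. Note that stacked-matrix support in numpy.linalg arrived in NumPy 1.8, so this assumes a newer NumPy than the 1.7 discussed in this thread:

```python
import numpy as np

rng = np.random.RandomState(1)
batch = rng.rand(1000, 56, 7)  # small stand-in for ~2 million 56x7 matrices

# Per-matrix Python loop: each SVD is tiny, so interpreter overhead
# dominates and no single call gives MKL enough work to thread.
pinvs_loop = np.array([np.linalg.pinv(m) for m in batch])

# NumPy 1.8+ linalg routines accept stacked matrices, moving the loop
# into compiled code; the per-call overhead largely disappears.
pinvs_stacked = np.linalg.pinv(batch)

print(pinvs_stacked.shape)
print(np.allclose(pinvs_loop, pinvs_stacked))
```

Even batched, each 56x7 SVD is too small to thread on its own; the win comes from eliminating the Python-level loop, not from MKL parallelism.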
