Intel® Distribution for Python*

How to boost numpy.dot performance

Gang_Z_
Beginner

Hi,

I have installed the Intel Anaconda build instead of the official Anaconda build. There is about a 15% performance improvement for matrix operations such as np.dot and np.reduce.

I am wondering if there are any other ways to significantly boost performance for matrix computation.

Thanks for your help!

1 Solution
Robert_C_Intel
Employee

For dot, Anaconda & Intel both rely on MKL so there will not be a big performance difference. The performance difference is probably coming from our optimizations of memory allocation. You might be able to make it faster by using the out parameter. See https://github.com/IntelPython/ibench/blob/master/ibench/benchmarks/dot.py for an example.

For some linear algebra operations, it will be faster to use Fortran-order arrays. See https://github.com/IntelPython/ibench/blob/master/ibench/benchmarks/inv.py for an example. dot will run at the same speed for C and Fortran order.
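As a rough sketch of the Fortran-order idea (using np.linalg.inv here; the size is arbitrary):

import numpy as np

n = 2048  # arbitrary size, just for illustration
a = np.random.rand(n, n)

# LAPACK works on column-major (Fortran-order) data, so converting up
# front can avoid an internal copy inside routines like inv or solve.
a_f = np.asfortranarray(a)

inv_f = np.linalg.inv(a_f)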

Running on a machine with multiple cores and AVX2 or AVX-512 will also bring a benefit.
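If you want to verify what NumPy was built against and how many cores you have, something like this works (MKL_NUM_THREADS is a standard MKL environment variable; setting it is optional and shown only as an example):

import os

# Optionally cap MKL threading; set it before NumPy/MKL is loaded so it
# reliably takes effect.
os.environ.setdefault("MKL_NUM_THREADS", str(os.cpu_count() or 1))

import numpy as np

# Prints the BLAS/LAPACK configuration; MKL should appear here for both
# the Intel and Anaconda builds.
np.show_config()
print("Logical CPUs:", os.cpu_count())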
