Intel® oneAPI Math Kernel Library

MKL numpy/scipy not passing tests

patrick_mckendree_yo

Hi,

I've been pulling my hair out for a few days trying to get a build of numpy/scipy against MKL to pass all the scipy tests.  I've run out of ideas, so any suggestions on what to try next would be greatly appreciated!  My guess is that I'm missing an ifort flag, but I've tried a bunch of different combinations without improving the situation.

My system is running 64-bit Ubuntu 13.04 with an Intel Core i7-4700MQ CPU and Python 2.7.6.  I've been using icc and ifort version 14.0.2 and MKL version 11.1.2, and I've been building Numpy 1.8.0 and Scipy 0.13.3.

I've been pretty much following Intel's instructions on building numpy and scipy: http://software.intel.com/en-us/articles/numpyscipy-with-intel-mkl

My site.cfg looks like this:

[mkl]
library_dirs = /opt/intel/mkl/lib/intel64
include_dirs = /opt/intel/mkl/include
mkl_libs = mkl_rt
lapack_libs =
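
As a quick sanity check (my own addition, not part of Intel's article), you can confirm after installing that the finished numpy actually resolved this [mkl] section rather than falling back to another BLAS/LAPACK:

import numpy
# blas_opt_info / lapack_opt_info in the output should list 'mkl_rt' and the
# library_dirs given in site.cfg if the MKL link worked
numpy.show_config()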

I modified self.cc_exe in the following class in numpy/distutils/intelccompiler.py:

class IntelEM64TCCompiler(UnixCCompiler):
    """ A modified Intel x86_64 compiler compatible with a 64bit gcc built Python.                      
    """
    compiler_type = 'intelem'
    cc_exe = 'icc -m64 -fPIC'
    cc_args = "-fPIC"
    def __init__ (self, verbose=0, dry_run=0, force=0):
        UnixCCompiler.__init__ (self, verbose, dry_run, force)
        self.cc_exe = 'icc -m64 -O3 -g -fPIC -fp-model strict -fomit-frame-pointer -openmp -xhost'
        compiler = self.cc_exe
        self.set_executables(compiler=compiler,
                             compiler_so=compiler,
                             compiler_cxx=compiler,
                             linker_exe=compiler,
                             linker_so=compiler + ' -shared')



I left numpy/distutils/fcompiler/intel.py alone; the class for the 64-bit compiler is here:

class IntelEM64TFCompiler(IntelFCompiler):
    compiler_type = 'intelem'
    compiler_aliases = ()
    description = 'Intel Fortran Compiler for 64-bit apps'

    version_match = intel_version_match('EM64T-based|Intel\\(R\\) 64|64|IA-64|64-bit')

    possible_executables = ['ifort', 'efort', 'efc']

    executables = {
        'version_cmd'  : None,
        'compiler_f77' : [None, "-FI"],
        'compiler_fix' : [None, "-FI"],
        'compiler_f90' : [None],
        'linker_so'    : ['<F90>', "-shared"],
        'archiver'     : ["ar", "-cr"],
        'ranlib'       : ["ranlib"]
        }

    def get_flags(self):
        return ['-fPIC']

    def get_flags_opt(self):
        #return ['-i8 -xhost -openmp -fp-model strict']                                                 
        return ['-xhost -openmp -fp-model strict']

    def get_flags_arch(self):
        return []


I compiled and installed numpy as follows:

python setup.py config --compiler=intelem build_clib --compiler=intelem build_ext --compiler=intelem install

It built just fine and passed all the tests:

Ran 4969 tests in 29.835s

OK (KNOWNFAIL=5, SKIP=7)

I then built scipy as follows:

python setup.py config --compiler=intelem --fcompiler=intelem build_clib --compiler=intelem --fcompiler=intelem build_ext --compiler=intelem --fcompiler=intelem install

That also built and installed just fine.  However, the following four tests fail:

======================================================================
FAIL: test_lorentz (test_odr.TestODR)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/youngpm/pythonenv/mkl/lib/python2.7/site-packages/scipy/odr/tests/test_odr.py", line 293, in test_lorentz
    3.7798193600109009e+00]),
  File "/home/youngpm/pythonenv/mkl/lib/python2.7/site-packages/numpy/testing/utils.py", line 811, in assert_array_almost_equal
    header=('Arrays are not almost equal to %d decimals' % decimal))
  File "/home/youngpm/pythonenv/mkl/lib/python2.7/site-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 6 decimals

(mismatch 100.0%)
 x: array([  1.00000000e+03,   1.00000000e-01,   3.80000000e+00])
 y: array([  1.43067808e+03,   1.33905090e-01,   3.77981936e+00])

======================================================================
FAIL: test_multi (test_odr.TestODR)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/youngpm/pythonenv/mkl/lib/python2.7/site-packages/scipy/odr/tests/test_odr.py", line 190, in test_multi
    0.5101147161764654, 0.5173902330489161]),
  File "/home/youngpm/pythonenv/mkl/lib/python2.7/site-packages/numpy/testing/utils.py", line 811, in assert_array_almost_equal
    header=('Arrays are not almost equal to %d decimals' % decimal))
  File "/home/youngpm/pythonenv/mkl/lib/python2.7/site-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 6 decimals

(mismatch 100.0%)
 x: array([ 4. ,  2. ,  7. ,  0.4,  0.5])
 y: array([ 4.37998803,  2.43330576,  8.00288459,  0.51011472,  0.51739023])

======================================================================
FAIL: test_pearson (test_odr.TestODR)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/youngpm/pythonenv/mkl/lib/python2.7/site-packages/scipy/odr/tests/test_odr.py", line 236, in test_pearson
    np.array([5.4767400299231674, -0.4796082367610305]),
  File "/home/youngpm/pythonenv/mkl/lib/python2.7/site-packages/numpy/testing/utils.py", line 811, in assert_array_almost_equal
    header=('Arrays are not almost equal to %d decimals' % decimal))
  File "/home/youngpm/pythonenv/mkl/lib/python2.7/site-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 6 decimals

(mismatch 100.0%)
 x: array([ 1.,  1.])
 y: array([ 5.47674003, -0.47960824])

======================================================================
FAIL: test_iterative.test_convergence(<function bicgstab at 0x3b58938>, <nonsymposdef>)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/youngpm/pythonenv/mkl/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/youngpm/pythonenv/mkl/lib/python2.7/site-packages/scipy/sparse/linalg/isolve/tests/test_iterative.py", line 194, in check_convergence
    assert_equal(info,0)
  File "/home/youngpm/pythonenv/mkl/lib/python2.7/site-packages/numpy/testing/utils.py", line 317, in assert_equal
    raise AssertionError(msg)
AssertionError:
Items are not equal:
 ACTUAL: -10
 DESIRED: 0

----------------------------------------------------------------------
Ran 8934 tests in 59.121s

FAILED (KNOWNFAIL=115, SKIP=220, failures=4)
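
(If you want to reproduce these runs, the numpy/scipy nose-based test entry points produce summaries in this form; the exact invocation used here isn't shown, so treat this as an assumption:)

# Standard nose-based entry points; these print the "Ran N tests ...
# OK/FAILED (KNOWNFAIL=..., SKIP=..., failures=...)" summaries shown above.
import numpy, scipy
numpy.test()
scipy.test()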

I can recreate the failure with the following script:

import numpy as np
from numpy import pi
from scipy.odr import Data, Model, ODR, RealData, odr_stop

def multi_fcn(B, x):
    if (x < 0.0).any():
        raise odr_stop
    theta = pi*B[3]/2.
    ctheta = np.cos(theta)
    stheta = np.sin(theta)
    omega = np.power(2.*pi*x*np.exp(-B[2]), B[3])
    phi = np.arctan2((omega*stheta), (1.0 + omega*ctheta))
    r = (B[0] - B[1]) * np.power(np.sqrt(np.power(1.0 + omega*ctheta, 2) +
                                         np.power(omega*stheta, 2)), -B[4])
    ret = np.vstack([B[1] + r*np.cos(B[4]*phi),
                     r*np.sin(B[4]*phi)])
    return ret

if __name__ == "__main__":

    multi_mod = Model(multi_fcn,
                      meta=dict(name='Sample Multi-Response Model',
                                ref='ODRPACK UG, pg. 56'),)
    multi_x = np.array([30.0, 50.0, 70.0, 100.0, 150.0, 200.0, 300.0, 500.0,
                        700.0, 1000.0, 1500.0, 2000.0, 3000.0, 5000.0, 7000.0, 10000.0,
                        15000.0, 20000.0, 30000.0, 50000.0, 70000.0, 100000.0, 150000.0])
    multi_y = np.array([
            [4.22, 4.167, 4.132, 4.038, 4.019, 3.956, 3.884, 3.784, 3.713,
             3.633, 3.54, 3.433, 3.358, 3.258, 3.193, 3.128, 3.059, 2.984,
             2.934, 2.876, 2.838, 2.798, 2.759],
            [0.136, 0.167, 0.188, 0.212, 0.236, 0.257, 0.276, 0.297, 0.309,
             0.311, 0.314, 0.311, 0.305, 0.289, 0.277, 0.255, 0.24, 0.218,
             0.202, 0.182, 0.168, 0.153, 0.139],
            ])
    n = len(multi_x)
    multi_we = np.zeros((2, 2, n), dtype=float)
    multi_ifixx = np.ones(n, dtype=int)
    multi_delta = np.zeros(n, dtype=float)

    multi_we[0,0,:] = 559.6
    multi_we[1,0,:] = multi_we[0,1,:] = -1634.0
    multi_we[1,1,:] = 8397.0

    for i in range(n):
        # index element-wise (as in scipy's test_odr); otherwise the
        # comparisons below operate on whole arrays and raise a ValueError
        if multi_x[i] < 100.0:
            multi_ifixx[i] = 0
        elif multi_x[i] <= 150.0:
            pass  # defaults are fine
        elif multi_x[i] <= 1000.0:
            multi_delta[i] = 25.0
        elif multi_x[i] <= 10000.0:
            multi_delta[i] = 560.0
        elif multi_x[i] <= 100000.0:
            multi_delta[i] = 9500.0
        else:
            multi_delta[i] = 144000.0
        if multi_x[i] == 100.0 or multi_x[i] == 150.0:
            multi_we[:,:,i] = 0.0

    multi_dat = Data(multi_x, multi_y, wd=1e-4/np.power(multi_x, 2),
                     we=multi_we)
    multi_odr = ODR(multi_dat, multi_mod, beta0=[4.,2.,7.,.4,.5],
                    delta0=multi_delta, ifixx=multi_ifixx)
    multi_odr.set_job(deriv=1, del_init=1)

    out = multi_odr.run()
    print out.beta
    print out.stopreason


My MKL build returns:

[ 4.   2.   7.   0.4  0.5]
['Problem is not full rank at solution', 'Parameter convergence']

However, my openblas version, which passes all the tests, returns:

[ 4.37998803  2.43330576  8.00288459  0.51011472  0.51739023]
['Sum of squares convergence']

That is what the assertions in the tests are looking for.  I haven't looked at the biconjugate gradient failure yet; I figured the ODR failures should be tackled first and that it's probably related.

Thanks for any help!

Vladimir_Rapatskiy

Yes, it is an Intel Fortran bug.

You should lower the optimization level from -O2 to -O1 to pass all the tests.

With the GCC 4.7.3 gfortran compiler, all tests passed successfully.

The solution was found here: https://github.com/scipy/scipy/issues/3340
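
For anyone applying this with the setup shown in the original post, one place to force the lower optimization level is the same get_flags_opt() hook in numpy/distutils/fcompiler/intel.py quoted above.  This is only a sketch of the workaround, not an official patch:

    # Sketch of the -O1 workaround for the IntelEM64TFCompiler class shown
    # earlier; ifort defaults to -O2, so requesting -O1 explicitly lowers it.
    def get_flags_opt(self):
        return ['-O1 -xhost -openmp -fp-model strict']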

Amanda_S_Intel
Employee

I can reproduce these test failures with ifort 14.0.2. Using -O1 is a good workaround, as it effectively turns off the more aggressive optimizations. I am investigating further with some internal builds and will provide an update soon. Thanks for reporting this.

Ziyuan
Beginner

The problem still exists in ifort 15.0.2.164.

TimP
Honored Contributor III

In case the issue is associated with the fp-model setting: I saw in a presentation yesterday that there should be a clause to make SIMD directives observe the fp model (maybe as gfortran should).  I didn't see any report here about trying different fp-model settings.
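
If anyone wants to test that, one quick (unverified) way to experiment is to vary the fp-model keyword in the same get_flags_opt() hook shown in the original post, for example:

    # Sketch only -- swap 'strict' for 'source' or 'precise' and re-run the
    # scipy suite to see whether the ODR results change.
    def get_flags_opt(self):
        return ['-xhost -openmp -fp-model source']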
