Dear MKL experts,
I've run into an issue with the matrix multiplication routine DGEMM: if the matrix contains huge values, e.g. 1.0d+17, the performance drops significantly, to as low as 1/3 of the regular case.
Any idea whether this is possible and/or what the reason could be? I really don't expect such behavior. BTW, this is encountered only on Windows, not on Linux.
thanks,
20 Replies
Hi, thanks for reporting the issue.
Is this on a 32-bit or 64-bit system?
boreas,
Did you check the output results? Are there NaNs in them?
And what type of CPU are you working on?
Gennady
Hello Gennady,
thanks for your reply. It is 64-bit Windows XP or Windows 7; there are no NaNs in the output results.
We've reproduced it on several systems with Intel processors:
Intel Core i7-2760QM
Intel Xeon X5550
thanks,
thanks,
What is the problem size in that case?
What version of MKL do you use? (I hope the threaded version of MKL has been used in all cases.)
/Gennady
It would also be interesting to know if there are any +/-Infs in your output.
Hello Gennady and Shane,
I use a single thread, and as said, there are no NaNs in the output.
There are lots of dgemm calls with various sizes in my application, but a typical size is about M = 3769, N = 32, K = 256.
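For reference, a call with that typical size looks like the sketch below; it is only an illustration (the 'N'/'N' operand layout, fill values, and leading dimensions are assumptions, not from the original post):
program dgemm_typical_size
  implicit none
  integer, parameter :: m = 3769, n = 32, k = 256
  double precision, allocatable :: a(:,:), b(:,:), c(:,:)
  allocate(a(m,k), b(k,n), c(m,n))
  a = 1.0d0; b = 1.0d0; c = 0.0d0
  ! C := alpha*A*B + beta*C with alpha = 1 and beta = 0
  call dgemm('N', 'N', m, n, k, 1.0d0, a, m, b, k, 0.0d0, c, m)
  print *, 'C(1,1) =', c(1,1)   ! with all-ones inputs this should print 256
end program dgemm_typical_size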
Sorry if I wasn't clear, but I asked about Infs rather than NaNs; they are distinct IEEE types. With such large inputs, it would seem natural to expect that overflows may be occurring and that Infs are being generated.
Hello Shane,
From the Visual Studio debugger I do not see any Infs; also, I am not sure what the right way is to check whether there are any.
However, I do see a lot of really tiny values, like 1.0d-69; I'm not sure if that matters. Let me illustrate more clearly what I was doing.
It is a block matrix factorization {A, B; B, C}, where A contains some huge diagonal values, and the update of C is C = C - B*INV(A)*B = C - B*INV(L)*INV(D)*INV(L)*B, with A = LDL. I use DGEMM to compute that last product; since INV(D) is involved, those tiny values appear.
I observed the slowdown in that DGEMM call, and that is why I started this topic. Again, do you think those tiny values trigger something like underflow, and does that matter for performance? Thank you so much.
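As a side note, one way to check the DGEMM output for Infs and for denormal (subnormal) entries from Fortran is a scan along these lines; this is only a sketch, and the routine name and interface are illustrative:
! Count Infs and denormal (subnormal) entries in an m-by-n matrix c.
subroutine count_inf_denormal(c, m, n, ninf, ndenorm)
  use, intrinsic :: ieee_arithmetic
  implicit none
  integer, intent(in) :: m, n
  double precision, intent(in) :: c(m, n)
  integer, intent(out) :: ninf, ndenorm
  integer :: i, j
  ninf = 0
  ndenorm = 0
  do j = 1, n
    do i = 1, m
      ! ieee_is_finite is .false. for +/-Inf (and also for NaN)
      if (.not. ieee_is_finite(c(i, j))) ninf = ninf + 1
      ! nonzero values with magnitude below TINY(1d0) are subnormal
      if (c(i, j) /= 0.0d0 .and. abs(c(i, j)) < tiny(1.0d0)) ndenorm = ndenorm + 1
    end do
  end do
end subroutine count_inf_denormal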
The slowdown you experience could be provoked by running in gradual underflow mode (the hardware default), when individual products produce results of magnitude smaller than TINY(1d0).
Assuming your main program is compiled with an Intel compiler: if you use an option such as /fp:source, you would follow it with /Qftz to set abrupt underflow. Otherwise, you may need the C/C++ FTZ intrinsic to set abrupt underflow.
A Core i7-2 or i7-3 CPU should not exhibit as large a slowdown as earlier CPU models.
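One way to confirm which underflow mode is actually in effect at run time, rather than inferring it from compiler options, is to query it through the standard ieee_arithmetic module; a minimal sketch, assuming your compiler supports underflow-mode control:
program check_underflow_mode
  use, intrinsic :: ieee_arithmetic
  implicit none
  logical :: gradual
  if (ieee_support_underflow_control(1.0d0)) then
    call ieee_get_underflow_mode(gradual)
    if (gradual) then
      print *, 'gradual underflow: denormal results are produced (FTZ off)'
    else
      print *, 'abrupt underflow: denormal results are flushed to zero (FTZ on)'
    end if
  else
    print *, 'underflow mode control is not supported for double precision'
  end if
end program check_underflow_mode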
Hello TimP,
thanks for your reply.
But reading the Intel Fortran compiler documentation (my program is in Fortran), it looks like /Qftz is on by default, since my program is compiled with /Os in release mode, so it should already be enabled. And I saw those tiny values in the debugger because the debug build is compiled with /Od, which does not trigger /Qftz. Does this make sense?
What should I try?
From the documentation for -ftz / /Qftz:
Denormal results are flushed to zero.
Every optimization level, except O0, sets -ftz and /Qftz.
Yes, an ifort main program built with /O1, /O2, or /O3 would have /Qftz set, unless you set an /fp: option. In the latter case, you would follow up with /Qftz to get back to abrupt underflow.
I guess ifort /Os is equivalent to /O1. That removes some major optimizations such as auto-vectorization, which would not be needed if all your time is spent in MKL. I was surprised by this, as -Os appears to be accepted by ifort only on Windows.
I think the sentence with "leave flags as they are" in the documentation of /Qftz is misleading. /Qftz- does mean taking the hardware default, which is the 32-bit Windows default, but x64 Windows should set abrupt underflow before starting a .exe, so /Qftz- would generate code to set gradual underflow mode.
Thanks, but this does not seem to explain why I get the slowdown on x64 Windows. I build with /Os, which should trigger /Qftz by default, since no /fp option is used.
Can you give us an example of this case so we can check the problem on our side?
Unfortunately, a small program cannot reproduce this abnormal behavior. I'll get one to you once I can. Thank you.
Hello Gennady, TimP, Shane,
I've attached a small example with matrix data that contains tiny values. I built the code with VS2010 and the Intel 12.1 compiler; all compiler options are at their defaults.
The behavior I observe here is not exactly the same as what I saw in the large code. For this example, essentially all the computation is supposed to be in MKL dgemm, so I don't expect much performance difference between my debug and release executables. However, I do see:
11.6 seconds with the debug executable versus 0.35 seconds with the release executable.
The test was done on an Intel Core i7-2760QM, 2.4 GHz, with a single thread.
Please help; the performance variation really confuses me.
thank you.
Hello,
Yes, I checked how it works on my side.
The cause of the performance variation is that the Intel compiler flushes denormals to zero by default in release mode, while in debug mode /Qftz is off.
On my local system (Win7, SNB, MKL 11.0.1, 64-bit) I measured:
release - 0.45 sec
debug - 12.4 sec
debug with /Qftz - 0.51 sec
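For anyone who wants to try this without the attachment, a stand-alone test along the following lines should show the same kind of gap between an FTZ and a non-FTZ build; the sizes and fill values here are illustrative, not taken from the attached example:
program dgemm_denormal_timing
  implicit none
  integer, parameter :: m = 1000, n = 1000, k = 1000
  double precision, allocatable :: a(:,:), b(:,:), c(:,:)
  integer :: t0, t1, rate
  allocate(a(m,k), b(k,n), c(m,n))
  ! Products of these magnitudes (about 1d-160 * 1d-160) underflow into the
  ! subnormal range, which is slow under gradual underflow and is simply
  ! flushed to zero when /Qftz is in effect.
  a = 1.0d-160
  b = 1.0d-160
  c = 0.0d0
  call system_clock(t0, rate)
  call dgemm('N', 'N', m, n, k, 1.0d0, a, m, b, k, 0.0d0, c, m)
  call system_clock(t1)
  print *, 'dgemm time (s):', dble(t1 - t0) / dble(rate)
end program dgemm_denormal_timing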
Can you let me know where you put /Qftz: as a compiler option or a linker option? Details are appreciated.
One more question: if the main program is built with Intel C or Microsoft C, which option should I use? Thank you.
The SSE intrinsics for switching underflow mode are covered (with typos) here:
http://software.intel.com/en-us/articles/how-to-avoid-performance-penalties-for-gradual-underflow-behavior
Since the article was written, ifort adopted the Fortran standard method of setting underflow mode under USE ieee_arithmetic, so you can ignore the Fortran-bashing aspect of the article.
call ieee_set_underflow_mode(.false.)
Setting the initialization in main() via /Qftz or /Qftz- is supported only by the Intel compilers. It takes effect at compile time, when you build main.obj; /Qftz has no effect on the building of other .obj files or at link time.
By the way, the gcc equivalent of /Qftz is normally invoked by -ffast-math.
With the SSE or language standard intrinsics, you can switch the mode at any point in your program (but don't do it inside a time-consuming loop).
The usual practice of running in IEEE-standard gradual underflow mode under MSVC and gcc probably motivated the changes in Core i7-2, which are supposed to speed up the common cases.
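Expanding the one-line call above into a self-contained sketch (the surrounding program structure is only an illustration), the main program can switch to abrupt underflow at run time before the DGEMM-heavy work:
program set_abrupt_underflow
  use, intrinsic :: ieee_arithmetic
  implicit none
  if (ieee_support_underflow_control(1.0d0)) then
    ! .false. = abrupt underflow: denormal results are flushed to zero
    call ieee_set_underflow_mode(.false.)
  end if
  ! ... call the DGEMM-heavy code from here ...
end program set_abrupt_underflow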
Thanks, TimP. This really helps my understanding. Thank you all.
