
32- and 64-bit versions of "double"


While reading this bit:

I am wondering about a simple demonstration of how it can happen with the Intel compiler 11 on Linux/32-bit and Linux/64-bit.

The OS kernel is 64-bit.

Is there a difference in the "double" type between compiling with -m32 and compiling with -m64? In both cases sizeof(double) is 8 bytes (64 bits).
RAX and the other general-purpose registers are 64-bit, EAX and friends are 32-bit, and the XMM0...XMM7 registers are 128-bit (Intel Core 2).

Can the comparison fail (as in the article) in 64-bit but not in 32-bit, or the other way around?

Black Belt

icpc doesn't recognize the options -m32 or -m64, unlike g++. I will assume you are asking whether there is a difference between icpc for intel64 and for ia32. The article appears to be discussing the situation where math functions which work in x87 floating point registers are executed, although it's short enough on detail that your confusion is understandable. Now I'll mention why much more detail is needed to understand this question.
The XMM/SSE2 double data types are the same in the ia32 and intel64 compilers. In the current compilers, you would have to set -mia32 to get the x87 double type (available only with the ia32 compiler).
There is no 128-bit XMM double data type; the registers contain 2 64-bit doubles, when running parallel instructions.
Another possible difference between icpc and g++ is that icpc provides a full set of math functions, both scalar and vector, which work in XMM registers, so don't exhibit the sometimes extra precision effect discussed in that URL.
64-bit g++ would run into more situations where an x87 math library function is mixed in, even though SSE2 is the default. Unfortunately, the scalar math functions are sometimes slightly more accurate than the vector functions, hence the option -[no-]fast-transcendentals to specify whether vector math is desired.
Yet another possible difference between icpc and g++ is the latter would never set 53-bit precision mode for x87 operations unless you did so explicitly, while icpc -mia32 would set 53- or 64-bit precision mode according to -pc64 or -pc80. This still isn't the entire story, as the x87 math built-ins (other than sqrt) don't recognize the precision mode setting, and run in 64-bit precision regardless.
By now, you should recognize that there's no moral imperative to avoid XMM/SSE compilation options, even if you run on a 32-bit OS, unless you must support platforms like the original Turion or Pentium-III.
Needless to say, even if you avoid extra precision, there are plenty of double expressions where varying the order of operations will produce slight numerical differences, and the techniques for minimizing them are different from the extra precision case.

Hello, thanks for the clarifications.

Are the scalar math functions the ones in ? Where are the vector math functions?

The std:: functions in , available through (msvc or g++): their implementations are provided by MSVC or g++, right, not by Intel? Are they inline, or are they compiled already into the C++ runtime libs?

What about the x87 math builtins?
Are these only used if I force the Intel compiler to use them?

Also, regarding the C++ FAQ post:
"That means that intermediate floating point computations often have more bits than sizeof(double), and when a floating point value is written to RAM, it often gets truncated, often losing some bits of precision."

Using XMM/SSE, can this situation happen?
Is this the -pc80 flag you refer to?

Black Belt

Scalar math functions may come from mathimf, or from Microsoft or glibc run-time. Vector math functions invoked by the compiler come from svml libraries, or, with the "Intel optimized" headers, possibly from ipp.
stl functions are implemented by the headers and libraries provided by MSVC++ or g++ (the one which is active during compilation). For the most part, they are treated as inline functions.
x87 math builtins would be considered for use only under the /arch:ia32 or -mia32 options.