Hi,
I’m working on a cross-platform C++ app that performs a lot of mathematical computations. I have test cases to assess the correctness of the algorithms, and everything works well except in some edge cases.
Even though both platforms use IEEE 754 to represent doubles and both codebases have the same rounding mode, sometimes the results for the Ln function differ. For example, for the value “0.99843853339480171” ippsLn_64f_A53 gives:
-0.0015626869647103901 on macOS
-0.0015626869647103903 on Windows
The code was compiled without any optimizations. All other math functions give the same value on both platforms; only the Ln family of functions differs. Any suggestions?
Thanks
Note: using IPP version 2019.0.1
- Tags:
- Development Tools
- General Support
- Intel® Integrated Performance Primitives
- Parallel Computing
- Vectorization
If you truncate the result to a double-precision floating point, do you see the same results?
Hi Gennady, the result is already truncated to a double-precision floating point. The difference between the OSes is in the least significant bit of the mantissa.
Hi Vitor,
It looks like an issue. We need to check the problem on our side. Do you see the problem only with some specific CPU types, or everywhere?
Hi Gennady, both CPUs are Intel models:
Mac: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
Windows: Intel(R) Core(TM) i5-8265U CPU @ 1.60GHz
I didn't try other CPU types.
As IPP uses the MKL VML implementation as a backend, you may try to get bit-to-bit identical output with CNR (Conditional Numerical Reproducibility) mode enabled, e.g. export MKL_CBWR=AVX2.
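For example (assuming an MKL-linked build; note there are no spaces around the `=`, and `my_app` is a placeholder for your binary):

```shell
# Pin MKL's code path to AVX2 so both machines take the same branch
export MKL_CBWR=AVX2
./my_app
```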
Hi Gennady, thanks for the insight about CNR mode. Although I’m using IPP (without linking against the MKL libraries), I changed the code to link against MKL, but the results between platforms are still one bit off in the mantissa.
This simple test case fails:
```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <ipps.h>
#include <mkl.h>

int main() {
    assert(mkl_cbwr_set(MKL_CBWR_COMPATIBLE) == MKL_CBWR_SUCCESS);
    double source = 0.99843853339480171;
    Ipp64f out{0};
    IppStatus status = ippsLn_64f_A53(&source, &out, 1);
    assert(status == ippStsNoErr);
    double result = static_cast<double>(out);
    uint64_t bin;
    std::memcpy(&bin, &result, sizeof bin);  // avoids strict-aliasing UB
    std::cout << std::hex << bin << "\n";
    assert(result == -0.0015626869647103901);
}
```
On Mac the output is: bf599a625a1179c7
On Windows the assert fails and the output is: bf599a625a1179c8
Hi Vitor,
I must admit it was my fault. MKL VML is indeed used as the backend for the IPP vector math functions, but IPP doesn't check these environment variables.
Regarding the original problem: the differences you obtained on different OSes are within ±1 ulp and therefore fall within the committed accuracy. MKL and IPP have never committed to providing bit-to-bit identical output across different OSes.
