For a numerical application I'm working on, I can choose to compute the functions either in the complex domain or in the real domain (I won't go into the details since they're not relevant here). I was wondering what the difference in computational cost is between evaluating a function in each domain. For instance, consider the following two functions:
```fortran
function f_cmlpx(x) result(y)
  double complex, intent(in) :: x   ! input
  double complex :: y               ! output
  y = atan(x)/(1+exp(-x**2))
end function f_cmlpx

function f(x) result(y)
  double precision, intent(in) :: x ! input
  double precision :: y             ! output
  y = atan(x)/(1+exp(-x**2))
end function f
```
How much more expensive would the complex evaluation be than the real one?
Basic mathematics will tell you that any complex operation will require two to four times as much work as a real operation. There's also twice as much data in a complex value, which incurs memory and data transfer costs. None of that comes for free.
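To put rough numbers behind that estimate (my own arithmetic, not part of the original reply): a complex addition costs two real additions, and a complex multiplication done the textbook way costs four real multiplications and two real additions; division and the complex elementary functions cost more again.

$$
(a+b\,i)+(c+d\,i) = (a+c)+(b+d)\,i, \qquad
(a+b\,i)(c+d\,i) = (ac-bd)+(ad+bc)\,i .
$$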
> Steve Lionel (Ret.) wrote:
>
> Basic mathematics will tell you that any complex operation will require two to four times as much work as a real operation. There's also twice as much data in a complex value, which incurs memory and data transfer costs. None of that comes for free.
Hi Steve, thanks for the input. Sometimes, unfortunately, it is not that obvious. For instance, multiplying two n×n complex matrices requires about 8n^3 real flops if done the obvious way, but if you use a formula that multiplies two complex scalars with only 3 real multiplications, the cost can be reduced to about 6n^3 flops [1].
So my question was more along these lines: does the compiler perform any optimisations behind the scenes, or is everything done in the "basic mathematics" way?
Thanks again.
[1] Higham, N.J.: Accuracy and Stability of Numerical Algorithms. Second edn. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA (2002)
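For anyone curious, here is a minimal sketch (my own illustration, not code from Higham's book) of the 3-real-multiplication complex product that the 6n^3 figure rests on; applied blockwise to the real and imaginary parts of the matrices it replaces four real matrix multiplications with three:

```fortran
! Sketch of the "3M" complex product: 3 real multiplications and 5 real
! additions instead of the usual 4 multiplications and 2 additions.
function cmul_3m(z1, z2) result(z)
  double complex, intent(in) :: z1, z2
  double complex :: z
  double precision :: a, b, c, d, t1, t2, t3
  a = dble(z1);  b = aimag(z1)
  c = dble(z2);  d = aimag(z2)
  t1 = a*c
  t2 = b*d
  t3 = (a + b)*(c + d)              ! = ac + ad + bc + bd
  ! Real part ac - bd, imaginary part ad + bc = t3 - t1 - t2
  z = cmplx(t1 - t2, t3 - t1 - t2, kind=kind(1.0d0))
end function cmul_3m
```

Whether this actually pays off depends on the relative cost of multiplications and additions on the target hardware, and on the accuracy trade-offs Higham discusses (the imaginary part computed this way can suffer cancellation).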
I have never found it helpful to generate an algorithm or to code based on what I think the compiler might be doing under full optimization. It has always proved necessary to generate reasonable tests, do some timings, and use VTune to see what is computationally expensive. But some things can be anticipated... in the code sample you provided, it surely must be the case that asking for a complex arctan and exponentiation is considerably more expensive than the simple real equivalent. But I have found that only testing with code compiled under full optimization shows whether the difference in execution time is actually important.
The SIAM book you reference does contain some very helpful algorithms, but (again) I have found surprises in actual code performance. The 25% reduction in flops you pointed out could be completely swamped by memory access times, cache flushing, poor array layout, and other problems. Threading adds another level of uncertainty to this problem.
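Along the lines of the timing advice above, here is a minimal, self-contained sketch of how one might compare the two evaluations (the loop count, the 1d-3 imaginary offset, and the running sums are arbitrary choices of mine, there mainly to keep the compiler from optimising the loops away):

```fortran
program time_real_vs_complex
  implicit none
  integer, parameter :: n = 10000000
  integer :: i, t0, t1, rate
  double precision :: xr, sr
  double complex :: xc, sc

  call system_clock(count_rate=rate)

  ! Real evaluation of atan(x)/(1+exp(-x**2))
  sr = 0.0d0
  call system_clock(t0)
  do i = 1, n
     xr = dble(i) / dble(n)
     sr = sr + atan(xr) / (1.0d0 + exp(-xr**2))
  end do
  call system_clock(t1)
  print *, 'real:    ', dble(t1 - t0) / dble(rate), ' s  (sum =', sr, ')'

  ! Complex evaluation of the same expression, with a small imaginary offset
  sc = (0.0d0, 0.0d0)
  call system_clock(t0)
  do i = 1, n
     xc = cmplx(dble(i) / dble(n), 1.0d-3, kind=kind(1.0d0))
     sc = sc + atan(xc) / (1.0d0 + exp(-xc**2))
  end do
  call system_clock(t1)
  print *, 'complex: ', dble(t1 - t0) / dble(rate), ' s  (sum =', sc, ')'
end program time_real_vs_complex
```

As the replies above note, the interesting number is the ratio under full optimization (e.g. -O2 or -O3), and it can change noticeably once the compiler vectorises one loop but not the other.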