Is there, in general, any performance penalty when using double precision instead of single precision? I thought I had seen some documentation that double precision calculations might actually be faster, but now I can't find it.
Thanks,
Andy
3 Replies
The answer is very much dependent on the hardware platform. Some platforms have to "emulate" double precision, although no recent ones do. Other platforms prefer dealing with 64-bit entities rather than 32-bit ones, and on those systems computations are actually optimized for double-precision work. Of course, if you are storing a great number of floating-point values, the memory requirement doubles; how that affects performance is very much application dependent.
James
I was interested primarily in IA-32 systems.
If you are using SSE code (IFL -QxK and the like), double-precision divide and sqrt() take more cycles than single precision. Other than that, the main difference comes from cache and (if you read or write large files) disk buffering. The difference varies from negligible to large depending on your application. I assume you aren't using the free Windows compilers, where you must go out of your way to get aligned storage of doubles, and that you heed your compiler's warnings against forcing unaligned storage.
