A quick general question. Does it provide any benefit (other than some memory savings) to tailor your integer size to fit your data range instead of just assuming INTEGER*4 for all integers? I have numerous integer variables which are used as flags and have values between -10 and 10. In this case, INTEGER*1 would be the best fit. Is there an optimum integer size for maximum performance with the Intel compiler? In other words, does using a small integer size impact performance in any way? Thanks...
No, assuming that the integer size's range is sufficient. For example, if you are using a variable to keep track of indexes or sizes of an object whose size can exceed 2GB, then you'll want it to be the larger size (like C's size_t).
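A minimal sketch of the point above (my illustration, not code from the thread): when an object can hold more elements than the INTEGER(4) range allows, the size and index variables need a wider kind such as INTEGER(8), playing the role C's size_t plays.

```fortran
program index_kinds
  implicit none
  integer(4) :: i4
  integer(8) :: n8
  ! HUGE depends only on the kind, so uninitialized variables are fine here.
  print *, 'largest INTEGER(4):', huge(i4)   ! 2147483647, roughly 2 G elements
  print *, 'largest INTEGER(8):', huge(n8)   ! 9223372036854775807
  ! A 3-billion-element count overflows INTEGER(4) but fits INTEGER(8):
  n8 = 3000000000_8
  print *, 'element count stored safely in INTEGER(8):', n8
end program index_kinds
```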
Thanks Steve. Just to be sure I understand: there is no performance penalty in using an integer size smaller than INTEGER*4.
Do most people bother to tailor the size of their integer variables?
Smaller than INTEGER(4)? No, don't do that - there is a performance penalty. I thought you were asking about using INTEGER(8) on a 64-bit platform. Use INTEGER(4) unless you require a different size.
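A short sketch of that advice, using hypothetical flag names of my own (not from the thread): the flags with values between -10 and 10 stay as default INTEGER(4); INTEGER(1) would save a few bytes per scalar, but the reply above notes it carries a performance penalty, so it is generally only worth considering for very large arrays where memory matters.

```fortran
program flag_kinds
  implicit none
  integer    :: flag_mode, flag_state   ! default INTEGER(4), the recommended choice
  integer(1) :: packed_flag             ! narrower kind; use only when memory dominates
  flag_mode  = -10
  flag_state = 10
  packed_flag = 1_1
  print *, flag_mode, flag_state, int(packed_flag)
end program flag_kinds
```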