Hello All,
We upgraded from the Intel compiler XE 2011 to the most recent version, 2020, and noticed a change that is blocking the upgrade.
We've developed our own specific memory allocator, so we overload the new and delete operators in our classes. My question is about how the allocation size passed to the new[] operator is computed when allocating an array of objects whose class has a destructor. For example:
#include <cstddef>
#include <new>

class Foo {
private:
    int a_ = { 0 };
public:
    Foo(int a = 0) : a_(a) { }
    ~Foo() {
        // ... do something
    }
    void* operator new[](std::size_t count) {
        // Allocate memory for the array; 'count' is the size requested by the compiler
        void* ptr = ::operator new[](count);
        return ptr;
    }
};

int main() {
    Foo* pFoo = new Foo[1];
    delete[] pFoo;
}
If the class does not have a destructor, the allocation size passed to the new[] operator, count, is straightforward: it's the number of allocated objects multiplied by the size of the class, i.e. N x sizeof(Foo).
If the Foo class defines a destructor, the allocation is N x sizeof(Foo) + offset. I believe this offset is a cookie where the compiler stores the information it needs to destroy the array (the element count). With Intel 2011, when compiling on Windows 64-bit, this offset was always 8 bytes. But with the Intel 2020 compiler it varies: if the size of the class is smaller than 8 bytes, the offset is 4 bytes; otherwise it is 8 bytes, as before.
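To be concrete, this is roughly how I measure the offset (a simplified sketch; the class name and sizes here are just illustrative, the real code is in the attached testcase):

#include <cstddef>
#include <cstdio>
#include <cstdlib>

static std::size_t g_requested = 0;   // size passed to operator new[]

struct WithDtor {
    char c = 0;                        // sizeof(WithDtor) == 1, smaller than 8 bytes
    ~WithDtor() { }
    void* operator new[](std::size_t count) {
        g_requested = count;           // record the size the compiler asked for
        return std::malloc(count);
    }
    void operator delete[](void* ptr) { std::free(ptr); }
};

int main() {
    const std::size_t n = 4;
    WithDtor* p = new WithDtor[n];
    // offset = total requested size minus the space for the objects themselves
    std::printf("offset = %zu bytes\n", g_requested - n * sizeof(WithDtor));
    delete[] p;
}

With the 2011 compiler this prints 8, while the 2020 compiler prints 4, since the class is smaller than 8 bytes.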
Is there an explanation for this offset? And can it be changed so that the offset is constant, as before?
I tried with Visual Studio 2019, and the offset is always 8 bytes.
I include a small testcase that shows the behavior.
Thanks for any info on that subject,
Brendan
Hi Brendan,
Thanks for reaching out to us!
We understand the issue you are facing and are working on your code.
We will discuss this with the relevant engineering team and get back to you.
Regards
Goutham
I don't have Boost installed on my system. Can you provide another testcase that is independent of Boost? Also, what results did you get with the Microsoft compiler?
Hello,
I've uploaded an updated testcase which does not use Boost.
In the uploaded zip file I've included the output from the Intel compiler and the output from Visual Studio 2019. Note that when allocating an array of objects with VS2019, the 'alloc size' is always equal to N x sizeof(BooT or FooT) + 8. With the Intel compiler, it's N x sizeof(BooT or FooT) + 4 if sizeof(BooT or FooT) is smaller than 8 bytes, and N x sizeof(BooT or FooT) + 8 if it is equal to or greater than 8 bytes.
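In essence, the testcase does something like the following (a simplified sketch; the BooT/FooT definitions below are just stand-ins for the real classes in the zip, only their sizes matter):

#include <cstddef>
#include <cstdio>
#include <cstdlib>

static std::size_t g_requested = 0;    // size passed to the last operator new[] call

struct BooT {                          // sizeof(BooT) == 4, below 8 bytes
    int a = 0;
    ~BooT() { }
    void* operator new[](std::size_t count) { g_requested = count; return std::malloc(count); }
    void  operator delete[](void* p) { std::free(p); }
};

struct FooT {                          // sizeof(FooT) == 8, at least 8 bytes
    double d = 0.0;
    ~FooT() { }
    void* operator new[](std::size_t count) { g_requested = count; return std::malloc(count); }
    void  operator delete[](void* p) { std::free(p); }
};

int main() {
    const std::size_t n = 3;

    BooT* b = new BooT[n];
    std::printf("BooT: alloc size = %zu, N*sizeof = %zu, offset = %zu\n",
                g_requested, n * sizeof(BooT), g_requested - n * sizeof(BooT));
    delete[] b;

    FooT* f = new FooT[n];
    std::printf("FooT: alloc size = %zu, N*sizeof = %zu, offset = %zu\n",
                g_requested, n * sizeof(FooT), g_requested - n * sizeof(FooT));
    delete[] f;
}

With VS2019 both offsets come out as 8; with the Intel 2020 compiler the BooT offset comes out as 4.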
Do you know why the offset varies between 4 and 8 bytes? Is this related to some memory-alignment option? And is there a compiler option to configure this behavior?
Thanks for your help
Thanks for the testcase. I'll look into it.
Hi,
Closing this thread as per your request, as we have resolved your issue through priority support.
Thanks & Regards
Goutham
