Intel® oneAPI Threading Building Blocks

scalable_malloc fails to allocate memory while plenty of memory is available.

wzpstbb
Beginner
1,416 Views
Hi,

We are using tbbmalloc to manage our system memory: we use scalable_malloc/scalable_free to allocate/deallocate system memory. Everything worked fine until we ran into the case below:
1. Keep allocating 1 MB textures using DX10 until the allocation fails. Note that this also consumes some system memory.
2. Release all the textures allocated in step #1.
3. Create some objects on the heap. scalable_malloc returns NULL, although I believe there is plenty of system memory available at this point. We tried replacing scalable_malloc with malloc, and the memory could then be allocated, as in the sketch below.
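
For reference, here is a minimal sketch of what step #3 boils down to on our side (the object size and the printf diagnostics are just illustrative):

#include <cstdio>
#include <cstdlib>
#include <tbb/scalable_allocator.h>   // scalable_malloc / scalable_free

int main()
{
    const size_t size = 4 * 1024 * 1024;      // illustrative object size

    void* p = scalable_malloc(size);          // returns NULL in our step #3
    if (!p) {
        std::printf("scalable_malloc failed for %zu bytes\n", size);
        p = std::malloc(size);                // plain malloc succeeds here
        if (p) {
            std::printf("malloc succeeded for the same size\n");
            std::free(p);
        }
        return 1;
    }
    scalable_free(p);
    return 0;
}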

Does anyone have any idea why this happens?

Thanks,
Wallace
0 Kudos
25 Replies
SergeyKostrov
Valued Contributor II
303 Views
Quoting wzpstbb
Thank you for all the answers.

One reason we decided to override the global new/delete is that we want to handle all the low-memory/out-of-memory situations ourselves.

[SergeyK] Why don't you use the 'set_new_handler' function to set an error-handling function for
cases when the 'new' operator fails?
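
[SergeyK] A minimal sketch of that approach (the handler name and the message are just examples):

#include <cstdio>
#include <cstdlib>
#include <new>        // std::set_new_handler

// Called by the C++ runtime whenever operator new cannot satisfy a request.
// The handler must free some memory or terminate; otherwise new retries forever.
void OutOfMemoryHandler()
{
    std::fputs("operator new failed: release caches or shut down cleanly\n", stderr);
    // ...release caches, page resources out to disk, etc....
    std::abort();     // illustrative: give up if nothing can be released
}

int main()
{
    std::set_new_handler(OutOfMemoryHandler);
    // ...rest of the application...
}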

In addition, dropping the overridden global new/delete operators would have a big impact on our clients. We will update to TBB 4.0 in our next release.

Wallace

0 Kudos
wzpstbb
Beginner
303 Views

We want to act on both low-memory and out-of-memory situations. Basically, we would page some resources out to disk. The paging strategy is less aggressive in a low-memory situation than in an out-of-memory situation.
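
Roughly, the idea is something like the sketch below; the threshold and the helper names (AvailableSystemMemory, PageOutResources) are made up for illustration:

#include <cstddef>
#include <cstdlib>
#include <new>

// Hypothetical hooks into our resource manager (stubbed out here).
std::size_t AvailableSystemMemory() { return 512u * 1024u * 1024u; }
void PageOutResources(bool /*aggressive*/) { /* write some resources to disk */ }

void* operator new(std::size_t size)
{
    const std::size_t kLowWaterMark = 256u * 1024u * 1024u;   // illustrative threshold

    if (AvailableSystemMemory() < kLowWaterMark)
        PageOutResources(false);      // low memory: page out gently

    void* p = std::malloc(size);
    if (!p) {
        PageOutResources(true);       // out of memory: page out aggressively
        p = std::malloc(size);
    }
    if (!p)
        throw std::bad_alloc();       // nothing left to release
    return p;
}

void operator delete(void* p) noexcept { std::free(p); }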

Thanks,
Wallace

0 Kudos
RafSchietekat
Valued Contributor III
303 Views
Handling out-of-memory situations? Sounds ambitious. You would have to rearrange things without being able to make any other dynamic allocation, not even an implicit one. Better get your low-memory handling right to steer clear of this situation!

How about digging into the source code and redirecting TBB's attempts to allocate the big chunks of memory from which it serves its own clients? You'll need to be able and willing to adapt to changes in the implementation, of course, because it won't be supported. Allocate several more chunks than needed as a buffer; put them into use when your own code cannot get more memory, triggering low-memory handling while still serving the request, and beyond that also serve the request but initiate a clean shutdown instead. Does that make sense?
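
In rough outline, the reserve part might look like the following sketch (chunk count and size are placeholders, and the redirection of TBB's internal big-chunk allocations is not shown):

#include <cstddef>
#include <cstdlib>
#include <vector>

// A small emergency reserve: spare chunks handed back to the system one at a
// time when normal allocation starts failing. Sizes and counts are placeholders.
class MemoryReserve {
    std::vector<void*> chunks_;
public:
    MemoryReserve(std::size_t count, std::size_t bytes) {
        for (std::size_t i = 0; i < count; ++i)
            if (void* p = std::malloc(bytes))
                chunks_.push_back(p);
    }
    // Release one chunk; returns false when the reserve is exhausted,
    // which is the point to begin a clean shutdown.
    bool releaseOne() {
        if (chunks_.empty())
            return false;
        std::free(chunks_.back());
        chunks_.pop_back();
        return true;
    }
    ~MemoryReserve() { for (void* p : chunks_) std::free(p); }
};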
0 Kudos
jimdempseyatthecove
Honored Contributor III
303 Views
I agree with Raf "Better get your low-memory handling right to steer clear of this situation!"

As soon as you enter the operational realm of low-memory/out-of-memory (oom) situations, most corrective measures are short-lived; i.e., you aggravate the situation and run out of memory sooner.

Placing memory in reserve might buy you some time but also means you reach oom sooner.
Under some circumstances, the reserve might be a good option.

As an example: I have a simulation program that I use for simulating Space Elevators. Simulation runs can take weeks. While I do not experience oom, should I encounter oom two weeks into a run, I'd be rather upset. I do experience crashes (the simulation blows up due to infinities, etc.). To combat the crash, the program periodically checkpoints itself. Should a crash occur, I can restart from the checkpoint, then resume running while monitoring the model to find out what caused the crash (e.g. too large an integration step size at a critical point in the simulation).

Well, in your case, combining this with Raf's suggestion (clean shutdown): when you reach oom with reserve memory still available, release the reserve memory and set a flag to indicate "make a checkpoint and restart as soon as possible".

Issues:

You will have to write code for checkpoint and restart (assuming you have not done this already).
The additional code may aggravate your oom situation.

You will have to run some stress tests to determine the working size of the reserve memory block.
The additional memory reserved may aggravate your oom situation.

An alternative is to rework your code such that it will not reach oom (for all permitted initial conditions).

Before you take these corrective measures, you might consider reworking your code such that it is stack-conservative, i.e. allocate large-ish objects from the heap as opposed to from the stack. If your current program has stack requirements of tens of MB, you will likely find that those tens of MB are actually used by only one thread, or at least fewer than all threads, at the same time. This may be an untapped reservoir of memory. (Remember to reduce the linker's stack reserve and/or commit values.)
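
For instance (the buffer size is arbitrary):

#include <vector>

void process()
{
    // Instead of a large per-thread stack buffer:
    //   double scratch[4 * 1024 * 1024];           // ~32 MB of stack per thread
    // allocate it from the heap, so only the threads that actually need it pay for it:
    std::vector<double> scratch(4 * 1024 * 1024);   // ~32 MB of heap, freed on return
    // ...use scratch...
}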

Jim Dempsey
0 Kudos
SergeyKostrov
Valued Contributor II
303 Views
Handling out-of-memory situations?...


That's a common problem on 32-bit Windows platforms. It is impossible to allocate more than 2 GB
of memory for an application. But data sets are growing in size, and more memory is needed for processing!

Note: It is assumed that the 32-bit Windows platform doesn't support AWE (Address Windowing Extensions).

0 Kudos