Hello,
I'm trying to learn TBB, and I wrote a pilot to better understand how the task scheduler works. What I'm doing is probably unclever, but I don't see why it should be illegal: I wrote a kind of scheduler that just redirects requests to execute a function to the TBB scheduler, by spawning a new root task for each function submitted for execution. Here is the code:
void scheduler::submit(process *p) {
    ::tbb::task::spawn_root_and_wait( *new(::tbb::task::allocate_root()) task_internal(p) );
}
where task_internal inherits from ::tbb::task and just invokes process::go(), which is the contract my scheduler uses to execute user processes.
Then I create a process that runs the following code in its go() method:
if (! /* recursion termination condition here */) {
    process* tmp = new process(...);
    m_sched->submit(tmp);
}
The problem:
When run, the recursion executes for about 23,300 iterations (a different number each time; if more unrelated tasks are spawned beforehand, fewer iterations complete) and then crashes with a segfault. Whenever the code is modified, the crash occurs in a different place, e.g.:
0x00433e81 in checkInitialization () at ../../src/tbbmalloc/MemoryAllocator.cpp:1794, or
0x00436101 in rml::internal::RecursiveMallocCallProtector::sameThreadActive () at ../../src/tbbmalloc/MemoryAllocator.cpp:165
Thanks for help,
Daniel
1 Solution
For TBB worker threads, default stack size is 2 or 4 megabytes.
The output you see is because pthread_attr_setstacksize did not like 1024 as the argument. The way we report it should probably be revised.
11 Replies
Does task_internal release its constructor argument?
Not sure what you mean; the code of task_internal is as follows:
[cpp]class task_internal : public ::tbb::task {
public:
    ::tbb::task* execute() {
        m_process->go();
        return (task*)0;
    }
    task_internal(process* p) : m_process(p) {}
    virtual ~task_internal() {
        if (m_process) delete m_process;
        m_process = 0;
    }
private:
    process* m_process;
};[/cpp]
The same behaviour is observed, however, when the destructor is left blank.
So m_process isn't getting leaked...
I don't know, I would check the recursion depth (stacks are of limited size), and perhaps use a child task instead, but maybe somebody else has a better idea?
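To illustrate the child-task idea, here is a minimal sketch (untested; process is the class from the original post, and process::next() is a hypothetical accessor returning the follow-up process, or NULL if there is none):
[cpp]#include "tbb/task.h"

class task_internal : public ::tbb::task {
public:
    explicit task_internal(process* p) : m_process(p) {}
    virtual ~task_internal() { delete m_process; }

    ::tbb::task* execute() {
        m_process->go();
        // Instead of submitting a new root task, run the follow-up step as a
        // child of the current task, so everything stays in one task tree.
        if (process* next = m_process->next()) {   // hypothetical accessor
            set_ref_count(2);                      // one child + the wait itself
            task_internal& c = *new(allocate_child()) task_internal(next);
            spawn_and_wait_for_all(c);             // still one stack frame per level
        }
        return 0;
    }

private:
    process* m_process;
};[/cpp]
Note that spawn_and_wait_for_all still nests a stack frame per level, so for a very long chain this only changes the structure, not the depth; a loop or a continuation-style task would be needed to keep the stack flat.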
I guess the problem is with the stack size. Is there any debugging feature in TBB to detect stack explosion when spawning threads?
I tried to set the thread_stack_size parameter of task_scheduler_init. For any positive value, I'm getting the following run-time error:
thread_monitor Invalid argument
What's wrong?
Thanks for your answer
Hmm, a good algorithm shouldn't exhaust stack space, but I wouldn't know how to monitor that from within the program itself. (I presume you mean spawning tasks, not threads.)
I don't know what the error is about, but you might try to make sure to have an explicit task_scheduler_init constructed early in main() or so with the value you want, before TBB comes up with whatever it likes, because what the first task_scheduler_init says goes, I think.
But at this point you should really start thinking about redesigning the code.
I don't have "my code" yet. I'm just studying TBB prior to introducing it into an existing codebase. To be on the safe side, I am writing a few very simple pilots to make sure I understand the contract well. The pilot, as I wrote in the first post, in essence recursively spawns root tasks, each doing some dummy work (2^23 flops). I thought this is a very basic thing that should be legal, according to the reference.
After spawning ~23,300 tasks a segfault occurs as described in the first post. I then suspected the cause is stack explosion, but was not sure of it, and thus I wonder whether there is a way for TBB to fail gracefully under stack exhaustion.
Regarding the scheduler initialization, I'm trying to initialize it on the first line of main. I've written a pilot to test it independently:
main.cpp:
[cpp]#include "tbb/task_scheduler_init.h"

int main(int argc, char** argv) {
    ::tbb::task_scheduler_init init(::tbb::task_scheduler_init::default_num_threads(), 1024);
    return 0;
}[/cpp]
With the environment set by tbbvars.sh, it is built with: g++ main.cpp -ltbb_debug -o init
The output then is:
thread_monitor Invalid argument
Have you tried a realistic number, like a megabyte or so?
But unbounded recursion is a recipe for failure, whatever the stack size.
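For reference, plugging a realistic value into the same pilot would look like this (a minimal sketch, identical to the code above except for the stack size):
[cpp]#include "tbb/task_scheduler_init.h"

int main(int argc, char** argv) {
    // 1 MB per worker-thread stack instead of 1024 bytes
    ::tbb::task_scheduler_init init(::tbb::task_scheduler_init::default_num_threads(),
                                    1024 * 1024);
    return 0;
}[/cpp]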
OK, a megabyte did the job. What is the default value, by the way?
For TBB worker threads, default stack size is 2 or 4 megabytes.
The output you see is because pthread_attr_setstacksize did not like 1024 as the argument. The way we report it should probably be revised.
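As a side note, the "Invalid argument" text can be reproduced outside TBB; here is a minimal sketch (plain pthreads, not TBB code) of the underlying check:
[cpp]#include <pthread.h>
#include <limits.h>   // PTHREAD_STACK_MIN
#include <errno.h>
#include <stdio.h>

int main() {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    // POSIX allows pthread_attr_setstacksize to fail with EINVAL when the
    // requested size is below PTHREAD_STACK_MIN (typically 16 KB on Linux),
    // which is why 1024 is rejected with "Invalid argument".
    int rc = pthread_attr_setstacksize(&attr, 1024);
    printf("rc = %d (EINVAL = %d), PTHREAD_STACK_MIN = %ld\n",
           rc, EINVAL, (long)PTHREAD_STACK_MIN);
    pthread_attr_destroy(&attr);
    return 0;
}[/cpp]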
For whoever it may interest: the original problem indeed seems to be caused by stack explosion, as the observed number of iterations before the segfault scales with the stack_size parameter passed to the scheduler init.
EDIT:
Sorry to say, it appears that is probably not the cause. I ran the program a few more times, and it inevitably segfaults at ~24,235 iterations, no matter what the stack_size argument is.
Without knowing more, I'm out of ideas. But, except perhaps as an academic exercise to find out what happened, do restructure that program.