Hi!
Are there any restrictions on creating tasks, such as a maximum number of tasks?
1 Solution
Performance will suffer if you don't practice "recursive parallelism", where every task generates only (roughly) the number of tasks necessary to create "parallel slack" that allows the scheduler to efficiently divide the work for execution with efficient use of the local cache (both aspects are important). Otherwise, feel free to fill physical RAM and even your swap space with tasks before you even start executing them.
5 Replies
In following up on Raf's comment...
Assume you have a large input dataset in a file that can be divided up into a list of tasks. You could program (pseudo code)
while read chunk
parallel task chunk
end while
The above could conceivably enqueue as many tasks as there are chunks in your data set (chunk == task-sized hunk of data).
More importantly, it is conceivable (though not probable) that the read loop could complete before the first enqueued task completes.
In other words, memory fills (or overflows) with tasks.
A better strategy in the above situation is to use parallel_pipeline or a similar approach, where the read loop (the input stage of the pipeline) is dependent upon the availability of a buffer (token). Using this method limits the number of pending tasks to a good (efficient) working set.
Jim Dempsey
The answer can be given at different levels of detail, of course, and maybe even my warning against the use of large numbers of simultaneous tasks was superfluous. It is always nice when a motivation is given together with a question, so that we don't have to guess what level of detail would be useful, etc.
More technically, I am not aware of any relevant artificial restriction, either at any one time, or, I might add, over the lifetime of a program. I remember a problem with an algorithm whose implementation was susceptible to a wraparound problem, but that was above the task level and has since been remedied.
Thanks, guys!
Hi everybody,
Daniil,
Raf and Jim already answered your question. I'd like to provide some sources. Unfortunately, I don't have a stress test for the 'tbb_task' class, but I've enclosed a 'tbbthreadtest.txt' file with a stress test for the 'tbb_thread' class. Take a look if interested.
Best regards,
Sergey