Intel® oneAPI Threading Building Blocks
Ask questions and share information about adding parallelism to your applications when using this threading library.

Being concurrent but not in parallel

smallb
Beginner
Guys, I know that I can create tasks and the tbb scheduler will spread them over the available processors and try to run them in parallel. But what if I want to use just one core (or only n cores)? Can I somehow configure the task scheduler to run all those tasks (concurrently) on just n of the same cores and not use the other cores?
ahelwer
New Contributor I
If you don't care which cores are used, construct task_scheduler_init with an integer argument specifying the thread pool size. You may also limit the number of tokens when using parallel_pipeline. Certain flow graph nodes also have concurrency limits you can set.
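
For example, here is a minimal sketch of that first option, assuming the classic task_scheduler_init API this answer refers to (newer oneTBB versions replace it with tbb::global_control, but the idea is the same):

#include "tbb/task_scheduler_init.h"
#include "tbb/parallel_for.h"

int main() {
    // Limit the worker pool to 2 threads: the tasks still run concurrently,
    // but tbb will never use more than 2 OS threads to execute them.
    tbb::task_scheduler_init init(2);

    tbb::parallel_for(0, 100, [](int i) {
        // ... work item i ...
    });
    return 0;
}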

If you want the tasks to be scheduled on a specific subset of the cores, that is only somewhat more difficult. You must use something called task_scheduler_observer. Its virtual function on_scheduler_entry() is called whenever tbb appropriates a thread for use with its scheduler. By writing a class that implements on_scheduler_entry() (and also on_scheduler_exit(), which is called when tbb releases the thread), you can set the affinity of that thread to any subset of the processor cores you like, using the normal OS API calls. You should also call task_scheduler_init with the size of the subset of cores you are using, so that oversubscription does not take place (on some operating systems). For an example using task_scheduler_observer, see here: http://software.intel.com/en-us/blogs/2008/05/19/under-the-hood-building-hooks-to-explore-tbb-task-scheduler/
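
A rough sketch of such an observer, assuming a Linux host (pthread_setaffinity_np) and an illustrative core set of {0, 1}; adapt the affinity call to your OS:

#include <pthread.h>
#include <sched.h>
#include <initializer_list>
#include "tbb/task_scheduler_init.h"
#include "tbb/task_scheduler_observer.h"

// Pins every thread that enters the tbb scheduler to a fixed set of cores.
class pinning_observer : public tbb::task_scheduler_observer {
    cpu_set_t mask_;
public:
    explicit pinning_observer(std::initializer_list<int> cores) {
        CPU_ZERO(&mask_);
        for (int c : cores) CPU_SET(c, &mask_);
        observe(true);  // start receiving entry/exit callbacks
    }
    void on_scheduler_entry(bool /*is_worker*/) override {
        // Called when a thread joins the scheduler: restrict it to our cores.
        pthread_setaffinity_np(pthread_self(), sizeof(mask_), &mask_);
    }
    void on_scheduler_exit(bool /*is_worker*/) override {
        // A real implementation could restore the previous affinity here.
    }
};

int main() {
    tbb::task_scheduler_init init(2);  // match the pool size to the core subset
    pinning_observer pin{0, 1};        // pin scheduler threads to cores 0 and 1
    // ... parallel work ...
    return 0;
}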

Another (easier?) way of doing this is to restrict the process affinity to the cores you want to run on, and tbb will take care of the rest. For a discussion of this and other issues associated with mixing tbb and processor affinity, see here: http://software.intel.com/en-us/blogs/2010/12/28/tbb-30-and-processor-affinity/
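
As a sketch of that alternative (again assuming Linux; on other systems use the corresponding affinity API, or simply launch the program under a tool such as taskset):

#include <sched.h>
#include "tbb/parallel_for.h"

int main() {
    // Restrict the whole process to cores 0 and 1 before tbb creates its
    // worker threads; the workers then inherit this affinity mask.
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);
    CPU_SET(1, &mask);
    sched_setaffinity(0 /* this process */, sizeof(mask), &mask);

    tbb::parallel_for(0, 100, [](int i) {
        // ... work item i ...
    });
    return 0;
}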

If you want to have multiple tbb schedulers running within the same process, all using separate thread pools with different affinities (for instance, if you want your application to be NUMA aware)... that isn't something you can do just yet.
