This is a question about Tim Mattson's article entitled Nuts and Bolts of Multithreaded Programming.
Assuming you have two threads A & B, how do you tell one core to do A's processing, and the other B's?
We forwarded this question to the author, who provided the following information:
If you look at the major APIs for multithreading, you'll notice that they don't include a way to assign specific threads to processors. In the overwhelming majority of cases, doing so would be a bad thing. Why? Because the operating system manages all the threads running on the system and goes to great lengths to balance the load among them. If you start assigning threads to specific processors, you interfere with the OS scheduler, and that is usually not a good idea.
Note that in the overwhelming majority of cases, if your computer isn't busy with other CPU-intensive work, a pair of threads will tend to be assigned one to each of the two cores. Hence, you are almost assured of getting the behavior you requested in your question, i.e. one thread on each core.
By the way, the technical issues behind your question are a frequent topic of conversation among designers of multithreaded languages. The problem isn't usually with the initial distribution of threads on the system. The OS scheduler does a pretty good job of evenly spreading out the threads. The problem is with the caches. If I've gone to great lengths to fill my caches with the data my threads need and then the OS migrates those threads to improve the load balance, my performance could suffer due to all the extra cache misses. This will be even more important on systems with complex cache hierarchies (such as a multiple-socket system with multi-core processors in each socket). Hence, you may someday see changes in multithreading APIs to address this issue and somehow lock threads down to processors. This is a controversial topic, however, and it will take a while to work out how APIs need to change (if at all) to address this problem.