Does TBB follow the task farming (master/worker) paradigm?
If yes, then how? If not, can you name a parallel programming language that follows it, other than Cilk?
Thank you. I got it.
Can you suggest a parallel programming language that follows the task farming paradigm? i.e. a single-threaded master delegates big tasks to other processors.
I've already worked with ZPL which follows the SPMD (Single program multiple data) paradigm.
At the risk of continued criticism about not sounding inviting in my response to the "task farming paradigm," I will continue in my line of reasoning and insist that it sounds like we're talking about two different beasties here. The references I could find for "task farming" were pretty specific about its nature as a scheduling technique for distributing a collection of single-task invocations of a program over a server farm or cluster, each invocation running in isolation and not interfering with the others. In some sense, Google's MapReduce operates in this space, running the same query across a mass of machines, each processing different data. I have personal experience with the technique: at one point in my working life I studied particular algorithms running on a proposed architecture family by "net-batching" a set of simulations, all using the same input data but varying the parameters that controlled the architectural model; for all intents and purposes these were independent runs that fit squarely in the "task farming paradigm."
I also did a quick scan of ZPL. It looks like a superset of Pascal (Modula, maybe?) with added syntax to support data parallelism. Though you might be able to use this language to implement "task farming," it looks more closely aligned with cluster data-parallel programming techniques for describing large array problems intended to be divided across a cluster. The two code samples I saw were the Jacobi operator and matrix multiplication. ZPL relies on an inherent cluster communications fabric (MPI and RDMA are both mentioned), and at least for the Jacobi example, the interaction model is anything but "separate, independent tasks." Each task has to communicate with "neighbor" tasks at some array partitioning granularity in order to exchange the intermediate values of the relaxation method across the array, until the error is minimized to an acceptable epsilon. In ZPL this is all hidden in the implementation below the level of the language (though the new Grid construct seems to offer a window for controlling the underlying computational fabric).
Though MapReduce provides a form of task farming, it's not at a language level. If what you're looking for is a data-parallel language that works in a way like ZPL (the omega to the alpha of APL?), it looks like most of the work was done back in the 1990s, with such contributions as NESL, C-HELP, C// (a special-purpose language for the GFLOPS machine), etc. The advent of GPGPU revitalized the data-parallel world around the narrow-width vector processing characteristic of modern Graphics Processing Units, with languages as obscure as 8½ and the more mainstream Brook, RapidMind (which grew out of Sh, another GPGPU "language"), and others. Intel has its own entry in this burgeoning "data-parallel" market, Ct (which I've heard uses TBB under the hood). How's that for bringing the story full circle?