
## Will Dijkstra's method give good performance on Intel TBB?

Beginner
139 Views
Which one will give better performance: parallel_for, parallel_do, parallel_reduce, or concurrent_vector? Please suggest.
7 Replies
Employee
I guess you have to find that out yourself. It's not unlikely that somebody else has already experimented with parallelizing the algorithm; try finding their work and see what parallel patterns are used there.
Black Belt
As a good starting point for a model, I suggest looking at Arch Robison's n-Body code in the Blogs archive (see link at top of page).

With Dijkstra's method you should be able to partition the vertices and work the partitions in parallel, with thread-safe code to combine the results between partitions. As you read the n-Body example you will find that performance can vary greatly if you code to avoid interference between threads (eliminate locks where they are not needed).

This sounds like an assignment.

Jim Dempsey
Black Belt
Dijkstra's algorithm seems inherently serial, evaluating the nearest unvisited point at each iteration, so by the time you have a halfway decent parallel version it quite likely won't be anything like Dijkstra's algorithm anymore. Perhaps parallel_reduce might help to evaluate that nearest point in a large graph, but that seems pretty lame.
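To make the serial structure concrete, here is a plain (non-TBB) sketch of Dijkstra's algorithm over an adjacency matrix; the inner scan for the nearest unvisited vertex is the one step a parallel_reduce could obviously target, and everything else is a serial dependence chain. The function name and layout are just for illustration:

```cpp
#include <limits>
#include <vector>

const int INF = std::numeric_limits<int>::max();

// Serial Dijkstra over an adjacency matrix; INF marks "no edge".
std::vector<int> dijkstra(const std::vector<std::vector<int>>& w, int src) {
    int n = (int)w.size();
    std::vector<int> dist(n, INF);
    std::vector<bool> done(n, false);
    dist[src] = 0;
    for (int iter = 0; iter < n; ++iter) {
        // Reduction step: find the nearest unvisited vertex.
        // This scan is the candidate for tbb::parallel_reduce.
        int u = -1;
        for (int v = 0; v < n; ++v)
            if (!done[v] && (u == -1 || dist[v] < dist[u])) u = v;
        if (u == -1 || dist[u] == INF) break;
        done[u] = true;
        // Relax edges out of u (also a candidate for a parallel loop).
        for (int v = 0; v < n; ++v)
            if (w[u][v] != INF && dist[u] + w[u][v] < dist[v])
                dist[v] = dist[u] + w[u][v];
    }
    return dist;
}
```

Note that each iteration depends on the vertex selected by the previous one, which is exactly why the algorithm resists wholesale parallelization.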

Other than finding a solution somebody else has already provided (Alexey's suggestion is way too practical), I would first have a look at what A* says, and then try to devise something that doesn't depend on exact global aggregates. But I don't know if it will be better than Jim's intuition to first solve partitions, although I don't see how you could partition effectively without at the same time already solving the problem.
Black Belt

Look at the simplified case of N vertices and 2 threads.

Divide the N vertices in half and have each thread solve the solution for its half. There may be an advantage to making the cut isolate nearby vertices (e.g. down the middle). Once each half has been solved, each vertex need only make a single probe (once) to each of the vertices in the other zone. Meaning: pick one zone, say zone A, divide its vertices in two, and have each thread process half the vertices of zone A, computing distances to all vertices in zone B. All the other paths (chains) have already been determined (weights may need to be adjusted). Extend the method to an arbitrary number of threads.
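One way to sketch this zone idea, serial here for clarity (a real version would give each zone its own thread), is to solve each zone using only its internal edges, then repeatedly relax the cross-zone edges until nothing changes. Strictly speaking this makes the combine step an asynchronous Bellman-Ford-style repair rather than Dijkstra proper; all names (Edge, relax, zoned_sssp) are made up for illustration:

```cpp
#include <limits>
#include <vector>

const long INF = std::numeric_limits<long>::max() / 2;

struct Edge { int u, v; long w; };  // undirected edge with weight w

// Relax every edge in both directions; returns true if any distance improved.
bool relax(const std::vector<Edge>& edges, std::vector<long>& dist) {
    bool changed = false;
    for (const Edge& e : edges) {
        if (dist[e.u] + e.w < dist[e.v]) { dist[e.v] = dist[e.u] + e.w; changed = true; }
        if (dist[e.v] + e.w < dist[e.u]) { dist[e.u] = dist[e.v] + e.w; changed = true; }
    }
    return changed;
}

// Single-source shortest paths with the edge set split into two zones
// plus the cross-zone edges that connect them.
std::vector<long> zoned_sssp(int n, int src,
                             const std::vector<Edge>& zoneA,
                             const std::vector<Edge>& zoneB,
                             const std::vector<Edge>& cross) {
    std::vector<long> dist(n, INF);
    dist[src] = 0;
    bool changed = true;
    while (changed) {
        // In a threaded version these two fixpoint passes run concurrently.
        while (relax(zoneA, dist)) {}
        while (relax(zoneB, dist)) {}
        // Combine step: probe the cross-zone edges; repeat if anything moved.
        changed = relax(cross, dist);
    }
    return dist;
}
```

The point of the sketch is the shape of the work: long intra-zone passes that can run without synchronization, with a short cross-zone combine between them.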

Effectively what you are doing is constructing chains (hyper-chains) within each zone, one thread per zone. Once those chains are constructed in each zone, the remaining task is to combine chains between zones, not vertices, so the work should be reduced. Take care not to compare the same chain pair or sequence more than once. As you combine chains, you may need only to check chains whose ends are at or near the perimeter of the area against the chains at the perimeter of adjacent areas; e.g. look at chains of roads between Illinois and Missouri, not Illinois and Massachusetts.

Jim Dempsey

Black Belt
Wouldn't one connection between individually processed zones possibly mean that nearly all the work in one or more of the zones has to be thrown away? Wouldn't that be very likely unless those zones are determined only while traversing the graph? And how do you avoid finding the distance at each point in the whole graph just to find the shortest route between two points that may be very close together? I have another intuition involving scouts (a limited number of recyclable tasks) that partition the graph as they traverse it, but to find the right compromise between minimal work and minimal inter-scout communication does not seem trivial...
Black Belt
Raf,

A determination has to be made as to whether you are to solve

a) the shortest path between two specific points (specific solution)
b) the shortest path between all points and all other points (general solution)

When performing a sieve operation to find a small number of items in a vast population, most of the work is thrown away. Actually, the "work" is discarding non-matching items. The trick is in reducing the amount of work and/or in distributing the work amongst multiple worker threads. Sometimes a "trick" can be employed for fast elimination of impossible routes. Most GPS devices solve for a), but route tables are solutions for b).

In a topology map (e.g. cities and roads), one might assume that progressing in the direction between the two points would yield the shortest or quickest path. This is not always the case. Sometimes going in the direction away from your destination produces a shorter path. Examples are where roads must cross bridges, and sometimes you must go away from your destination in order to cross a bridge leading to the shortest path. In these situations, the bridges represent choke points (as would sparse roads across wilderness).

Jim
Black Belt
Only days ago I was looking at a list on my GPS device of restaurants presented by distance as the crow flies, and I could see the lights of the town where they were located from my higher-altitude vantage point, but I knew that they were all much farther away as the car drives than another result several pages down, which is a bit lame (why couldn't the device order them more meaningfully, perhaps as an option?).

Regardless of topology, though, it should be possible to find the best route from New York to Newark without having to consider Forks, Washington (of Twilight fame), and I don't see a way to guarantee that without making the partitioning part of the search, not even considering cache locality. (Heuristics are fine if you can accept a possibly suboptimal solution, and once you have a route in Euclidean space you can verify it by eliminating all points that are farther away in a direct line than the route already found, but perhaps that's not enough in a situation where you would want to resort to parallel searching.)
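That elimination idea is easy to sketch as a filter. Assuming edge weights are at least the straight-line distances between points, a vertex v can lie on a route between s and t shorter than a known route of length best only if euclid(s,v) + euclid(v,t) < best (a slightly stronger "ellipse" form of the same test). The names below (Pt, prune) are illustrative:

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

double euclid(Pt a, Pt b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Given a known route of length `best` between vertices s and t, keep only
// the vertices that could still lie on a shorter route: road distance can
// never beat the straight line, so euclid(s,v) + euclid(v,t) must be < best.
std::vector<int> prune(const std::vector<Pt>& pts, int s, int t, double best) {
    std::vector<int> keep;
    for (int v = 0; v < (int)pts.size(); ++v)
        if (v == s || v == t ||
            euclid(pts[s], pts[v]) + euclid(pts[v], pts[t]) < best)
            keep.push_back(v);
    return keep;
}
```

After pruning, a parallel search only needs to touch the surviving vertices, which is where the cache-locality point above comes in.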