Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Choosing an Interconnect

Vikram_C_Intel
Employee

When designing an HPC cluster, what price-to-performance considerations drive your decision-making process when choosing an interconnect? From all accounts, it appears InfiniBand has a compelling story in its favor.

TimP
Honored Contributor III
InfiniBand clearly has the potential to support effective performance on a larger cluster than gigabit Ethernet, but the support headaches haven't been taken care of yet. For many applications, up to 8 CPUs, GigE is sufficient. In an intermediate range, it's largely a matter of overall price to performance. If an InfiniBand cluster of 16 CPUs turns out to have the performance of a GigE cluster of 18 CPUs, and either will satisfy the requirement, there is a clear limit to the premium interconnect price and hassle which will be acceptable. InfiniBand also appears to have the potential of not costing more than other types of interconnect whose performance falls between GigE and IB, so it seems that reliability and support will have an important role in determining which types have a "compelling story."
ClayB
New Contributor I

Vikram -

Besides all the price-performance ratio issues, one must really consider the applications that are going to run on the cluster. If the applications are compute bound, the network becomes less of an issue since its effect on the execution time is minimal. Spending extra money on a network for applications that dispatch work with a single integer and collect a single value back as the result after 10 hours of computation would be a waste.

If, on the other hand, applications are continuously sharing data across the network, one would hope to have the fastest solution that is affordable and maintainable. Authors of such codes should also be looking into ways to improve their communication patterns. For example, don't send 100 four-byte messages when you can group them all together and send the whole set at once; and try to use collective communications when possible, since these can be better tuned to network topologies than point-to-point operations.
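A minimal sketch of both ideas in plain MPI C (the function and variable names here are mine, purely for illustration):

#include <mpi.h>

/* Instead of 100 separate 4-byte sends, pack the values into one buffer
   and ship them in a single message. */
void send_batched(int *values, int count, int dest, MPI_Comm comm)
{
    MPI_Send(values, count, MPI_INT, dest, 0, comm);
}

/* Better still, use a collective where the pattern allows it; the MPI
   library can pick an algorithm tuned to the network topology. */
void gather_results(int *my_value, int *all_values, int root, MPI_Comm comm)
{
    MPI_Gather(my_value, 1, MPI_INT, all_values, 1, MPI_INT, root, comm);
}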

--clay
Intel_C_Intel
Employee
As with everything in the HPC space, the answer is 'it depends on the application'.
For a very tightly coupled parallel app, GigE cannot compare in any way with any of the low-latency interconnects. Bandwidth would not likely be the issue. For a small cluster, a hacked-together raw communication method over RS-232 would likely outperform GigE, simply because of the TCP/IP overhead.
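To put a number on that, a rough ping-pong test along these lines (standard MPI C; the iteration count is arbitrary) is enough to show that small-message latency, not bandwidth, is where GigE falls behind. Run it with two ranks placed on different nodes and compare fabrics:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, i, iters = 1000;
    char byte = 0;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* ranks 0 and 1 bounce a 1-byte message back and forth */
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    /* half the round-trip time is the usual one-way latency figure */
    if (rank == 0)
        printf("one-way latency: %.1f us\n", (t1 - t0) * 1e6 / (2.0 * iters));

    MPI_Finalize();
    return 0;
}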
If you need low latency and high throughput, IB and dual-port Myrinet are both very attractive options at present. I personally am still recommending Myrinet because it is a known, proven solution. The IB driver stack is *HUGE*. It is still bleeding edge, and I would prefer to let some of the bugs get worked out. I do think IB will mature rapidly, though.