When designing an HPC cluster, what price-to-performance considerations drive your decision-making when choosing an interconnect? By all accounts, InfiniBand appears to have a compelling story in its favor.
Vikram -
Besides all the price-performance ratio issues, one must really consider the applications that will run on the cluster. If the applications are compute-bound, the network becomes less of an issue, since its effect on the execution time is minimal. Spending extra money on a network where the applications dispatch work with a single integer and collect a single value back as the result after 10 hours of computation would be a waste.
If, on the other hand, the applications are continuously sharing data across the network, one would want the fastest solution that is affordable and maintainable. Authors of such codes should also be looking for ways to improve their communication patterns. For example, don't send 100 four-byte messages when you can group them together and send the whole set at once, and use collective communications where possible, since these can be tuned to the network topology better than point-to-point calls can (see the sketch below).
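Here is a minimal MPI sketch in C of that aggregation idea; the buffer size and the way the values are generated are invented for the example, not taken from any particular code:

```c
/* Sketch: replace many small point-to-point sends with one collective.
 * NVALS and the value generation below are illustrative only. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NVALS 100   /* values each rank contributes */

int main(int argc, char **argv)
{
    int rank, size, i;
    int vals[NVALS];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (i = 0; i < NVALS; i++)
        vals[i] = rank * NVALS + i;   /* stand-in for computed results */

    /* Instead of 100 separate 4-byte MPI_Send calls per rank, ship the
     * whole buffer in one collective; the MPI library can map this onto
     * the network topology far better than hand-rolled point-to-point. */
    int *all = NULL;
    if (rank == 0)
        all = malloc((size_t)size * NVALS * sizeof(int));

    MPI_Gather(vals, NVALS, MPI_INT,
               all,  NVALS, MPI_INT,
               0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("root collected %d values in one collective\n", size * NVALS);
        free(all);
    }

    MPI_Finalize();
    return 0;
}
```

The same trade-off applies to the other collectives (MPI_Allreduce, MPI_Alltoall, and so on): one well-chosen collective call usually beats a storm of tiny messages, regardless of how fast the interconnect is.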