I need help deciding between the following two cluster designs for HPC purposes:
4x Nodes with 2x Intel Xeon E5-2450 2.20GHz, 20M Cache, 8.0GT/s QPI, Turbo, 8C, 95W
16-20x Nodes with Intel Xeon E3-1230v2 Processor 3.3GHz, 4C/8T, 8M Cache, 69W
I haven't decided on the interconnect yet, but I don't think I'll be able to afford 10GbE, so I'm wondering whether plain 1GbE will suffice and whether link aggregation would provide better performance.
Thanks
1 Reply
If you're trying to get by with 1GbE, the smaller number of more powerful nodes should work well on a wider variety of applications. 10GbE offers good bandwidth for very large messages, but typically doesn't improve latency the way InfiniBand does.
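If you want to quantify that before buying, run a point-to-point ping-pong between two nodes on whatever network you can borrow for testing: small messages expose latency, large ones expose bandwidth. Below is a minimal MPI sketch of the idea (the host names in the comment are placeholders; the OSU or Intel MPI micro-benchmark suites do the same thing more rigorously):

```c
/* pingpong.c - minimal MPI ping-pong to estimate point-to-point
 * latency (small messages) and bandwidth (large messages).
 * Build and run with one rank on each of two nodes, e.g.:
 *   mpicc pingpong.c -o pingpong
 *   mpirun -np 2 --host node1,node2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Sweep message sizes from 8 B (latency-bound) to 2 MiB
     * (bandwidth-bound). */
    for (int size = 8; size <= 2 * 1024 * 1024; size *= 8) {
        char *buf = malloc(size);
        /* Fewer iterations for large messages to keep runtime sane. */
        int iters = size < 65536 ? 1000 : 100;

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double elapsed = MPI_Wtime() - t0;

        if (rank == 0) {
            /* Half the round-trip time approximates one-way latency. */
            double one_way_us = elapsed / (2.0 * iters) * 1e6;
            double mb_per_s = (2.0 * iters * size) / elapsed / 1e6;
            printf("%8d bytes: %10.2f us one-way, %8.1f MB/s\n",
                   size, one_way_us, mb_per_s);
        }
        free(buf);
    }

    MPI_Finalize();
    return 0;
}
```

As a rough yardstick, 1GbE one-way latencies typically land in the tens of microseconds, while InfiniBand sits in the low single digits, which is why latency-sensitive codes are the ones that suffer most on GbE.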
I haven't heard of anyone trying multi-rail or the like with 1GbE, as the opportunities for it to be cost-effective look limited. The main use of dual-port adapters is to support a local network dedicated to the cluster, keeping traffic bound for outside the cluster on the other port.