I am running into an issue with the IMB benchmark (v2.3), specifically the Alltoall test. When the MPI message size increases above 4096 bytes, the nodes start to drop packets.
I am using:
Linux kernel 2.6.16.11
e1000 driver v7.0.41
IMB v2.3
The benchmark is being run on six dual-core nodes. The mpirun command line reads:
/opt/src/wsm/mpich/mpich-1.2.7p1/bin/mpirun -v -nolocal -np 12 -machinefile /opt/mpich/gnu/share/machines /home/wsm/mpich-wsm/IMB_2.3/src/IMB-MPI1 Alltoall -npmin 12
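For context, with this command line (12 ranks, which I take to be 2 ranks on each of the six nodes), the off-node traffic per Alltoall grows linearly with message size and can be estimated with some back-of-envelope arithmetic (the rank placement is my assumption, not something IMB reports):

```python
# Rough sketch of per-node NIC egress for one MPI_Alltoall exchange.
# Assumption: 12 ranks spread as 2 ranks per node across 6 nodes,
# matching the -np 12 / six-node setup above.

def per_node_egress(msgsize, np=12, ranks_per_node=2):
    """Bytes each node sends off-node during a single Alltoall."""
    off_node_peers = np - ranks_per_node  # peers that must be reached via the NIC
    return ranks_per_node * off_node_peers * msgsize

for size in (4096, 65536, 4 * 1024 * 1024):
    print(size, per_node_egress(size))
```

At the 4096-byte threshold each node is already pushing 20 messages per exchange out of a single GigE port, and IMB repeats each exchange many times, so the bursts are much larger than any one message.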
I have outfitted two of the nodes with Pro/1000-PT (82572GI) PCIe adapter cards, and those nodes no longer show any dropped packets even when the Alltoall message size increments all the way up to 4 MB.
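For anyone wanting to reproduce the comparison: I tally the drop-related NIC counters from `ethtool -S ethX` output. A quick sketch of the parsing (counter names vary between e1000 driver versions, so this just matches anything containing "drop" or "missed"; the sample text below is illustrative, not my actual output):

```python
import re

def drop_counters(ethtool_stats_text):
    """Return {counter_name: value} for drop/missed counters in `ethtool -S` output."""
    counters = {}
    for line in ethtool_stats_text.splitlines():
        m = re.match(r"\s*(\S+):\s*(\d+)\s*$", line)
        if m and ("drop" in m.group(1) or "missed" in m.group(1)):
            counters[m.group(1)] = int(m.group(2))
    return counters

# Illustrative sample, not real data from my nodes:
sample = """NIC statistics:
     rx_packets: 123456
     rx_dropped: 42
     rx_missed_errors: 7
"""
print(drop_counters(sample))
```

Comparing these counters before and after an IMB run makes it easy to see which nodes are dropping.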
I have raised this with Intel quad support as a hardware issue, but I wanted to post it in the HPC group in case anyone has a different insight.