Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Intel MPI Benchmarks Result Problem with PCI Passthrough via FDR InfiniBand

phonlawat_k_
Beginner

Hi everyone, I am still evaluating cluster performance. I have now moved on to virtualization with PCI passthrough of an FDR InfiniBand HCA on the KVM hypervisor.

My problem is that Sendrecv throughput drops by up to half compared with the bare-metal machines, running 1 rank per node (the launch command is sketched below the table). For example:

Nodes    Bare-metal (MB/s)    PCI-passthrough (MB/s)
   2             14,600                 13,000
   4             14,500                 12,000
  16             14,300                 11,000
  32             14,290                 10,000
  64             14,200                  7,100
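
Each run is launched with one MPI rank per node, roughly like this (shown with Open MPI-style options, which is what I run; the hostfile name, rank count, and IMB-MPI1 path are placeholders for my actual setup):

    # one MPI rank per node, IMB Sendrecv benchmark (paths and hostfile are placeholders)
    mpirun -np 64 -npernode 1 --hostfile hosts ./IMB-MPI1 Sendrecv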

What do you think about this behavior? Is it caused by the Mellanox software stack, virtualization overhead, or something else?

Thank you

Cartridge Carl

James_T_Intel
Moderator

I'll check with our developers and see if they have any ideas.

James_T_Intel
Moderator

What fabric are you using?  Can you attach output with I_MPI_DEBUG=5?
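
If the run is under the Intel MPI Library, something like the following should capture that output (rank count, hostfile, and benchmark binary here are placeholders):

    # Intel MPI (Hydra launcher): set the debug level and redirect output to a file
    mpirun -genv I_MPI_DEBUG 5 -n 2 -ppn 1 -f hosts ./IMB-MPI1 Sendrecv > imb_debug.txt 2>&1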

phonlawat_k_
Beginner

I use Open MPI, so I cannot attach output with I_MPI_DEBUG=5; that may also be part of the problem. Anyway, the HCA is a Mellanox ConnectX-3 FDR InfiniBand adapter (56 Gbps).
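
The closest thing I can try is Open MPI's own verbosity for the InfiniBand transport, roughly like this (the MCA settings are a sketch; hostfile, rank count, and binary path are placeholders):

    # Open MPI: select the openib BTL explicitly and raise BTL verbosity
    mpirun --mca btl openib,self --mca btl_base_verbose 100 -np 2 -npernode 1 --hostfile hosts ./IMB-MPI1 Sendrecv > ompi_debug.txt 2>&1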
