Intel® oneAPI HPC Toolkit
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Intel Micro Benchmark Result Problem with PCI Passthrough via FDR InfiniBand

phonlawat_k_
Beginner

Hi everyone, I am still evaluating cluster performance. For now, I have moved on to virtualization with PCI passthrough of an FDR InfiniBand HCA on a KVM hypervisor.

My problem is that Sendrecv throughput drops by up to half compared with the physical machines; I use 1 rank per node. For example (a sketch of the launch command follows the table):

Nodes    Bare-metal (MB/s)    PCI-passthrough (MB/s)
    2           14,600                 13,000
    4           14,500                 12,000
   16           14,300                 11,000
   32           14,290                 10,000
   64           14,200                  7,100
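For context, here is a minimal sketch of how a 1-rank-per-node IMB Sendrecv run like this is typically launched. The hostfile name, rank count, and benchmark path are assumptions, and the exact options depend on the MPI implementation (the example uses Open MPI syntax):

# Minimal sketch, assuming the Intel MPI Benchmarks binary IMB-MPI1 is already built
# and ./hosts lists one hostname per node; names and paths are placeholders.
# Launch 64 ranks mapped round-robin by node (1 rank per node) and run only Sendrecv.
mpirun -np 64 --hostfile ./hosts --map-by node ./IMB-MPI1 Sendrecv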

What do you think about this behavior? Is it caused by the Mellanox software stack, virtualization overhead, or something else?

Thank you

Cartridge Carl

James_T_Intel
Moderator

I'll check with our developers and see if they have any ideas.

James_T_Intel
Moderator

What fabric are you using? Can you attach output with I_MPI_DEBUG=5?
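For reference, a minimal sketch of how that debug output is usually collected with Intel MPI (the hostfile, rank count, and benchmark path are assumptions):

# Minimal sketch for Intel MPI: enable debug level 5 and capture the output to a file.
# ./hosts, the rank count, and ./IMB-MPI1 are placeholders.
mpirun -n 64 -ppn 1 -hostfile ./hosts -genv I_MPI_DEBUG 5 ./IMB-MPI1 Sendrecv 2>&1 | tee imb_debug.log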

phonlawat_k_
Beginner

I use Open MPI, so that is why I cannot attach output with I_MPI_DEBUG=5 (that variable is specific to Intel MPI). Anyway, I use Mellanox ConnectX-3 FDR InfiniBand HCAs (56 Gbps).
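In that case, a rough Open MPI equivalent of that request might look like the sketch below; the hostfile, rank count, and benchmark path are assumptions, and the verbose parameter that matters can differ across Open MPI versions and transports:

# Check from inside a guest that the passed-through ConnectX-3 HCA is visible at the expected FDR link rate.
ibstat
ibv_devinfo

# Re-run the benchmark with verbose transport-selection output from Open MPI.
# ./hosts, the rank count, and ./IMB-MPI1 are placeholders.
mpirun -np 64 --hostfile ./hosts --map-by node \
       --mca btl_base_verbose 100 ./IMB-MPI1 Sendrecv 2>&1 | tee ompi_debug.log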
