Beginner

Intel MPI Benchmarks (IMB) result problem with PCI passthrough via FDR InfiniBand

Hi everyone, I am still evaluating cluster performance. I have now moved on to virtualization, using PCI passthrough of an FDR InfiniBand HCA on a KVM hypervisor.

My problem is that Sendrecv throughput drops by up to half compared with the physical machines, running 1 rank per node. For example:

Nodes    Bare metal (MB/s)    PCI passthrough (MB/s)
  2          14,600               13,000
  4          14,500               12,000
 16          14,300               11,000
 32          14,290               10,000
 64          14,200                7,100

What do you think about this behavior? Is it related to the Mellanox software stack, virtualization overhead, or something else?
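For reference, the numbers above come from the bidirectional exchange pattern that the IMB Sendrecv benchmark measures. Here is a minimal C sketch of that pattern; the 4 MB message size and iteration count are illustrative assumptions, not the IMB defaults:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MSG_BYTES (4 * 1024 * 1024)  /* illustrative message size */
#define ITERS     100                /* illustrative iteration count */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank exchanges data with its neighbours in a periodic chain,
       which is the communication pattern used by IMB Sendrecv. */
    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;

    char *sendbuf = malloc(MSG_BYTES);
    char *recvbuf = malloc(MSG_BYTES);
    memset(sendbuf, 1, MSG_BYTES);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        MPI_Sendrecv(sendbuf, MSG_BYTES, MPI_BYTE, right, 0,
                     recvbuf, MSG_BYTES, MPI_BYTE, left, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        /* Each iteration moves MSG_BYTES in each direction, so count both. */
        double mb = 2.0 * MSG_BYTES * ITERS / 1e6;
        printf("Sendrecv throughput: %.1f MB/s\n", mb / (t1 - t0));
    }

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}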

Thank you

Cartridge Carl

Moderator

I'll check with our developers and see if they have any ideas.

Moderator

What fabric are you using? Can you attach the output with I_MPI_DEBUG=5 set?

Beginner

I use Open MPI, which is why I cannot attach output with I_MPI_DEBUG=5. As for the fabric, I am using Mellanox ConnectX-3 FDR InfiniBand HCAs (56 Gbps).
