Software Archive
Read-only legacy content

Can the Intel Phi communicate directly with a host NIC?

EJ1
Beginner

Hi -

With the Intel Phi, must all network traffic be brokered through the host? Or can one "assign" one of the NICs attached to the PCI Express bus and "give" it to the Intel Phi for it to use all by itself? Or must the host always act as a middleman for interrupts and packet delivery?

I don't (yet) have an Intel Phi to play with, and my attempts at googling for a clear answer have failed.

Thanks!

EJ

11 Replies
Loc_N_Intel
Employee

Hello,

I understand that the host acts as a middleman for interrupts and packet delivery. Thank you.

Vladimir_Dergachev

You would have to configure the host to act as the middleman.

However, at present the Linux kernel and system libraries running on the Phi are compiled with the MPSS-supplied gcc, which is severely crippled. I could not get network transfer performance better than ~20 MB/s anyway, so host limitations are not important.

Nick_W_
Beginner

Has this changed since the thread was first posted?

I thought I would ask because I found the article below and don't know whether it's incorrect or reflects a more recent development...

https://software.intel.com/en-us/articles/intel-xeon-phi-coprocessor-codename-knights-corner

" the coprocessors can also communicate through a network card such as InfiniBand or Ethernet, without any intervention from the host."

We have a requirement to read directly from a 10G NIC if possible. If that is not possible, would a mapped memory area on the host, which the NIC writes into, be the best alternative?

TaylorIoTKidd
New Contributor I

Hi Nick,

The answer depends upon what you are asking.

If you are asking whether card-to-host, card-to-network, and card-to-card TCP/IP connections are possible, then the answer is yes. The coprocessor behaves as if it were hanging directly off the network.

If you are asking whether all communication must still go through the host, the answer is that it does. This is transparent to applications communicating across the network, whether coprocessor to external host or coprocessor to coprocessor.
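
For example, nothing changes at the sockets level. An ordinary TCP client such as the sketch below reaches the card exactly as it would reach any other machine. The address 172.31.1.1 is the usual MPSS default for mic0 and is an assumption here; substitute your card's address.

/* Minimal sketch: plain BSD sockets, nothing Phi-specific. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in card = { 0 };
    card.sin_family = AF_INET;
    card.sin_port   = htons(22);              /* e.g. the card's sshd */
    inet_pton(AF_INET, "172.31.1.1", &card.sin_addr);

    if (connect(fd, (struct sockaddr *)&card, sizeof card) == 0)
        printf("connected to the coprocessor over ordinary TCP/IP\n");
    else
        perror("connect");

    close(fd);
    return 0;
}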

Regards
--
Taylor

jimdempseyatthecove
Honored Contributor III

EJ's concern is not transparency but throughput. After all, should you have an old 10BASE-T NIC lying around, you could "transparently" communicate over that too.

If the MIC card has PCIe (it does), and if the motherboard-mounted NIC is PCIe-accessible by the MIC (possibly/possibly not), .OR. if a PCIe NIC card is installed in the system, then the MIC "could" directly control the NIC, assuming it were possible to configure the MIC to handle interrupts from PCIe devices other than the host CPU. This would also require drivers on both systems to arbitrate ownership of the NIC.

Jim Dempsey

Nick_W_
Beginner

Hi Taylor, thanks for the reply.

We are going to use a Myricom 10G card with their Sniffer10G library. Could we naively suppose that this library can be compiled into our native Phi app?

Quote from the Myricom website: "Sniffer10G software uses a firmware extension and a user-level library to provide small-packet coalescing and an efficient zero-copy path to host memory (through OS bypass)."

If this is not an option, we could create a host application to either:

- accumulate a fixed-size block of input and then SCIF-DMA it to the Phi (latency is not as important to us as throughput); a host-side sketch of this follows the list

- use a SCIF mapped memory area as a circular input buffer
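
Something like the host-side sketch below is what we have in mind for the first option. It is only a sketch: the SCIF port number, the 2 MB block size, and the assumption that the card-side app has registered a window at offset 0 are all arbitrary choices of ours (build with -lscif).

#include <scif.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE (2 * 1024 * 1024)  /* arbitrary 2 MB, page-multiple  */
#define MIC_NODE   1                  /* host is SCIF node 0, mic0 is 1 */
#define DATA_PORT  2050               /* arbitrary, agreed with card app */

int main(void)
{
    scif_epd_t epd = scif_open();
    if (epd < 0) { perror("scif_open"); return 1; }

    struct scif_portID dst = { MIC_NODE, DATA_PORT };
    if (scif_connect(epd, &dst) < 0) { perror("scif_connect"); return 1; }

    /* The RMA source buffer must be page-aligned and registered. */
    void *buf = NULL;
    if (posix_memalign(&buf, 0x1000, BLOCK_SIZE) != 0) return 1;
    off_t loff = scif_register(epd, buf, BLOCK_SIZE, 0,
                               SCIF_PROT_READ | SCIF_PROT_WRITE, 0);
    if (loff < 0) { perror("scif_register"); return 1; }

    /* ... accumulate a block of captured packets into buf here ... */
    memset(buf, 0, BLOCK_SIZE);

    /* DMA the block into the window the card registered at offset 0;
       SCIF_RMA_SYNC makes the call block until the transfer is done. */
    if (scif_writeto(epd, loff, BLOCK_SIZE, 0, SCIF_RMA_SYNC) < 0)
        perror("scif_writeto");

    scif_close(epd);
    return 0;
}

A pipelined version would presumably post transfers without SCIF_RMA_SYNC and use scif_fence_signal() to track completion instead of blocking on each block.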

Frances_R_Intel
Employee

We have been talking to one of the MPSS developers, and the short answer to your question is: no, you can't take the simple way out, set up some kind of direct connection to the Ethernet card, and then use the Myricom sniffer library.

The closest you could come would be to set up something like what is done for the IB cards with CCL-direct: a proxy driver on the host, and an Ethernet driver on the coprocessor that knows how to talk to the proxy driver and can deal with the coalesced packets. The proxy driver would need to program the NIC and keep track of state (and probably still handle the interrupts), but then the DMA engine on the NIC should be able to read and write packets straight to the coprocessor's memory using P2P transactions. This is obviously a very simplified explanation, and implementing it would be a lot of work.

Setting up a host application to do SCIF DMA transfers from host to coprocessor should be much more straightforward.
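
For illustration, the card-side counterpart of such a host application might look roughly like the following. Again just a sketch: the port number, window size, and one-byte completion handshake are invented for the example.

#include <scif.h>
#include <stdio.h>
#include <stdlib.h>

#define BLOCK_SIZE (2 * 1024 * 1024)
#define DATA_PORT  2050

int main(void)
{
    scif_epd_t lep = scif_open();
    if (lep < 0 || scif_bind(lep, DATA_PORT) < 0 || scif_listen(lep, 1) < 0)
        return 1;

    struct scif_portID peer;
    scif_epd_t epd;
    if (scif_accept(lep, &peer, &epd, SCIF_ACCEPT_SYNC) < 0) return 1;

    /* Pin a page-aligned buffer and publish it at offset 0 of this
       endpoint's registered address space (SCIF_MAP_FIXED). */
    void *win = NULL;
    if (posix_memalign(&win, 0x1000, BLOCK_SIZE) != 0) return 1;
    if (scif_register(epd, win, BLOCK_SIZE, 0,
                      SCIF_PROT_READ | SCIF_PROT_WRITE, SCIF_MAP_FIXED) < 0) {
        perror("scif_register");
        return 1;
    }

    /* Invented handshake: the host sends one byte after each block it
       has DMAed into win; consume the block, then wait for the next. */
    char token;
    while (scif_recv(epd, &token, 1, SCIF_RECV_BLOCK) == 1) {
        /* ... process BLOCK_SIZE bytes in win ... */
    }

    scif_close(epd);
    scif_close(lep);
    return 0;
}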

EJ1
Beginner

Thanks for the details. Sounds like a fair amount of work.

Do you know if there are plans for the coprocessor card to have its own onboard NIC? Ideally, it would be 10GbE.

TaylorIoTKidd
New Contributor I
Accepted solution

I don't believe there are any plans in the KNC (current generation Xeon Phi) generation. The next generation (KNL), I can't comment on, but I can't prevent you from guessing.

Evan_P_Intel
Employee

Frances Roth (Intel) wrote:

set up a proxy driver on the host and an Ethernet driver on the coprocessor that knows how to talk to the proxy driver and can deal with the coalesced packets. The proxy driver would need to program the NIC and keep track of state (and probably still handle the interrupts), but then the DMA engine on the NIC should be able to read and write packets straight to the coprocessor's memory using P2P transactions.

I thought I'd describe the technical situation in a little more detail.

  • It is indeed possible to read and write hardware control registers on other PCI devices, provided that the host computer's I/O-MMU is disabled or has been configured to allow such accesses.
  • Similarly, if the NIC has its own DMA engine or similar, the I/O-MMU determines whether it can directly access Xeon Phi memory.
  • It is not possible to convince one PCI device (the NIC, in this case) to send its interrupts directly to another (the Xeon Phi), although of course a "proxy driver" on the host could forward them (at a cost to overall interrupt latency).
  • The Xeon Phi cannot directly obtain any information about the PCI bus; another job of the "proxy driver" would be to inform the NIC driver on the Xeon Phi of essential information, such as where the NIC's MMIO registers are mapped in the host's physical address space.
  • The performance of direct access (e.g. DMA) between one PCI device and another depends on a number of hardware details. As but one example, the size of certain hardware queues within the two devices and the platform may have been chosen to suit the most common use cases, but not a NIC and a Xeon Phi communicating directly.

Writing a Xeon Phi driver for directly controlling an external PCI NIC would therefore require writing two cooperating and coordinated drivers, one for the host (the "proxy driver," which must replace the existing NIC driver) and one for the card.
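
To make the register-access half of that concrete, here is a purely illustrative user-space fragment. Everything in it is assumed: the BAR address, size, and register offset are invented, the proxy driver that would supply them is hypothetical, and a real implementation would be a kernel driver that also has to account for the coprocessor seeing host physical addresses through its own system-memory aperture rather than one-to-one.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Both values are invented; a hypothetical proxy driver on the
       host would discover and communicate them (e.g. over SCIF). */
    uint64_t nic_bar0_phys = 0xfbd00000ULL;  /* hypothetical BAR0 address */
    size_t   bar0_len      = 0x20000;        /* hypothetical BAR0 size    */

    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *regs = mmap(NULL, bar0_len, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, (off_t)nic_bar0_phys);
    if (regs == MAP_FAILED) { perror("mmap"); return 1; }

    /* Hypothetical register offset; every NIC defines its own layout. */
    uint32_t status = regs[0x8 / 4];
    printf("NIC status register: 0x%08x\n", status);

    munmap((void *)regs, bar0_len);
    close(fd);
    return 0;
}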

The "coalesced packets" bit in the quote above is referring to the fact that it's helpful for performance in practice if several packets destined for Xeon Phi are coalesced into a single one before being delivered to the card; doing that reduces the incurred CPU overhead since Xeon Phi's small in-order cores are poorly suited to branchy, scalar driver code.

Note that this observation implies that direct NIC access may not actually be very successful in practice. One of the reasons CCL-direct works well is that InfiniBand hardware is intentionally designed so that only a few simple control register manipulations are needed in the fast path (perhaps as little as a single register write), which neatly avoids the concern about the Xeon Phi's small in-order cores. NIC hardware may not be so amenable.

Bottom line: it's challenging but possible in principle. Whether it would be a good idea in practice is much less clear.

Nick_W_
Beginner

Thanks very much for the replies. The host app that DMAs large buffers of packets to the Phi will be fine for our current latency requirements. This may change in the future, however.
