gmei2
Beginner

How can we improve PCIe transfer latency on the i9-7980X?

We are using an i9-7980X (18 cores, 36 threads) on an ASUS X299 SAGE motherboard. Inside this CPU, are there any DMA engines for transferring data to host memory?

We ask because we see there is a 6-way DMA controller inside the Xeon D-1500 CPU. On that Xeon D CPU, after we programmed IIO_LLC_WAYS to 6 ways, PCIe transfer performance improved significantly.

 

We are testing two FPGA PCIe cards on the ASUS X299 SAGE motherboard with an Intel i9-7980X and the X299 chipset.

The two FPGA cards are programmed to start data transfer "simultaneously", triggered by a hardware start signal.

We see that at the very beginning of a data transfer, the PCIe root complex is not fast enough at accepting the PCIe packets, which causes a data FIFO overflow on the FPGA cards.

The same two FPGA cards show no such issue when tested on a P9X79 WS motherboard.

We are running Linux (Ubuntu 16.04 LTS).

Are there any parameters we can set in the CPU to improve PCIe transfer performance?

 

 

5 Replies
Wanner_G_Intel
Moderator

Hello gmei2,

Thank you for joining the Intel Community. We will research these questions and get back to you soon.

Wanner G.
Intel Customer Support Technician Under Contract to Intel Corporation
gmei2
Beginner

Hi Wanner,

 

Thank you for your reply.

 

I would like to provide some further test observations:

 

The FPGA board performs linked-list DMA writes into host memory; this board was proven to work correctly on the P9X79 WS motherboard. "Linked-list DMA" means the board performs DMA operations based on a linked list of DMA descriptors, each of which defines the destination address in host memory for that DMA.

 

Now on the X299, we see that if the linked-list length is 2, the two boards work well without FIFO overflow.

If the linked-list length is 8, sometimes it works well for half a day; sometimes it exhibits the issue; and sometimes, for half a day, it simply does not work (please note that the setup is identical in all cases).

Each DMA is 2.4 MB; each board has 8 DMA channels, and there are two boards in total.

 

It seems that the FIFO overflow issue is related to the linked-list length I use.

 

So, in the CPU or BIOS settings, is there any setting related to the above, or one that might affect the latency in responding to PCIe requests?

 

Thank you.

Mei Guodong

Dynamic C4 Pte Ltd., Singapore

 

 

Wanner_G_Intel
Moderator

Hello gmei2,

Thank you for posting more information. As soon as we have any update about this issue, we will get back to you.

Wanner G.
Intel Customer Support Technician Under Contract to Intel Corporation
gmei2
Beginner

Hello Wanner,

 

Please refer to this link, where a similar issue was resolved by increasing IIO_LLC_WAYS on a Broadwell CPU:

https://software.intel.com/en-us/forums/software-tuning-performance-optimization-platform-monitoring...

 

I would like to check whether there is any similar setting on the Skylake-X CPU.

 

Thank you.

Mei Guodong

Wanner_G_Intel
Moderator

Hello gmei2,

Thank you for your response. In order to better assist you, we recommend posting your question on the following support channels:

1. FPGA Design Tools Forum: https://forums.intel.com/s/topic/0TO0P000000MWjOWAW/intel-quartus-prime-software?language=en_US
   If you have Premier Support, you may submit a ticket by going to the My Intel Dashboard and, under the My Support section, clicking the Intel® Premier Support button to submit your request.
2. Intel® Developer Zone (IDZ): https://software.intel.com/en-us/forum

Wanner G.
Intel Customer Support Technician Under Contract to Intel Corporation