We are using an i9-7980X (18 cores, 36 threads) on an ASUS X299 SAGE motherboard. Inside this CPU, are there any DMA engines for transferring data to host memory?
We are asking because we see there is a 6-way DMA controller inside the Xeon D-1500 CPU. On that Xeon D CPU, after we programmed IIO_LLC_WAYS to use 6 ways, PCIe transfer performance improved significantly.
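For reference, the IIO_LLC_WAYS change on the Xeon D system can be sketched as below. This is only an illustration under assumptions: the MSR address 0xC8B comes from the Xeon D-1500 discussion and may not exist on other CPU families, and the helper simply sets the lowest n bits as the way mask.

```python
import os
import struct

# Assumption: MSR address taken from the Xeon D-1500 IIO_LLC_WAYS thread;
# it may not be present (or may differ) on Skylake-X parts such as i9-7980X.
MSR_IIO_LLC_WAYS = 0xC8B

def llc_way_mask(n_ways: int) -> int:
    """Bitmask with the lowest n_ways bits set, e.g. 6 ways -> 0x3F."""
    return (1 << n_ways) - 1

def write_msr(cpu: int, msr: int, value: int) -> None:
    """Write an MSR via the Linux msr driver (needs root and 'modprobe msr')."""
    fd = os.open(f"/dev/cpu/{cpu}/msr", os.O_WRONLY)
    try:
        # The msr device uses the file offset as the register address.
        os.pwrite(fd, struct.pack("<Q", value), msr)
    finally:
        os.close(fd)

if __name__ == "__main__":
    dev = "/dev/cpu/0/msr"
    if os.path.exists(dev) and os.access(dev, os.W_OK):
        # Example: allocate 6 LLC ways to IIO on CPU 0.
        write_msr(0, MSR_IIO_LLC_WAYS, llc_way_mask(6))
```

The same write can be done from the command line with `wrmsr` from msr-tools; verify the register's meaning for your exact CPU in Intel's documentation first.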
We are testing two FPGA PCIe cards on the ASUS X299 SAGE motherboard with an Intel i9-7980X and the X299 chipset.
The two FPGA cards are programmed to start data transfer "simultaneously", triggered by a hardware start signal.
We see that at the very beginning of a transfer, the PCIe root complex is not fast enough at accepting the PCIe packets, which causes a data FIFO overflow on the FPGA cards.
The same two FPGA cards show no such issue when tested on a P9X79 WS motherboard.
We are running Linux (Ubuntu 16.04 LTS).
Are there any parameters we can set in the CPU to improve PCIe transfer performance?
Thank you for your reply.
I would like to provide some further test observations:
The FPGA board performs linked-list DMA writes into host memory. This board was proven to work correctly on the P9X79 WS motherboard. Linked-list DMA means the board performs DMA operations based on a linked list of DMA descriptors, each of which defines the destination address in host memory for the DMA.
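To illustrate the concept, a generic linked-list DMA descriptor chain can be modeled as below. The field names and layout here are hypothetical; the actual FPGA core defines its own descriptor format, which is not shown in this thread.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DmaDescriptor:
    # Hypothetical fields for illustration only.
    dest_addr: int                          # destination address in host memory
    length: int                             # transfer size in bytes
    next: Optional["DmaDescriptor"] = None  # link to the next descriptor

def build_chain(base_addr: int, n_desc: int, size: int) -> DmaDescriptor:
    """Build a chain of n_desc descriptors over contiguous host buffers."""
    head = None
    for i in reversed(range(n_desc)):
        head = DmaDescriptor(base_addr + i * size, size, head)
    return head

def chain_length(d: Optional[DmaDescriptor]) -> int:
    n = 0
    while d is not None:
        n += 1
        d = d.next
    return n
```

With 8 descriptors of 2.4 MB each (the sizes described above), the engine walks the chain and issues one DMA per descriptor, so the host must absorb the writes back-to-back with no pause between descriptors.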
Now on X299, if the number of linked-list descriptors is 2, the two boards work well without FIFO overflow.
If the number of linked-list descriptors is 8, sometimes it works well for half a day; sometimes it exhibits the issue; and sometimes it simply does not work for half a day (please note the setup is identical in all cases).
Each DMA transfer is 2.4 MB; each board has 8 DMA channels; there are two boards in total.
It seems that the FIFO overflow issue is related to the number of linked-list descriptors I use.
So, in the CPU/BIOS settings, is there any setting related to the above, or any setting that might affect the latency in responding to PCIe requests?
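For what it is worth, two host-side settings commonly reported to affect PCIe response latency on Linux are PCIe link power management (ASPM) and deep CPU C-states. A sketch of how they could be disabled via kernel boot parameters on Ubuntu; whether this helps in this particular case is untested:

```shell
# /etc/default/grub: disable PCIe ASPM and limit deep C-states,
# both of which can add wakeup latency to PCIe traffic.
GRUB_CMDLINE_LINUX_DEFAULT="pcie_aspm=off intel_idle.max_cstate=1 processor.max_cstate=1"
# Apply with: sudo update-grub && sudo reboot
```

The equivalent BIOS options (C-states, ASPM, package C-state limit) can also be checked on the X299 SAGE if you prefer to change them in firmware.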
Dynamic C4 Pte Ltd., Singapore.
Please refer to this link, where a similar issue was resolved by increasing IIO_LLC_WAYS on a Broadwell CPU.
I would like to check: is there a similar setting on Skylake CPUs?