
MCDMA/PCIE max number of channels - how to get 256 physical channels

grspbr
Novice

Hi @wchiah ,

I am re-creating this question because the older one was closed. Also, there is a problem with notifications: I am not getting notified of replies to my posts. How can I correct this?

To get to the problem: the MCDMA/PCIE core has 11 bits of DMA channel address when using multiple logical channels on one physical interface. We planned to use 256 channels (physically) but probably only a dozen or so logical channels. However, when we try to drive it using Intel's DPDK, we seem to get corruption of the D2H DMA descriptors if we use more than 64 channels. Why are we seeing this limitation? Is it in the DPDK itself? Also, it seems we cannot choose which physical channel address a logical channel maps to; channel creation appears to use sequential physical channel addresses starting from 0. Is it not possible to map logical channels to physical channels?
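
For illustration, here is a minimal sketch of how we bring the channels up from the host side. It is written against the standard DPDK ethdev API rather than copied from our driver or Intel's example code, and it assumes the MCDMA PMD exposes each DMA channel as one Rx/Tx queue pair; NUM_CHANNELS, the port index, and the pool sizing are placeholders.

#include <stdlib.h>
#include <stdint.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_debug.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Hypothetical values: one ethdev queue pair per DMA channel. */
#define NUM_CHANNELS 256
#define NUM_DESC     512

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    uint16_t port = 0;   /* first probed device; placeholder */
    struct rte_eth_conf conf = {0};
    struct rte_mempool *pool = rte_pktmbuf_pool_create("mbuf_pool",
            64 * 1024, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (pool == NULL)
        rte_exit(EXIT_FAILURE, "mempool creation failed\n");

    /* One Rx and one Tx queue per channel. */
    if (rte_eth_dev_configure(port, NUM_CHANNELS, NUM_CHANNELS, &conf) < 0)
        rte_exit(EXIT_FAILURE, "dev_configure failed\n");

    /* Set up every queue pair; with our hardware the trouble starts once
     * queues above index 63 are brought up. */
    for (uint16_t q = 0; q < NUM_CHANNELS; q++) {
        if (rte_eth_rx_queue_setup(port, q, NUM_DESC, rte_socket_id(),
                                   NULL, pool) < 0 ||
            rte_eth_tx_queue_setup(port, q, NUM_DESC, rte_socket_id(),
                                   NULL) < 0)
            rte_exit(EXIT_FAILURE, "queue %u setup failed\n", (unsigned)q);
    }

    return rte_eth_dev_start(port);
}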

To answer your question, we are using Quartus 21.2 for Agilex AGFB014 (P+E Tile).

In our build settings, we have:

  • PCIe0 Settings / PCIe0 IP Settings / PCIe0 PCI Express / PCI Capabilities / PCIe0 Device / PF0 = 256
  • PCIe0 Settings / PCIe0 IP Settings / MCDMA Settings / D2H Prefetch channels = 256
  • Maximum Descriptor Fetch = 16

 

wchiah
Employee

Hi,


Thanks for reaching out again.


  1. Did you try to run the example design without modification? Does it behave the same?
  2. What payload size are you using for the test?
  3. Did you develop your own driver, or are you using our example design driver?
  4. Did you capture the signals below to check whether the data is mismatched?

pcie_ed_tb.pcie_ed_inst.dut.dut.ast.p0_rx_st_valid_o[1:0]

pcie_ed_tb.pcie_ed_inst.dut.dut.ast.p0_rx_st_data_o[511:0]

pcie_ed_tb.pcie_ed_inst.dut.dut.ast.p0_rx_st_ready_i

pcie_ed_tb.pcie_ed_inst.dut.dut.ast.p0_rx_st_hdr_o[255:0]

pcie_ed_tb.pcie_ed_inst.dut.dut.ast.p0_rx_st_sop_o[1:0]

pcie_ed_tb.pcie_ed_inst.dut.dut.ast.p0_tx_st_data_i[511:0]

pcie_ed_tb.pcie_ed_inst.dut.dut.ast.p0_tx_st_eop_i[1:0]

pcie_ed_tb.pcie_ed_inst.dut.dut.ast.p0_tx_st_sop_i[1:0]

pcie_ed_tb.pcie_ed_inst.dut.dut.ast.p0_tx_st_ready_o

pcie_ed_tb.pcie_ed_inst.dut.dut.ast.p0_tx_st_valid_i[1:0]

pcie_ed_tb.pcie_ed_inst.dut.dut.ast.p0_tx_st_hdr_i[255:0]


Looking forward to hearing back from you.

Regards,

Wincent_Intel


grspbr
Novice

Hi @wchiah , thanks for the reply,

We used the example design originally without modification and it was working, with both the example driver and our driver.

Then we removed the packet generator/checker from the example to create our own design which also worked with both our driver and example design driver.

Then we tried to modify the MCDMA/PCIE core to 256 channels from 64, and this is where we started having problems. We used payloads of 8K for the 64-channel case, and 2K for the 256-channel case.

Why does the example design only use 64 channels? Why not 256 or the max 2K?

We are now in the process of trying the example design customized for 256 channels. We will try to get reports from the example driver to you.

We do monitor those signals in our design with Signal Tap but do not yet have a capture of a failure; we'll keep trying.

regards,

Greg

 

 

grspbr
Novice

Hi @wchiah 

I'm uploading a report showing the results of our tests for 64 channels and 65 channels, using the example design with settings for 256-channel support. The example design driver is used.

regards,

Greg

 

wchiah
Employee

Hi @grspbr ,

 

If you refer to the user guide, section 1.2 Known Issues notes that the multichannel D2H AVST configuration has stability issues when the total number of D2H channels configured is greater than 256. Please consider this in your design.

 

Also, refer to section 3.1.6.1 Avalon-ST 1-Port Mode:

  • In the current Intel® Quartus® Prime release, the D2H Prefetch Channels follows the total number of DMA channels that you select up to 256 total channels.
  • When the total number of channels selected is greater than 256, then D2H Prefetch channels are fixed to 64.
  • The resource utilization shall increase with the number of D2H prefetch channels.
  • For details about these parameters, refer to section 4.12.2, D2H Data Mover Interface.

Hope this answers your question.

Regards,

Wincent_Intel

grspbr
Novice

Hi @wchiah 

Note that we are not exceeding 256 channels. We are trying to use more than 64 channels, though. When we try 65 channels, for example, the PC hangs. We think that the 64-channel limit on descriptor prefetch may not be handled properly, either in our code or in the example driver, so we are looking at that right now.

 

regards,

Greg

 

wchiah
Employee

Hi,

 

Yes, you are correct; there is a limit on descriptor prefetch, as stated in the user guide.
Please consider it in your design.

 

Regards,

Wincent_Intel

grspbr
Novice

The documentation is lacking here. There are 2K physical channels - i.e. 11 bits of physical address on the core for channel ID - yet we can only build an MCDMA with 256 channels, and yet again we only have 64 descriptor prefetch buffers. We don't find any documentation on how to address the 256 channels with 64 prefetch buffers. There needs to be a mapping of a prefetch buffer to a physical channel, but we find no documentation for this in the software or hardware references. So how do we use 256 channels? We want this because we have a 256-channel AVST interconnect in our design.

Regards,

Greg

 

wchiah
Employee

Hi Greg,

If you refer to the user guide, D2H is limited to 64 channels and cannot be set to more than that.

[attached screenshot of the relevant user guide table: wchiah_0-1673359285073.png]

  • Referring to your first question:
    "However, when we try to drive it using Intel's DPDK, we seem to get corruption of the D2H DMA descriptors if we use more than 64 channels."
  • Are you trying to set D2H to more than 64 channels and it fails?
  • Or are you trying to use 256 MCDMA channels? I would appreciate it if you could describe this in more detail.
  • May I know which version of Quartus you are using?
  • Can you please update your Quartus to the latest version (22.3 or 22.4) and see if the same problem still happens?

Hoping to hear back from you.

Regards,

Wincent_Intel

 

grspbr
Novice

Hi Wincent,

I think you are misreading the guide... it says if you select GREATER than 256 channels, THEN prefetch channels are limited to 64. However, we are selecting EXACTLY 256 channels. Further, I may have misspoken in a previous post, but we have also selected 256 prefetch channels (not 64 as we once did). So we have 256 channels and 256 prefetch channels, which gave no errors in Platform Designer and compiled successfully in Quartus 21.2. Our problem occurs when we try to configure channels above 64 using Intel's driver. Our software guy is checking the code again for a possible roll-over in the channel numbering. Honestly, this looks like a software/driver problem to me.
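
To illustrate the kind of roll-over he is looking for (a hypothetical sketch, not code from Intel's driver or ours): if the channel ID passes through a field that is only 6 bits wide anywhere along the path, channel 64 silently aliases back to channel 0 and its descriptors land on the wrong ring.

/* Hypothetical illustration of a channel-number roll-over, not actual
 * driver code: a 6-bit channel field wraps at 64, so channels 64..255
 * alias onto channels 0..63. */
#include <stdio.h>

struct desc_tag {
    unsigned int channel : 6;   /* too narrow for 256 channels; needs 8+ bits */
    unsigned int other   : 26;
};

int main(void)
{
    for (unsigned int ch = 60; ch < 68; ch++) {
        struct desc_tag t = { .channel = ch };
        printf("logical channel %3u -> field value %3u%s\n",
               ch, (unsigned)t.channel,
               (t.channel != ch) ? "  (aliased!)" : "");
    }
    return 0;
}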

 

grspbr
Novice

Hi Wincent,

Our software guy believes he has found a bug in the MCDMA core. When reading the prefetch channel QSR info, the start_address and consumed_address show a roll-over in the configured address - i.e., they do not match the programmed, linearly increasing addresses. I will try to compile the design with the latest compiler and see if the problem persists.
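
Roughly, the check he ran looks like the sketch below. The CSR offsets, strides, and names are placeholders rather than the real per-channel register map; the idea is simply to program linearly increasing ring base addresses and flag any channel whose readback does not match.

/* Sketch of the readback check described above. HYPOTHETICAL names and
 * offsets: csr_base is an mmap'd BAR region, CH_STRIDE and START_ADDR_OFF
 * stand in for the real per-channel register layout from the user guide. */
#include <stdint.h>
#include <stdio.h>

#define NUM_CHANNELS   256
#define CH_STRIDE      0x100      /* placeholder per-channel CSR stride */
#define START_ADDR_OFF 0x00       /* placeholder start_address offset   */
#define RING_BYTES     0x10000    /* one descriptor ring per channel    */

static inline uint64_t csr_read64(volatile uint8_t *base, uint64_t off)
{
    return *(volatile uint64_t *)(base + off);
}

/* Returns the number of channels whose start_address readback does not
 * match the linearly increasing addresses software programmed. */
int check_prefetch_rings(volatile uint8_t *csr_base, uint64_t ring0_iova)
{
    int bad = 0;
    for (unsigned ch = 0; ch < NUM_CHANNELS; ch++) {
        uint64_t expect = ring0_iova + (uint64_t)ch * RING_BYTES;
        uint64_t got = csr_read64(csr_base,
                                  (uint64_t)ch * CH_STRIDE + START_ADDR_OFF);
        if (got != expect) {
            printf("ch %3u: start_address 0x%llx, expected 0x%llx\n",
                   ch, (unsigned long long)got, (unsigned long long)expect);
            bad++;
        }
    }
    return bad;
}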

grspbr
Novice

Attached is the explanation from our software guy...

wchiah
Employee

Hi Greg,

Thanks for providing those details; please allow me some time to investigate.
Meanwhile, could you help by generating a new design example, moving from your current Quartus v21.2 to the latest v22.3 or v22.4, and see if the problem can be replicated?

Regards,
Wincent_Intel

wchiah
Employee

Also, I would appreciate it if you could provide the .qar file, so that we can look deeper into it.

grspbr
Novice

Hi Wincent,

Attached is the qar file of the example design we just tried, from Quartus 22.4. It also failed when trying to use more than 64 channels. I'll send the qar file of our Quartus 21.2 version as well.

 

I could not attach the file as it's 78 MB and the max is 71 MB.

 

Maybe you have an FTP site I can use?

wchiah
Employee

Hi ,

 

Can you try to zip it and send it here?

Regards,

Wincent_Intel

grspbr
Novice

Hi Wincent,

I guess they are already compressed, because zipping didn't do much. However, the qar file from Quartus 21.2 is small enough, so I'll attach it here. (We are using v21.2 anyway.)

Regards,

Greg

 

wchiah
Employee

Hi Greg,

 

Can you provide me the .qar file for v22.4 as soon as possible?
You can send it to me via email.

 

Regards,

Wincent_Intel

wchiah
Employee

The .qar file for v22.4 with a setting of 256 channels instead of 65 channels, together with the failure log file.

grspbr
Novice

Hi Wincent,

It's too big to send by email. Do you have a shared folder (like Dropbox) that I can send it to? I'm checking on our side to see if we have something like that, or FTP. I'll let you know...

Regards,

Greg

 

grspbr
Novice

Hi @wchiah 

 

I have sent you a link (by email) where you can retrieve the QAR files from our cloud server.

Best regards,

Greg

 
