I'm running the intel_fpga_pcie_link_test software, following all the steps in the Avalon-MM Intel Stratix 10 Hard IP+ for PCI Express document. When running option 0 - Run DMA, I see about 13 GB/s read throughput, which seems normal given the theoretical maximum of about 16 GB/s. However, I frequently see only 0.49 GB/s write throughput. Before I hunt down a PCIe analyzer, is there an easy check to find out what the problem is? I'm using the default settings of 2048 dwords/descriptor and 128 descriptors.
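For reference, a quick back-of-the-envelope check on those numbers (a minimal sketch in Python; the Gen3 x16 link parameters are an assumption inferred from the ~16 GB/s figure, not read from the actual IP configuration):

```python
# Rough sanity-check arithmetic, assuming a PCIe Gen3 x16 link and the
# default DMA settings of 2048 dwords per descriptor and 128 descriptors.

GT_PER_S = 8e9          # assumed Gen3 raw rate per lane: 8 GT/s
LANES = 16
ENCODING = 128 / 130    # Gen3 uses 128b/130b line encoding

# Raw link bandwidth before TLP/DLLP protocol overhead
raw_bw_bytes = GT_PER_S * LANES * ENCODING / 8
print(f"Theoretical raw bandwidth: {raw_bw_bytes / 1e9:.2f} GB/s")  # ~15.75 GB/s

# Payload moved by one full DMA pass with the default settings
DWORDS_PER_DESC = 2048
DESCRIPTORS = 128
payload_bytes = DWORDS_PER_DESC * 4 * DESCRIPTORS
print(f"Payload per DMA pass: {payload_bytes / 1024:.0f} KiB")  # 1024 KiB = 1 MiB

# Time one pass would take at the observed read and write rates
for label, rate_gb_s in (("read", 13.0), ("write", 0.49)):
    t_us = payload_bytes / (rate_gb_s * 1e9) * 1e6
    print(f"{label} at {rate_gb_s} GB/s: {t_us:.0f} us per pass")
```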
I am referencing the "Avalon-MM Intel Stratix 10 Hard IP+ for PCI Express Solutions User Guide", which has "UG-20170" near the bottom of the title page. The design uses 16 lanes.
Some further information: on initial loading of the driver and running of the example user program "intel_fpga_pcie_link_test", as long as "Run Simultaneous" is turned off in the DMA options, I see about 8 GB/s write throughput. This is true even if "Run read" is on, and it holds whether or not I run the "link test" option first.
As soon as the DMA option "Run Simultaneous" is enabled, write throughput drops to 0.49 GB/s and stays there. It remains low even if "Run Simultaneous" is turned back off, and even after unloading and reloading the kernel driver.
Tom
I would suggest you test AN 829; it provides data for benchmarking DMA throughput. The design based on the user guide above is mainly intended for testing 100 writes and reads.
Regards -SK
