Cyclone 10Gx PCIe Hard IP Avalon access speed

JBayl
Beginner

Hi,

We upgraded our old PCIe Gen 1 board (NXP's PX1011B and a Cyclone III) to a Cyclone 10 GX based board with the Hard IP configured for Gen 2, 64-bit, 125 MHz.

We've kept the same Windows-based software, and access to our IP is unchanged (single read/write data transfers to a FIFO), but it now goes through an Avalon-MM interface, still with single read/write register accesses and no bursting option. In this configuration we noticed that performance is slower than on the old design.

Changing our IP's FPGA code is currently not an option because the data is tied to the FIFO (we can't do burst reads/writes). Can you please advise how we can optimize this, or how to determine where the bottleneck is?

Thanks!

 

 

6 Replies
Deshi_Intel
Moderator

Hi,


Common factors that affect PCIe performance include:

  1. PCIe link speed
  • Your old design ran PCIe Gen 1, which is only 2.5 GT/s.
    • You have now upgraded to PCIe Gen 2, which is 5 GT/s.
    • By right you should already be enjoying a performance boost, but you are saying no?
  2. PCIe link width
  • Any change in link width between the old design and the new design?
    • x1, x4, x8, etc.?
  3. Max payload size
  • Have you tried increasing this setting?
  4. Max read request size
  • Have you tried increasing this setting?
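To put rough numbers on point 1, here is a quick sketch of raw per-lane bandwidth (assuming 8b/10b line encoding, which is what Gen 1 and Gen 2 use; this ignores TLP/DLLP protocol overhead):

```python
# Rough per-lane PCIe bandwidth sketch. Gen 1/Gen 2 use 8b/10b encoding,
# so only 8 of every 10 line bits carry data.
def lane_bandwidth_mbytes(gt_per_s):
    """Raw data bandwidth of one lane in MB/s, before TLP/DLLP overhead."""
    line_bits_per_s = gt_per_s * 1e9          # transfers/s == line bits/s
    data_bits_per_s = line_bits_per_s * 8/10  # 8b/10b encoding overhead
    return data_bits_per_s / 8 / 1e6          # bits -> bytes -> MB

print(lane_bandwidth_mbytes(2.5))  # PCIe Gen 1, x1: 250.0 MB/s
print(lane_bandwidth_mbytes(5.0))  # PCIe Gen 2, x1: 500.0 MB/s
```

So on paper an x1 link should have doubled from roughly 250 MB/s to 500 MB/s of raw bandwidth; if measured performance went down, the bottleneck is likely elsewhere.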


Thanks.


Regards,

dlim


JBayl
Beginner

Hi Dlim,

1. Yes, there should be a performance boost if you look only at the PCIe interface, but what we're measuring shows the opposite. Our software team timed a pre-set transaction (serial data to/from the UUT) to compare the Version 1 and Version 2 board performance.

2. PCIe link width is x1 on both boards.

3. It's currently set to 256 bytes. I'll try increasing it and let our SW team know.

4. Is this an FPGA/Hard IP setting? I can't find it on the HIP parameters tab.

 

Thanks!

 

Regards,

jbayl 

 

 

JBayl
Beginner

I'm getting a message in Platform Designer that 256 bytes is the maximum payload size for an Avalon-MM interface.

Deshi_Intel
Moderator

Hi jbayl,


Ya, it's weird, as your PCIe bandwidth should by right have doubled going from 2.5 GT/s to 5 GT/s. You may want to consult your software team on potential design changes.

  • After all, you have upgraded from PCIe Gen 1 to Gen 2, so the design cannot be exactly the same; there must be some changes.


The max payload size is a setting in the PCIe hard IP itself.

  • What did you set it to in the hard IP? Does it show an error if you set it to something greater than 256?


For max_read_request_size, this is bits [14:12] of the Device Control register in the PCIe capability structure, at address 0x088.

  • You can feed this back to your software team, and they should know how to change it.
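As a sketch of how software might work with that field (the register offset is as stated above; the field encodes 128 × 2^n bytes, from 128 up to 4096, per the PCIe specification):

```python
# Decode/encode the Max_Read_Request_Size field, bits [14:12] of the
# PCIe Device Control register (at config offset 0x088, as noted above).
# Field value n encodes a size of 128 << n bytes (128 .. 4096).
def decode_mrrs(devctl):
    """Return the max read request size in bytes from a Device Control value."""
    field = (devctl >> 12) & 0x7
    return 128 << field

def encode_mrrs(devctl, size_bytes):
    """Return a new Device Control value with the MRRS field set."""
    field = (size_bytes // 128).bit_length() - 1  # 128->0, 256->1, ... 4096->5
    assert 128 << field == size_bytes, "size must be a power-of-two 128..4096"
    return (devctl & ~(0x7 << 12)) | (field << 12)

print(decode_mrrs(0x0000))                 # power-on default field 0 -> 128
print(decode_mrrs(encode_mrrs(0, 4096)))   # round-trips -> 4096
```

On Linux the same field can be inspected with `lspci -vv` (shown as MaxReadReq); how it is changed from a Windows driver is up to your software team.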


The other thing you can feed back to your software team: chapter 11 of the PCIe user guide describes throughput optimization techniques applied at the software application layer. Check out the link below, page 108, chapter 11.
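One reason the link upgrade may not show up in your measurements: with single 32-bit register reads, each read must wait for its completion, so throughput is bounded by per-transaction round-trip latency rather than link speed. An illustrative model (the latency figures here are hypothetical assumptions, not measured values):

```python
# Illustrative only: throughput of single 32-bit (4-byte) register reads
# is bounded by round-trip completion latency, not link bandwidth.
# The latency values below are hypothetical, not measurements.
def single_read_throughput_mbytes(round_trip_us, bytes_per_read=4):
    """MB/s achieved when each read must wait for its completion."""
    reads_per_s = 1e6 / round_trip_us
    return reads_per_s * bytes_per_read / 1e6

# Even a fast 1 us round trip caps single-DWORD reads at ~4 MB/s,
# far below either Gen 1 (~250 MB/s) or Gen 2 (~500 MB/s) per-lane rates.
print(single_read_throughput_mbytes(1.0))  # 4.0
print(single_read_throughput_mbytes(2.0))  # 2.0
```

If the new platform's round-trip latency (root complex, hard IP, and Avalon-MM bridge combined) is even slightly higher than the old one's, single-register accesses would get slower despite the faster link, which would match what you are seeing.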


Thanks.


Regards,

dlim


JBayl
Beginner

Hi dlim,

 

This may be off topic but is there another way of interfacing to the PCIe Hard IP other than the Avalon Bus?

Our SW team is looking at possible ways of increasing the Avalon bus speed, but so far we have been unsuccessful. We're considering widening the Avalon bus to 64 bits as a last resort.

Thanks!

 

 

 

 

Deshi_Intel
Moderator

Hi,


Unfortunately, no.

The PCIe hard IP connects to either an Avalon-MM or an Avalon-ST interface bus only, depending on which PCIe IP variant you choose.

The supported bit widths are listed in the PCIe hard IP's drop-down selection itself.


Thanks.


Regards,

dlim

