I have some questions regarding the use of the Cyclone V Avalon-MM Interface for PCIe, used as a root port.
I have a pretty good understanding of PCIe. I have implemented PCIe endpoints in FPGA fabric before, but this is my first root port implementation in FPGA fabric.
The documentation for the Cyclone V Avalon-MM Interface for PCIe, when used as a root port, is fairly lacking in my opinion. I have a number of questions about its use, but I will start with a few basic ones.
But first, I will supply some more details about my configuration.
I am using a Cyclone V SoC with the Cyclone V Avalon-MM Interface for PCIe, configured as a root port, 2 lane, Gen1 device.
The PCIe endpoint connecting to the root port is another FPGA device, 2 lane, Gen2. This will be the only endpoint device used. This helps bound the problem to a certain extent, as I will not need to support any type of endpoint device.
The root port device will be the only device performing memory reads and writes to the endpoint. The endpoint will not be performing any memory reads or writes to the root port. (In other words, the root port will always be the requester and the endpoint will always be the completer.)
The root port device will only be performing memory reads and writes to the endpoint BAR space.
On to my first round of questions.
What parts of the PCIe protocol stack does the IP perform, when configured as a root port? I can see that it takes care of the physical and link layers, but how much of the transaction layer is covered? I suppose my question, more specifically, is what do I need to submit to the IP? Do I need to supply a complete TLP, along with TLP header, payload, etc.?
Does the IP, when configured as a root port, perform any transaction level configuration, e.g. send any configuration TLPs?
Is the only way to send TLPs through the root port TLP data registers? Or can the Avalon-MM bridge (Txs) be used to submit TLPs? If the Avalon-MM bridge can be used, how would you send Configuration TLPs? More specifically, what Avalon-MM address would be used? Is there a translation table which needs to be used? Where would this translation table be located?
It looks to me like the transaction-level configuration will need to be done by my custom logic. I will need to do bus enumeration, along with reading out the configuration of the endpoint. I just need to know how to interact with the PCIe root port IP in order to do this.
Thanks in advance.
To get an RC PCIe core working, you need to connect its Avalon interface to an upstream device such as a processor. This processor will serve as the actual initiator, sending data requests to the PCIe RC via the Avalon interface. The PCIe RC has the TLP, DLLP, and PHY layers implemented.
To initiate a data transfer between RC and EP, the software/firmware on the processor side will initiate either a memory read or memory write to the EP device in question. The device driver in turn will kick-start the SGDMA to move the data from system memory to the RC (in the case of a write), which will then form the TLPs and send them across to the EP via the PCIe lanes.
When the system with processor, RC, and EP is powered on, the RC will perform an enumeration to determine whether there are any downstream EPs and devices. Once it finds an EP/device, link training and configuration take place. All of this has to be driven by the firmware in the system.
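The enumeration step described above can be sketched roughly as follows. This is a hypothetical C sketch: `cfg_read32` is a stand-in for however configuration reads are actually issued through this particular IP (here it just simulates one fake device so the example is self-contained), and the register offsets follow standard PCI configuration space.

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-in for a configuration read issued through the root port.
 * The real mechanism is IP-specific (TLP data registers or CRA);
 * this stub simulates a single fake device at bus 0, device 0. */
static uint32_t cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint16_t reg)
{
    (void)fn; (void)reg;
    if (bus == 0 && dev == 0)
        return 0xABCD1172u; /* fake: device ID 0xABCD, vendor ID 0x1172 */
    return 0xFFFFFFFFu;     /* no completer -> all 1's (Unsupported Request) */
}

/* Probe every device slot on one bus via the Vendor ID register
 * (config offset 0x00); all 1's means nothing responded.
 * Returns the number of devices found. */
static int scan_bus(uint8_t bus)
{
    int found = 0;
    for (uint8_t dev = 0; dev < 32; dev++) {
        uint32_t id = cfg_read32(bus, dev, 0, 0x00); /* Vendor/Device ID */
        if ((id & 0xFFFFu) == 0xFFFFu)
            continue; /* no function 0 -> no device in this slot */
        printf("bus %u dev %u: vendor 0x%04x device 0x%04x\n",
               (unsigned)bus, (unsigned)dev,
               (unsigned)(id & 0xFFFFu), (unsigned)(id >> 16));
        found++;
        /* A full enumerator would also check the header type for
         * multi-function devices and recurse into bridges. */
    }
    return found;
}
```

A full enumerator would follow the same pattern recursively, assigning bus numbers as it discovers Type 1 (bridge) headers.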
Even an RC needs a bus master such as a processor to initiate data transfers from the application layer; the RC cannot initiate data transfers by itself.
Hope this answers your questions.
Thank you for your reply, Abraham. I think that I understand most of that, but I have some more specific questions.
The reason that my questions are more specific is because I am not planning on using the processor, but using FPGA fabric/firmware/custom logic to do the "processor"/bus mastering functions that you speak of.
I'm assuming that the processor performs bus enumeration by submitting Configuration TLPs. How is this accomplished? Through the root port TLP data registers or can this be done through the Avalon-MM Txs slave interface? If it can be done through the Txs interface, what is the format of the data necessary and what address should be used?
Once bus enumeration is performed, how do you map a bus, device, function, and/or BAR space to the Avalon-MM Txs slave address for sending Memory Read/Write TLP data? What is the format of the data (e.g. do you need to supply the TLP header along with the TLP data/payload, or do you just need to supply the TLP data/payload)?
Thanks again, in advance.
You can use custom logic or the NIOS/HPS on the FPGA to act as the bus master in your design. The only thing you need to make sure of is that it is accessible via firmware/software. As I mentioned before, the RC cannot perform configuration or data transfers on its own. It needs the help of the firmware/drivers and application software to do so. When the system is powered on, the firmware will ensure that the RC is enumerated/detected and its BARs configured. The next step is for the firmware to start the PCIe discovery process, along with link training and configuration for the downstream EPs. Once the RC responds back to the firmware with a list of downstream EPs and the configuration is done, the firmware hands control to the application software, which then initiates the data transfers.
When you implement the RC/EP, you will have to specify how many BARs you want to use and their address ranges. Each BAR has its own configuration and parameters that need to be set. Once you have set these parameters and configured the RC/EP IP, you can generate the IP and connect it to the rest of the design.
Thus, when the firmware is configuring the RC/EP, it will write all 1's to each BAR and read the value back. From the response data it learns the BAR details of the PCIe device being configured: its memory mapping and range, and from the rest of configuration space the device ID, vendor ID, capabilities, max payload, etc.
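The BAR-sizing handshake described above (write all 1's, read back, decode the size) can be sketched like this. This is a sketch of the standard decode step only; the read-back value is passed in as a parameter rather than fetched live, since the config-access mechanism is IP-specific.

```c
#include <stdint.h>

/* Decode a 32-bit memory BAR's size from the value read back after
 * writing all 1's to it. The low 4 bits are type/prefetch flags, not
 * address bits; the size is the two's complement of the masked value. */
static uint32_t bar_size_from_readback(uint32_t readback)
{
    uint32_t mask = readback & ~0xFu; /* strip memory-type/prefetch flags */
    return ~mask + 1u;                /* two's complement -> size in bytes */
}
```

For example, a read-back of `0xFFFF0000` decodes to a 64 KiB BAR. (64-bit BARs span two consecutive BAR registers and need the same trick applied across both.)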
Thanks for the link to a newer Cyclone V PCIe Root Port Rocket Boards project. I did not realize there was a newer version. I had been working with the older version (built with Quartus II v15.0.1). For others' reference:
Based on the short amount of time I have been looking at the new project documentation, it looks like I will need to look at the HPS driver for information on how to enumerate the endpoint on the root complex. (The PCIe root port's CRA Avalon-MM slave interface is connected to the lightweight HPS-to-FPGA Avalon-MM master interface.) According to the project documentation, the CRA Avalon-MM slave interface is used to send configuration TLPs.
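For reference, the three header DWORDs of a Type 0 Configuration Read TLP look roughly like this. The field layout follows the PCIe base specification; how these words are actually handed to the IP (CRA interface or TLP data registers) is IP-specific, so this is only a sketch of the TLP format itself.

```c
#include <stdint.h>

/* Build the 3-DWORD header of a CfgRd0 TLP (Fmt=000, Type=00100):
 *   DW0: Fmt/Type, TC=0, Length=1 DW
 *   DW1: Requester ID, Tag, Last DW BE=0, First DW BE=0xF
 *   DW2: Bus/Device/Function and register number of the target
 * Field positions per the PCIe base specification. */
static void build_cfgrd0(uint32_t hdr[3], uint16_t req_id, uint8_t tag,
                         uint8_t bus, uint8_t dev, uint8_t fn, uint16_t reg)
{
    hdr[0] = (0x04u << 24) | 1u;               /* CfgRd0, length = 1 DW */
    hdr[1] = ((uint32_t)req_id << 16) |
             ((uint32_t)tag << 8) | 0x0Fu;     /* full-dword byte enables */
    hdr[2] = ((uint32_t)bus << 24) |
             ((uint32_t)(dev & 0x1Fu) << 19) |
             ((uint32_t)(fn & 0x7u) << 16) |
             ((uint32_t)(reg & 0xFFCu));       /* dword-aligned register */
}
```

Reading the Vendor/Device ID of bus 1, device 0, function 0 would then use `reg = 0`, and the completion TLP carries the register contents back.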