I am working on communication between two Cyclone IV GX FPGAs via an IDT PCIe switch.
I have already configured the switch to route packets from FPGA1 to FPGA2 according to their address ranges.
I followed the steps described in the user guide to configure the Hard IP in the FPGAs using the MegaWizard Plug-In Manager.
But I am stuck on how to issue a PCIe memory transaction (for example, how to write data to FPGA2's on-chip memory).
Any suggestions, please!
Thanks in advance.
--- Quote Start --- For example how to write data to FPGA2’s onchip memory --- Quote End ---

Linux's lspci from the host CPU will tell you the PCIe address of the BAR region that you have mapped the on-chip RAM to. (PCITree can be used under Windows.) Let's say it is 0x1234_5678_0000_0000 on one of the boards.

If you have created a design based on the Qsys PCIe bridge, then the TXS slave interface can generate PCIe transactions. The second board can write to the on-chip memory of the other board by issuing a write to the TXS slave. However, the TXS slave can only 'see' a 1MB region of PCIe addresses at any one time. So before issuing the write, the CRA register that sets the most significant bits of the PCIe address needs to be configured with 0x1234_5678_0000_0000; your write to the TXS slave will then have those address MSBs inserted into the 64-bit address issued on the PCIe bus.

Cheers, Dave
You’re much too vague in what you need and expect from us. If you start from the PCIe MegaWizard configuration, you will have to write your own PCIe TLPs. There is nothing wrong with doing so, but it requires deep knowledge of PCI and PCIe, and I’d suggest getting hold of the respective official specifications. You will learn about PCI device enumeration, addressing modes, when and how to issue DMA transfers, the magic behind completions you receive and those you send, correctly terminating invalid or unexpected requests, handling interrupts the way you want, and, last but not least, writing a correct driver for it.

Long story short: we can only guide you through this process, but we cannot give you all the ready-made FPGA and driver code. – Matthias
I have studied the official PCIe specifications thoroughly: topology, layers, the different transaction types, enumeration, routing methods, flow control... I have also studied PCIe on the Cyclone IV GX FPGA and how to configure the PCIe Hard IP with either the MegaWizard Plug-In Manager or Qsys. So I am trying to put all of that into practice by establishing communication between two FPGAs through an IDT PCIe switch. Could you help me with a short description of the steps I should follow, or recommend an example I can base these transactions on?
Okay, then … A device-to-device transaction is in no way different from a device-to-main-memory transaction; it’s just the preparation that is different. Say you want to transfer data from FPGA1 to FPGA2.
- First you have to get hold of the FPGA2 PCI address assigned to its target BAR, say, BAR2. The BARs are typically visible to the driver of FPGA2 (only), but it has to be transported to the driver of FPGA1.
- This communication can be set up in user space or in kernel space, depending on the actual needs.
- As a result, driver1 knows the destination memory region BAR2 of FPGA2, and it communicates it to FPGA1, say, via MMIO register accesses. It could also set up a descriptor-like table in main memory or even on the FPGA2 device.
- A different approach would be to indicate the PCI ID of FPGA2 to FPGA1, and let the two devices exchange their BAR addresses via PCI Message TLPs with Device Routing. Once assigned by the driver, the BARs can be fetched from the Hard IP Configuration space.
- FPGA1 can now write data – even as big chunks – to FPGA2 by using the indicated addresses. This is done by standard Write TLPs, the same ones as used when writing to main memory. You can and probably should use transfer acceleration features like Relaxed Ordering for data transfers, and strict ordering for ‘commit’ messages, just like you do when writing data to main memory.
- BAR2 of FPGA2 might be cacheable, so be prepared for write transaction combining and collapsing, done by the IDT switch. Send your ‘commit’ messages to a non-cacheable BAR (like BAR0) of FPGA2 if this could harm the communication handshake.
- FPGA2 now receives large Write TLPs – something that doesn’t happen when only the CPU issues 4 to 8 bytes maximum requests.
- FPGA2 might want to respond in some way, semantically – remember, write TLPs are not followed by completions – so you need a custom method. If it is just some kind of simple acknowledgment, sending it to the driver and letting the user code handle this data direction might be sufficient. In the other case, where both directions should carry significant amounts of bandwidth, a BAR on FPGA1 might be indicated to FPGA2 as well. This BAR address exchange could be done in the same way as described above, using either driver-level or device-level setup.
- FPGA1 could even issue read requests to FPGA2 that request large amounts of data, probably requiring multiple completions. So the design might need a change, compared to simple one-word CPU requests, for this to work reliably under all load conditions.
- Remember that PCI devices may come and go. Try maintaining good communication with the driver regarding the other FPGA’s presence state. Be prepared for integrating timeout mechanisms.
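The first steps above – driver1 handing FPGA2's BAR2 bus address to FPGA1 via MMIO registers – can be sketched from the host side as follows. Everything here is hypothetical: the register names, offsets, and doorbell semantics are invented for illustration, and a real design defines its own register map in FPGA1's BAR0.

```c
#include <stdint.h>

/* Hypothetical FPGA1 BAR0 register layout -- invented for illustration;
 * a real design defines its own map and the driver mmap's BAR0. */
enum {
    REG_PEER_ADDR_LO = 0x00,  /* low 32 bits of FPGA2's BAR2 bus address */
    REG_PEER_ADDR_HI = 0x04,  /* high 32 bits                            */
    REG_PEER_LEN     = 0x08,  /* size of the peer window in bytes        */
    REG_DOORBELL     = 0x0C,  /* write 1: "peer address valid, go"       */
};

/* bar0:      pointer to FPGA1's mmap'd BAR0 region.
 * peer_bar2: bus address of FPGA2's BAR2, as reported by lspci or
 *            by driver2 and passed over to driver1. */
static void program_peer_window(volatile uint32_t *bar0,
                                uint64_t peer_bar2, uint32_t len)
{
    bar0[REG_PEER_ADDR_LO / 4] = (uint32_t)peer_bar2;
    bar0[REG_PEER_ADDR_HI / 4] = (uint32_t)(peer_bar2 >> 32);
    bar0[REG_PEER_LEN     / 4] = len;
    /* Written last: acts as the strict-ordered 'commit' described above. */
    bar0[REG_DOORBELL     / 4] = 1;
}
```

The doorbell write is deliberately last so that, with ordinary (strongly ordered) posted writes, FPGA1 never sees the "go" flag before the address registers are valid.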
Hi again, I have established a Qsys system with all the needed Avalon components (PCIe Hard IP, SGDMA dispatcher, DMA read/write, ...) so as to have PCIe communication on the Cyclone IV GX [I based it on the provided Altera examples]. I connected the FPGA to a PC, and I am able to send PCIe transactions from the PC to the FPGA (to the on-chip memory on the Avalon bus) without problems. Now I want to send transactions from the FPGA to the PC. Do the provided PCIe examples include transmitting transactions outside the FPGA? How do I interface with the PCIe Hard IP to define a transaction's attributes (destination address, size, ...)? I also added a Nios II processor to the Avalon bus; is there a software library that interfaces with the PCIe IP? Thanks!