Hello All,
I am implementing a PCIe Hard IP Endpoint with the Avalon-ST interface, Gen2, x1, with only BAR0 enabled as 64-bit prefetchable memory with a size of 64 KB. I am working with an Intel Cyclone 10 GX dev kit connected to an NVIDIA Jetson TX1 running Ubuntu 18.04.5 LTS.
I am currently developing a simple PCIe driver so that I can read from and write to the FPGA over PCIe. On boot, the NVIDIA system correctly detects the implemented PCIe device as Gen2 x1 with a link speed of 5 GT/s. I start with simple memory write instructions for 32-bit words: I transfer a 32-bit magic word (0x12345678) to addresses:
PCIE_Base + 0x0000, 0x0004, 0x0008, 0x000C, 0x0010, 0x0014 ... and so on.
I capture the incoming TLPs via Signal Tap and can see the magic word at the respective addresses; however, I am unable to understand the TLP size and data format.
Each TLP for addresses 0x0000, 0x0008, 0x0010, 0x0018, ... has 6 DWs, with the data shown as "0x78563412".
Each TLP for addresses 0x0004, 0x000C, 0x0014, 0x001C, ... has 4 DWs, with the data shown as "0x12345678".
The corresponding timing diagrams are shown in the attached PDF, which contains six diagrams for addresses 0x0000 through 0x0014.
Can someone please explain what I am doing wrong, or what I am missing in my understanding of PCIe TLPs? Any pointers to relevant reading material would be highly appreciated!
Thanks a lot in advance to all community members for their inputs!
Sid