I've cobbled together a PCIe driver for u-boot that supports the Cyclone V. I started with the Stratix 10 FPGA driver already included and, using the Linux kernel module as an example, hacked together a driver that configures properly for the Cyclone V.
Here is an example output:
=> run loadfpga
7007204 bytes read in 346 ms (19.3 MiB/s)
=> pci long
PCI Autoconfig: pci class = 1540
PCI Autoconfig: Found P2P bridge, device 0
PCI Autoconfig: BAR 0, I/O, size=0x4, PCI: Failed autoconfig bar 10
PCI Autoconfig: pci class = 264
PCI AutoConfig: falling through to default class
PCI Autoconfig: BAR 0, Mem, size=0x4000,
Scanning PCI devices on bus 0
Found PCI device 00.00.00:
vendor ID = 0x1172
device ID = 0xe000
command register ID = 0x0006
status register = 0x0010
revision ID = 0x01
class code = 0x06 (Bridge device)
sub class code = 0x04
programming interface = 0x00
cache line = 0x08
latency time = 0x00
header type = 0x01
BIST = 0x00
base address 0 = 0xffffffff
base address 1 = 0x00000000
primary bus number = 0x00
secondary bus number = 0x01
subordinate bus number = 0x01
secondary latency timer = 0x00
IO base = 0x00
IO limit = 0x00
secondary status = 0x0000
memory base = 0xc000
memory limit = 0xc000
prefetch memory base = 0x0000
prefetch memory limit = 0x0000
prefetch memory base upper = 0x00000000
prefetch memory limit upper = 0x00000000
IO base upper 16 bits = 0x0000
IO limit upper 16 bits = 0x0000
expansion ROM base address = 0x00000000
interrupt line = 0x00
interrupt pin = 0x01
bridge control = 0x0000
=> pci header 1.0.0
vendor ID = 0x15b7
device ID = 0x5003
command register ID = 0x0006
status register = 0x0010
revision ID = 0x01
class code = 0x01 (Mass storage controller)
sub class code = 0x08
programming interface = 0x02
cache line = 0x08
latency time = 0x00
header type = 0x00
BIST = 0x00
base address 0 = 0xc0000004
base address 1 = 0x00000000
base address 2 = 0x00000000
base address 3 = 0x00000000
base address 4 = 0x00000000
base address 5 = 0x00000000
cardBus CIS pointer = 0x00000000
sub system vendor ID = 0x15b7
sub system ID = 0x5003
expansion ROM base address = 0x00000000
interrupt line = 0xff
interrupt pin = 0x01
min Grant = 0x00
max Latency = 0x00
As you can see, the CRA bus is working and both the root port and the NVMe endpoint are being detected.
The FPGA works in Linux: I get about 90 MB/s read and 50 MB/s write to and from the NVMe device.
I use the u-boot command "enable bridges 7" to enable all the bridges, which gets the PCIe code to initialize and detect the root port and endpoint as shown above.
However, when I attempt to initialize the NVMe, the BAR0 access made by the PCI autoconfig code hangs until the watchdog fires, which implies that the Txs bus is not programmed. I have moved it around to see if there is any difference; no matter where I put the Txs (h2f or lwh2f), the hang occurs.
Here's what that looks like, with a little debug output included to see where the hang occurs.
=> nvme scan
Entering nvme_probe, name is: nvme#0
Entering INIT_LIST_HEAD
Entering readl:
trying to readl from 0xc000001c
That is a BAR0 access, and for some reason it hangs.
If I simply try an md.l from 0xc0000000, I get the same reaction: a hang.
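For anyone following along, here is a minimal sketch of the kind of translation-table setup I believe the Txs needs before a BAR access can complete. The table base (CRA offset 0x1000) and the entry layout are my reading of the Cyclone V Avalon-MM PCIe documentation, so treat all of it as an assumption, not verified code:

#include <linux/types.h>
#include <asm/io.h>

/*
 * Assumed layout: the Avalon-MM-to-PCIe address translation table
 * lives in the CRA slave at offset 0x1000, and each entry is two
 * 32-bit words (low word = PCIe address bits plus a 2-bit address
 * space indicator, 0x0 = 32-bit memory space; high word = upper 32
 * address bits). This is my reading of the user guide.
 */
#define CRA_ATT_BASE	0x1000

static void txs_map_window(void __iomem *cra, int entry, u64 pcie_addr)
{
	/* low word: PCIe address bits | address space indicator */
	writel((u32)pcie_addr | 0x0,
	       cra + CRA_ATT_BASE + entry * 8);
	/* high word: upper 32 bits of the PCIe address */
	writel((u32)(pcie_addr >> 32),
	       cra + CRA_ATT_BASE + entry * 8 + 4);
}

/* e.g. map Txs window 0 onto bus address 0xc0000000, where
 * autoconfig put the NVMe's BAR0: */
/* txs_map_window(cra_base, 0, 0xc0000000ULL); */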
I have a feeling I'm missing something important that I don't know about. I'm currently figuring out how the WRITE ONLY register at 0xFF800000 works; it seems to be set correctly. I also have to figure out how the fpga2sdram interface is enabled, and I'm wondering if that has something to do with it. It has to be enabled somewhere.
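For the record, here is a minimal sketch of what I believe "enable all the bridges" boils down to on the Cyclone V, with the fpga2sdram port resets included. The register addresses and bit values are taken from my reading of the HPS address map and the u-boot sources, so double-check them before relying on this:

#include <asm/io.h>

/* Cyclone V HPS addresses as I read the address map -- verify these */
#define RSTMGR_BRGMODRST	0xffd0501c	/* reset manager: bridge resets */
#define SDR_FPGAPORTRST		0xffc25080	/* sdram ctrl: f2s port resets */
#define L3_REMAP		0xff800000	/* NIC-301 remap, WRITE ONLY */

static void enable_all_bridges(void)
{
	/* take hps2fpga, lwhps2fpga and fpga2sdram out of reset */
	writel(0, (void *)RSTMGR_BRGMODRST);

	/* release all FPGA-to-SDRAM ports from reset */
	writel(0x3fff, (void *)SDR_FPGAPORTRST);

	/*
	 * The remap register is write-only, so the value has to be
	 * composed blind: 0x19 = keep OCRAM mapped at 0x0 and make
	 * both hps2fpga and lwhps2fpga visible (bit meanings are my
	 * assumption from the u-boot sources).
	 */
	writel(0x19, (void *)L3_REMAP);
}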
If anyone has done this and knows what I'm missing, I'd really appreciate a pointer in the right direction.
Hi,
Apologies for the late response; there was an issue when trying to access this link.
Could you share your full bootlog including when enabling all the bridges?
Which Quartus/SoC EDS version are you using?
Is the same issue seen in a Linux environment? If so, which Linux/kernel version were you using?
Thanks for replying.
The Cyclone V design is fully functional in Linux, although it took far too long to get a working kernel configuration, device tree description, and FPGA image. I finally managed it a couple of weeks ago.
The NVMe performs fine: copying large files to a RAM drive yields about 90 MB/s read, and copying large files from the RAM drive to the NVMe yields about 50 MB/s write.
I'm using u-boot-socfpga 2020.10 and will switch to 2021.04 when it becomes available, as there are NVMe enhancements that may help.
I have spent about a week digging into the u-boot issue and the problem appears to be device-tree and u-boot related.
Does Intel have a working Stratix 10 device tree for u-boot, including PCIe, that I can see? I'd like to see how the PCIe node is described in socfpga.dtsi. Specifically, I'd like to see how the bridges are described and how the device ranges are set in the PCI node.
Using the Linux socfpga.dtsi in u-boot does not work. From what I have been able to determine, u-boot does not support hierarchical device trees. But even with a flattened tree using the proper ranges description:
ranges = <0x82000000 0x00000000 0xc0000000 0x00100000>;
u-boot does not grok that the starting address can be different for PCI than it is for the CPU. The above line states that the 32-bit, non-prefetchable memory is mapped to 0x0 for PCIe devices and to 0xC0000000 for the CPU, but u-boot does not program bus 1 BAR0 correctly. Even if I hack it to do so, it fails to recognize that there is memory at CPU address 0xC0000000 and produces the error message below. (Note: I have added printfs to several of the PCI and NVMe driver files to produce extra output.)
pci_bus_addr = 0xc0000000
_dm_pci_bus_to_phys: hose->region_count = 2
Looking for address: 0xc0000000
loop: 0:
Section: 0, bus_start = 0x0, size = 0x100000
loop: 1:
_dm_pci_bus_to_phys: hose->region_count = 2
Looking for address: 0xc0000000
loop: 0:
Section: 0, bus_start = 0x0, size = 0x100000
loop: 1:
Section: 1, bus_start = 0x0, size = 0x40000000
pci_hose_bus_to_phys: invalid physical address
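To make the failure concrete, here is a simplified sketch (not a verbatim copy) of the lookup in u-boot's _dm_pci_bus_to_phys() as I understand it. With my ranges line, the memory region has bus_start = 0x0 and size = 0x100000, so a bus address of 0xc0000000 can never match, which is exactly the "invalid physical address" above:

/* simplified from u-boot's _dm_pci_bus_to_phys(), for illustration */
struct region {
	unsigned long bus_start;	/* bus address of the window */
	unsigned long phys_start;	/* CPU address of the window */
	unsigned long size;
};

static int bus_to_phys(struct region *regions, int count,
		       unsigned long bus_addr, unsigned long *phys)
{
	int i;

	for (i = 0; i < count; i++) {
		struct region *r = &regions[i];

		/* 0xc0000000 is outside [0x0, 0x100000), so with my
		 * ranges property this test never succeeds */
		if (bus_addr >= r->bus_start &&
		    bus_addr < r->bus_start + r->size) {
			*phys = r->phys_start + (bus_addr - r->bus_start);
			return 0;
		}
	}
	return -1;	/* -> "invalid physical address" */
}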
I tried to fake out u-boot by adding more "memory" at 0xc0000000, but apparently it only pays attention to main memory and the PCIe ranges variable. If I add the memory to the PCI ranges instead, it does not appear to like it when the memories collide, which makes perfect sense.
If Intel has a working dts that lets u-boot detect the PCIe interface and use it properly, I would very much appreciate seeing that file.
In the meantime, I will continue to try to appease the u-boot gods with my offerings of printfs and dtsi files.
Hello,
Would you mind explaining how you get u-boot onto the Cyclone V FPGA?
I'm struggling with buildroot, compiling vmlinux and images for Nios2, but having difficulty getting the entry point into the u-boot linker script for the Cyclone II, which has no 'backup' ARM system to provide connectivity (beyond the programming/debugging devices) before an accessible bootloader or real-time OS is up.
qemu-system-nios2
(gdb) info files
Symbols from "/dev/shm/buildroot/output/images/u-boot".
Local exec file:
`/dev/shm/buildroot/output/images/u-boot', file type elf32-little.
Entry point: 0x0
0x00000000 - 0x00041e77 is .text
0x00041e78 - 0x00042ab0 is .u_boot_list
0x00042ab0 - 0x000446f8 is .data
0x000446f8 - 0x00044780 is .sbss
0x00044780 - 0x00048b6c is .bss
Thanks for your efforts and time, best regards,
beyondTime
( nios2eds 13.0sp1, Xenial 16.04 )
beyondTime,
Unfortunately I don't have any NIOS2 experience.
Have you seen this guide?
https://rocketboards.org/foswiki/Documentation/NiosIILinuxUserManualForCycloneIII
That might get you moving in the right direction.
Yes, and I now understand that on the newer Cyclone V it is the ARM Cortex-A9 CPUs that run the SoC bootloader (and Linux OS) before the FPGA configuration (and its bootloader) is loaded, and it is the FPGA that then provides the PCIe (rev. 2.x) interface for the computing system.
Sorry for interrupting your workflow with my misunderstanding. If I come across information about device tree files for configuring the Cyclone V FPGA's PCIe interface, I will remember your interest.
Thanks for responding, and best regards
beyondTime
(nios2eds 13.0sp1, Xenial 16.04)
Hi,
Unfortunately, we only have a design for the Stratix 10 SoC, which has been tested working on our Dev Kit; we do not have one for older devices such as the Cyclone V SoC:
https://rocketboards.org/foswiki/Projects/Stratix10SoCDesignExampleFor10GbeWithIEEE1588PTPCapability
Hope this helps, though the architecture is different.
Hi,
Apologies, here is the S10 PCIe design:
https://rocketboards.org/foswiki/Projects/Stratix10PCIeRootPortWithMSI
I got it working through a series of hacks. I'm trying to clean everything up and make it presentable. I'll then release a patch file here or on rocketboards.org.
Hi,
Glad that you got it sorted out. Let me know if you need additional help.
