
NVMe / PCIe storage on Arria10

rienk_dejong
New Contributor I

Hi all,

I have been working on a system with an Arria 10 SoC FPGA that has an M.2 SSD connected to the hard PCIe controller. The design is based on https://rocketboards.org/foswiki/Projects/A10AVCVPCIeRootPortWithMSILTS

As far as I can see, this is all implemented as in the example.

We are running a vanilla v4.15 kernel.
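For anyone trying to reproduce this: a quick way to check that the relevant drivers are enabled in the kernel (only a sketch, assuming the usual mainline config symbols and that CONFIG_IKCONFIG_PROC is set so /proc/config.gz exists):

zcat /proc/config.gz | grep -E 'PCIE_ALTERA|BLK_DEV_NVME'
# Expected to report CONFIG_PCIE_ALTERA=y, CONFIG_PCIE_ALTERA_MSI=y and
# CONFIG_BLK_DEV_NVME=y (or =m with the modules loaded).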

The PCIe controller is detected correctly and the output of lspci is:

00:00.0 PCI bridge: Altera Corporation Device e000 (rev 01)
01:00.0 Non-Volatile memory controller: Intel Corporation Device f1a5 (rev 03)

However, during startup the NVMe driver hits a timeout while detecting the drive.

Has anyone had any experience with NVMe or other PCIe devices on an Arria 10?

Kind regards,

Rienk de Jong

The relevant kernel logs:

[ 0.100998] OF: PCI: host bridge /sopc@0/pcie@0xd0000000 ranges:
[ 0.101034] OF: PCI: MEM 0xc0000000…0xdfffffff -> 0x00000000
[ 0.101227] altera-pcie d0000000.pcie: PCI host bridge to bus 0000:00
[ 0.101243] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 0.101257] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xdfffffff] (bus address [0x00000000-0x1fffffff])
[ 0.103448] PCI: bus0: Fast back to back transfers disabled
[ 0.106569] PCI: bus1: Fast back to back transfers disabled
[ 0.106844] pci 0000:00:00.0: BAR 8: assigned [mem 0xc0000000-0xc00fffff]
[ 0.106861] pci 0000:01:00.0: BAR 0: assigned [mem 0xc0000000-0xc0003fff 64bit]
[ 0.107027] pci 0000:00:00.0: PCI bridge to [bus 01]
[ 0.107084] pci 0000:00:00.0: bridge window [mem 0xc0000000-0xc00fffff]
[ 61.290164] nvme nvme0: I/O 3 QID 0 timeout, disable controller
[ 61.450409] nvme nvme0: Identify Controller failed (-4)
[ 61.455625] nvme nvme0: Removing after probe failure status: -5

 

 

4 Replies
Nathan_R_Intel
Employee
The issue seems to be related to the NVMe driver, which is not provided by Intel. The PCIe device is detected correctly, which shows that the Arria 10 PCIe IP is functioning as expected. For the NVMe driver issue, please refer to the source where you obtained the driver; it was not created or provided by Intel.

Regards,
Nathan
rienk_dejong
New Contributor I

Hi Nathan,

Thanks for the response.

I get what you're saying, but I feel your answer is a bit short-sighted.

First, you conclude that the NVMe driver is at fault because it reports an error. This is the NVMe driver from the mainline Linux kernel; it is used with many different devices, so I don't expect a bug in that part of the kernel.

Second, the NVMe driver actually comes from Intel; it says so in the third line of the core.c driver source. I know Intel is big and that it comes from a completely different division, but a comment like this feels like a dismissive "not our bug".

 

Sorry for these comments, but that bugged me a bit today.

Back to a constructive discussion:

 

I am pretty sure the problem is in either our HDL / Qsys design or the configuration of the kernel through the device tree.

Below is the PCIe part of the device tree.

We have the PCIe Txs port connected to the high-performance HPS-to-FPGA bridge master at offset 0x10000000.

The PCIe Cra is connected to the lightweight (low-performance) bridge at offset 0x0.

The MSI interrupt controller is connected to the same lightweight bridge at offsets 0x4080 and 0x4000.

 

Attached is the archive of the project we are using now; it contains only the HPS and the PCIe core.

 

I hope you can help me figure out where we made our mistake.

 

Kind regards,

Rienk

pcie_0_pcie_a10_hip_avmm: pcie@0xd0000000 {
    status = "okay";
    compatible = "altr,pcie-root-port-16.1", "altr,pcie-root-port-1.0";
    reg = <0xd0000000 0x10000000>,
          <0xff210000 0x00004000>;
    reg-names = "Txs", "Cra";
    interrupt-parent = <&hps_arm_gic_0>;
    interrupts = <0 25 4>; // irq6
    interrupt-controller;
    #interrupt-cells = <1>;
    device_type = "pci"; /* embeddedsw.dts.params.device_type type STRING */
    bus-range = <0x00000000 0x000000ff>;
    ranges = <0x82000000 0x00000000 0x00000000 0xc0000000 0x00000000 0x10000000
              0x82000000 0x00000000 0x10000000 0xd0000000 0x00000000 0x10000000>;
    msi-parent = <&pcie_0_msi_to_gic_gen_0>;
    #address-cells = <3>;
    #size-cells = <2>;
    interrupt-map-mask = <0 0 0 7>;
    interrupt-map = <0 0 0 1 &pcie_0_pcie_a10_hip_avmm 1>,
                    <0 0 0 2 &pcie_0_pcie_a10_hip_avmm 2>,
                    <0 0 0 3 &pcie_0_pcie_a10_hip_avmm 3>,
                    <0 0 0 4 &pcie_0_pcie_a10_hip_avmm 4>;
}; //end pcie@0x010000000 (pcie_0_pcie_a10_hip_avmm)

pcie_0_msi_to_gic_gen_0: msi@0xff214080 {
    status = "okay";
    compatible = "altr,msi-1.0", "altr,msi-1.0";
    reg = <0xff214080 0x00000010>,
          <0xff214000 0x00000080>;
    reg-names = "csr", "vector_slave";
    num-vectors = <32>; /* embeddedsw.dts.params.num-vectors type NUMBER */
    interrupt-parent = <&hps_arm_gic_0>;
    interrupts = <0 24 4>; // irq5
    clocks = <&h2f_lw_clk>;
    msi-controller = <1>; /* embeddedsw.dts.params.msi-controller type NUMBER */
}; //end msi@0x100014080 (pcie_0_msi_to_gic_gen_0)
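For completeness, a quick way to sanity-check that the root port, the MSI controller, and the SSD all show up as expected with this device tree (the exact messages differ between kernel versions, so these greps are only a sketch):

dmesg | grep -iE 'altera|pcie|nvme'    # root port / MSI probe messages and NVMe errors
lspci -vv -s 01:00.0 | grep -i msi     # check whether MSI got enabled on the SSD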

 

 

 

Nathan_R_Intel
Employee
Hi Rienk,

My apologies for making you feel frustrated with my initial response. I should have been clearer about the support Intel can effectively provide to assist you.

Typically, Quartus Prime allows users to auto-generate a PCIe Example Design that includes a basic functional driver capable of MWr transactions of about 100 words. If an issue is observed using that Example Design, Intel application engineers will dig in to debug and resolve it. For custom designs we typically provide suggestions and debug guidelines. In this case the reference design came from RocketBoards and has been heavily modified, which is why I directed you back to that source. I should have offered additional help to resolve your issue, which I did not. Moving forward, I will be more specific about what help Intel can provide for this case.

Firstly, I will analyze your design and how you have specified the master and slave addresses, to see if there could be potential contention. Let me check if there is any clue there. I will get back to you on this by tomorrow.

Regards,
Nathan
rienk_dejong
New Contributor I

Hi All,

We have fixed our issue.

The problem was not related to the kernel drivers or settings of the PCIe IP itself.

The example on the RocketBoards page is correct (as I expected).

Our problem was with the amount of memory assigned to the kernel versus the memory window mapped to the PCIe core.

 

Our board has a total of 4 GB of RAM for the HPS, of which effectively 3 GB is usable, because the HPS-to-FPGA bridges and other I/O claim the upper 1 GB of the address space.

The Avalon-MM master port of the PCIe core has to be connected to the memory through an address expander, and the window this provides into the RAM can be set in power-of-two increments, so 1 GB, 2 GB, or 4 GB.

 

In our case the boot-loader was configured to give the Linux kernel 3 GB of memory, while the PCIe core had a 2 GB window into the RAM.

My guess is that the kernel allocated memory for PCIe (DMA buffers) above the 2 GB window.
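To make that concrete, a rough illustration (assuming the expander window starts at HPS SDRAM address 0x0): a 2 GB window only covers physical addresses 0x00000000-0x7FFFFFFF, so with 3 GB handed to Linux everything from 0x80000000 upwards is invisible to the PCIe master. You can see where the kernel's RAM ends with:

grep -i 'System RAM' /proc/iomem
# If the last System RAM range ends above 0x7fffffff, the NVMe driver can be handed
# queues/buffers the PCIe core cannot reach, which matches the admin command timeout.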

The solution for our system was to limit the kernel's memory to 2 GB by configuring this in the boot-loader.
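For reference, one way to do this, assuming U-Boot and that the extra arguments end up on the kernel command line via bootargs (the exact environment scripts differ per board):

=> setenv bootargs "${bootargs} mem=2048M"
=> saveenv
=> boot
# mem=2048M caps the memory the kernel uses to the first 2 GB, i.e. inside the PCIe window.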

 

So for us the issue is solved. I hope these posts can help someone else who is trying to get PCIe working on their own custom Arria 10 HPS design.

 

Regards,

Rienk
