Intel® Quartus® Prime Software
Intel® Quartus® Prime Design Software, Design Entry, Synthesis, Simulation, Verification, Timing Analysis, System Design (Platform Designer, formerly Qsys)

Intel PCIe Hard IP Trouble With EFI

MZORAN
Beginner

I have an FPGA design using the Intel Hard IP for PCIe on the Intel Cyclone 10 GX development board.  The design works well when using traditional BIOS booting, but I'm seeing a very large number of Avalon-MM bus issues when booting with EFI. Reads from FPGA registers often return garbage in the EFI case. Any ideas what the problem may be?

 

Quartus 22.1 Pro.  I've also tried Quartus 21.4 and have the same issue. In both the traditional BIOS and EFI cases I'm seeing the physical address of the PCIe card being mapped into the 32-bit address space of the PC host.  The PC host is an Intel PC running 64-bit Linux.  Same FPGA image in both cases.

 

 


skbeh
Employee

Hi Sir

Please double-check that everything is implemented correctly in the IP and the Quartus design:

1) nPERSTL0 is properly connected to the PCIe core's pin_perst.

2) refclk usually comes from the PCIe edge connector from the host; ensure this clock is always stable before FPGA configuration.

3) CLKUSR - ensure CLKUSR is properly driven, in the range of 100 MHz-125 MHz; it should also be stable before device configuration starts.

Page 28 of the user guide below describes CLKUSR:

(https://www.altera.com/content/dam/altera-www/global/en_US/pdfs/literature/dp/arria-10/pcg-01017.pdf)

  

This pin is used as the clock for transceiver calibration, and is a mandatory requirement when using transceivers. 

This pin is optionally used for Hybrid Memory Cube (HMC) calibration, as well as a configuration clock input for synchronizing the initialization of more than one device. 

This is a user-supplied clock, and the input frequency must be in the range of 100 MHz to 125 MHz.


4) Next, to rule out a design issue, you can refer to or use the Cyclone 10 GX PCIe example design available at the link below to test in your setup.

https://www.intel.com/content/www/us/en/docs/programmable/683504/current/pci-express-high-performanc...


MZORAN
Beginner

I went through steps 1 through 3 and everything appears fine.  This is on the Intel Cyclone 10 GX development board, and all of those look fine according to the schematic.   For step 4, I copied most of the reference settings into my design, but I can't use it directly for what I'm doing since I want to use Avalon-MM rather than Avalon-ST.

 

Like I said, everything works fine with a conventional BIOS boot, but Avalon bussing appears broken after an EFI boot when the card is used (registers read and set). 

 

This puzzles me because, except for the ability to physically map the device into the address space above 32 bits (in my case it's still below 32 bits) and the boot code on the card such as video or PXE (which I'm not using and have no plan to use), I didn't think conventional BIOS vs. EFI made any difference.

 

skbeh
Employee

If Avalon bussing appears broken, you can use the Linux 'lspci -vvv' command to verify that the host has correctly set the PCIe endpoint's Command register (shown on the "Control:" line of the lspci output).

As in the example below, the software driver on the host side should set the Memory Space Enable bit [1] and the Bus Master bit [2] of the Command register to 1.

bit[1] Memory Space Enable must be set (Mem+)

bit[2] Bus Master must be set (BusMaster+)
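To make the bit positions concrete, here is a small illustrative decoder (a sketch only, not part of lspci) that turns a raw Command register value into the flag notation lspci prints. The bit numbers are from the PCI specification:

```python
# Decode the low bits of the PCI Command register (offset 0x04 in
# config space) into the notation lspci's "Control:" line uses.
# Helper name and scope are illustrative only.

def decode_command(value):
    flags = {
        0: "I/O",        # bit 0: I/O Space Enable
        1: "Mem",        # bit 1: Memory Space Enable
        2: "BusMaster",  # bit 2: Bus Master Enable
    }
    return " ".join(
        name + ("+" if value & (1 << bit) else "-")
        for bit, name in flags.items()
    )

# 0x0002: only Memory Space enabled (the broken case)
print(decode_command(0x0002))  # I/O- Mem+ BusMaster-
# 0x0006: Memory Space and Bus Master enabled (the working case)
print(decode_command(0x0006))  # I/O- Mem+ BusMaster+
```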



MZORAN
Beginner

Yes, it appears that the registers are slightly different when Linux hands over the device to my PCI driver between the EFI and conventional BIOS cases.

 

diff efi.txt bios.txt

< Control: I/O- Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
---
> Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
3a4
> Latency: 0, Cache Line Size: 64 bytes

 

I attached the full lspci -vvv -d 1234:0000 dump for both cases.

 

As you point out, the BusMaster flag is not set with EFI booting, and a setting related to caching (Cache Line Size) is also not set with EFI.

 

This is after my Linux driver loads, which, like I said, is failing very badly in the EFI case.   I know this isn't a Linux driver community and I can take my gripes over to the Linux e-mail lists, but I don't understand why my driver needs to work differently between BIOS and EFI, or why bus mastering needs to be turned on even though my design uses no bus mastering.

 

I'm a bit concerned about the caching flag difference as well.

MZORAN
Beginner

I manually fixed the settings in my PCI driver and all is well now.
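For anyone who lands here with the same symptom: in a Linux kernel driver the usual place to set these bits is probe(), via pci_enable_device() and pci_set_master(). As a rough user-space illustration only (the sysfs device path is an example, root access is required, and this is normally the driver's job), the same two Command-register bits can be ORed in through the device's config file in sysfs:

```python
# Set Memory Space Enable (bit 1) and Bus Master (bit 2) in the PCI
# Command register by rewriting the device's config space via sysfs.
# Sketch only: the path below is an example; a real driver would call
# pci_enable_device() / pci_set_master() instead.
import struct

COMMAND_OFFSET = 0x04  # Command register: 16 bits, little-endian

def enable_mem_and_busmaster(config_path):
    with open(config_path, "r+b") as f:
        f.seek(COMMAND_OFFSET)
        (command,) = struct.unpack("<H", f.read(2))
        command |= (1 << 1) | (1 << 2)  # Mem+ and BusMaster+
        f.seek(COMMAND_OFFSET)
        f.write(struct.pack("<H", command))
    return command

# Example (hypothetical device address):
# enable_mem_and_busmaster("/sys/bus/pci/devices/0000:01:00.0/config")
```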
