
Error 14566 when using the PCIe Hard IP Core next to the Custom Transceiver IP Core on a Cyclone V GX

FHint
New Contributor II

The device is the 5CGXFC9C7F23C8, and Quartus version 17.1 is used.

The Avalon-MM CV Hard IP for PCIe is configured as Gen1 x4.

The Custom PHY component has one lane in Duplex mode enabled. (The goal is to use two lanes in Duplex mode.)

The two IP Cores share one Reconfiguration Controller.

 

Analysis & Synthesis runs without any errors, but the Fitter gives the following error message:

Error (14566): The Fitter cannot place 1 periphery component(s) due to conflicts with existing constraints (1 Channel PLL(s)).

 

Looking at the Resource Usage Summary from Analysis & Synthesis, this makes sense, because it shows that 7 channel PLLs are being used. When trying without the Custom PHY, it shows that 5 channel PLLs are used. (The device has 6.)

 

As far as I understand, the PCIe core uses 5 because of "one for the TX PLL and one for the channels" (from this document).

Why is that? Figures 7-37 and 7-38 in the document above make it seem as if one additional channel PLL (1 or 4) is used for clock generation (as the TX PLL) for the other transceiver channels and their local PLLs. Is that correct?

 

Further reading revealed that the Custom PHY core has the same PLL requirements.

 

My conclusion is that I need an FPGA with more transceiver channels available (at least 8, so probably one that has 9).

Is this correct, or is there any other way to implement even one of the two Duplex lanes of the Custom PHY core? Is it possible to somehow share some resources, or place the TX PLL "inside" the FPGA (for slower interfaces)?

 

As you have probably noticed, I am pretty new to using Intel FPGA transceiver interfaces, but I am doing my best to understand them.

0 Kudos
5 Replies
Nathan_R_Intel
Employee
Hi,

My apologies for the delayed first reply to your forum questions. I had another transceiver IP question open and mistakenly thought I had already answered this one.

Basically, the 5CGXFC9C7F23C8 device you are using has 6 transceiver channels. Since you are using PCIe Gen1 x4, you are using up 5 transceiver channels. For Cyclone V, we only offer the fPLL and the CMU PLL as transmitter PLLs. The CMU PLL uses up one transceiver channel resource; the fPLL does not sacrifice any transceiver channels. For PCIe, it is a requirement to use the CMU PLL, as the fPLL cannot meet the jitter requirements. Hence, enabling PCIe Gen1 x4 uses up 5 transceiver channels.

Please check my replies to your questions:

Question: Why is that? Figures 7-37 and 7-38 in the document above make it seem as if one additional channel PLL (1 or 4) is used for clock generation (as the TX PLL) for the other transceiver channels and their local PLLs. Is that correct?

Response: Yes, you are correct. Channel PLL 1 or 4 is used for clock generation.

Question: My conclusion is that I need an FPGA with more transceiver channels available (at least 8, so probably one that has 9). Is this correct, or is there any other way to implement even one of the two Duplex lanes of the Custom PHY core? Is it possible to somehow share some resources, or place the TX PLL "inside" the FPGA (for slower interfaces)?

Response: Yes, you are correct. To implement PCIe Gen1 x4 and two Duplex lanes of the Custom PHY core, you will need at least 7 channels. The Duplex channels can use the fPLL as the transmitter PLL, which does not sacrifice a transceiver channel; however, the fPLL has higher transmitter jitter and currently supports data rates up to 3.125 Gbps. We do not have a 7-channel device, so you will need to use a 9-channel device. I am afraid there is no other option to enable PCIe without using 5 channels. However, if you are only using one Duplex channel, you can still use the existing 6-transceiver-channel device.

Regards,
Nathan
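To make the channel budget explicit (a quick check using the counts from the reply above):

    PCIe Gen1 x4:            4 data channels + 1 channel for the CMU PLL = 5
    Custom PHY, 2 x Duplex:  2 channels (TX clocked from the fPLL)       = 2
    Total:                   7 channels required, 6 available on the 5CGXFC9C7F23C8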
FHint
New Contributor II

Hello,

 

no problem; I appreciate your response anyway!

 

The observations I made in the meantime, together with your reply, have raised a few new questions for me:

 

1) You mention that, with the given configuration, it should be possible to place the PCIe x4 core together with one Duplex channel in the FPGA. I could not find a suitable pin placement to make this work. Are there any further constraints regarding the pins being used here?

 

2) When defining a PLL as an fPLL, how does one enable the Custom PHY to use that fPLL's clock as the transmitter clock? The PHY does not seem to have an option for this.

 

At the moment I am most likely going to use a small Arria 10 GX device (12 transceiver channels) for the application. I hope to fit at least a PCIe Gen3 x4 (best case x8) and three Duplex channels into that device, but I have to read up on it more before coming to a decision or asking detailed questions.

 

Best regards,

Florian

Nathan_R_Intel
Employee
Hi Florian,

Please check my responses to your questions:

Question: 1) You mention that, with the given configuration, it should be possible to place the PCIe x4 core together with one Duplex channel in the FPGA. I could not find a suitable pin placement to make this work. Are there any further constraints regarding the pins being used here?

Response: Yes, it is possible to use PCIe Gen1 x4 with one custom transceiver channel using the fPLL. The custom transceiver channel should be placed in GXB_CH5, and the PCIe channels should be placed in GXB_CH0 through GXB_CH3. For the custom transceiver channel, you will need to use the Transceiver Native PHY IP instead of the Custom PHY IP.

Question: 2) When defining a PLL as an fPLL, how does one enable the Custom PHY to use that fPLL's clock as the transmitter clock? The PHY does not seem to have an option for this.

Response: Use the Transceiver Native PHY IP instead. Under the "Tx PMA" tab, select "Use external Tx PLL"; this changes the tx_pll_refclk port to ext_pll_clk. Then search for the PLL Intel FPGA IP, select the PLL mode "fractional-N PLL", and specify the correct refclk frequency and desired output frequency. Connect the outclk0 output port to ext_pll_clk of the Transceiver Native PHY IP. This enables the custom PHY to use the fPLL as its transmitter PLL.

Question: At the moment I am most likely going to use a small Arria 10 GX device (12 transceiver channels) for the application. I hope to fit at least a PCIe Gen3 x4 (best case x8) and three Duplex channels into that device, but I have to read up on it more before coming to a decision or asking detailed questions.

Response: Sure, do let me know if you have any further questions. I can point you to the correct materials.

Regards,
Nathan
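To make the wiring in response (2) concrete, here is a minimal top-level sketch in Verilog. The instance names fpll_tx and xcvr_phy are placeholders for the generated IP components, and the generated port lists will contain additional parallel-data, clock, reset, status, and reconfiguration ports that are omitted here:

    // Minimal sketch: clocking the Transceiver Native PHY from an fPLL.
    // "fpll_tx" and "xcvr_phy" are assumed names for the generated IP
    // instances; the real components expose many more ports.
    module custom_channel_top (
        input  wire refclk,     // dedicated transceiver reference clock pin
        input  wire rx_serial,  // serial RX of the duplex channel
        output wire tx_serial   // serial TX of the duplex channel
    );

        wire ext_pll_clk;       // fPLL output used as the external TX PLL clock
        wire pll_locked;

        // PLL Intel FPGA IP in "fractional-N PLL" mode; outclk0 is configured
        // to the serial clock frequency required by the chosen data rate.
        fpll_tx u_fpll (
            .refclk   (refclk),
            .rst      (1'b0),        // tie off, or drive from a reset controller
            .outclk_0 (ext_pll_clk),
            .locked   (pll_locked)
        );

        // Transceiver Native PHY with "Use external Tx PLL" enabled under the
        // "Tx PMA" tab, which replaces the tx_pll_refclk port with ext_pll_clk.
        xcvr_phy u_phy (
            .ext_pll_clk    (ext_pll_clk),
            .rx_serial_data (rx_serial),
            .tx_serial_data (tx_serial)
            // parallel data, clocks, resets and reconfig ports omitted
        );

    endmodule

The channel placement itself would then be pinned in the .qsf, for example with set_location_assignment entries that place the custom channel in GXB_CH5 and the PCIe lanes in GXB_CH0 through GXB_CH3, as described above.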
FHint
New Contributor II

Hello again,

 

1) + 2) Thank you very much, that cleared a lot of things up for me! Where does one find the information you provided? I feel like I have been through the whole internet.

 

3) Do you know where I can read about the PLLs needed for PCIe Gen3 x8? Does this configuration need 9 channel PLLs, or 1 extra per 4 lanes, which would add up to 10?

 

4) Does the fPLL data rate limit (3.125 Gbps) you mentioned above also apply to the Arria 10 devices? Based on that information, it should be possible to use it for 2125 Mbps links, but not for interfaces like 10 GigE.

 

Best Regards,

Florian

Nathan_R_Intel
Employee
Hi Florian,

For items (1) and (2), the information was not documented, as using the fPLL was not a common use case. I will create documentation for this in our Knowledge Database.

For (3): Arria 10 has an improved fPLL and an additional ATX PLL that can be used with PCIe. The information is available in the Arria 10 Transceiver PHY User Guide (PLL chapter). Hence, for PCIe Gen3 x8 you only need to use 8 channels. Arria 10 has enough ATX PLLs and fPLLs per transceiver bank that you do not need to use the CMU PLL and sacrifice a transceiver channel. https://www.intel.com/content/dam/www/programmable/us/en/pdfs/literature/hb/arria-10/ug_arria10_xcvr_phy.pdf

For (4): The fPLL in Arria 10 has been improved in terms of jitter performance. It can support up to 12.5 Gbps (see Figure 171 in the user guide linked above).

Regards,
Nathan
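For comparison with the Cyclone V budget earlier in the thread (since the ATX PLL and fPLL do not occupy a transceiver channel on Arria 10):

    Arria 10, PCIe Gen3 x8:  8 data channels + 0 PLL channels = 8
    (a CMU-style budget of 1 extra channel per 4 lanes, 8 + 2 = 10, does not apply here)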