Hi everyone! We have a system with multiple boards connected together on a backplane. We currently use a proprietary parallel bus interface between the boards, and I am trying to evaluate the impact of changing it to PCI Express. In our most basic setup, one board would carry a Nios CPU, and the other boards would have simpler designs written entirely in HDL. From what I understand, the board with the Nios would need to be configured as a root port on the PCI Express bus, and the other boards as endpoints. I like the way the PCI Express IP from Altera uses the Avalon-MM interfaces to provide an almost transparent link from the CPU to the peripherals, mapping the components directly into the CPU's address space. But if I read table 1-2 in the PCI Express Compiler datasheet correctly, I can't have the root port with the MM interfaces; I need to use the MegaWizard flow instead, with the Avalon streaming interfaces. Does that mean that if I want to use the Nios as root, I need to write my own translator between the MM requests and the streaming interface on the PCI Express IP? This looks quite complicated. I wonder whether I shouldn't choose a hard-core CPU for my main board instead, such as a Freescale processor. Or am I missing something? Thanks!
You read that correctly - Altera does not support memory-mapped root complexes. Generating your own memory-mapped-to-streaming bridge will be very complicated, depending on how much of the spec you wish to use and conform to. I think Xilinx has a memory-mapped root complex, so you may want to look into that as well.
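To give a feel for what such a bridge has to do, here is a rough Python sketch of turning a single Avalon-MM write into the 3-DW header of a 32-bit-address Memory Write TLP, which is the kind of packet the bridge would then push into the Avalon-ST interface. The field layout follows the PCI Express base specification; the function name and defaults are just illustrative, not anything from Altera's IP.

```python
# Illustrative sketch: build the 3-DWORD header of a Memory Write TLP
# (32-bit addressing), the packet an MM-to-ST bridge would have to emit
# for each Avalon-MM write. Layout per the PCIe base spec.

def build_mwr32_header(addr, length_dw, req_id=0x0000, tag=0,
                       first_be=0xF, last_be=0x0):
    """Return the three header DWORDs of a MWr TLP (32-bit addressing)."""
    fmt = 0b010          # 3-DW header, with data payload
    tlp_type = 0b00000   # Memory request
    dw0 = (fmt << 29) | (tlp_type << 24) | (length_dw & 0x3FF)
    dw1 = ((req_id & 0xFFFF) << 16) | ((tag & 0xFF) << 8) \
          | ((last_be & 0xF) << 4) | (first_be & 0xF)
    dw2 = addr & ~0x3    # DWORD-aligned address; bits [1:0] reserved
    return [dw0, dw1, dw2]

# Single-DWORD write to 0x1000 (Last BE must be 0 for 1-DW requests):
hdr = build_mwr32_header(0x1000, length_dw=1)
print([hex(d) for d in hdr])   # ['0x40000001', '0xf', '0x1000']
```

And that is only the easy half: a real bridge also has to generate Memory Read TLPs, track outstanding tags, and match completions back to the stalled Avalon-MM read, which is where most of the complexity lives.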
Not entirely related, but... We seem to be hitting a problem with the PCIe 'slave' side and bus widths. A 32-bit write cycle coming into the board ends up as two write cycles (for adjacent addresses, but with the same data) by the time it hits our custom Avalon-MM slave. It seems to me that SOPC Builder has dropped in a bus width adapter somewhere that is converting from 64-bit to 32-bit, although why there is anything 64-bit in there is rather unexpected to me. This particular peripheral ignores the byte-enable lines, but the hardware guy here says he saw all the byte-enable lines asserted for another peripheral. In any case, the bus width adapter will seriously slow down the cycles: the two write cycles are 8 clocks apart (at 100 MHz), so the 400 MHz PowerPC that is mastering the cycle will be spinning for a long time.
Altera's PCIe core always functions on 64-bit address alignment, and if you access something that isn't 64-bit aligned (say, byte address 0x04), it will add padding to make it 64-bit aligned. What comes out of the ST FIFOs isn't the "real" TLP, it's a padded version. See the section "Mapping of Avalon-ST Packets to PCI Express" in the PCI Express Compiler User's Guide.
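That would explain the duplicated data: a DWORD write to the upper half of a QWORD gets widened to a full 64-bit beat, with the byte enables marking which half is valid, so a slave that ignores the byte enables also latches the padded half. A hypothetical Python sketch of the effect (not the core's exact implementation):

```python
# Hedged sketch of the 64-bit padding behaviour described above:
# a 32-bit write is widened to a 64-bit-aligned beat, and the 8 byte
# enables mark which half of the QWORD actually carries valid data.
# This mimics the effect, not Altera's actual logic.

def pad_dword_write(byte_addr, data32):
    """Pack a DWORD write into an (aligned_addr, data64, byte_enable) beat."""
    assert byte_addr % 4 == 0, "DWORD writes must be 4-byte aligned"
    aligned = byte_addr & ~0x7          # round down to 64-bit boundary
    if byte_addr & 0x4:                 # upper DWORD of the QWORD
        return aligned, data32 << 32, 0b11110000
    else:                               # lower DWORD
        return aligned, data32, 0b00001111

# A write to byte address 0x04 lands in the upper lane:
addr, data, be = pad_dword_write(0x04, 0xDEADBEEF)
print(hex(addr), hex(data), bin(be))   # 0x0 0xdeadbeef00000000 0b11110000
```

A downstream width adapter splitting that beat into two 32-bit cycles, with a slave that ignores the byte enables, would look exactly like the two adjacent-address writes with identical data described above.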
--- Quote Start --- http://alterawiki.com/wiki/modular_pcie_sopc_builder_bridge_example --- Quote End --- That certainly is interesting. I wish it had been available when I made my bridge :) Still, it's unfortunate that there is no official, Altera-supported root complex SOPC bridge.
What software are you talking about? We were just discussing the hardware part. I've decided to go for RapidIO eventually; it fits my needs better. The Avalon master in the RapidIO IP can use a 32-bit bus, so I shouldn't have the 64-bit problem. The only drawback I've found so far is that it can't be used on the smallest Cyclone IV FPGAs; it requires at least the 484-pin BGA package.