The design was generated and tested with the following software tools and hardware platforms.
Note: This Wiki entry is a short summary of the design. Please refer to the full application note document located in the downloads section for more details.
Figure 2.1 High Level Block Diagram
Figure 2.1 shows a high-level block diagram of the design. The design instantiates the following Altera MegaCores:
The PCIe MegaCore is instantiated as part of a QSYS subsystem. Using QSYS to build a PCIe subsystem allows for the generation of a root port bus functional model (BFM) which can be used to bring up the PCIe link in a simulation environment.
100G Interlaken Cores (2 instantiations)
100 Gbps Ethernet (100GbE)
PCIe Transceiver Reconfiguration Controller
The PCIe serial transceiver reconfiguration controller is instantiated as part of the QSYS subsystem. It must be generated to support four PCIe serial transceiver channels.

Interlaken and Gigabit Ethernet Transceiver Reconfiguration Controller

This transceiver reconfiguration controller supports 48 serial transceiver channels, organized as follows.
The number of channels required by the transceiver reconfiguration controller is explained in Section 2.6 of the full application note document, located in the downloads section of this page.
Figure 2.2 Packet Flow Block Diagram
The pkt_gen_ilk block starts generating packets when ilk_0 and ilk_1 both assert their itx_ready signals, and it generates packets continuously. The default length of each packet is 1024 bytes, with the payload following an incrementing data pattern. The ilk_tx_ctl module selects the source of the traffic, pcie_pkt_buf or pkt_gen_buf, and selects the target Interlaken interface, ilk_0 or ilk_1. In the testbench, the Interlaken transmit serial links are connected to the receive serial links, performing an external serial loopback. On hardware, the loopback is accomplished by the use of AirMax connector loopback cards at the Interlaken ports. The traffic is looped back to ilk0_eth_buf from ilk_0 or to ilk1_eth_buf from ilk_1.
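The incrementing data pattern described above can be sketched in software. This is an illustrative helper, not the pkt_gen_ilk RTL; the exact pattern (byte-wise increment starting at zero, wrapping at 256) is an assumption for demonstration.

```c
#include <stdint.h>
#include <stddef.h>

/* Fill a packet payload with an incrementing byte pattern, similar to
 * what the pkt_gen_ilk block is described to generate. The starting
 * value and wrap behavior are assumptions for illustration. */
static void fill_incrementing(uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] = (uint8_t)i;   /* wraps to 0 every 256 bytes */
}
```

A packet checker such as pkt_chk0/pkt_chk1 can verify received data against the same pattern byte by byte.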
These two buffers source the traffic for the 100GbE interface. The eth_tx_ctl module selects packets from each of the two buffers. In the simulation testbench, the 100GbE transmit serial links are connected to the receive serial links, performing an external serial loopback. On hardware, the loopback is accomplished by the use of a fiber loopback cable with a CFP optical module inserted into the Stratix V GX 100G Development Board's CFP cage.
The received 100GbE frames are pushed into the eth_chk0_buf or eth_chk1_buf buffers by control logic which alternates between them. Each one of the two buffers is read by a corresponding packet checker, pkt_chk0 and pkt_chk1. This completes the full packet flow for traffic generated by the internal packet generator, pkt_gen_ilk.
The other source of traffic is the PCIe Gen3 x2 interface. Packets are assembled in the pcie_pkt_buf buffer by performing PCIe write transactions into it. The following table shows the address map of the PCIe Packet Buffer and its control.
|Address||Field||Description|
|10’h000 – 10’h0FF||Packet Buffer||4 KB buffer accessible on a 64-byte word basis|
|10’h100||Control Register||[31:16] – size of packet in bytes; [15:1] – unused; [0] – Packet_Ready|
The PCIe Packet Buffer is a 4 KB RAM organized in 512-bit words. Each word is 512 bits (64 bytes) wide. Every PCIe write into the buffer causes the 4-byte write data to be replicated 16 times, assembling a 64-byte word. This 64-byte word is then written into the PCIe Packet Buffer on a 64-byte address boundary. This was done to simplify the functionality of the demonstration design. In a real application, the PCIe Packet Buffer would provide true byte addressing, allowing the user to write to any individual byte of the buffer's 4096 bytes.
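The replication behavior above can be modeled in a few lines. This is a software sketch of what the buffer's write path is described to do, not the actual RTL; the function name is ours.

```c
#include <stdint.h>
#include <string.h>

/* Model the PCIe Packet Buffer write path: each 4-byte PCIe write is
 * replicated 16 times to assemble one 64-byte (512-bit) word. In the
 * hardware this happens in the write datapath; here it is modeled as
 * a plain memory copy for illustration. */
static void replicate_write(uint8_t word64[64], uint32_t data)
{
    for (int i = 0; i < 16; i++)
        memcpy(&word64[i * 4], &data, sizeof data);
}
```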
Once a packet is assembled in the PCIe Packet Buffer, the Packet_Ready bit in the control register at offset 0x100 is set to 1’b1. The flow of the packet is then controlled by the ilk_tx_ctl controller, which enables the packet to be transmitted out of one of the two Interlaken interfaces.
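The value written to the control register can be composed as follows. The field layout is taken from the address-map table above; the helper name is ours.

```c
#include <stdint.h>

/* Build the value written to the control register at offset 10'h100:
 * bits [31:16] hold the packet size in bytes, bit [0] is Packet_Ready,
 * and bits [15:1] are unused. */
static uint32_t ctrl_reg_value(uint16_t pkt_size_bytes, int packet_ready)
{
    return ((uint32_t)pkt_size_bytes << 16) | (packet_ready ? 1u : 0u);
}
```

For a 1024-byte packet with Packet_Ready set, the host would write 0x04000001 to offset 0x100.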
Figure 2.3 Control Flow
The PCIe interface supports control and configuration of the example design. The PCIe core is instantiated as a component in a QSYS subsystem. This subsystem has six bridges which export their Avalon-MM master interfaces. The following is the breakdown of the six bridges and their address mapping.
|Bridge||Base Address||Target Avalon-MM Slave|
|mm_bridge_3||0x0200_0000||PCIe Packet Buffer|
|mm_bridge_5||0x0000_0000||xcvr reconfig controller|
Using the bridges, you can access each IP’s address space. Please refer to the individual IP user guides for a complete list of their respective address spaces; links to the IP user guides are provided in Section 10, References. As discussed above, the PCIe Packet Buffer is used for the assembly of packets via the PCIe interface. The other module with an Avalon-MM slave is the stats module, which collects all of the individual IP statuses; these can be accessed through the PCIe interface. Please refer to the stats.v file for a description of the statistics and their address mapping.
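Addressing through the bridges is a simple base-plus-offset scheme. Only the two base addresses listed in the table above come from the design; the constants' names and the helper are illustrative.

```c
#include <stdint.h>

/* Bridge base addresses from the address-map table above. */
#define MM_BRIDGE_3_BASE 0x02000000u /* PCIe Packet Buffer      */
#define MM_BRIDGE_5_BASE 0x00000000u /* xcvr reconfig controller */

/* Compose the full Avalon-MM address seen through a bridge:
 * bridge base plus the register offset inside the target IP. */
static uint32_t mm_addr(uint32_t bridge_base, uint32_t offset)
{
    return bridge_base + offset;
}
```

For example, the PCIe Packet Buffer's control register at offset 0x100 is reached at 0x0200_0100 through mm_bridge_3.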
The design is packaged in the file “multi_ip_core_example_df.zip”. After extracting the design, you will see the following directory structure.
Figure 3.0 Design Organization
Following is a brief description of each of the folders.
Contains the top-level design, top.v, as depicted in Figure 2.2, which includes the instantiation of the different IP cores, the QSYS subsystem Control_Unit, the Packet Generator, the Packet Checker, the Stats module, as well as the bridging logic.
This folder contains the RAM and FIFO MegaCores used in the buffers of the bridging modules.
This folder contains the top-level wrapper of the 100GbE IP core and all of its supporting files.
This folder contains the top-level wrapper of the 100G Interlaken IP core and all of its supporting files.
This folder contains the top-level wrapper of the Transceiver Reconfiguration Controller and all of its supporting files.
This folder contains the QSYS subsystem, Control_Unit.
This folder contains the testbench, tb.v.
This folder contains the simulation scripts that execute the testbench using ModelSim.
This folder contains the simulation scripts that execute the testbench using Synopsys VCS.
This folder contains the simulation scripts that execute the testbench using Cadence NCSIM.
This is the project folder for Quartus II. It contains the top.qpf, top.qsf, and top.sdc files for the design. Quartus II is launched from this folder to perform the full compilation of the design.
Using the Clock Control utility, program the on-board clocks as follows:
If you do not have the Clock Control utility, you must install the Altera 100G Development Kit, version 11.1.2.
Verify that the following packet counters are incrementing.
|Version||Date||Description|
|1.0||September 2014||Initial Release|