



This application note describes a design that instantiates multiple Altera IP MegaCores. The design demonstrates how to generate, organize, simulate, and compile a complete system built from several Altera IP cores, with emphasis on the overall design flow and on specific pitfalls to watch for when combining multiple IP cores. For this reason, the bridging logic between the Altera IP MegaCores is not representative of any real application; you can substitute your own application logic between these IP cores to build your own system design in Altera FPGAs.

The design was generated and tested with the following software tools and hardware platforms.

  • Quartus II software version 14.0
  • 100G Interlaken IP version 14.0
  • 100GbE IP version 14.0
  • Gen3 PCIe x2 Hard IP version 14.0
  • Synopsys VCS, version 2013.06-SP1
  • Cadence NCSIM, version 13.10-s012
  • Mentor Graphics ModelSim-SE, version 10.2c
  • Altera Stratix® V GX 100G Development Board with device 5SGXEA7N2F45C2

Note: This Wiki entry is a short summary of the design. Please refer to the full application note document located in the downloads section for more details.


High Level Description

[Image: Multi_ip_block_diagram.png]

Figure 2.1 High Level Block Diagram 

Figure 2.1 shows a high level block diagram of the design. The design instantiates the following Altera MegaCores:


PCIe Gen3 x2 Hard IP

The PCIe MegaCore is instantiated as part of a QSYS subsystem. Using QSYS to build a PCIe subsystem allows generation of a root-port bus functional model (BFM), which can be used to bring up the PCIe link in a simulation environment.


  • Gen 3 (8 GT/s) x2 (Link Layer and Transaction Layer controller and PCS IP)
  • Hardened protocol stack – TL and DLL
  • Reference Clock: 100 MHz
  • Avalon-MM Application Interface
  • Endpoint configuration
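As a sanity check on the link configuration above, the raw serial bandwidth of a Gen3 x2 link follows from the per-lane rate and the 128b/130b line encoding. This is illustrative arithmetic only; protocol overhead (TLP/DLLP framing, flow control) reduces the usable rate further.

```python
# Raw serial bandwidth of the PCIe Gen3 x2 link used by this design.
# Gen3 signals at 8 GT/s per lane with 128b/130b line encoding.
GT_PER_LANE = 8.0          # GT/s per lane
LANES = 2                  # x2 link
ENCODING = 128.0 / 130.0   # 128b/130b line code

raw_gbps = GT_PER_LANE * LANES * ENCODING
print(f"{raw_gbps:.2f} Gb/s")   # raw rate before TLP/DLLP overhead
```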

100G Interlaken Cores (2 instantiations)


  • 125 Gbps data throughput rate
  • Soft MAC IP and Hard PCS+PMA (PHY) IP
  • 12 Lanes X 10.3125 Gbps External Interface
  • Reference Clock: 412.50 MHz
  • System Clock (User Interface) : 312.50 MHz
  • Packet Mode
  • In-Band Flow Control
  • MetaFrame length: 2048 x 8-byte words

100 Gbps Ethernet (100GbE)


  • 100Gb Ethernet Soft MAC and soft PCS + hard PMA (PHY) IP
  • 10 Serial Lanes @ 10.3125 Gbps External Interface
  • Avalon-ST® MAC to user interface

PCIe Transceiver Reconfiguration Controller

The PCIe serial transceiver reconfiguration controller is instantiated as part of the QSYS subsystem. It must be generated to support four PCIe serial transceiver channels.

Interlaken and Gigabit Ethernet Transceiver Reconfiguration Controller

This transceiver reconfiguration controller supports 48 serial transceiver channels, organized as follows.

  • 20 channels for 100GbE
  • 14 channels for Interlaken Instance 0
  • 14 channels for Interlaken Instance 1

The number of channels required of the transceiver reconfiguration controller is explained in Section 2.6 of the full application note document, located in the downloads section of this page.
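The channel budget above can be tallied directly. This is a trivial check; the reasoning behind the per-core counts is in Section 2.6 of the application note.

```python
# Channels served by the Interlaken/Ethernet transceiver
# reconfiguration controller, per the breakdown above.
channels = {
    "100GbE": 20,
    "Interlaken instance 0": 14,
    "Interlaken instance 1": 14,
}
total = sum(channels.values())
print(total)  # 48
```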

Packet Flow

[Image: Multi_ip_packet_flow.png]

Figure 2.2 Packet Flow Block Diagram

The pkt_gen_ilk block starts generating packets when ilk_0 and ilk_1 both assert their “itx_ready” signals, and then generates packets continuously. Each packet defaults to 1024 bytes and carries an incrementing data pattern. The ilk_tx_ctl module selects the source of the traffic, pcie_pkt_buf or pkt_gen_buf, and the target Interlaken interface, ilk_0 or ilk_1. In the testbench, the Interlaken transmit serial links are connected to the receive serial links, performing an external serial loopback. On hardware, the loopback is accomplished with AirMax connector loopback cards at the Interlaken ports. The traffic is looped back to ilk0_eth_buf from ilk_0 or to ilk1_eth_buf from ilk_1.
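A software model of the default generator traffic can be handy when writing checkers. The sketch below is hypothetical Python mirroring the description of pkt_gen_ilk (1024-byte packets with an incrementing data pattern), not the RTL itself.

```python
def gen_packet(length=1024, start=0):
    """Build one generator-style packet: `length` bytes of an
    incrementing pattern, wrapping at 256 (software model only)."""
    return bytes((start + i) & 0xFF for i in range(length))

pkt = gen_packet()                      # default 1024-byte packet
assert pkt[:4] == b"\x00\x01\x02\x03"   # incrementing pattern
```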

These two buffers source traffic for the 100GbE interface; the eth_tx_ctl module selects packets from each of them. In the simulation testbench, the 100GbE transmit serial links are connected to the receive serial links, performing an external serial loopback. On hardware, the loopback is accomplished with a fiber loopback cable on a CFP optical module inserted into the Stratix V GX 100G Development Board’s CFP cage.

The received 100GbE frames are pushed into the eth_chk0_buf or eth_chk1_buf buffers by control logic which alternates between them. Each one of the two buffers is read by a corresponding packet checker, pkt_chk0 and pkt_chk1. This completes the full packet flow for traffic generated by the internal packet generator, pkt_gen_ilk.
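The steering into eth_chk0_buf and eth_chk1_buf is effectively a two-way round-robin. A minimal software model of that control logic (hypothetical, for illustration only):

```python
def steer(frames):
    """Alternate received frames between two checker buffers,
    mirroring the control logic described above (software model)."""
    eth_chk0_buf, eth_chk1_buf = [], []
    for i, frame in enumerate(frames):
        (eth_chk0_buf if i % 2 == 0 else eth_chk1_buf).append(frame)
    return eth_chk0_buf, eth_chk1_buf

b0, b1 = steer(["f0", "f1", "f2", "f3", "f4"])
print(b0, b1)  # ['f0', 'f2', 'f4'] ['f1', 'f3']
```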

The other source of traffic is through the PCIe Gen3 x2 interface. Via the PCIe interface, packets can be assembled in the pcie_pkt_buf buffer. This is done by performing PCIe write transactions into the pcie_pkt_buf buffer. The following table shows the address map of the PCIe Packet Buffer and its control.

  Address             Register           Description
  10’h000 – 10’h0FF   Packet Buffer      4 KB buffer accessible on a 64-byte word basis
  10’h100             Control Register   [31:16] size of packet in bytes
                                         [15:1]  unused
                                         [0]     Packet_Ready for transmission; self-clearing

The PCIe Packet Buffer is organized as a RAM of 512-bit (64-byte) words, 4096 bytes in total. Every PCIe write into the buffer causes the 4-byte write data to be replicated 16 times, assembling a 64-byte word, which is then written into the buffer on a 64-byte address boundary. This was done to simplify the demonstration design. In a real application, the PCIe Packet Buffer would provide a true byte address, allowing the user to write into any individual byte of the buffer’s 4096 bytes.
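The replication behavior can be modeled in a few lines. This is hypothetical Python mirroring the description above, not the RTL.

```python
def assemble_word(write_data: bytes) -> bytes:
    """Replicate one 4-byte PCIe write 16x to form the 64-byte
    word that lands in the PCIe Packet Buffer (software model)."""
    assert len(write_data) == 4
    return write_data * 16

word = assemble_word(b"\xde\xad\xbe\xef")
assert len(word) == 64   # one full 64-byte buffer word
```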

Once a packet is assembled in the PCIe Packet Buffer, the Packet_Ready bit in the control register at 10’h100 is set to 1’b1. The ilk_tx_ctl controller then controls the flow of the packet, enabling it to be transmitted out of one of the two Interlaken interfaces.
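Putting the address map together, a host-side launch sequence might look like the sketch below. The register-access primitive (`pcie_write32`), the little-endian byte order, and the one-address-per-64-byte-word addressing are assumptions for illustration; only the register offsets and the control-register layout come from the table above.

```python
PKT_BUF_BASE = 0x000   # 10'h000 - 10'h0FF: PCIe Packet Buffer
CTRL_REG     = 0x100   # 10'h100: control register

def send_packet(pcie_write32, payload: bytes):
    """Assemble `payload` in the packet buffer, then set the size
    field and the self-clearing Packet_Ready bit (host-side sketch)."""
    # One 32-bit write per 64-byte buffer word; the hardware
    # replicates each 4-byte write across the full word.
    for addr, offset in enumerate(range(0, len(payload), 64)):
        chunk = payload[offset:offset + 4]
        pcie_write32(PKT_BUF_BASE + addr, int.from_bytes(chunk, "little"))
    # [31:16] packet size in bytes, [0] Packet_Ready.
    pcie_write32(CTRL_REG, (len(payload) << 16) | 0x1)

# Capture the write sequence with a stub instead of real hardware.
writes = []
send_packet(lambda a, d: writes.append((a, d)), b"\x00" * 128)
```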

Control Flow

[Image: Multi_ip_control_flow.png]

Figure 2.3 Control Flow

The PCIe interface supports the control and configuration of the example design. The PCIe core is instantiated as a component in a QSYS subsystem. This subsystem has six bridges that export their Avalon-MM master interfaces. The following is the breakdown of the bridges and their address mapping.

  Bridge        Base Address   Target Avalon-MM Slave
  mm_bridge_3   0x0200_0000    PCIe Packet Buffer
  mm_bridge_5   0x0000_0000    Transceiver reconfiguration controller

Using the bridges, you can access each IP’s address space. Refer to the individual IP user guides for a complete list of their respective address maps; links to the user guides are provided in Section 10, References. As discussed above, the PCIe Packet Buffer is used for the assembly of packets via the PCIe interface. The other module with an Avalon-MM slave is the stats module, which collects the individual IP statuses so they can be read over the PCIe interface. Refer to the stats.v file for a description of the statistics and their address mapping.
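For example, reaching a register behind one of the bridges is just base-plus-offset arithmetic. The helper below is hypothetical; only the two base addresses listed in the table above are shown.

```python
# Bridge windows exported by the QSYS subsystem (from the table above).
BRIDGE_BASE = {
    "mm_bridge_3": 0x0200_0000,  # PCIe Packet Buffer
    "mm_bridge_5": 0x0000_0000,  # transceiver reconfig controller
}

def target_addr(bridge: str, offset: int) -> int:
    """Host-visible address of `offset` within a bridge's window."""
    return BRIDGE_BASE[bridge] + offset

print(hex(target_addr("mm_bridge_3", 0x100)))  # control register via mm_bridge_3
```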

Design Organization

The design is packaged in the file “”. After extracting it, you will see the following directory structure.

[Image: Multi_ip_design_organization.png]

Figure 3.0 Design Organization

Following is a brief description of each of the folders.


Contains the top level design, top.v, as depicted in Figure 2.2, which includes the instantiation of the different IP cores, the QSYS subsystem Control_Unit, the Packet Generator, the Packet Checker, the Stats module, as well as the bridging logic.


This folder contains the RAM and FIFO MegaCores used in the buffers of the bridging modules.


This folder contains the top-level wrapper of the 100GbE IP core and all of its supporting files.


This folder contains the top-level wrapper of the 100G Interlaken IP core and all of its supporting files.


This folder contains the top-level wrapper of the Transceiver Reconfiguration Controller and all of its supporting files.


This folder contains the QSYS sub system, Control_Unit.


This folder contains the testbench, tb.v.


This folder contains the simulation scripts that execute the testbench using ModelSim.


This folder contains the simulation scripts that execute the testbench using Synopsys VCS.


This folder contains the simulation scripts that execute the testbench using Cadence NCSIM.


This is the project folder for Quartus II. It contains the top.qpf, top.qsf, and top.sdc files for the design. From this folder Quartus II is launched to perform the full compilation of the design.

Download design

Link to full application note

File:Multi ip core example

Run the design

Prepare Hardware

  • Connect the AirMax loopback cards to the AirMax connectors on the board.
  • Insert the CFP module into the CFP cage.
  • Insert the loopback fiber in the CFP module.
  • Connect the USB cable to your PC (Make sure that you have the USB-BlasterII driver installed).
  • Connect the power supply to the board.
  • Set the DIP switches to the positions shown in the diagram.
  • Power ON the board.

Program the on-board clocks

Using the Clock Control utility, program the on-board clocks as follows:

  • U22: CLK1: 322.265625 MHz (100GbE Reference Clock)
  • U44: CLK2: 412.50 MHz (100G Interlaken 0 Reference Clock)
  • U44: CLK1: 412.50 MHz (100G Interlaken 1 Reference Clock)
  • U53: CLK2: 100 MHz (PCIe Reference Clock)
  • U53: CLK3: 312.50 MHz (System clock)

If you do not have the Clock Control utility, you must install the Altera 100G Development Kit installer, version 11.1.2.

Program the FPGA

  • Open Quartus II 14.0.
  • Open the project top.qpf, located in the par folder.
  • Open SignalTap II.
  • Program the FPGA with top.sof.

Verify that the following packet counters are incrementing.

  • ilk0_rx_pkts
  • ilk1_rx_pkts
  • eth_tx_pkts
  • eth_rx_pkts


References

100G Interlaken MegaCore Function User Guide

100-Gbps Ethernet MAC and PHY MegaCore Function User Guide

Stratix V Avalon-MM Interface for PCIe Solutions User Guide

Stratix V Transceiver User Guide

Known Issues

  • Traffic generation via PCIe is currently not supported
  • Stats gathering and reporting via PCIe is currently not supported


  Version   Date             Description
  1.0       September 2014   Initial Release

Last updated: 06-27-2019