
Clear explanation of the system interconnect fabric

Honored Contributor II



I'm a newbie with Altera boards and am using the DE1-SoC.


I'm confused about the system interconnect fabric.


As far as I know, the FPGA and HPS are connected via bridges (HPS-to-FPGA, FPGA-to-HPS, and lightweight HPS-to-FPGA).


Also, according to the HPS block diagram in the Cyclone V Hard Processor System manual (Figure 1-4), the bridges are connected to the L3 interconnect (NIC-301).


And I found that the NIC-301's role is similar to that of a router or switch.


Besides, Avalon-MM peripherals are connected via the 'system interconnect fabric'.


So here is my confusion: if I connect Avalon-MM slaves such as a PIO to the HPS, what is the relationship between the 'system interconnect fabric', the bridges, and the L3 interconnect?


Looking at that same block diagram in the Cyclone V Hard Processor System manual, I don't know where the 'system interconnect fabric' fits in.


Thanks in advance.
Honored Contributor II



I guess we are talking about a Qsys-generated system?


If so, the system interconnect fabric is added by Qsys automatically to meet your system specifications, and you can influence it in all sorts of places (e.g. you can add pipeline stages to reach a higher Fmax, or your custom components can use wait states; all of this is handled in the automatically generated system interconnect fabric).

Additionally, in an HPS-based device, all the bridges you are talking about are natively ARM AXI based. That means that if your custom component in the FPGA logic uses an Avalon interface, you can still connect it to the HPS, and the system interconnect logic takes care of translating between AXI and Avalon.


Over the last two years, I have built a pretty big system in a Cyclone V device.

All my modules are "custom components" in Qsys that I connect almost exclusively to the LWHPS2FPGA bridge.

All these components (except one) are Avalon-MM slaves, so I have a lot of interconnect logic generated by Qsys.


I never worried about the NIC-301 at all. In the beginning I went through all the block diagrams myself, so I'm aware of its presence, but for a functioning system (e.g. one running Linux on the HPS) you do not have to deal with it yourself.

When I want to access my custom components from the Linux command line (to test a component), I just use the tiny tool devmem2 with the address "LWHPS2FPGA bridge base" + "custom component base address" + "custom component register offset".
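As a concrete sketch of that address calculation: on the Cyclone V, the lightweight HPS-to-FPGA bridge is mapped at 0xFF200000 in the HPS address space; the component base (here 0x10) and register offset (here 0x0) are hypothetical values you would read out of your own Qsys address map.

```shell
# LWHPS2FPGA bridge base in the Cyclone V HPS address map
BRIDGE_BASE=0xFF200000
# Hypothetical: your component's base address assigned in Qsys
COMPONENT_BASE=0x10
# Hypothetical: offset of the register you want to poke
REG_OFFSET=0x0

ADDR=$(( BRIDGE_BASE + COMPONENT_BASE + REG_OFFSET ))
printf 'physical address: 0x%X\n' "$ADDR"

# On the board, as root, you would then do something like:
#   devmem2 0xFF200010 w             # read the 32-bit register
#   devmem2 0xFF200010 w 0x12345678  # write the 32-bit register
```

The devmem2 invocations are comments because they only make sense on the target board with root access; the arithmetic above is what you do on paper (or in a script) first.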

Later on, I wrote Linux device drivers for all of my custom components. 


So basically, to get things working, you just have to concentrate on your custom components: provide each one with either an Avalon or AXI interface and connect it to the HPS bridge of your choice in Qsys.
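To give an idea of how small such a custom component can be, here is a minimal sketch of an Avalon-MM slave in Verilog (module and port names are my own; only a single 32-bit read/write register, with registered readdata, i.e. a read latency of one cycle to declare in the Qsys component editor):

```verilog
// Minimal Avalon-MM slave sketch: one 32-bit register.
// Hypothetical names; adapt the signal roles in the Qsys component editor.
module avalon_mm_reg (
    input  wire        clk,
    input  wire        reset_n,
    input  wire        avs_write,      // Avalon-MM write strobe
    input  wire        avs_read,       // Avalon-MM read strobe
    input  wire [31:0] avs_writedata,
    output reg  [31:0] avs_readdata    // registered: readLatency = 1
);
    reg [31:0] data_reg;

    always @(posedge clk or negedge reset_n) begin
        if (!reset_n) begin
            data_reg     <= 32'h0;
            avs_readdata <= 32'h0;
        end else begin
            if (avs_write)
                data_reg <= avs_writedata;
            if (avs_read)
                avs_readdata <= data_reg;
        end
    end
endmodule
```

In the Qsys component editor you wrap this module as a new component, map the avs_* ports to an Avalon-MM slave interface, and then connect that interface to the bridge; Qsys generates the surrounding interconnect fabric for you.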

After that, you only need to know the base address of the HPS bridge you used and your component's addresses, and you are good to go.


After two years of sometimes frustrating experiences (because of the huge Altera documentation that, in my opinion, tells you almost nothing in the end), I figured out that it really is that easy and that it all works remarkably well in the end.


To answer your last question: since the system interconnect fabric lives in the FPGA (it is generated by Qsys), you will not find it in the HPS manual's block diagram.





I am trying to connect my Verilog modules to the HPS.

Can you explain, or suggest resources on, how to do that?


Also, how do I create custom components in Qsys that follow the Avalon interconnect protocols?
