We have a DDR3 memory with an internal interface width of 512 bits, running at a 200 MHz Avalon bus speed. We want to connect several master components to this memory through its single Avalon slave port, and we are using bursting at both the memory interface and all master interfaces.
The individual master components have lower bandwidth requirements than what the memory can handle; we are fine running them at, for example, 64 bits and a 200 MHz Avalon bus speed.
What we do not know is how the fabric behaves in terms of bandwidth: do the master devices have to run at the same bandwidth (width × frequency) as the memory interface to utilize the DDR3 efficiently (at full speed), or does the interconnect convert the slower bursts from the narrower master interfaces into bursts that are transferred to and from the memory at the full memory interface bandwidth? If the latter is the case, we can run each of these slower interfaces at the speed it actually needs; otherwise, we have to run everything at full speed even where we do not need to.
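To put rough numbers on this (just back-of-the-envelope arithmetic; the widths and the clock are from our setup, everything else is illustration):

```python
# Back-of-the-envelope bandwidth arithmetic for the setup described above.
# The 512-bit / 64-bit widths and the 200 MHz clock come from our setup;
# the rest is purely illustrative.

AVALON_CLOCK_HZ   = 200e6   # Avalon bus clock
SLAVE_WIDTH_BITS  = 512     # DDR3 controller local (Avalon) data width
MASTER_WIDTH_BITS = 64      # width we would like to use per master

slave_bw_gbs  = SLAVE_WIDTH_BITS  * AVALON_CLOCK_HZ / 8 / 1e9   # 12.8 GB/s
master_bw_gbs = MASTER_WIDTH_BITS * AVALON_CLOCK_HZ / 8 / 1e9   #  1.6 GB/s

print(f"Memory-side bandwidth : {slave_bw_gbs:.1f} GB/s")
print(f"Per-master bandwidth  : {master_bw_gbs:.1f} GB/s")
print(f"64-bit masters needed to saturate the memory: "
      f"{slave_bw_gbs / master_bw_gbs:.0f}")                    # 8
```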
Could you please clarify what the actual interconnect behavior is?
Thank you
Hi Sir,
For the data width, you have to use the full 512 bits to fully utilize the bandwidth. The interconnect will not expand the data width to match the memory's local interface data width.
This means that if you use 64 bits, you only use 1/8 of the bandwidth of the DDR3 device. I believe the external interface data width is 64 bits, so if you run a simulation, you will only see 8 bits of the DQ bus toggling.
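For reference, the arithmetic behind this claim looks roughly like this (a sketch only, assuming a 64-bit external DQ bus and no width adaptation in the interconnect; whether that assumption holds is discussed below):

```python
# Arithmetic behind the statement above, assuming a 64-bit external DQ bus
# and that only a 64-bit slice of each 512-bit local word is ever used.

LOCAL_WIDTH_BITS  = 512   # controller local (Avalon) data width
MASTER_WIDTH_BITS = 64    # master data width
DQ_PINS           = 64    # assumed external DDR3 DQ width

used_fraction = MASTER_WIDTH_BITS / LOCAL_WIDTH_BITS   # 1/8
active_dq     = DQ_PINS * used_fraction                # 8 pins

print(f"Fraction of memory bandwidth used : {used_fraction:.3f}")  # 0.125
print(f"DQ pins expected to toggle        : {active_dq:.0f}")      # 8
```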
Hi all,
The Intel "Introduction to Platform Designer", slide 4.7 provides following partial answer to the question:
'If a 16-bit component is connected to the 32-bit component, a width adapter is automatically added and data is handled appropriately." It is from the example so I assume, that it holds true for other width sizes as well.
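As a purely conceptual illustration of what such a width adapter does (a toy model using the slide's 16-bit/32-bit example, not Platform Designer's actual adapter logic), the narrow beats are packed into wider words on the way to the wide interface:

```python
# Toy model of a width adapter: pack narrow write beats into wide words.
# This only illustrates the concept from the slide; it is not the actual
# Platform Designer adapter implementation.

def widen(beats, narrow_bits=16, wide_bits=32):
    """Pack little-endian narrow beats into wide words."""
    ratio = wide_bits // narrow_bits
    words = []
    for i in range(0, len(beats), ratio):
        word = 0
        for lane, beat in enumerate(beats[i:i + ratio]):
            word |= beat << (lane * narrow_bits)
        words.append(word)
    return words

# Four 16-bit beats from the narrow master become two 32-bit words
# at the wide slave.
print([hex(w) for w in widen([0x1111, 0x2222, 0x3333, 0x4444])])
# ['0x22221111', '0x44443333']
```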
Jan
We are aware that the width is converted automatically; what we do not know is whether the interconnect locks the slave device for the entire duration of the master device's transfer, or whether the slave is occupied only for the time needed to transfer the data at the slave interface. In other words, does a burst transaction at the fast slave interface take the same amount of time as the corresponding transaction on the slow master interface? There are also some Qsys settings that look relevant to this situation (the per-burst-type converter; Platform Designer Guide, 3.1.9.1, Burst Adapter Implementation Options); could you please explain how these apply to our scenario as well?
BCT_Intel: Your answer does not make much sense to us; could you please elaborate? Connecting a 64-bit master to a DDR3 with a 512-bit (internal) interface width will likely not result in only 8 DQ signals toggling(?)
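To make the question precise, here is a toy model of the two possible behaviors (the burst size is an assumption for illustration; which behavior the interconnect actually implements is exactly what we are asking):

```python
# Toy timing model of the two possible interconnect behaviors asked about
# above. Purely illustrative -- it does not claim to model what Platform
# Designer actually does; that is the open question.

MASTER_WIDTH = 64     # bits
SLAVE_WIDTH  = 512    # bits
BURST_BYTES  = 1024   # example burst size (an assumption)

master_beats = BURST_BYTES * 8 // MASTER_WIDTH   # 128 beats at the master
slave_beats  = BURST_BYTES * 8 // SLAVE_WIDTH    # 16 beats at the slave

# Behavior A: the slave is held for the whole master-side transfer,
# so the memory port is busy for every master-side beat.
slave_cycles_if_locked = master_beats

# Behavior B: the adapters repack the burst and the slave is busy only
# for the wide beats, leaving the remaining cycles for other masters.
slave_cycles_if_repacked = slave_beats

print(f"Master-side beats per burst              : {master_beats}")
print(f"Slave cycles if locked for full transfer : {slave_cycles_if_locked}")
print(f"Slave cycles if bursts are repacked      : {slave_cycles_if_repacked}")
```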
F.
FRoth, I'm sorry, I was not aware that the interconnect has such a feature to convert the width. My previous understanding was that the interconnect is a 1:1 connection for its data. Thanks, JKastil, for the correction; I learned something new about the interconnect today.
FRoth, regarding your usability scenario, we do not have an accurate answer for it. I would suggest running a simulation and checking the outcome.