
Address range confusion for an AVMM Host connected to multiple AVMM Agents

UserID4331231
New Contributor I

UserID4331231_0-1752862002615.png

UserID4331231_1-1752862305464.png

 

Attached are screenshots from Platform Designer. This is the BAS+MCDMA example design, and I am trying to add an AVMM FIFO for my use case.

 

I have an AVMM host on the BAR Interpreter module, which is connected to two AVMM agent ports (on the BAM memory and the BAS TGC module) in the original example design. For my use case I want to connect an AVMM FIFO as a third AVMM agent. My FIFO "depth" is 16 and its "data width" is set to 256.

I was able to make the connection in Platform Designer, as you can see in the first screenshot. Analyzing the address map in the second screenshot, I see the FIFO's AVMM agent port is assigned a 32-byte address range.

My questions, assuming I am writing my own logic to drive this AVMM host interface, are:

  1. The AVMM BAM master has a 512-bit writedata bus, whereas my FIFO is 256 bits wide; the other AVMM agent interface has 512-bit writedata. Is this a valid configuration? I mean, there is a writedata width mismatch across agents.
  2. To issue writes to the FIFO, what address does the AVMM host need to drive? I see a 32-byte address range, 0x0 to 0x1F.
    • Can I use any offset between 0x0 and 0x1F and drive data on writedata[255:0] with the proper byteenables, or does the offset need to be 0x0?
    • The AVMM agent port on the FIFO is 256 bits wide and the AVMM host writedata is 512 bits wide, so I am thinking that if I use offset 0x0, drive 512 bits of data, and set byteenable to {32{1'b1}}, it would work. Is that a correct understanding?
  3. I am curious how the AVMM host handles waitrequest from multiple AVMM agents.
    • I mean, the host interface only has one waitrequest input,
    • and there is one waitrequest from each agent. If an agent asserts waitrequest, it only means that agent is busy; the other agents are free and can serve the AVMM host's requests.
    • Is Quartus auto-generating such bus management logic for each AVMM bus (one AVMM host and one or more AVMM agents)?

 

 

 

RichardTanSY_Altera

1. Yes, it is valid for Avalon MM agents to have different data widths when connected to a single Avalon MM host in Platform Designer. The interconnect logic generated by Platform Designer automatically handles data width adaptation between the host and each agent.

https://www.intel.com/content/www/us/en/docs/programmable/683609/25-1/width-adaptation-84420.html
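
As a rough, conceptual sketch of what that adaptation amounts to for a 512-bit host writing a 256-bit agent (this is only an illustration, not the RTL that Platform Designer actually generates; the package, type, and function names below are invented): one host write beat becomes two agent beats, with the lower byte lanes mapping to the lower agent word.

package avmm_width_adapt_sketch_pkg;
  // One 256-bit beat as seen by the narrower agent.
  typedef struct packed {
    logic [255:0] data;
    logic [31:0]  byteenable;
  } agent_beat_t;

  // Split one 512-bit host beat into two 256-bit agent beats.
  function automatic void split_host_beat (
    input  logic [511:0] host_data,
    input  logic [63:0]  host_byteenable,
    output agent_beat_t  beats [2]
  );
    for (int i = 0; i < 2; i++) begin
      beats[i].data       = host_data[i*256 +: 256];    // beat 0 carries bytes 0..31
      beats[i].byteenable = host_byteenable[i*32 +: 32]; // byte lanes follow the data
    end
  endfunction
endpackage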


2. Can I use any offset between 0x0 and 0x1F and drive data on writedata[255:0] with the proper byteenables, or does the offset need to be 0x0?

You can use any aligned offset within this range to issue writes; in practice, for a 256-bit agent with a 32-byte span, that means offset 0x0. Check the Avalon-MM Agent Addressing section:

https://www.intel.com/content/www/us/en/docs/programmable/683091/22-3/mm-agent-addressing.html


The AVMM agent port on the FIFO is 256 bits wide and the AVMM host writedata is 512 bits wide, so I am thinking that if I use offset 0x0, drive 512 bits of data, and set byteenable to {32{1'b1}}, it would work. Is that a correct understanding?

The width adapter will handle this conversion for you. The host can still issue 512-bit writes, and the adapter will split them into two 256-bit writes internally. So yes, your conceptual understanding is correct, but you do not manually control the two 256-bit transactions; the Platform Designer-generated logic does that. During host write transfers, the interconnect automatically asserts the byteenable signals so that data is written only to the specified agent byte lanes.
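
To make the host side concrete, below is a minimal SystemVerilog sketch of one write to the FIFO agent. It is only an illustration under assumptions: the avmm_* signal names and FIFO_BASE are invented rather than taken from the example design, and the FIFO agent is assumed to be mapped at offset 0x0 as in your address map. Also note that a 512-bit writedata bus carries a 64-bit byteenable, so asserting only the lower 32 lanes (your {32{1'b1}}) writes just writedata[255:0], while asserting all 64 gives the full 512-bit write that the adapter splits.

module avmm_fifo_write_sketch (
  input  logic         clk,
  input  logic         rst_n,
  input  logic         start,             // pulse to launch one FIFO write
  input  logic [255:0] payload,           // one 256-bit FIFO entry
  // Avalon-MM host signals toward the Platform Designer interconnect
  output logic [63:0]  avmm_address,
  output logic         avmm_write,
  output logic [511:0] avmm_writedata,
  output logic [63:0]  avmm_byteenable,   // 512-bit data -> 64 byte lanes
  input  logic         avmm_waitrequest
);
  // Assumption: FIFO agent base address taken from the Platform Designer map.
  localparam logic [63:0] FIFO_BASE = 64'h0;

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      avmm_write      <= 1'b0;
      avmm_address    <= '0;
      avmm_writedata  <= '0;
      avmm_byteenable <= '0;
    end else if (start && !avmm_write) begin
      // Present the payload on the lower 256 bits and enable only those
      // 32 byte lanes; the upper 32 byte lanes are not written.
      avmm_address    <= FIFO_BASE;
      avmm_writedata  <= {256'b0, payload};
      avmm_byteenable <= {32'b0, {32{1'b1}}};
      avmm_write      <= 1'b1;
    end else if (avmm_write && !avmm_waitrequest) begin
      // Transfer accepted. A full-width write (all 64 byteenables set)
      // would instead be split by the width adapter into two 256-bit
      // agent transfers, as described above.
      avmm_write      <= 1'b0;
    end
    // While avmm_waitrequest is high, neither branch fires, so the
    // request is held stable, as the Avalon-MM protocol requires.
  end
endmodule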


3. Yes, Platform Designer automatically generates this interconnect logic; when multiple hosts contend for the same agent, its arbitration grants access in fairness-based, round-robin order. The single waitrequest input on your host is driven by that generated interconnect, reflecting the stall of whichever agent the current transaction addresses. You do not need to implement this logic yourself.

https://www.intel.com/content/www/us/en/docs/programmable/683609/25-1/arbitration.html
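
As an illustration of the contract this generated logic leaves to your host: the host only has to hold its request stable while its single waitrequest input is high; which agent caused the stall is deliberately invisible to it. The checker below is a sketch (reusing the hypothetical signal names from the write example above) that expresses this rule as a SystemVerilog assertion you could bind to your host logic in simulation.

module avmm_host_waitrequest_checker (
  input logic         clk,
  input logic         rst_n,
  input logic         avmm_write,
  input logic         avmm_waitrequest,
  input logic [63:0]  avmm_address,
  input logic [511:0] avmm_writedata,
  input logic [63:0]  avmm_byteenable
);
  // If a write is stalled this cycle, the same write must still be
  // presented, unchanged, in the next cycle.
  property hold_while_waitrequest;
    @(posedge clk) disable iff (!rst_n)
    (avmm_write && avmm_waitrequest) |=>
        (avmm_write && $stable(avmm_address) &&
         $stable(avmm_writedata) && $stable(avmm_byteenable));
  endproperty

  assert_hold_while_waitrequest: assert property (hold_while_waitrequest);
endmodule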


For further details, check out section 3.5, Transfers, of the Avalon® Memory-Mapped Interfaces specification:

https://www.intel.com/content/www/us/en/docs/programmable/683091/22-3/transfers.html


Regards,

Richard Tan



RichardTanSY_Altera

We noticed that we haven't received a response from you regarding our latest reply, and we will now be transitioning your inquiry to our community support. We apologize for any inconvenience this may cause and appreciate your understanding.


If you have any further questions or concerns, please don't hesitate to reach out. Please log in to https://supporttickets.intel.com/s/?language=en_US, view the details of the desired request, and post a response within the next 15 days to allow me to continue supporting you. After 15 days, this thread will be transitioned to community support, and the community users will be able to help you with your follow-up questions.




Thank you for reaching out to us!


Best Regards,

Richard Tan


