
Platform Designer interconnect: unexpected behavior

Schroeti
New Contributor I

Hi.

I have a strange problem in the memory-mapped domain of a Platform Designer system design. The system comprises 2 masters and 4 slaves.

After one of the masters issues its first write command, the interconnect fabric returns far too many writeresponsevalid assertions.

The same applies to the readdatavalid signal.

Furthermore, after the first read is issued to the interconnect, the addressed slave receives far too many reads.

The interconnect read and write responses are asserted before the slave responds.

 

I'm using Quartus Prime Pro 20.3.

Why does this happen, and how can I get the correct behavior?

Thanks
Philipp

8 Replies
sstrell
Honored Contributor III

(Host = master, agent = slave)

I'm not sure if this is part of the problem and it's a little tricky to see, but in the first two waveform pictures (issue 0 and issue 2), it looks like your host is not honoring waitrequest correctly.  The write_o enable signal, along with the address and write data, should be held as long as waitrequest is high.  Are both the host and agent your own custom components?

Schroeti
New Contributor I

In issue 1 you can see that the write signal is sampled high one clock period before waitrequest is sampled high. The Avalon-MM specification states that waitrequest is asserted asynchronously with read or write. That means that in case of a wait request they should be sampled high in the same clock cycle, right? This would also correspond to a waitrequest allowance setting of 0.

Yes, both the host and agent are custom components.

The host works as follows: a look-ahead FIFO issues the address, byteenable, writedata, read and write signals; the read and write signals are additionally ANDed with the FIFO's valid output, and the FIFO is advanced only when waitrequest is low.
To my understanding this should be correct; the host runs error-free in another system.
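For reference, here is a sketch of how such a host interface could be declared in a _hw.tcl with a waitrequest allowance of 0; the interface name, port names and widths are placeholders, not the actual component:

# Placeholder sketch of an Avalon-MM host interface declaration in a _hw.tcl.
# With waitrequestAllowance 0 (the default), address, byteenable, writedata,
# read and write have to be held stable in every cycle waitrequest is asserted.
add_interface host_av avalon start
set_interface_property host_av associatedClock clock_sink
set_interface_property host_av associatedReset reset_sink
set_interface_property host_av waitrequestAllowance 0

add_interface_port host_av address_o       address       Output 32
add_interface_port host_av byteenable_o    byteenable    Output 4
add_interface_port host_av writedata_o     writedata     Output 32
add_interface_port host_av read_o          read          Output 1
add_interface_port host_av write_o         write         Output 1
add_interface_port host_av readdata_i      readdata      Input  32
add_interface_port host_av readdatavalid_i readdatavalid Input  1
add_interface_port host_av waitrequest_i   waitrequest   Input  1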

sstrell
Honored Contributor III

Very odd.  At first I was thinking this was perhaps a pipelining issue, but you're only issuing a single read command from the host. In the issue 0 diagram, is waitrequest from the agent correct?  It keeps going high and low while the interconnect continues asserting the read enable signal to it.  Each time it releases waitrequest, the interconnect would think that valid read data was available.

Schroeti
New Contributor I

Yes, that's correct. The interconnect issue pictures show the complete memory-mapped interfaces, except reset, of the host and the agent. The host is at the top, the agent at the bottom.

After the host tries to read from the agent, the interconnect asserts its read signal continuously. After 64 read transactions the FIFO inside the agent is full and waitrequest_o is asserted. Every time a read command is taken from the FIFO, waitrequest_o is deasserted again. Meanwhile, the processed write commands are reported by the agent.

sstrell
Honored Contributor III

You say there are 2 hosts and 4 agents. The only thing I can think of at this point is interference from one of those other components. Is this a direct host-to-agent connection, or are there multiple host/agent connections with arbitration taking place somewhere?

Schroeti
New Contributor I

One of the agents is connected to both hosts; the others, including the one shown, are connected only to the host shown in the signal pictures. So yes, there is arbitration in this memory-mapped domain for one of the agents.

Next Monday I will apply Signal Tap to all hosts and agents in this system.

Schroeti
New Contributor I

It is probably also important to mention that there are two clock domains within the memory-mapped domain.

Here are the Signal Tap captures of all hosts and agents. I have divided them into two Signal Tap instances, since there are two different clocks. The SystemClock tap triggers the EmifClock tap.

To me it looks like there is no interference.

Schroeti
New Contributor I

Finally, I found the bug. It was an error in the _hw.tcl script of the new custom agent: the writeresponsevalid signal was declared as writeresponsevalid_n.
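In case someone runs into the same symptoms: the _n suffix presumably made Platform Designer treat the port as the active-low variant of the role, so the interconnect saw a "valid" write response whenever the HDL signal was actually low. A minimal sketch of the corrected declaration; the interface and port names are placeholders, not the actual component:

# Placeholder sketch of the relevant lines in the agent's _hw.tcl.
add_interface agent_av avalon end
set_interface_property agent_av associatedClock clock_sink
set_interface_property agent_av associatedReset reset_sink

# Buggy declaration: the _n role marks the port as active-low, so the
# interconnect sees a write response whenever the HDL signal is low.
# add_interface_port agent_av writeresponsevalid_o writeresponsevalid_n Output 1

# Fixed declaration: the HDL signal is active-high, so use the plain role.
add_interface_port agent_av writeresponsevalid_o writeresponsevalid Output 1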

@sstrell : Thanks for your effort!
