Hi there!
I have a Stratix III FPGA connected to a DSP, and I'm trying to simulate (via the Quartus simulator) the latency of a read command from the DSP. The FPGA is connected to a 64-bit bidirectional data bus, an address bus, CS, and read enable (the interface is synchronous). Whenever a specific address appears on the address bus together with CS and read-enable activity, the FPGA drives data onto the data bus; when any of these conditions is not met, the FPGA switches the data bus back to being an input. The logic inside the FPGA is asynchronous to the interface clock (the clock is 120 MHz).

What I have noticed is that when I issue a read to the FPGA (address, CS and read enable) for one clock cycle, the simulation shows the FPGA outputting the data two interface clock cycles later, even though the logic is coded asynchronously, so the delay should be unrelated to the clock. Is this intentional? Can I control this delay? (I'd obviously want the delay to be known and consistent for every compilation.) I'd also like to add that I'm using the Classic Timing Analyzer.
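For reference, the decode looks roughly like this (a simplified sketch with placeholder names and a placeholder address, not the actual design):

```verilog
// Simplified sketch of the asynchronous read decode (placeholder names).
// The output enable is purely combinational -- no clock is involved.
module dsp_read_decode (
    input  wire [15:0] addr,      // DSP address bus
    input  wire        cs_n,      // chip select, active low
    input  wire        rd_n,      // read enable, active low
    input  wire [63:0] read_data, // data to present to the DSP
    inout  wire [63:0] data       // 64-bit bidirectional data bus
);
    // Drive the bus only when the specific address is selected and a read is active.
    wire drive = (addr == 16'h0100) && !cs_n && !rd_n;

    // Otherwise the bus is tri-stated, i.e. acts as an input to the FPGA.
    assign data = drive ? read_data : 64'bz;
endmodule
```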
I've made some additional progress, or so I believe :)
What I think I need to do is set a minimum tpd and a maximum tpd between the control lines (address, CS, read enable) and the data lines. If I understand correctly, tpd is the propagation delay from an input pin, through the asynchronous logic, to an output pin. I believe the constraint should be, for example, between one clock period (minimum) and two clock periods (maximum). That way I should get consistent timing across PVT and from one compilation to the next. Or am I mistaken?
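Something like the following .qsf assignments is what I have in mind (a sketch only: I still need to confirm the exact assignment names and wildcard syntax in the Assignment Editor, and the signal names are placeholders):

```tcl
# Point-to-point tpd constraints for the Classic Timing Analyzer.
# 120 MHz interface -> one clock period is about 8.3 ns, two periods about 16.6 ns.
# Assignment names (TPD_REQUIREMENT / MIN_TPD_REQUIREMENT) to be verified in the Assignment Editor.
set_instance_assignment -name TPD_REQUIREMENT     "16.6 ns" -from "addr[*]" -to "data[*]"
set_instance_assignment -name MIN_TPD_REQUIREMENT "8.3 ns"  -from "addr[*]" -to "data[*]"
set_instance_assignment -name TPD_REQUIREMENT     "16.6 ns" -from "cs_n"    -to "data[*]"
set_instance_assignment -name MIN_TPD_REQUIREMENT "8.3 ns"  -from "cs_n"    -to "data[*]"
set_instance_assignment -name TPD_REQUIREMENT     "16.6 ns" -from "rd_n"    -to "data[*]"
set_instance_assignment -name MIN_TPD_REQUIREMENT "8.3 ns"  -from "rd_n"    -to "data[*]"
```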