FPGA Intellectual Property

Stratix 10: Critical Warning: DDR Timing requirements not met

Renardo18
New Contributor I

I have two DDR4 interfaces ('DDR A' and 'DDR B') in my project, and I am using two DDR4 EMIF controllers for Stratix 10.

My problem is that neither of them meets timing. The compilation report gives this kind of message for both of them (the setup and hold values differ between them).

 

Info: Core: emif_fpga_b_emif_s10_0_altera_emif_arch_nd_191_y322eui - Instance: memory_ddr4x72_wrapper_b|u1|emif_s10_0|emif_s10_0 


Info:                                                      setup    hold
Info: Address/Command (Fast 900mV 0C Model)                0.176   0.176
Info: Core (Fast 900mV 0C Model)                           0.762  -5.928
Info: Core Recovery/Removal (Fast 900mV 0C Model)          0.827   1.703
Info: DQS Gating (Fast 900mV 0C Model)                     0.530   0.530
Info: Read Capture (Fast 900mV 0C Model)                   0.036   0.036
Info: Write (Fast 900mV 0C Model)                          0.058   0.058
Info: Write Levelling (Fast 900mV 0C Model)                0.141   0.141

Critical Warning: DDR Timing requirements not met

 

1st question: How can I solve this?

 

My other problem is that when I open the Timing Analyzer GUI, I see 29 failing paths, all in DDR A. They all run from: {memory_ddr4x72_wrapper_a|u0|emif_s10_0|emif_s10_0|ecc_core|core|ecc|internal_master_wr_data[xxx]}

and -to: {memory_ddr4x72_wrapper_a|u0|emif_s10_0|emif_s10_0|arch|arch_inst|io_tiles_wrap_inst|io_tiles_inst|tile_gen[xxx].lane_gen[xxx].lane_inst|lane_inst~phy_reg1}

 

With different values of xxx.

 

Why do I see failing paths only in DDR A, and not in both DDRs, when I report timing?

Does anyone have a solution? What can I do to meet timing within the EMIF IP core? Is this timing related to the board and package skew settings that I define in the IP?

 

Thanks 

 

 

AdzimZM_Intel
Employee

Hi Renardo18,


You can try to set min delay like below:

if { ! [is_post_route]} {

set_min_delay -from [get_keepers {path] -to [path] value

}


value is 10% more than the data delay of the path.


Also, can you enable the Fast Forward compile feature?

https://www.intel.com/content/www/us/en/docs/programmable/683729/current/fast-forward-compile.html


I'm not sure why the timing violation occurs in DDR A only.

Maybe you can share the timing report or even the design so that I can check it from my side.


Thanks,

Adzim


Renardo18
New Contributor I

Hi Adzim,

Thank you for your answer.

 

Isn't there a missing '}' after the first 'path' of the command you want me to run?

 

You mention "value is 10% more than the data delay of the path"; do you mean I need to run this command for each and every failing path?

In my first question I mentioned 29 failing paths, but today, after relaunching the build yesterday, I have 75 failing paths in the DDR. So, more than yesterday, and probably some different ones.

 

Also, I did run Fast Forward recompile, and I obtained these odd results: see the attached files.

 

1_redblack.png: you can see that the last red path is a ddr_b one. The very next one, "ddr_a", is black, meaning it meets timing. This does not match what I see in the Timing Analyzer, where ddr_a fails timing and ddr_b meets it.

ddra.png and ddrb.png: when I click on each of these paths, it says "meets timing requirements: no further analysis performed".

 

I am a bit confused here. Do you have a solution?

 

AdzimZM_Intel
Employee

Hi Renardo18,


"Isn't there a missing '}' after the first 'path' of the command you want me to run?"

  • Yes, you can run it for the first path once, for testing purposes, and see whether it takes effect. After recompiling, check in the timing report whether this path still fails. If it no longer fails, you can proceed with the remaining paths.


"You mention "value is 10% more than the data delay of path", do you mean I need to run this command for each and every failing path?"

  • Yes, you have to run it for each and every path. But in Quartus, you can use the wildcard character.


"In my first question, I mentioned that there were 29 failing paths, but today, after I relaunched the build yesterday, I have 75 failing paths in the DDR.. So more than yesterday, and probably some different ones."

  • Did this occur after applying the constraint in your SDC?
  • Have you made any other changes?



In general, Fast Forward is used to analyze how fast the design can go, and it can be useful for providing recommendations on timing issues, particularly setup.

In this case your setup seems fine, so Fast Forward is not applicable.


Another thing you can do is run a few seeds, maybe around 10 to 20, and see whether you can find the best timing among them.
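
A seed sweep like this can be scripted with the Quartus Tcl flow package. A minimal sketch, assuming the project and revision are both named my_project (a placeholder, not from this thread):

```tcl
# Run with: quartus_sh -t seed_sweep.tcl
load_package flow
project_open my_project -revision my_project

for {set seed 1} {$seed <= 10} {incr seed} {
    # Re-run the Fitter with a different placement seed, then re-run timing.
    execute_module -tool fit -args "--seed=$seed"
    execute_module -tool sta
    # Archive each timing report so the runs can be compared afterwards.
    file copy -force output_files/my_project.sta.rpt sta_seed_${seed}.rpt
}

project_close
```

The report path under output_files/ depends on the project settings, so adjust it to match yours.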


Just asking: what memory frequency are you running this IP at?


Have you tried different compiler settings? Do some tweaks to the optimization mode settings.




Renardo18
New Contributor I

Hi Adzim,

 

  • "Yes, you have to run it for each and every path. But in Quartus, you can use the wildcard character."

I know that I can use the wildcard character, but what about the value being 10% more than the data delay of the path? That means I need a specific value for every path, and thus I can't use the wildcard, can I?

 

  • Is this occur after applying the constraint in your SDC?
  • Any other changes that you have made?

I did not apply any constraint in my SDC, and I did not make any change to this part. I might have just cleaned up timing on unrelated parts of the design and relaunched the build.

 

So, at the end of the day, the only thing you are telling me to do is:

You can try to set min delay like below:

if { ! [is_post_route]} {

set_min_delay -from [get_keepers {path}] -to [path] value

}

value is 10% more than the data delay of the path.

 

Please see the } [IN RED] that I added to your code and tell me whether I am right.

Also, am I right in saying that I can't use the wildcard character, because every path will need a different value?

 

Thanks

 

SyafieqS
Moderator

Hi Renardo,

 

1. "Please see the } [IN RED] that I added to your code and tell me if I am right?"
- Yes, you are right. If you are unsure about the syntax, you can always use the constraint menu in the Timing Analyzer: select any constraint, locate the node in the Node Finder to make sure the right collection is specified, then change it to your desired constraint type, e.g. set_max_skew or, in your case, set_min_delay. This is just to verify the nodes and the Tcl syntax. Let me know if anything is unclear.

2. "Also, am I right in saying that I can't use the wildcard character because every path will need a different value?"
- You can still use wildcards for that. With a wildcard, the value you specify is normally taken from the data delay of the worst negative slack path (usually the first path in the column when you report timing). So you do not need to specify a value for every path.
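
Putting the two answers together, the corrected constraint might look like the sketch below. The node patterns and the 4.0 value here are placeholders for illustration, not taken from the actual design; the value should come from the worst-slack path's data delay plus roughly 10%:

```tcl
# Apply the override only before routing, as suggested earlier in the thread.
if { ![is_post_route] } {
    # One wildcard pattern covers all bits and lanes, so a single value
    # (sized from the worst negative slack path) constrains the whole group.
    set_min_delay \
        -from [get_keepers {*|ecc|internal_master_wr_data[*]}] \
        -to   [get_keepers {*|lane_inst~phy_reg1}] \
        4.0
}
```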

 

Cheers

Renardo18
New Contributor I

Hi SyafieqS,  

I am sorry, I didn't have time to test it this week. Please let me test it now, and I'll tell you if anything changes.

 

 

Thank you

AdzimZM_Intel
Employee

We have not received a response from you to the previous question/reply/answer provided. This thread will be transitioned to community support. If you have a new question, feel free to open a new thread to get support from Intel experts. Otherwise, community users will continue to help you on this thread. Thank you.


Renardo18
New Contributor I

Hi AdzimZM and SyafieqS,

 

I have tried the following command :

if {![is_post_route]} {
set_min_delay -from [get_keepers "memory_ddr4x72_wrapper_a\|u0\|emif_s10_0\|emif_s10_0\|ecc_core\|core\|ecc\|internal_master_wr_data\[*\]*"] -to {memory_ddr4x72_wrapper_a|u0|emif_s10_0|emif_s10_0|arch|arch_inst|io_tiles_wrap_inst|io_tiles_inst|tile_gen[*].lane_gen[*].lane_inst|lane_inst~phy_reg1} 4.114
}

 

The path with the worst negative slack had a data delay of 3.74 ns. I then added 10% to this value, as you told me: 3.74 * 1.1 = 4.114.
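
That arithmetic can be checked directly in the Quartus Tcl console (a trivial sketch; the 3.74 comes from the worst-slack path mentioned above):

```tcl
set worst_data_delay 3.74
# Add a 10% margin, as suggested earlier in the thread.
set min_delay_value [format %.3f [expr {$worst_data_delay * 1.1}]]
puts $min_delay_value ;# prints 4.114
```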

 

The problem is that this made the timing analysis of my design much worse: now I have 500+ of these failing paths in my DDR (there used to be only about 30).

 

Am I doing something wrong?
