Valued Contributor III
723 Views

Correct way to instantiate an SDRAM?

Hi all, 

 

Got a bit stuck here, hoping someone can help me out... I'm trying to work through creating a Nios II system from the basics up to something a bit more complex, and the SDRAM is being unreliable. I think I know why, but I don't know how to fix it... 

 

Here's the top-level component: 

 

module toplevel (
    /********************************************************************
    |* Clock
    \********************************************************************/
    CLOCK_50,      // Input clock @ 50MHz

    /********************************************************************
    |* i/o on the board
    \********************************************************************/
    LEDG,          // Green LED bank
    KEY,           // Pushbuttons

    /********************************************************************
    |* SDRAM Interface
    \********************************************************************/
    DRAM_DQ,       // SDRAM Data bus 16 Bits
    DRAM_ADDR,     // SDRAM Address bus 12 Bits
    DRAM_LDQM,     // SDRAM Low-byte Data Mask
    DRAM_UDQM,     // SDRAM High-byte Data Mask
    DRAM_WE_N,     // SDRAM Write Enable
    DRAM_CAS_N,    // SDRAM Column Address Strobe
    DRAM_RAS_N,    // SDRAM Row Address Strobe
    DRAM_CS_N,     // SDRAM Chip Select
    DRAM_BA,       // SDRAM Bank Address 0|1
    DRAM_CLK,      // SDRAM Clock
    DRAM_CKE       // SDRAM Clock Enable
);

/***********************************************************************
|* Declare the inputs/outputs direction and widths
\***********************************************************************/
input         CLOCK_50;
output        LEDG;
input         KEY;
inout  [15:0] DRAM_DQ;    // widths added to match the port comments --
output [11:0] DRAM_ADDR;  // without them Verilog defaults every port
output        DRAM_LDQM;  // to a single bit
output        DRAM_UDQM;
output        DRAM_WE_N;
output        DRAM_CAS_N;
output        DRAM_RAS_N;
output        DRAM_CS_N;
output [1:0]  DRAM_BA;
output        DRAM_CLK;
output        DRAM_CKE;

/***********************************************************************
|* Set up the GPU system
\***********************************************************************/
wire cpuClock, ramClock;

xlgpu gpu (
    .clk_0                         (CLOCK_50),
    .cpuClock                      (cpuClock),
    .ramClock                      (ramClock),
    .out_port_from_the_ledg        (LEDG),
    .reset_n                       (KEY),
    .zs_addr_from_the_sdram_0      (DRAM_ADDR),
    .zs_ba_from_the_sdram_0        (DRAM_BA),
    .zs_cas_n_from_the_sdram_0     (DRAM_CAS_N),
    .zs_cke_from_the_sdram_0       (DRAM_CKE),
    .zs_cs_n_from_the_sdram_0      (DRAM_CS_N),
    .zs_dq_to_and_from_the_sdram_0 (DRAM_DQ),
    .zs_dqm_from_the_sdram_0       ({DRAM_UDQM, DRAM_LDQM}),
    .zs_ras_n_from_the_sdram_0     (DRAM_RAS_N),
    .zs_we_n_from_the_sdram_0      (DRAM_WE_N)
);

assign DRAM_CLK = ramClock;

endmodule

 

... and the system has an on-chip RAM which I'm loading the program into. The code looks like: 

#include <stdio.h>

int main()
{
    int i;
    unsigned int mem;   /* scanf's %x conversion expects an unsigned int */
    int size = 1024;

    printf("Input the address to test (hex) : ");
    scanf("%x", &mem);
    unsigned char *ptr = (unsigned char *)mem;

    printf("Starting test\n");
    for (i = 0; i < size; i++)
    {
        printf(".");
        *ptr = 0xff;
        if (*ptr != 0xff)
            printf("address %p failed to read 0xff (%x)\n", ptr, *ptr);
        *ptr = 0x00;
        if (*ptr != 0)
            printf("address %p failed to read 0x00 (%x)\n", ptr, *ptr);
        ptr++;
        printf(",");
    }
    printf("test complete\n");
    return 0;
}

... and when I type in '0x02000000' (the start of SDRAM) the results in the nios2-terminal look like: 

 

http://www.0x0000ff.com/imgs/fpga/commas.png  

 

... where it hangs. I'm assuming there's an exception on read/write to the SDRAM and the program just dies. The question is: why? :) Looking at the warning output when creating the system, I see: 

 

 

 

--- Quote Start ---  

Warning: PLL "xlgpu:gpu|altpll_0:the_altpll_0|altpll_0_altpll_5pa2:sd1|pll7" output port clk[1] feeds output pin "DRAM_CLK~output" via non-dedicated routing -- jitter performance depends on switching rate of other design elements. Use PLL dedicated clock outputs to ensure jitter performance 

--- Quote End ---  

 

 

... so I'm assuming there was too much jitter, the SDRAM didn't respond in time, I got a bus error, and boom! What I'm not sure of is how to write the Verilog that forces the DRAM_CLK output to use the dedicated clock routing. I *thought* I had already done that by setting up the system as in: 

 

http://www.0x0000ff.com/imgs/fpga/clocks.png  

 

... where I specified the c1 output (ramClock) to be -3ns relative to the cpuClock output. So I'm not sure what to do at this point... Thanks in advance for any help :) 
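For what it's worth, the routing itself is decided by which physical pin DRAM_CLK is assigned to and what drives it, not by the Verilog; what you can do in the project is constrain the clock relationship in the .sdc file so TimeQuest analyses DRAM_CLK as a PLL-derived clock. A sketch of those constraints (the pin path in get_pins is copied from the warning message above and may differ in another project):

```tcl
# Sketch of .sdc constraints -- the PLL hierarchy path below is an
# assumption taken from the Quartus warning in this thread.
create_clock -name clock_50 -period 20.0 [get_ports CLOCK_50]

# Let Quartus derive the PLL output clocks (c0 = cpuClock, c1 = ramClock)
derive_pll_clocks
derive_clock_uncertainty

# Declare DRAM_CLK as a generated clock driven by the PLL's clk[1] output
create_generated_clock -name dram_clk \
    -source [get_pins {gpu|altpll_0|sd1|pll7|clk[1]}] \
    [get_ports DRAM_CLK]
```

Constraints alone don't change the routing, though: the "non-dedicated routing" warning only goes away if the DRAM_CLK pin in the .qsf is one the PLL can reach over a dedicated path. If it isn't, the warning can still be acceptable as long as timing closes.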

 

Cheers 

Simon
3 Replies
Valued Contributor III
2 Views

OK, replying to my own post... I hadn't altered the linker script in the BSP editor, so the program was being compiled to start in SDRAM, which I was then promptly trying to overwrite. Changing the address being tested solved the problem and let the program finish. 

 

The original question still remains, though: how do I get the clock out of the generated system and onto a pin (DRAM_CLK in this case) using dedicated clock routing? 

 

Cheers 

Simon
Valued Contributor III
2 Views

Did you get the pin assignment correct from the FPGA to the SDRAM clock? 

 

Sean
Valued Contributor III
2 Views

Yep - I'm using the DE0.qsf file that came with the board, and I haven't redefined any of the DRAM_* or CLOCK_50 lines. 

 

It actually works in hardware - I've run the memtest.c program, and (after realising that both the top and bottom of memory are in use, one for the heap and one for the stack) I can happily memtest my SDRAM. I'm just curious how I *ought* to be doing it so the clock goes via the correct routing. 

 

[aside: I can't get DMA to work, but I haven't put too much effort into that yet. Still looking at it :)] 

 

Cheers 

Simon