Nios® V/II Embedded Design Suite (EDS)
Support for Embedded Development Tools, Processors (SoCs and Nios® V/II processor), Embedded Development Suites (EDSs), Boot and Configuration, Operating Systems, C and C++

SDRAM Interface to SO-DIMM?

Altera_Forum
Honored Contributor II
1,544 Views

I'm planning a Nios II design, and we need a pile of RAM but don't need the high bandwidth of SRAM. SDRAM chips are hard to find in stock, given their commodity nature, so I was looking at using an SO-DIMM socket instead of a particular SDRAM chip. (That also deals with the problem of someone later deciding we don't have enough memory.) 

 

Hardware-wise, an SO-DIMM looks great (about 3 by 1.5 inches of board space, $30-40 for a 128MB, 64-bit-wide PC133 module, and $7 or less for the socket). What I'm wondering about is the SDRAM interface SOPC component: can it deal with a memory module? I'm guessing I'll have to hook up an I2C interface to the SPD port and have the bootloader compare the timing info against the parameters the SDRAM interface was built with. It would be nice if we could add a slave port to the SDRAM interface component and make the timing parameters BIOS-settable, but it's not a requirement. 
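To illustrate the SPD idea, here's a minimal sketch of the boot-time check (the i2c_read_byte() helper is hypothetical, and the byte offsets should be double-checked against the JEDEC SPD definition for SDR SDRAM):

/* Boot-time SPD sanity check (sketch). i2c_read_byte() is a placeholder
 * for whatever the I2C master core actually provides. */
#include <stdint.h>

#define SPD_I2C_ADDR  0x50  /* SPD EEPROM of the first SO-DIMM slot */
#define SPD_MEM_TYPE  2     /* 0x04 = SDR SDRAM */
#define SPD_TCK_BYTE  9     /* min cycle time at max CAS latency:
                               ns in the high nibble, tenths in the low */

extern uint8_t i2c_read_byte(uint8_t dev_addr, uint8_t offset); /* hypothetical */

/* Returns 1 if the installed module is SDR SDRAM and is rated at least
 * as fast as the timing the SDRAM interface was generated with. */
int spd_timing_ok(unsigned controller_tck_tenths_ns)
{
    if (i2c_read_byte(SPD_I2C_ADDR, SPD_MEM_TYPE) != 0x04)
        return 0; /* not SDR SDRAM, or no module present */

    uint8_t tck = i2c_read_byte(SPD_I2C_ADDR, SPD_TCK_BYTE);
    unsigned module_tck_tenths_ns = (tck >> 4) * 10 + (tck & 0x0f);

    /* OK if the module's minimum cycle time is no longer than ours. */
    return module_tck_tenths_ns <= controller_tck_tenths_ns;
}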

 

So, any thoughts, anyone?
13 Replies
Altera_Forum
Honored Contributor II
735 Views

Hello Mike, 

 

Of course you can use SO-DIMM modules in your design, but the Nios II SDRAM controller cannot change its timing behaviour at run time. Perhaps there are other vendors that provide SDRAM controllers which can do such things. 

One solution is to set worst-case timings in SOPC Builder and use those for all modules. Alternatively, set the timings for a specific module and sell the board only with that module. 

Using SO-DIMM modules is a good idea, but it is also possible that you won't be able to get these modules in a few years, because all new notebooks are using DDR, or the price will be very high (as with normal SDRAM modules now). But that is only my opinion; perhaps others have more detailed information. 

 

Bye, 

niosIIuser
Altera_Forum
Honored Contributor II
735 Views

I think you'll be able to get SDRAM SO-DIMMs for a very long time. Common sense says they will be available much longer than any single chip you'd pick. Our board that's been shipping for 4+ years (design started 5+ years ago) still uses 72-pin EDO SIMMs.  

 

They're not much trouble to get, and they run $5-$10 for a 16-32MB stick. 

 

I agree that DDR would be an even better choice if you can develop the controller. The Twister Cyclone dev board on www.fpga.nl has DDR, but not in a socket. It may come with a controller that is close? 

 

I know I'd like to have either one for a future board spin. :) 

 

Ken
Altera_Forum
Honored Contributor II
735 Views

Slightly OT, but I downloaded the Twister QII and SOPC files and built them. 

 

They have a very professionally integrated DDR component that you can add to your design. It has choices for a couple of different Cyclone chips and a couple of different DDR chips. 

 

The QII timing analyzer says it's good for close to 100 MHz. They have the CPUCLK set at 66 MHz. 

 

Not sure what the licensing is, but I'll bet you can use it if you purchase their dev kit. 

 

Ken
Altera_Forum
Honored Contributor II
735 Views

From one of the engineers: 

We completely support SO-DIMMs, but do not support I2C at all -- our SDRAM controller only does static SO-DIMM configurations (set up in the wizard, before the Quartus compile). This should be reasonably well described in the SOPC Builder SDR SDRAM documentation. 

 

I'm guessing that the user wants their system to dynamically self-configure the SDRAM at power-up, via I2C, so that it can talk to whatever (potentially new) SO-DIMM it happens to find. I believe that (aside from the fact that the controller can't do it) putting in a different SO-DIMM might change the address space requirements of the system, which the CPU and other system components can't deal with anyway. Since a rebuild would be required to change memory size/capacity anyway, you can reconfigure your controller to match the new SO-DIMM at the same time. 

 

 

If this doesn't help, please reply. Heck, reply if it does help!
Altera_Forum
Honored Contributor II
735 Views

 

--- Quote Start --- (kerri @ Jul 14 2004, 11:56 AM)

We completely support SO-DIMMs, but do not support I2C at all -- our SDRAM controller only does static SO-DIMM configurations (set up in the wizard, before the Quartus compile). This should be reasonably well described in the SOPC Builder SDR SDRAM documentation.

--- Quote End ---

Yes, it was quite obvious in the documentation. I was just wondering if you guys were thinking of adding such a feature.

--- Quote Start --- (kerri @ Jul 14 2004, 11:56 AM)

I'm guessing that the user wants their system to dynamically self-configure the SDRAM at power-up, via I2C, so that it can talk to whatever (potentially new) SO-DIMM it happens to find. I believe that (aside from the fact that the controller can't do it) putting in a different SO-DIMM might change the address space requirements of the system, which the CPU and other system components can't deal with anyway. Since a rebuild would be required to change memory size/capacity anyway, you can reconfigure your controller to match the new SO-DIMM at the same time.

--- Quote End ---

 

The user just wants a lot of RAM. We could design around some SDRAM chips, but then we're at the mercy of some company keeping those chips in production. An SO-DIMM would be easier to deal with over the long haul, plus they're easier to route. 

 

Static worst-case timings are acceptable, as long as I can determine what such timings should be. Or I'll just spec some timings and see how well it does. I'm running PC133 memory at PC66 speeds anyway, so the timing should be a bit more lenient. 

 

As for memory size detection, I figured we'd just set up SOPC Builder for the maximum and then detect how much is really there (if any) at boot time. Will this approach fail?
Altera_Forum
Honored Contributor II
735 Views

(slightly off-topic) 

 

Mike, if you're interested in I2C, we've "ported" the opencores.org I2C core. The component will be included with our uKit. The uKit has a connector that can be used for I2C (it has the necessary pull-up resistors [1.5K], reference power is available [3.3V], etc., per section 17.3 of the Philips I2C spec). We have wired an EEPROM and a real-time clock to this connector and verified, under Linux, that these devices operate. The device drivers are also included. 

 

mike
Altera_Forum
Honored Contributor II
735 Views

From an engineer (again): 

Q) "Set up SOPC Builder for the maximum and then detect how much is really there (if any) at boot time. Will this approach fail?" 

 

A) Running code out of SDRAM *could* fail, unless you were very careful. If your system was just using the SDRAM as buffer-type space, and was fully aware of the actual memory capacity (rather than the available maximum capacity), it *should* work, as long as timing and init sequences were compatible between DIMMs.
Altera_Forum
Honored Contributor II
735 Views

 

--- Quote Start --- (kerri @ Jul 15 2004, 06:16 PM)

Running code out of SDRAM *could* fail, unless you were very careful. If your system was just using the SDRAM as buffer-type space, and was fully aware of the actual memory capacity (rather than the available maximum capacity), it *should* work, as long as timing and init sequences were compatible between DIMMs.

--- Quote End ---

 

Well, the execution sequence would look like this (a rough C sketch of the flow follows):

1. Start from the internal-memory booter.
2. Booter detects and initializes SDRAM.
3. Booter loads the real program from flash to SDRAM.
4. Booter jumps to the real program.
5. The real program uses the rest of memory for buffer/heap/whatever.

Timing and init sequences... sounds like I need to look up some JEDEC docs just to be sure.
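Something like this, concretely (a rough sketch only; sdram_init(), sdram_detect_size(), and the addresses are hypothetical placeholders for board-support code, and sdram_detect_size() would be a walking test like the one discussed later in this thread):

/* Minimal internal-memory booter flow (sketch). */
#include <stdint.h>
#include <string.h>

#define SDRAM_BASE      0x01000000u                /* illustrative */
#define FLASH_APP_BASE  ((const void *)0x00800000) /* illustrative */
#define APP_MAX_SIZE    (512 * 1024)

extern void     sdram_init(void);        /* mode-register init, etc. */
extern uint32_t sdram_detect_size(void); /* walking memory test */

void boot(void)
{
    sdram_init();

    if (sdram_detect_size() == 0)
        for (;;) ;  /* no usable SDRAM: report the error and halt */

    /* Copy the real program from flash into SDRAM, then jump to it. */
    memcpy((void *)(uintptr_t)SDRAM_BASE, FLASH_APP_BASE, APP_MAX_SIZE);
    ((void (*)(void))(uintptr_t)SDRAM_BASE)();
}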
Altera_Forum
Honored Contributor II
735 Views

Hi Mike, 

 

Just to clarify: I think the danger here is that in conventional Nios II applications, your code must be linked against a known memory map (that is how the linker figures out where to start the stack). If you suddenly change the amount of memory in your SO-DIMM, the system will no longer function (specifically, if the memory is smaller than what you linked against).  

 

More complex systems, employing an MMU, an operating system with a loader, and a BIOS of some sort, can allow a dynamic setup where memory size and type are detected at boot-up (in that case, the application code is not linked against a hard-coded memory map)... we are not there yet with Nios II, but keep your eyes open.
Altera_Forum
Honored Contributor II
735 Views

Jesse, 

 

Good points. This leads to another question: How would one go about setting up a project with a memory map (for the RAM area, at least) as follows: 

 

(Low Memory) 

vectors 

text segment 

data segment 

bss segment 

stack (fixed maximum size, grows down) 

heap 

(High Memory) 

 

In this case, could the heap size be set at runtime? Failing that, could the upper heap limit be initialized to a point representing minimum-installed memory, which could then be moved up when more memory was detected? 
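Something like this is what I'm imagining (a sketch that assumes a newlib-style malloc growing the heap through sbrk(); the _end symbol, the addresses, and the exact sbrk name/signature would all need checking against the actual library and linker script):

/* Run-time-adjustable heap ceiling (sketch). */
#include <errno.h>

extern char _end;               /* end of .bss, from the linker script */
static char *heap_ptr   = &_end;
static char *heap_limit = (char *)0x01800000; /* assumed minimum-memory top */

void set_heap_limit(char *new_limit) /* call after boot-time detection */
{
    heap_limit = new_limit;
}

void *sbrk(int incr)
{
    if (heap_ptr + incr > heap_limit) {
        errno = ENOMEM;
        return (void *)-1;      /* malloc() then returns NULL */
    }
    char *prev = heap_ptr;
    heap_ptr += incr;
    return prev;
}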

 

Just some thoughts.
Altera_Forum
Honored Contributor II
735 Views

Mike, 

 

Will your instruction source require all of this RAM, or could you get away with, say, a 16MB chip on board for code and use the SO-DIMM for a data buffer? 

 

That's kind of what we did with our last ColdFire board. It has enough SRAM on board to run, but detects and uses whatever is in the EDO SIMM slot. So even if the stick goes bad, the board still functions. 

 

What mechanism, if any (no MMU), in the Nios stops you from setting a pointer to any address you want and writing or reading (as long as the address exists)? 

 

May not apply to your case. 

 

Ken
Altera_Forum
Honored Contributor II
735 Views

 

--- Quote Start --- (kenland @ Jul 21 2004, 03:23 PM)

Will your instruction source require all of this RAM, or could you get away with, say, a 16MB chip on board for code and use the SO-DIMM for a data buffer?

That's kind of what we did with our last ColdFire board. It has enough SRAM on board to run, but detects and uses whatever is in the EDO SIMM slot. So even if the stick goes bad, the board still functions.

--- Quote End ---

 

We thought of that, but the goal of using an SO-DIMM is to simplify things by having just one memory bank (hence, one memory bus). In our case, memory failure is pretty much system failure, so there's no benefit there, either. We may have the fatal exception vectors point to internal memory, where the handler will report the error to the outside world and then hit an infinite loop. 

 

We're doing what you're talking about on a TI DSP board, which has internal and external RAM banks. The code lives internally along with data/bss/stack/heap, and we have our own memory allocator that deals with the external memory for bulk use. 
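For illustration, that kind of bulk allocator can be as simple as this sketch (names and the alignment choice are made up; there's no free(), since our bulk buffers live for the life of the application):

/* Bump allocator over the external RAM bank (sketch). */
#include <stddef.h>
#include <stdint.h>

static uintptr_t ext_next; /* next free byte in external RAM */
static uintptr_t ext_end;  /* one past the last usable byte  */

void ext_init(uintptr_t base, size_t size)
{
    ext_next = base;
    ext_end  = base + size;
}

void *ext_alloc(size_t bytes)
{
    uintptr_t p = (ext_next + 3) & ~(uintptr_t)3; /* 4-byte align */
    if (p + bytes > ext_end)
        return NULL;
    ext_next = p + bytes;
    return (void *)p;
}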

 

--- Quote Start --- (kenland @ Jul 21 2004, 03:23 PM)

What mechanism, if any (no MMU), in the Nios stops you from setting a pointer to any address you want and writing or reading (as long as the address exists)?

--- Quote End ---

 

You have to tell the SDRAM interface SOPC component what the maximum size is so it generates the proper addressing hardware and I/O pins. But if the software side assumes that you always have that much memory, there will be trouble. All I need is a mechanism to tell the software side to use only a minimum amount of the RAM, and then I can manage the rest myself, but it would be easier if I could just tell the thing to move the top of the heap up and use malloc/free. There may be a mechanism to do this; I haven't gotten that far yet.
Altera_Forum
Honored Contributor II
735 Views

Mike, 

 

You're on the right track, and you're right about the HW constraints (for the maximum-size memory, anyway). I believe what's necessary in your case is to diverge from the fully automatic SW dev flow and play with a linker script. If you make a SW project in the IDE, there will be a file called "generated.x" in the syslib project's system description folder (I think). This file is what tells the linker where to put text, exception, bss, stack, heap, etc. Now, I am no linker script expert, and I hear fiddling with these can be 'fun', so to speak, so I'd recommend small incremental changes to meet your goals... however, it should be pretty straightforward: the sizes of memory (SDRAM, for example) are defined there, and they control what the linker is allowed to use.  
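Roughly, the relevant part of such a script looks like this (illustrative names and addresses only, not copied from an actual generated.x):

/* GNU linker script fragment: link against the minimum SDRAM size. */
MEMORY
{
    onchip_mem : ORIGIN = 0x00000000, LENGTH = 64K
    sdram      : ORIGIN = 0x01000000, LENGTH = 8M  /* minimum installed */
}

SECTIONS
{
    .text : { *(.text*) } > sdram
    .data : { *(.data*) } > sdram
    .bss  : { *(.bss*)  } > sdram
}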

 

If you were to, for example, define a "minimum" SDRAM size that is linked against, and then #define this into your application code, you might be able to do a walking memory test at boot-up to figure out how big the SDRAM really is, and then use that information to know how much is available for your application.
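A sketch of that walking test (my own illustration, not Altera code; it assumes the data cache is off or bypassed, so the probe accesses actually reach the SDRAM):

/* Walking memory test: probe at power-of-two offsets above the minimum.
 * On a smaller module the upper address lines wrap, so a write at a
 * high offset aliases back onto a lower address. */
#include <stdint.h>

#define SDRAM_BASE 0x01000000u  /* illustrative base address   */
#define SDRAM_MIN  (8u << 20)   /* minimum size linked against */
#define SDRAM_MAX  (128u << 20) /* size the controller was built for */

uint32_t sdram_detect_size(void)
{
    volatile uint32_t *base = (volatile uint32_t *)SDRAM_BASE;
    uint32_t size;

    base[0] = 0x55AA55AAu;
    if (base[0] != 0x55AA55AAu)
        return 0;               /* no working memory at all */

    for (size = SDRAM_MIN; size < SDRAM_MAX; size <<= 1) {
        volatile uint32_t *probe =
            (volatile uint32_t *)(SDRAM_BASE + size);

        base[0]  = 0x55AA55AAu;
        probe[0] = 0xDEADBEEFu; /* on a smaller DIMM this lands on base[0] */

        if (base[0] != 0x55AA55AAu || probe[0] != 0xDEADBEEFu)
            break;              /* wrapped or failed: found the real top */
    }
    return size;
}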