Honored Contributor I

Usage of DMA from PIO to SDRAM shared by NIOS

Hello everybody, 

I was wondering if someone could answer the following question about DMA: 

I have a functional NIOS II system with a TSE and SDRAM as the main data and instruction memory, running uC/OS-II. I have now added a PIO and the classic DMA controller in Qsys for ADC data sampling. I would like to use this DMA to copy a data block, say 2 MB, from the PIO (synchronously with the system clock) to a section of the SDRAM (obviously somewhere outside the code and data regions). I have used this configuration before, but my old NIOS design used on-chip memory for data and instructions, and the DMA accessed a separate memory. 

 

But now I am using that SDRAM for code/data as well. Is it possible to perform such a DMA transaction when the SDRAM is also shared with the processor's instruction and data accesses? 

 

Thanks in advance, 

 

Jan
11 Replies
Honored Contributor I

Yes, the Avalon fabric will share the SDRAM between the processor and the DMA.

Honored Contributor I

 

--- Quote Start ---  

Yes, the Avalon fabric will share the SDRAM between the processor and the DMA. 

--- Quote End ---  

 

 

Thanks for the answer. Just to make sure: my system receives commands over TCP, and in the message-receiving task it starts the DMA transfer like this: 

 

My SDRAM is mapped in the address space from 0x08000000 to 0x0FFFFFFF (128 MB). I want to start a DMA transfer from the PIO, which is connected directly to the DMA read_port, to the SDRAM at, for example, 0x08300000. The write_port of the DMA is connected to the SDRAM, and the SDRAM is also connected to the NIOS. Please see the attached .qsys file (just to be sure that everything is OK). 

 

Now I am writing the write address to register number 2 of DMA_0_BASE; I have used 0x00300000, a relative address within the SDRAM. Am I right? 

 

IOWR(DMA_0_BASE, 1, 0);           /* read address (PIO)                 */
IOWR(DMA_0_BASE, 2, 0x00300000);  /* write address                      */
IOWR(DMA_0_BASE, 3, 20);          /* length in bytes                    */
IOWR(DMA_0_BASE, 6, 0x00000182);  /* control: HW | LEEN | RCON          */
IOWR(DMA_0_BASE, 6, 0x0000018A);  /* same, plus GO: start the transfer  */

 

I check for the end of the transaction by reading the length register, IORD(DMA_0_BASE,3), until it equals 0. Afterwards I read the data section with: 

 

for (i = 0; i < 20; i++) {
    sample_ddr = IORD(SDRAM_BASE, 0x00300000 + i);
    sample = sample_ddr & 0x0000FFFF;
    sample = sample_ddr >> 16;
    printf("%4d\n%4d\n", sample, sample);
}

 

But in the terminal the content of the SDRAM has not changed; it is the same as before the transaction. I am not sure whether I am handling the write and read addresses correctly. Would you be so kind as to check this? 

 

Thank you so much, you are so kind...
Honored Contributor I

Hi, 

 

What do you mean the terminal value hasn't changed? Is this in reference to what your print statement is outputting? 

 

Can you explain what this code is supposed to do?  

 

for (i = 0; i < 20; i++) {
    sample_ddr = IORD(SDRAM_BASE, 0x00300000 + i);
    sample = sample_ddr & 0x0000FFFF;
    sample = sample_ddr >> 16;
    printf("%4d\n%4d\n", sample, sample);
}

 

Also, how have you declared "sample_ddr" and "sample[]"? Since you're right-shifting bits, hopefully sample is an array of unsigned ints :) 

 

IORD returns an unsigned int, 32 bits, so you're incrementing your memory reference by 4 bytes on every iteration of your loop. 

 

If you're debugging what's being read back from IORD, I'd suggest using a debugger to view the memory. If you want to stick with printing the values, I'd at least print the entire 32-bit word, and would personally output it in hex if you know how the data is supposed to be formatted in the register.
Honored Contributor I

Okay, thanks for the answer about correct output formatting; it could be an issue with the terminal output format. But what I mean by "the terminal output does not change" is that the content of the SDRAM is still unchanged after the DMA transaction, as if the DMA did not write anything new.

Honored Contributor I

 

--- Quote Start ---  

No I am writing to the DMA_0_BASE register number 2 the write address, I have used 0x00300000 - relative a dress in the SDRAM. Am i right? 

--- Quote End ---  

No, the address on an Avalon master is always absolute, so you need to put 0x08300000 there. Be careful, too, if you ever write to the beginning of the SDRAM, as the reset and exception vectors are usually located there; any modification of that area will make the CPU crash immediately. 

Try using malloc() instead and pass the returned address to the DMA. It is a lot safer. 

To debug this kind of problem I always recommend using SignalTap on the different Avalon interfaces. It lets you see what the IPs are doing and whether you are using them correctly.
Honored Contributor I

OK thanks, I will try to get a 100% correct address with malloc(); it came to my mind as well :) But one question about the Avalon fabric: it is used natively, so I do not need any special Qsys component, am I right? For example, if I start a DMA transaction of 1 million bytes at 100 MHz, it should take 10 ms; will the fabric arbiter block the NIOS from accessing the shared SDRAM for that whole time?

Honored Contributor I

You don't need any special component. The fabric will connect the masters and the slaves together. 

The data bus is usually 32 bits, so the DMA can transfer 4 bytes per cycle. Your calculation assumes that the RAM can accept one word per cycle, which of course depends on the type of RAM and the kind of transfer. A DMA transfer can use burst transfers, so most DRAMs will be very fast with this kind of transfer. 

If the Nios CPU is trying to access the SDRAM, it will be shared with the DMA master, so the DMA throughput will decrease. IIRC you can adjust priorities for both masters, or, if you design your own component with an Avalon master, you can lock the arbiter to be sure you have exclusive access to the memory during the transfer.
Honored Contributor I

 

--- Quote Start ---  

You don't need any special component. The fabric will connect the masters and the slaves together. 

The data bus is usually 32 bits, so the DMA can transfer 4 bytes per cycle. Your calculation assumes that the RAM can accept one word per cycle, which of course depends on the type of RAM and the kind of transfer. A DMA transfer can use burst transfers, so most DRAMs will be very fast with this kind of transfer. 

If the Nios CPU is trying to access the SDRAM, it will be shared with the DMA master, so the DMA throughput will decrease. IIRC you can adjust priorities for both masters, or, if you design your own component with an Avalon master, you can lock the arbiter to be sure you have exclusive access to the memory during the transfer. 

--- Quote End ---  

 

 

Thank you for your kind answer; now it is much clearer. One last question: if I use another SDRAM chip with a dedicated SDRAM controller, in other words one SDRAM for the NIOS and one SDRAM for the samples, can I achieve a completely background transfer of the samples without any sharing? Am I right?
Honored Contributor I

Yes, you are right. If two different masters transfer to two different slaves, the transfers can happen at the same time with no interference.

Honored Contributor I

Using malloc() might not be such a good idea: it won't return cache-line-aligned memory, and it could well 'poison' the data cache with addresses inside the allocated area. 

It might be safer to have the linker script allocate the buffer.
Honored Contributor I

There is also an alt_uncached_malloc() function in sys/alt_cache.h. I haven't looked at the code to check whether it aligns the buffer to cache lines, though.
