
Avalon-to-PCIe Address Translation

Altera_Forum
Honored Contributor II

Hi,

I am trying to transfer data from the PCIe endpoint's DDR2 to the host's memory via DMA. I can set up the DMA and initiate a transfer, and I can see the DMA performing DDR2 reads on SignalTap II, but the DMA write operations (to the PCIe Tx slave) are not going through. My PCIe core has the dynamic address translation table enabled, with two pages of 128 MB each.

 

Let's say the virtual address of a large chunk of memory on the host (Win32) side is 0x9344_7000. Following the Avalon-to-PCIe address translation description in the PCIe core's user guide, I perform the following operations (a rough C sketch of these writes follows the list):

a. Write (0x9344_7000 & 0xFFFF_FF00) at offset 0x1000 (the dynamic translation table entry) of BAR0 (the PCIe core's CRA slave).
b. Write 0x2000_0000 (the DDR2 base address on Avalon) as the DMA's read address.
c. Write 0x0344_7000 as the DMA's write address.
d. Write 0 to the DMA's status register.
e. Write 0x1000 to the DMA's length register, to test a small chunk of data first.
f. Write 0x488 to the DMA's control register to start a doubleword transfer and continue until the length reaches 0.
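
Roughly, that sequence corresponds to the C sketch below. The bar0 and dma pointers are just placeholders for however the BARs are mapped in the driver, and the DMA register offsets assume the standard Altera DMA Controller register map (status 0x00, read address 0x04, write address 0x08, length 0x0C, control 0x18); adjust everything to your own address map.

#include <stdint.h>

#define ATT_ENTRY0_OFFSET  0x1000u  /* dynamic address translation table, entry 0, on BAR0/CRA */
#define DMA_STATUS         0x00u    /* offsets per the standard Altera DMA Controller core */
#define DMA_READ_ADDR      0x04u
#define DMA_WRITE_ADDR     0x08u
#define DMA_LENGTH         0x0Cu
#define DMA_CONTROL        0x18u

static void reg_write32(volatile uint8_t *base, uint32_t offset, uint32_t value)
{
    *(volatile uint32_t *)(base + offset) = value;
}

/* bar0: memory-mapped CRA slave (BAR0); dma: memory-mapped DMA controller registers */
void start_transfer(volatile uint8_t *bar0, volatile uint8_t *dma, uint32_t host_addr)
{
    /* a. program the translation table entry (mask as in step a above) */
    reg_write32(bar0, ATT_ENTRY0_OFFSET, host_addr & 0xFFFFFF00u);

    /* b. source: DDR2 base on the Avalon fabric */
    reg_write32(dma, DMA_READ_ADDR, 0x20000000u);

    /* c. destination: offset within the PCIe Tx slave window */
    reg_write32(dma, DMA_WRITE_ADDR, 0x03447000u);

    /* d. clear the status register */
    reg_write32(dma, DMA_STATUS, 0);

    /* e. transfer length in bytes (small test chunk) */
    reg_write32(dma, DMA_LENGTH, 0x1000u);

    /* f. 0x488 = DOUBLEWORD | LEEN | GO: doubleword transfers until length reaches 0 */
    reg_write32(dma, DMA_CONTROL, 0x488u);
}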

 

I do see my DDR2 being read on SignalTap II, and the DMA status reads back 0x11, which seems to make sense (transfer complete). But I don't see any changes in the content of the host (Win32) memory where the DMA writes are supposed to go.

 

Any pointers would help. Thanks!  

 

regards,
Altera_Forum
Honored Contributor II

What's the Avalon address of the PCIe Tx slave? Is it 0x0? If not, you have to write (Tx slave base + 0x0344_7000) as the DMA's write address.

Could the problem be on the host side? Are you using Jungo WinDriver? Did you lock the DMA buffer? Watch out for cache flushing on the host as well. And you have to pass the physical address of the Win32 buffer, as returned by the DMA lock function...
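
Assuming 128 MB (0x0800_0000) translation pages, the split between the translation table entry and the Avalon write address would look roughly like the sketch below; PCIE_TX_BASE is only a placeholder for the Tx slave's Avalon base address in your system.

#include <stdint.h>

#define PCIE_TX_BASE  0x00000000u       /* Avalon base of the PCIe Tx slave (example value only) */
#define PAGE_SIZE     0x08000000u       /* 128 MB address translation pages */
#define PAGE_MASK     (PAGE_SIZE - 1u)  /* low bits pass through untranslated */

/* Value for the dynamic address translation table entry:
 * the upper bits of the host's physical address. */
uint32_t att_entry(uint32_t host_phys)
{
    return host_phys & ~PAGE_MASK;
}

/* Avalon address the DMA must write to: Tx slave base plus the
 * offset of the host buffer within its 128 MB page. */
uint32_t dma_write_address(uint32_t host_phys)
{
    return PCIE_TX_BASE + (host_phys & PAGE_MASK);
}

For a host physical address of 0x9344_7000 this gives 0x9000_0000 in the translation table and PCIE_TX_BASE + 0x0344_7000 as the DMA write address.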
Altera_Forum
Honored Contributor II

Hi, thanks for the reply.  

 

I was using the virtual address from my process instead of the physical address. Once I used the physical address behind that virtual address, the transfers began to work.

 

We're using a real-time extension of Windows called RTX, which has APIs to allocate contiguous memory backed by physical memory from Windows' non-paged pool. This apparently preserves deterministic behavior when the memory routines are called. We're not using Jungo for the device driver development.

 

So I'm back on track. This forum has good information and great contributors. Regards.  

 

Altera_Forum
Honored Contributor II

Hello,

Have you solved your problem yet? I'm now running into a similar problem, and I wonder if you know the solution.

Best regards.
Altera_Forum
Honored Contributor II

 

--- Quote Start ---

Hello,

Have you solved your problem yet? I'm now running into a similar problem, and I wonder if you know the solution.

Best regards.

--- Quote End ---

Yes, to elaborate on what I mentioned in my last post: the host PC talks to the FPGA via the RTX OS (an RTOS running alongside Windows). RTX has an API to obtain both the logical and the physical address of the memory set aside for the DMA operation. To get the whole DMA operation to work, I used that API to get the physical memory location in the driver code and then started the DMA operations based on that physical address.

 

If you're using RTX, I can tell you the name of the API; for other OSes, you'll have to dig through the API reference for the call that does the logical-to-physical translation. Hope this helps.
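
In generic pseudo-C, the driver-side flow is roughly the sketch below. os_alloc_contiguous() and os_virt_to_phys() are made-up placeholder names, not real API calls; substitute whatever your OS or driver framework provides for allocating a non-paged contiguous buffer and querying its physical address.

#include <stddef.h>
#include <stdint.h>

/* Placeholders only -- not real API names. Replace with the routines your
 * environment (RTX, Jungo WinDriver, the WDK, ...) actually provides. */
extern void    *os_alloc_contiguous(size_t size);   /* non-paged, physically contiguous */
extern uint64_t os_virt_to_phys(const void *virt);  /* physical address of that buffer  */

/* Register programming as described earlier in the thread. */
extern void program_translation_table(uint32_t page_base);
extern void program_dma(uint32_t avalon_src, uint32_t avalon_dst, uint32_t length);

int dma_ddr2_to_host(size_t size, uint32_t ddr2_src,
                     uint32_t tx_slave_base, uint32_t page_mask)
{
    void *virt = os_alloc_contiguous(size);
    if (virt == NULL)
        return -1;

    /* The FPGA never sees process virtual addresses: the translation table
     * and the DMA write address must be derived from the physical address. */
    uint32_t phys = (uint32_t)os_virt_to_phys(virt);

    program_translation_table(phys & ~page_mask);
    program_dma(ddr2_src, tx_slave_base + (phys & page_mask), (uint32_t)size);
    return 0;
}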
