
Simple PCIe DMA

Altera_Forum
Honored Contributor II

I guess I could try to find the appropriate documentation, but I'll cheat and ask here! 

 

I need to find which DMA block to ask our HW team to add to Qsys in order to do burst PCIe transfers from FPGA locations to arbitrary physical addresses (ideally 64-bit ones). 

I don't want anything clever from the DMA engine: Avalon address, PCIe address, transfer length, and command and status registers are enough. I'd even 'suffer' a transfer length limit of one TLP. 

 

What is a requirement is having several (fewer than 10) such DMA engines, so that different bits of logic can do PCIe master transfers. 

 

I don't remember seeing any DMA engines that were adequately integrated with the PCIe block. I think the one we are currently using maps a chunk of Avalon address space to PCIe physical addresses by internally generating the high address bits - I don't see how this is of any use at all!
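
To be concrete, the sort of register map I have in mind is just this (purely illustrative - not any existing core):

#include <stdint.h>

/* Hypothetical register layout for the simple DMA engine described
 * above - not any existing Altera core. */
struct simple_pcie_dma_regs {
    uint32_t avalon_addr;   /* FPGA-side source address            */
    uint32_t pcie_addr_lo;  /* host physical address, low 32 bits  */
    uint32_t pcie_addr_hi;  /* host physical address, high 32 bits */
    uint32_t length;        /* transfer length in bytes            */
    uint32_t control;       /* bit 0: GO                           */
    uint32_t status;        /* bit 0: DONE, bit 1: ERROR           */
};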
Altera_Forum
Honored Contributor II

The only two 64-bit DMA engines that I know of are these: 

 

http://www.altera.com/support/refdesigns/ip/interface/ref-pciexpress-avalonmm-hp.html 

 

http://www.alterawiki.com/wiki/modular_sgdma 

 

That first one comes with a Linux driver and the second one has a Linux driver up on the wiki here: http://www.alterawiki.com/wiki/linux_pcie_driver
Altera_Forum
Honored Contributor II

I looked around for some third-party IP at one point and came across a PCIe IP core called "Lancero" 

 

http://www.lancero.eu.com/lancero-pcie-sgdma/ 

http://www.logicandmore.com/lancero.html 

http://www.microtronix.com/ip-cores/lancero-sgdma 

 

I *have not* looked at this core or evaluated it ... but it looked interesting ...  

 

Cheers, 

Dave
Altera_Forum
Honored Contributor II

 

--- Quote Start ---  

I think the one we are currently using maps a chunk of Avalon address space to PCIe physical addresses by internally generating the high address bits - I don't see how this is of any use at all! 

--- Quote End ---  

 

 

Why not? 

 

If you're using the memory-mapped interface then DMA target addresses use the high bits of the local address to determine which page of the address translation table to look up to get a host address. It's simple enough and works well; all you have to do is set up the address translation table the way you want it. 

 

If you need multiple sources you'll just have to multiplex them somehow so that transfers are done in whatever sequence you need. 

 

The logic to do this can be pretty trivial. 
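
For example, filling in one entry of the table through the core's CRA slave is just two register writes. A sketch only - the 0x1000 table offset and the two-word entry layout are from the bridge documentation, so check them against your generated core:

#include <stdint.h>

#define A2P_TABLE_OFF  0x1000u  /* translation table offset in the CRA slave */
#define A2P_SPACE_64   0x1u     /* entry bits [1:0]: 64-bit PCIe addressing  */

/* cra points at the PCIe core's CRA slave; entry selects the page. */
static void set_a2p_entry(volatile uint32_t *cra, unsigned entry,
                          uint64_t host_addr)
{
    volatile uint32_t *e = cra + A2P_TABLE_OFF / 4 + entry * 2;

    /* Low word: host address (bits below the page size must be zero)
     * merged with the addressing-mode field. */
    e[0] = (uint32_t)host_addr | A2P_SPACE_64;
    e[1] = (uint32_t)(host_addr >> 32);  /* high 32 bits of host address */
}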

 

 

Nial.
Altera_Forum
Honored Contributor II

Hi Nial, 

 

--- Quote Start ---  

 

It's simple enough and works well; all you have to do is set up the address translation table the way you want it. 

 

--- Quote End ---  

 

 

The original poster wanted to be able to DMA to an arbitrary 64-bit address, and he wanted up to 10 different DMA engines. As far as I am aware, the TXS port on the Qsys PCIe IP core would only allow one value for the 32 MSBs of the address, so the 10 DMA controllers would be restricted to a 4GB region of the PCIe memory map. It's pretty common now for host PCs to have in excess of 4GB, so a driver on the host could easily provide PCIe DMA target page addresses that cross a 32-bit address boundary. 

 

Bottom line: the Qsys PCIe core's bus-master interface is too simplistic. 

 

Cheers, 

Dave
Altera_Forum
Honored Contributor II

I could manage with physical addresses below 4G. 

But my attempts to configure the PCIe Avalon-MM master that way fail - unless I set the master interface to 64-bit - and the DMA controller doesn't want to generate 64-bit slave addresses. 

 

I can't actually see how the DMA controller can generate long TLPs for reads at all.
Altera_Forum
Honored Contributor II

 

--- Quote Start ---  

I could manage with physical addresses below 4G. 

 

--- Quote End ---  

 

Ok (your device driver on the host end would have to deal with it). 
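
On Linux, for example, the driver can just set a 32-bit DMA mask and the kernel will only hand out bus addresses below 4GB. A minimal sketch of a probe fragment (names are illustrative, error handling trimmed):

#include <linux/pci.h>
#include <linux/dma-mapping.h>

static int my_pcie_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
    void *buf;
    dma_addr_t bus_addr;

    if (pci_enable_device(pdev))
        return -ENODEV;

    /* Restrict DMA to 32-bit bus addresses, i.e. below 4GB. */
    if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
        return -EIO;

    /* bus_addr is now guaranteed to fit the core's 32-bit window. */
    buf = dma_alloc_coherent(&pdev->dev, 16 * 1024, &bus_addr, GFP_KERNEL);
    if (!buf)
        return -ENOMEM;

    /* ... program bus_addr into the FPGA's translation table ... */
    return 0;
}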

 

 

--- Quote Start ---  

 

But my attempts to configure the PCIe Avalon-MM master that way fail - unless I set the master interface to 64-bit - and the DMA controller doesn't want to generate 64-bit slave addresses. 

 

I can't actually see how the DMA controller can generate long TLPs for reads at all. 

--- Quote End ---  

 

 

The Qsys PCIe core (which it sounded like you were talking about) does not have a DMA controller. If using the Qsys PCIe core, you would then use a regular Avalon-MM DMA controller on the Avalon bus, which would use 32-bit DMAs to the TXS port of the Qsys PCIe core. 
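
Driving that DMA controller is only a handful of register writes. A sketch, with offsets and control bits as I remember them from the Embedded Peripherals IP User Guide (verify against your Quartus version):

#include <stdint.h>

#define DMA_STATUS     0x00        /* bit 0: DONE, bit 1: BUSY   */
#define DMA_READADDR   0x04        /* Avalon-MM source address   */
#define DMA_WRITEADDR  0x08        /* Avalon-MM destination      */
#define DMA_LENGTH     0x0C        /* bytes remaining            */
#define DMA_CONTROL    0x18
#define DMA_CTL_WORD   (1u << 2)   /* 32-bit transactions        */
#define DMA_CTL_GO     (1u << 3)   /* run when length is nonzero */
#define DMA_CTL_LEEN   (1u << 7)   /* stop when length hits zero */

/* Copy len_bytes from an FPGA source to a window in the TXS span;
 * the PCIe core's translation table maps that window to a host page. */
static void dma_to_host(volatile uint32_t *dma, uint32_t src_avalon,
                        uint32_t txs_window, uint32_t len_bytes)
{
    dma[DMA_STATUS / 4]    = 0;            /* clear DONE */
    dma[DMA_READADDR / 4]  = src_avalon;
    dma[DMA_WRITEADDR / 4] = txs_window;
    dma[DMA_LENGTH / 4]    = len_bytes;
    dma[DMA_CONTROL / 4]   = DMA_CTL_WORD | DMA_CTL_GO | DMA_CTL_LEEN;

    while (!(dma[DMA_STATUS / 4] & 1))     /* poll DONE */
        ;
}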

 

This thread has links to documents and code on how that core can be configured: 

 

http://www.alteraforum.com/forum/showthread.php?t=35678 

 

Cheers, 

Dave
Altera_Forum
Honored Contributor II

To answer myself 4 years later.... 

We set up the Qsys Avalon-to-PCIe bridge with 512 16 KB pages and allocate non-contiguous kernel buffers that don't cross 16 KB boundaries. 

These can be mmap()ed into contiguous user addresses ('interesting' on Windows!). 

All the unused entries point to a 'dummy' page, 'just in case'.
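
The mmap() side on Linux boils down to remapping each 16 KB buffer in turn. A sketch (the driver structure and names are illustrative):

#include <linux/fs.h>
#include <linux/io.h>
#include <linux/mm.h>
#include <linux/sizes.h>

/* 'struct mydev' with its array of 16 KB-aligned kmalloc'd buffers
 * is illustrative. */
static int mydev_mmap(struct file *filp, struct vm_area_struct *vma)
{
    struct mydev *dev = filp->private_data;
    unsigned long uaddr = vma->vm_start;
    int i, err;

    if (vma->vm_end - vma->vm_start > dev->npages * SZ_16K)
        return -EINVAL;

    /* Map the non-contiguous kernel buffers back to back so user
     * space sees one contiguous region. */
    for (i = 0; uaddr < vma->vm_end; i++, uaddr += SZ_16K) {
        unsigned long pfn = virt_to_phys(dev->bufs[i]) >> PAGE_SHIFT;

        err = remap_pfn_range(vma, uaddr, pfn, SZ_16K, vma->vm_page_prot);
        if (err)
            return err;
    }
    return 0;
}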

 

Any Avalon master has to do burst transfers to generate reasonably sized TLPs. 

I ended up writing a suitable DMA controller.