Hi,
I've just learned that the new Intel BSPs support streaming data from the host to the FPGA, and vice versa, over PCIe. I'm still wondering what use cases or specific applications can benefit from this feature. Is there any specific scenario, in Deep Learning, Reinforcement Learning, Big Data, or Stream Processing, that can take advantage of this technology?
Thanks
For the A10_ref, there is a BSP variant with the suffix "hostch". In this mode, there is a direct channel from the host to the device that bypasses the FPGA's main memory.
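In case it helps to see what that looks like in code, here is a rough, untested sketch based on the cl_intel_fpga_host_pipe extension used by the "hostch" flow; the kernel name, data type, pipe depth, and the already-created context/kernel objects are placeholders of mine rather than anything from a shipped example.

// Kernel side: the pipe argument is marked host-accessible, so the host
// can push packets into it directly over PCIe, bypassing FPGA main memory.
#pragma OPENCL EXTENSION cl_intel_fpga_host_pipe : enable

kernel void consumer(__attribute__((intel_host_accessible))
                     __read_only pipe ulong host_to_dev) {
    ulong word;
    if (read_pipe(host_to_dev, &word)) {
        // ... process 'word' without going through external memory ...
    }
}

// Host side: the pipe shows up as a cl_mem created with clCreatePipe,
// and data is written with clWritePipeIntelFPGA instead of a buffer copy.
cl_int err;
cl_mem host_to_dev = clCreatePipe(context, CL_MEM_HOST_WRITE_ONLY,
                                  sizeof(cl_ulong), 128, NULL, &err);
clSetKernelArg(kernel, 0, sizeof(cl_mem), &host_to_dev);

cl_ulong value = 42;
clWritePipeIntelFPGA(host_to_dev, &value);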
I found that the paper below may be helpful for further understanding the streaming interface:
Regards -SK Lim
Hi SK Lim,
Thanks very much for sharing the paper with me. Unfortunately, the link does not work. Could you please share the title of the paper with me?
Thanks,
Saman
Hi Saman,
Here is the title: Host Pipes: Direct Streaming Interface Between OpenCL Host and Kernel
Regards -SK
To answer the original question, this feature is very useful for "out-of-core" processing, i.e. processing data that is too big to fit in the FPGA's external memory but can fit in host memory. There is a large body of work in HPC and Big Data using GPUs where overlapping/pipelining of compute and PCIe transfer is implemented using double buffering in GPU memory. For applications that can be "streamed", host channels on FPGAs can be used to implement out-of-core processing efficiently without the need for double buffering. For applications that cannot be streamed, however, this feature is not applicable and double buffering has to be used, as is done on GPUs.
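To make the "streamed" case a bit more concrete, here is a hand-wavy host-side sketch (not a tested program) of such an out-of-core pipeline using a host pipe: the dataset lives only in host RAM and is fed to the kernel chunk by chunk, so no device-side double buffering is needed. The array name, element count, and the 'host_to_dev' pipe are placeholders carried over from the snippet earlier in the thread.

// 'data' is a large cl_ulong array that fits in host RAM but not in FPGA
// external memory; 'host_to_dev' is a host pipe as created above.
size_t sent = 0;
while (sent < num_elements) {
    // As I understand it, the write succeeds only if the packet was
    // accepted, so the host simply retries while the pipe is full
    // (i.e. while the kernel has not yet drained it).
    if (clWritePipeIntelFPGA(host_to_dev, &data[sent]) == CL_SUCCESS) {
        ++sent;
    }
}
// For bulk transfers, the SDK also provides clMapHostPipeIntelFPGA /
// clUnmapHostPipeIntelFPGA so the host can memcpy whole blocks into the
// pipe instead of issuing one call per packet.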
Could you provide a link to an example of this double buffering mechanism?
I do not know of any such example that you can directly use right now. You might be able to find something if you search on Google, especially if you look for CUDA code used for out-of-core processing.
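For what it's worth, the general shape of that double-buffering pattern looks roughly like the untested OpenCL host-code sketch below (the CUDA examples you will find do the same thing with cudaMemcpyAsync and streams). The buffer sizes, names, and the assumption of an out-of-order command queue are mine, not taken from any particular example.

// Two device buffers are alternated: while the kernel processes one chunk,
// the next chunk is copied from host memory into the other buffer.
// Assumes 'queue' was created with CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE
// so copies and kernels on different buffers can actually overlap.
cl_mem buf[2];
buf[0] = clCreateBuffer(context, CL_MEM_READ_ONLY, CHUNK_BYTES, NULL, &err);
buf[1] = clCreateBuffer(context, CL_MEM_READ_ONLY, CHUNK_BYTES, NULL, &err);

cl_event kernel_done[2] = {NULL, NULL};
for (size_t chunk = 0; chunk < num_chunks; ++chunk) {
    int cur = chunk & 1;   // which of the two buffers this chunk uses
    cl_event copied;

    // Copy the next chunk in, but only after the kernel that last used
    // this buffer has finished with it.
    clEnqueueWriteBuffer(queue, buf[cur], CL_FALSE, 0, CHUNK_BYTES,
                         host_data + chunk * CHUNK_ELEMENTS,
                         kernel_done[cur] ? 1 : 0,
                         kernel_done[cur] ? &kernel_done[cur] : NULL,
                         &copied);

    // Launch the kernel on this chunk; it waits for its own copy but can
    // overlap with the copy of the following chunk into the other buffer.
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf[cur]);
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size, NULL,
                           1, &copied, &kernel_done[cur]);
}
clFinish(queue);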
