Hello fellows,
I'm starting the implementation of an image processing pipeline on the Arrow SoCKit. The ARM will capture images from two fisheye cameras on the USB port, and the FPGA side should then do some processing. My goal is to achieve at least 10 fps. I had some trouble with the cameras at first: because of the ARM processor speed they were running too fast, but I have now learned how to configure them, and 10 fps is the fastest I can get.  I still have to work on the synchronization between them, but that is a camera-specific issue.

My questions are about the FPGA side: what is the most efficient method to pass the data to the FPGA? I was thinking about DMA here, as the images are too big to store in the FPGA's internal memories. Is there a reference design I could learn from? Or maybe shared memory? I'm lost... The theoretical formulation of the processing pipeline is done, and I'm now proceeding to the implementation, but the data transfer will be the stumbling block, as I have no experience managing the external interfaces. Any help will be welcome!

Best regards!
Hi,
you could have a look at the VIP demo from Terasic, which is based on the SoCKit. It can be downloaded from [1] after signing in.

[1] http://www.terasic.com.tw/cgi-bin/page/archive.pl?language=english&categoryno=167&no=816&partno=4
Hi Taz,
Thank you for pointing out the demo, it is very close to what I need to do! I'll study it and try to reproduce it. Do you know if there is any documentation describing how it works in more depth? In my case I won't need to display the video, just capture the frames from the USB cameras and process them with a custom FPGA module. The camera management will be done on the ARM side with OpenCV and the camera-specific libraries. So what I actually need is just to transfer the frames to the processing pipeline and get the results back. I'll see what I can use from the demo and adapt it to my needs.
Maybe you can ask your local Terasic dealer or FPGA distributor?
I know there was a block diagram available, but I can't find it any more... =(