Hi, I want to take a camera (an IDS camera) and stream video into my FPGA. The camera interface will be GigE or USB3, depending on the FPS I decide on. In the FPGA I would like to do (what I believe is) simple image processing, let's say frame subtraction, i.e. subtract frame k from frame k+1. Resolution: ideally I would like to work with 640x512 grayscale, 8 bit, at 300 FPS. I know it's a lot of data and eventually I will probably downgrade the spec significantly, but these are the ideal parameters I would like to work with. I have a few questions:

1. Interface: how do I stream the frames into the FPGA? What component should I look for in order to take the GigE or USB3 output interface of my camera and connect it to my board so I can do something with it?
2. I assume I will need an external DDR? Can you give me some pointers on how to choose what I need and the main principle of how to use it? Maybe two DDRs, erasing the older frame every time a new frame comes along?
3. Can you give me advice on which FPGA I should choose? I have worked mainly with Lattice so far and want to cross over to Altera.
4. Any suggestions / reading material / comments will be most welcome.

Thanks!
1. Forget about USB, especially USB3. It is very complicated to implement, and a core would be expensive to purchase. Also, it's not really a video interface. When you mention GigE, do you mean plain Ethernet or GigE Vision? If it's Ethernet, how are you expecting to stream the video into the FPGA? Any particular video format via RTP, like SMPTE 2022-6 or SMPTE 2110-20? You might be better off using a dedicated video link like HDMI.
2. Possibly. It will depend on what processing you're doing. If you only need to store one frame then DDR will be overkill - you might be able to fit it into internal RAM, depending on which device you choose. Otherwise a small SRAM should suffice. But it will depend on how much buffering you really need.
3. Depends on your budget.
4. It really depends on your skill level.
1. Ethernet. Well, I am not really familiar with the streaming options. I would really like to get a basic idea, a high-level view of the flow I need to go through. This is the type of camera I want to use: https://en.ids-imaging.com/store/ui-3140cp-rev-2.html or something equivalent. Now let's say the interface is GigE. What's next - what is my next stop after the camera? I will dive into it, but first I need to understand the basics and what I am dealing with, especially if I want high resolution and a high frame rate. A note: please don't be discouraged by my knowledge of the subject. I really want to get into this world, and even if I'll need to outsource part of it this time, I need and want to understand what I am facing.
2. If I want to do image differencing, I don't really need to store both frames, as I can subtract the pixels as I overwrite the old ones (I think).
3. The budget will be up to $200, but preferably lower of course. Let's assume that all the FPGA needs to do is what is described here. What family should I look into? Do I need something fancier than a Cyclone V?
4. I have used FPGAs mainly as CPLDs; I haven't used RAMs like DDR or high-speed interfaces.

Thanks!!
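To make point 2 concrete, here is a minimal software sketch of the single-buffer idea (not FPGA code - just the algorithm): keep one frame of storage, and for each incoming pixel read the old value, emit the difference, and overwrite with the new value. The function and frame sizes here are illustrative, not from any real camera SDK.

```python
import numpy as np

def frame_diff_stream(frames, height=512, width=640):
    """Single-buffer frame differencing: for each incoming frame,
    emit (frame_k - frame_{k-1}) while overwriting the one stored
    frame in place, so only one frame of storage is needed."""
    buf = np.zeros((height, width), dtype=np.uint8)  # holds frame k-1
    for frame in frames:
        # Widen to int16 so negative differences aren't wrapped.
        diff = frame.astype(np.int16) - buf.astype(np.int16)
        buf[:] = frame                               # overwrite old frame
        yield diff

# Tiny usage example with 2x2 "frames":
f0 = np.array([[10, 20], [30, 40]], dtype=np.uint8)
f1 = np.array([[12, 18], [30, 45]], dtype=np.uint8)
diffs = list(frame_diff_stream([f0, f1], height=2, width=2))
# diffs[1] is f1 - f0 = [[2, -2], [0, 5]]
```

In hardware the same pattern becomes a read-modify-write against one frame buffer per pixel period, which is what drives the memory-bandwidth discussion below.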
1. Ethernet and USB are non-trivial and require packet-based interfaces, which add additional issues to the system. I highly suggest going HDMI or DVI first! Also, the video format you suggest is at least 786 Mbps (without blanking, if present), which is getting close to 1 Gbps - you may struggle to sustain this over a gigabit Ethernet link, especially if it's not a dedicated link. Another reason to go HDMI or DVI.
2. Yes, you only need one frame, but you would need to ensure the input and output are synchronised. Plus you need to think about internal buffers, especially if you have a packet-based interface to worry about too.
3. You should be able to use a Cyclone V. The pixel rate is only 100 MHz, and 200 MHz internally shouldn't be a problem.
4. It's best to spec out what you need, then get into the datasheets to see if you can find anything suitable.
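A quick back-of-envelope check of the numbers above (786 Mbps payload, ~100 MHz pixel rate) for the 640x512 @ 300 fps, 8-bit spec:

```python
# Sanity-check the data rates for 640x512 grayscale, 8 bit, 300 fps.
width, height, fps, bits_per_px = 640, 512, 300, 8

pixel_rate = width * height * fps           # pixels per second
bit_rate = pixel_rate * bits_per_px         # bits per second, no blanking

print(f"pixel rate: {pixel_rate / 1e6:.1f} Mpixel/s")  # ~98.3 -> ~100 MHz clock
print(f"bit rate:   {bit_rate / 1e6:.1f} Mbit/s")      # ~786 Mbit/s payload
```

So the raw payload alone is ~79% of a gigabit link, before any packet headers or blanking, which is why a non-dedicated Ethernet link is risky here.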
Sometimes, working with image sensors directly is easier than working with a camera packaged for PC use. The USB 3 vs. GigE topic becomes "which PC-centric interface is easiest to do in an FPGA", and the answer is GigE (by the way, >900 Mbps GigE Vision is no problem on a point-to-point or lightly loaded switched network). My suggestion would be to look for a different camera that is easier to interface with: HDMI/DVI as Tricky said, but also Camera Link, Sony LVDS, HD-SDI, etc. A Cyclone (III/IV/V/10) is plenty for the task you're describing, but a $200 budget is probably not enough. The BeMicro CV A9 board would have been close (GigE), but I believe it's discontinued, and most of the cheap SoC boards only bring the GigE PHY to the ARM HPS MAC. The Terasic VEEK is probably as good as any starter kit for learning video processing.
I take that back - $65 for the Arrow DECA kit, which includes a MIPI camera module and has HDMI output. The MAX 10 is most likely adequate for your application, and in any case just running through the labs on the wiki page would be educational enough to make it worth the price.
https://www.arrow.com/en/products/deca/arrow-development-tools
http://www.alterawiki.com/wiki/deca
Thank you very much for all your input. Some clarifications:
1. GigE Vision is a UDP packet-based interface, so you will need to cope with decapsulation to get the data out of the UDP packets. Something like DVI or Camera Link just gives you pixels directly on the bus with sideband sync signals, and so is much easier to handle. Note: I have never used GigE Vision, but from reading around it is not an open standard, so you may have to pay fees. You can submit a request for the standard here: https://www.visiononline.org/form.cfm?form_id=7013

It's not that high. The pixel rate is lower than Full HD (1080p), so it should be perfectly possible. Bottlenecks are likely to occur in the interfaces - usually the memory.

I am rather skeptical about making your own board. Getting the layout and masks for your own PCB will cost thousands (or tens of thousands) of dollars. The schematic tools are not cheap, and layout is even more expensive (plus the hourly rate of someone to do the layout, as you will need someone with good skills for the high-speed tracks). And then the board itself will be multi-layered and cost a lot to manufacture. If this is a hobby, stick with a dev board! If you were planning on selling this, I doubt you would be posting here.
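To illustrate what "decapsulation" means in practice, here is a deliberately simplified, hypothetical depacketizer in software. The header fields here are made up - the real GVSP header layout differs and you'd need the actual standard - but it shows the reassembly work a UDP interface adds compared to a pixel bus with sync signals:

```python
import struct

# Hypothetical, simplified GigE Vision-style depacketizer. The real GVSP
# header fields differ (consult the standard); this only illustrates the
# extra reassembly work a packet-based interface adds compared to a
# pixel bus like Camera Link, where pixels and syncs arrive directly.

HDR_FMT = ">HHI"                   # (status, block_id, packet_id) -- made-up fields
HDR_LEN = struct.calcsize(HDR_FMT)

def reassemble_frame(packets, frame_bytes=640 * 512):
    """Strip the per-packet header and concatenate payloads in
    packet_id order until one full frame has been collected."""
    payloads = {}
    for pkt in packets:
        status, block_id, packet_id = struct.unpack_from(HDR_FMT, pkt)
        payloads[packet_id] = pkt[HDR_LEN:]        # payload after header
    frame = b"".join(payloads[i] for i in sorted(payloads))
    return frame[:frame_bytes]

# Usage: two fake packets carrying 4 payload bytes each, arriving out of order.
p0 = struct.pack(HDR_FMT, 0, 1, 0) + b"\x01\x02\x03\x04"
p1 = struct.pack(HDR_FMT, 0, 1, 1) + b"\x05\x06\x07\x08"
assert reassemble_frame([p1, p0], frame_bytes=8) == bytes(range(1, 9))
```

In an FPGA this becomes header parsing plus buffering and reordering logic in fabric, which is exactly the complexity a direct pixel bus avoids.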
It's part of my thesis. My thesis advisor has an idea for an algorithm and we want to do a proof of concept. I am a board design engineer at my job. I usually do my own electrical schematics and layout on multiple layers (usually up to 8 layers, but a few times 12 where it was needed). If needed, I will outsource the layout to a layout company. I program the FPGAs on the boards, as well as the microcontrollers. I admit I have little experience with high-speed designs, and little experience with camera handling and interfacing; I want to get into that. I would really love just to plug the camera into a board (a small, compact board :) ) and just write the Verilog code to do what I want. OK, so I will try to find a camera that suits my needs and uses the Camera Link interface.
GigE Vision isn't that bad (I've used it, in Cyclones). As Tricky mentioned, it's UDP based, which means all you need is a board with the GigE brought to the FPGA fabric (not the ARM HPS side, if there is one), and then some elbow grease using the UDP Offload Example as a starting point:
http://www.alterawiki.com/wiki/nios_ii_udp_offload_example
You should be able to use PC software to e.g. configure the camera and command it to start transmitting to a destination IP address that the FPGA is listening on. Like I said, >900 Mb/s is fine given no other network activity. Point-to-point between the camera and the FPGA, your ~800 Mb/s is very doable.

As Tricky mentioned, if you are buffering into DRAM then memory interface design may be a concern. "Frame subtraction" in real time requires 2x the pixel rate (storing frame (N), recovering frame (N-1)).

The DECA board can be used to learn about video processing in FPGAs, which you have some learning to do on independently of USB3 vs. GigE Vision vs. Camera Link vs. MIPI; the kit looks "nice" because it gives you a MIPI camera in -> HDMI display out lab to work through. Unfortunately, the board doesn't have GigE, so you can't also use it for any GigE Vision work.

The Enclustra Mercury SA1 might be a good fit, with their PCIe card carrier for your one-off design: https://www.enclustra.com/en/products/system-on-chip-modules/mercury-sa1/ Your project's final board might reduce down to a custom carrier for the Mercury module, with just one or two GigE PHYs, power, etc.

Your thesis algorithm can be roughed out entirely on the DECA board with whatever frame rate limitation it has, and from there you should be able to understand the FPGA requirements. Once you have structured your IP block as (Avalon-ST VIP protocol input -> algorithm -> Avalon-ST VIP output) [or AXI-S], moving your work to another board with another camera interface isn't a big deal.
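The "2x the pixel rate" point above is easy to quantify: every pixel period the frame buffer is read once (frame N-1 out) and written once (frame N in), so the memory sees roughly double the camera's data rate. A quick estimate for the spec in the opening post:

```python
# DRAM bandwidth estimate for real-time frame subtraction:
# each pixel period needs one read of frame (N-1) and one write
# of frame (N), so the buffer memory sees ~2x the camera rate.
width, height, fps, bytes_per_px = 640, 512, 300, 1

camera_rate = width * height * fps * bytes_per_px   # bytes/s into the FPGA
memory_rate = 2 * camera_rate                       # read + write traffic

print(f"camera: {camera_rate / 1e6:.0f} MB/s")      # ~98 MB/s
print(f"memory: {memory_rate / 1e6:.0f} MB/s")      # ~197 MB/s
```

~197 MB/s of sustained random-ish traffic is modest for a DDR2/DDR3 interface (and may even fit in on-chip RAM on a larger device, since one frame is only 320 KB), but it is worth checking against whatever memory controller efficiency you actually achieve.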
FPGA component cost-wise, in your shoes I would probably target a Cyclone 10 LP in the -040 or -055 size grades, which Digikey has for $40 to $70 at qty 1. This all assumes your algorithm isn't a can of worms and really is as simple as the "frame subtraction" you mentioned in your opening post; you might want to double-check your estimated resources before buying anything.
Thank you so much for the elaborate responses. Sorry I didn't comment earlier, I was away for a few days. You guys are really helping me out; I really appreciate it. A question: "the Enclustra Mercury SA1 might be a good fit" - how does this board give me the ability to work with GigE?
Heads up: I just received the DECA kit, and it doesn't include either the camera or the BLE/WiFi cape referenced in the user manual. Just the board itself and cables. I'm pretty bummed.
OK, so I really liked this board. Thanks for the recommendations. Still exploring it. Do any of you have a recommendation for a board with a USB3 module (like the MIPI module this one has) and/or GigE (Ethernet)? Also, as an Altera newbie, can you explain why I need the Nios II? I didn't really understand what I need it for. I am used to working with Lattice FPGAs and PICs from Microchip.
Search for "terasic d8m" and "cameravision" on GitHub, in case that helps. It's not a pure MIPI interface; it uses a MIPI bridge. Starting with still images is a better way to begin than video.