Hi, I'm planning to build a Picture-in-Picture application using two video ADCs and a Cyclone FPGA with some SDRAM attached.
I know in principle how the video is digitized, the sync codes extracted, and the colorspace converted so that a frame can be stored in the external SDRAM. What I'm not sure about is how to synchronize the two video streams: the frame rates will drift slowly against each other because the two attached cameras are free-running and not synchronized. So every once in a while I have to drop a frame when, say, CAM1 "overtakes" CAM2, don't I? Is the mixing normally done line by line, or is it better to do it frame by frame? How do I resize a picture? Does anyone have good literature about scaling algorithms and/or video processing in general?
Markus
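For reference, a minimal behavioral sketch (plain C, not FPGA code) of the frame drop/repeat idea using a triple buffer per camera; names such as `tbuf_t`, `tbuf_write_done` and `tbuf_read_begin` are made up for illustration and assume the camera side and display side each run at their own rate:

```c
/*
 * Behavioral sketch of frame-rate decoupling with a triple buffer,
 * one instance per camera.  The camera's capture logic calls
 * tbuf_write_done() when a frame is complete; the display/mixer,
 * locked to its own timing, calls tbuf_read_begin() once per output
 * frame and always gets the newest complete frame.  A faster camera
 * silently overwrites (drops) frames; a slower one causes the last
 * frame to be repeated.
 */
#include <stdio.h>

typedef struct {
    int write_idx;  /* buffer currently being filled by the camera  */
    int ready_idx;  /* newest completely written buffer             */
    int read_idx;   /* buffer currently shown by the display/mixer  */
    int fresh;      /* 1 if ready_idx holds a frame not yet shown   */
} tbuf_t;

static void tbuf_init(tbuf_t *t)
{
    t->write_idx = 0; t->ready_idx = 1; t->read_idx = 2; t->fresh = 0;
}

/* camera finished a frame: publish it, reuse the old "ready" buffer */
static void tbuf_write_done(tbuf_t *t)
{
    int tmp = t->ready_idx;
    t->ready_idx = t->write_idx;
    t->write_idx = tmp;     /* an unread frame sitting here is dropped */
    t->fresh = 1;
}

/* display/mixer wants the next frame: take the newest complete one */
static int tbuf_read_begin(tbuf_t *t)
{
    if (t->fresh) {         /* only swap when a new frame exists */
        int tmp = t->read_idx;
        t->read_idx = t->ready_idx;
        t->ready_idx = tmp;
        t->fresh = 0;
    }
    return t->read_idx;     /* otherwise repeat the previous frame */
}

int main(void)
{
    tbuf_t cam1;
    tbuf_init(&cam1);

    /* camera delivers two frames while the display reads only once:
       the first of the two is dropped without the reader noticing  */
    tbuf_write_done(&cam1);
    tbuf_write_done(&cam1);
    printf("display shows buffer %d\n", tbuf_read_begin(&cam1));
    return 0;
}
```

In hardware the index swap would be a small piece of registered logic with a proper clock-domain crossing between the camera and display clocks; the C model only shows the bookkeeping.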
1 Reply
Analog video systems traditionally solved this with a central sync (genlock) supplied to each camera. With digital video you can use frame buffering instead, which decouples the two input frame rates from the output timing. Image scaling is applied to full frames anyway, so frame-based mixing is the natural choice.
Altera, by the way, has a collection of megafunctions for digital image processing, e.g. a scaler.
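As a rough software reference for the resizing question, a nearest-neighbour scaler shows the basic index arithmetic; the Altera scaler megafunction does proper polyphase filtering and will look much better. The buffer layout (8-bit luma, row-major) and frame sizes are assumptions of this sketch:

```c
/* Minimal nearest-neighbour scaler as a software reference model. */
#include <stdint.h>
#include <stdlib.h>

static void scale_nearest(const uint8_t *src, int src_w, int src_h,
                          uint8_t *dst, int dst_w, int dst_h)
{
    for (int y = 0; y < dst_h; y++) {
        int sy = y * src_h / dst_h;            /* nearest source line   */
        for (int x = 0; x < dst_w; x++) {
            int sx = x * src_w / dst_w;        /* nearest source column */
            dst[y * dst_w + x] = src[sy * src_w + sx];
        }
    }
}

int main(void)
{
    /* shrink a 720x576 frame to a half-size 360x288 PiP inset */
    enum { SW = 720, SH = 576, DW = 360, DH = 288 };
    uint8_t *frame = calloc(SW * SH, 1);
    uint8_t *inset = malloc(DW * DH);
    scale_nearest(frame, SW, SH, inset, DW, DH);
    free(frame);
    free(inset);
    return 0;
}
```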