Hello All,

I've been working with the VIP video processing suite for the past few weeks with varying degrees of success. Using a custom PCB, I'm bringing analog NTSC video (via an Analog Devices ADV7403 A/D) into my Cyclone III (EP3C40), then simply sending the video out to an LCD display (an NEC NL6448BC20-21C). I based my project on a similar one that works on the NEEK; the main difference is the A/D I'm using. I'm able to get some test patterns on the display, and some nearly-OK video (it flickers at what seems like every other frame). The problems/questions I still have are:

1) The deinterlacer doesn't seem to be operating correctly: the control packets coming out of it still say the frame is interlaced.
2) One of the clippers in the chain doesn't seem to be clipping at all: the control packets coming out of it still state the frame is the same size as it was going in.
3) The control packets I'm seeing don't match the format described in the VIP user guide: I'm only seeing 3 nibbles of height information instead of the 4 nibbles described there.

I've attached a Word document with several screen captures that show the problems. The first page shows my SOPC Builder system. You can see from the SignalTap captures that the control packets coming out of the deinterlacer still have the last nibble saying the next frame is interlaced...why would that be? The next 3 pages show SignalTap outputs of things that actually look correct. Page 5 shows the clipper not working. Page 6 shows the deinterlacer not working, along with the settings I have for it. Everything downstream from the deinterlacer seems to be working properly; I can get beautiful test patterns from a progressive test-pattern generator. The project is constrained using TimeQuest and meets its timing requirements. The thing that bothers me most is that the control packet formats seem screwed up.
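For reference, here's how I'm interpreting the control packet payload from the user guide: 4 nibbles of width (most significant first), 4 nibbles of height, then 1 nibble of interlacing information. This is a minimal Python sketch of my reading of the guide, not something verified against the hardware:

```python
def decode_control_packet(nibbles):
    """Decode an Avalon-ST Video control packet payload as I read the
    VIP user guide: 4 nibbles of width (MSB first), 4 nibbles of
    height, then 1 nibble of interlacing information."""
    if len(nibbles) < 9:
        raise ValueError("expected at least 9 payload nibbles")
    width = 0
    for n in nibbles[0:4]:
        width = (width << 4) | (n & 0xF)
    height = 0
    for n in nibbles[4:8]:
        height = (height << 4) | (n & 0xF)
    interlacing = nibbles[8] & 0xF
    return width, height, interlacing

# e.g. a 640x480 progressive frame: 640 = 0x280, 480 = 0x1E0
print(decode_control_packet([0x0, 0x2, 0x8, 0x0, 0x0, 0x1, 0xE, 0x0, 0x0]))
# → (640, 480, 0)
```

If that layout is right, then a packet with only 3 height nibbles visible is what's confusing me.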
Any help or insight would be greatly appreciated. Thank you! Ted
Hello,

I believe the deinterlacer and the clipper are operating as they are supposed to in your design. The deinterlacer propagates the interlaced control packets it receives and adds a valid progressive control packet into the stream just before sending an image packet. The last packet is the one that counts, so this is fine. The same applies to the clipper; the valid packet is sent just before the image. Try triggering on the sop of a type 0 packet, and hopefully you will be able to see the control packet you are interested in just before that.

The flickering might be related to throughput issues (an overflowing CVI or a starved CVO). It looks like there is no frame buffering in your system. Even if your input and output frame rates are the same, you may have built a system that cannot possibly work with a genuine video input if you are doing a lot of clipping. What is Out_Clip doing? Do you get a stable output when using the interlaced TPG?

You may want to have a look at the Video IP example design if you have not already: http://www.alteraforum.com/forum/showthread.php?t=19710
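To illustrate what I mean by "the last packet is the one that counts", here is a small Python sketch of scanning a captured stream; the packet types are my shorthand (0xF for control, 0x0 for video), and the payload strings are purely illustrative:

```python
def control_before_video(packets):
    """From a captured stream of (type, payload) pairs, return the
    payload of the last control packet (type 0xF) seen before the
    first video packet (type 0x0) -- that is the packet whose
    settings actually apply to the image that follows."""
    last_control = None
    for ptype, payload in packets:
        if ptype == 0xF:
            last_control = payload  # newer control packets supersede older ones
        elif ptype == 0x0:
            return last_control
    return None

# A stale interlaced packet followed by the valid progressive one:
stream = [(0xF, "interlaced 720x506"),
          (0xF, "progressive 640x480"),
          (0x0, "image data")]
print(control_before_video(stream))  # → progressive 640x480
```

So if you trigger on the start of the video packet, the control packet you see immediately before it is the one that matters.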
Hello VGS,

Thank you very much for your reply! You hit the nail right on the head on both accounts! I really appreciate your input; I've been fighting with this issue for the past couple of weeks, and today I finally got it all working.

I triggered SignalTap on a type-0 packet, and sure enough the preceding control packet had all the expected correct information coming out of the deinterlacer and the clipper, and the control packet was the correct size.

You were also right about the throughput issue creating the flickering. The input into the CVI block is 720x506 and the output of the CVO is 640x480...a lot more pixels coming in than going out, and the relatively small FIFO I had in the CVO wasn't big enough to compensate; the CVO was being starved of pixels to send to the screen. Unfortunately, my system is pretty memory limited, so it's going to prove difficult to add a frame buffer. Instead, I changed the deinterlacer to output frames at the input field rate instead of the input frame rate. That doubled the output rate, and as soon as I did that the video showed up solid as a rock. I'm now filling up the FIFO in the CVO, so I'll still have to figure out a way to instantiate a frame buffer, but this at least gets me something to show the boss, and I think it proves that it is a throughput problem.

Again, thanks for your prompt and insightful post! Take care.