
CVI won't output more than one frame, eop

Altera_Forum
Honored Contributor II

Hello Forum, 

 

(sorry in advance for such a long post) 

 

I'm experiencing some issues with a video processing system that uses the VIP blockset, and I think they can be attributed to the Clocked Video Input (CVI) block. The CVI outputs a single frame of video and then never outputs another one...no end of packet (eop), no nothin', even though the subsequent block says it's ready and there are pixels coming in from the outside world. 

 

A description of my system: 

I'm attempting to blend/mix video from two different external video sources: one is an NTSC video A/D (ADV7403), the other is a separate processor (OMAP) that is generating graphics I want to overlay on top of the live video. I started my design using the files from AN427 (dated July 2010) as a basis. However, I'm using a custom board and only have 2 MBytes of RAM for buffering, so I removed the triple buffer in the deinterlacer and the double buffer before the Alpha Blender. I also removed the second (progressive) clipper and scaler from the AN427 files. However, I did need some buffering for the ADV7403 pipeline, so I added a double buffer after the initial clipper. I can get this system running just fine, with a TPG as the background layer and my live video as layer 1 (no layer 2 yet). 

 

The problem arises when I attempt to add the graphics layer. The graphics layer pipeline is: CVI-->Clipper-->Triple Frame Buffer-->Layer 2 of Alpha Blender-->CVO. 

So, the only blocks I added to the system described in the previous paragraph were the CVI, Clipper, and Triple Buffer, plus an extra input layer on the blender. The mixer layers are now: (0) TPG, (1) live video, and (2) graphics. 

 

I'm also using a NIOS to control nearly all of the blocks that have the option of presenting an MM interface. The problem I'm seeing is that when I start these blocks, I get a single frame out of the graphics CVI and then never get another one. The system is instrumented with SignalTap. Using SignalTap, I can see that the CVI outputs a single frame and signals ONE TIME on the "is_eop" line that it has reached the end of a frame. However, after that it never outputs another eop. I've also looked at the signals feeding back from the subsequent clipper, and I can see that it continuously signals that it is ready to receive more data. I could understand the CVI not outputting any more frames if it were getting backpressure from the downstream blocks, but that isn't the case here...the downstream blocks are ready for more data. 
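In case it helps, here's roughly what the NIOS start-up/status code boils down to. The GFX_*_BASE names are placeholders for whatever SOPC Builder generated in my system.h, and the register/bit positions are just my reading of the VIP Suite user guide, so double-check them against your version:

    #include <stdio.h>
    #include <io.h>
    #include "system.h"          /* SOPC-generated *_BASE macros            */

    #define VIP_REG_CONTROL  0   /* assumed: control register, bit 0 = Go   */
    #define VIP_REG_STATUS   1   /* assumed: status register                */

    /* Set the Go bit on one VIP core's Avalon-MM slave. */
    static void vip_start(unsigned int base)
    {
        IOWR(base, VIP_REG_CONTROL, 0x1);
    }

    void start_graphics_pipeline(void)
    {
        /* GFX_* are placeholder names for the graphics-layer CVI, clipper
           and triple frame buffer in my SOPC system. */
        vip_start(GFX_CVI_BASE);
        vip_start(GFX_CLIPPER_BASE);
        vip_start(GFX_FRAME_BUFFER_BASE);

        /* Dump the CVI status so it can be compared with the SignalTap data. */
        printf("graphics CVI status = 0x%08x\n",
               (unsigned int)IORD(GFX_CVI_BASE, VIP_REG_STATUS));
    }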

 

Some specifics of the graphics pipeline: the incoming video is 640x480, external sync, ~73 frames/sec, progressive. The clipper outputs only a small portion of the incoming pixels: 640x32. Thus, the triple frame buffer only requires ~300KB of memory. 
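The ~300KB figure is only back-of-the-envelope, assuming the frame buffer stores the clipped region at 4 bytes per pixel (that 32-bit storage format is an assumption on my part):

    /* Rough triple-buffer footprint for the clipped graphics region. */
    enum {
        CLIP_WIDTH       = 640,
        CLIP_HEIGHT      = 32,
        BYTES_PER_PIXEL  = 4,   /* assumption: 32-bit storage per pixel */
        NUM_BUFFERS      = 3,
        TRIPLE_BUF_BYTES = CLIP_WIDTH * CLIP_HEIGHT * BYTES_PER_PIXEL * NUM_BUFFERS
    };  /* = 245,760 bytes: comfortably under the ~300KB budget, and a small
           slice of the 2 MByte RAM */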

 

The SOPC system is running at 100MHz. 

 

I've done the memory bandwidth calcs, and I'm only using about 1/4 of my available bandwidth. The design is constrained using the sdc files that come with the VIP blocks. 

 

The thing that really makes this confusing is that I can generate an SOPC system with only the following blocks: CVI-->Clipper (set to 640x480 out)-->CVO, and the graphics display just fine. So why, when the CVI is connected to the larger system and no downstream block is signaling a stall, doesn't it output more frames? 

 

Any thoughts or suggestions would be greatly appreciated. 

 

Again, sorry for the book. 

Thanks! 

 

EDIT: I added a few pictures of the system.
Altera_Forum
Honored Contributor II

Hi, 

 

I cannot see a "smoking gun", but do have a few questions: 

I presume you are monitoring the status of the graphics CVI via the NIOS? Do you see any FIFO overflow or loss of synchronisation? 

This may sound like a stupid question, but are you sure the SW on the NIOS is not unintentionally disabling the CVI core? Have you verified (by reading the control register) that the core remains enabled? One way in which the core can become (unintentionally) disabled is if the SW writes to an unimplemented register address - the address decoding produced by SOPC is not complete, and writes to unused addresses can alias onto used addresses. This can easily happen in debugging software, where fragments of old or temporarily unused code start lying around. 
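Something like the snippet below is what I have in mind. Register 0 / bit 0 as the Go bit, and the amount of address aliasing, are assumptions on my part, so check them against the user guide and your generated system:

    #include <stdio.h>
    #include <io.h>
    #include "system.h"                 /* SOPC-generated *_BASE macros      */

    /* The aliasing failure mode: if the CVI slave only decodes, say, four
       register-address bits, then a stray IOWR(base, 16, 0) to an "unused"
       offset wraps onto register 16 mod 16 = 0 - the control register - and
       clears the Go bit without any bus error.  Reading the register back is
       a cheap way to catch this. */
    static int vip_core_is_enabled(unsigned int base)
    {
        return (IORD(base, 0) & 0x1) != 0;   /* assumed: reg 0, bit 0 = Go   */
    }

    /* Example: call this periodically from the debug/control loop. */
    void check_graphics_cvi(void)
    {
        if (!vip_core_is_enabled(GFX_CVI_BASE))   /* placeholder base name   */
            printf("Graphics CVI Go bit is clear!\n");
    }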

 

Regards, 

Niki
Altera_Forum
Honored Contributor II

Hello Niki, 

 

Thanks for your response. We've got the system working...and the problem seems to be something with how we had the CVI set up in SOPC Builder. A colleague of mine was trying different things to get it to work, and for the heck of it he decided to choose the "DVI" default on the right side of the CVI setup window (he did change the frame size output from the DVI default). We compiled that, and it worked. 

 

Right now I'm in the process of changing the DVI defaults back to what I had previously, parameter by parameter, and will try to figure out the point where it doesn't work anymore. 

 

To answer your questions: yes, I'm monitoring the CVI status via a NIOS, and nope, I didn't see a FIFO overflow or loss of sync. I was using the software project that came with AN427, and I'm fairly certain I wasn't accidentally disabling the core. 

 

Thanks again for your response. I'll post again when I figure out the CVI parameter that is causing issues.
Altera_Forum
Honored Contributor II

I've been taking the graphical-overlay CVI block back to the configuration that didn't work, parameter by parameter, to try and figure out where the system "breaks." I changed the parameters in this order:

1) Field order: changed from "any field 1st" back to "field 0 1st"; the system still worked.
2) Default frame size: changed from 800x600x32 back to 720x480x32; the system still worked.
3) FIFO size: changed from 3840 pixels back to 100 pixels; the system still worked.
4) (Not directly related to the CVI block) Global clock: I removed the "Global" directive on my top-level diagram that forces the main 100MHz clock to be global; the system still worked after I removed this. 

 

So, after doing all this, the only difference between the system that works and the system that didn't is that the OMAP (graphical overlay source) has been reconfigured to output frames at a slower rate. It was initially outputting frames at ~73Hz (this is when the system didn't work); it was changed to output frames at 30Hz. 

 

The explanation I've come up with for why the system didn't work is: when the system didn't work and the OMAP was outputting frames at 73Hz, the FIFO size in the CVI was only 100 pixels. I imagine the FIFO was constantly overflowing, and that was causing issues. When the "DVI" default was loaded into the CVI, it kicked the FIFO size up to 1920 pixels, and the system began to let more frames through, although there were still some flickering problems, which led us to reduce the frame rate of the OMAP output. 
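To put some rough numbers on that explanation (the blanking totals, and therefore the pixel clock, are guesses on my part, so this is order-of-magnitude only):

    #include <stdio.h>

    int main(void)
    {
        const double h_total = 800.0;   /* guess: 640 active + blanking     */
        const double v_total = 525.0;   /* guess: 480 active + blanking     */
        const double fps     = 73.0;
        const double pix_clk = h_total * v_total * fps;   /* ~30.7 MHz      */

        /* How long a FIFO of a given depth can ride through a complete
           stall of the downstream Avalon-ST sink: */
        printf("100-pixel FIFO  : ~%.1f us\n", 100.0 / pix_clk * 1e6);
        printf("1920-pixel FIFO : ~%.1f us (a bit over two line times)\n",
               1920.0 / pix_clk * 1e6);
        printf("one video line  : ~%.1f us\n", h_total / pix_clk * 1e6);
        return 0;
    }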

 

The question I still have is: if the above is really true, why wouldn't the CVI constantly send out an EOP signal if the FIFO was always being overrun? When it wasn't working, I only ever saw a single EOP signal.
Altera_Forum
Honored Contributor II

Glad you got it to work! Some time ago I ran into similar problems with the CVI. Taking away the input source, or changing the input so that it no longer contains SAV/EAV codes (I use BT656/BT1120 with embedded syncs), resulted in the current ST packet just being "left open" with no EOP until the core properly synchronizes again. I did some simulations with the core and could reproduce the results there, but I did not have enough time to document them and raise the issue with Altera. It is possible that this is what you have seen - once the FIFO overflows, the core never fully resynchronizes and the current ST packet is left open with no more EOPs. But then you should have seen a FIFO overflow in the status register. 

 

Both the CVI and CVO, in my opinion, are not robust enough to handle all real-world situations gracefully. The CVO has a similar problem: if you take away the video output clock (which is an input to the CVO), then its Avalon-MM slave interface (which is on a different clock domain) asserts WaitRequest forever, freezing the whole Avalon bus if it is accessed! Not very robust. I rewrote the CVO some time ago for this reason - I could not guarantee that the video output clock would always be present. I am seriously contemplating doing the same for the CVI as well.  

 

Regards, 

Niki
Altera_Forum
Honored Contributor II

Hello Niki, 

 

Thanks! Yeah, I'm pretty psyched to finally have things working! 

 

Yes, I remember reading your thread about the CVI getting stuck when you removed the input; I think you were de-asserting the data_valid and the CVI kind of froze, right? 

 

You are a braver soul than I...trying to write my own CVI/CVO sounds quite intimidating to me. 

 

With regard to your comment about seeing the FIFO overflow...when I had the graphics overlay generating frames at 73Hz, and a tiny 100-pixel FIFO in the CVI, I was printf-ing the used FIFO words to the console every couple of seconds, and I never saw it go above 1. However, I later added a check of the "sticky" overflow bit, and it was almost always asserted...so the FIFO really was overflowing. The thing that initially made me believe the printf statements was that I never saw an EOP come out of the CVI, which is what the manual says it's supposed to send when it overflows, so I assumed it wasn't overflowing. 
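For completeness, the debug polling boiled down to something like the following. The register indices and the bit position of the sticky overflow flag are from my notes rather than from the manual in front of me, so verify them (including whether the bit really is write-1-to-clear) before trusting the output:

    #include <stdio.h>
    #include <unistd.h>
    #include <io.h>
    #include "system.h"                    /* SOPC-generated *_BASE macros  */

    #define CVI_REG_STATUS      1          /* assumed: status register      */
    #define CVI_REG_USED_WORDS  3          /* assumed: FIFO used-words reg  */
    #define CVI_STS_OVERFLOW    (1u << 9)  /* assumed: sticky overflow bit  */

    void poll_graphics_cvi(void)
    {
        for (;;) {
            unsigned int used   = IORD(GFX_CVI_BASE, CVI_REG_USED_WORDS);
            unsigned int status = IORD(GFX_CVI_BASE, CVI_REG_STATUS);

            /* The instantaneous used-words count never caught the problem;
               the sticky overflow flag did. */
            printf("used words = %u, overflow (sticky) = %u\n",
                   used, (status & CVI_STS_OVERFLOW) ? 1u : 0u);

            /* Clear the sticky flag (assuming write-1-to-clear). */
            IOWR(GFX_CVI_BASE, CVI_REG_STATUS, CVI_STS_OVERFLOW);

            usleep(2000000);               /* every couple of seconds       */
        }
    }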

 

I'll be more wary next time!