Hi,

I have a video stream (BT1120) entering an FPGA through the CVI. The stream can be interrupted at any time (someone pulling out the plug). I have an internally generated "blue screen" that I need to switch to if the input video is interrupted. So I have taken the CVI output and the BS output to a Switch component (from the VIP suite). If I detect that the input video is missing, I command the switch to switch to the blue screen. Simple.

However, this does not always work - sometimes the switch never switches over from the CVI input to the BS input. It just remains stuck there. The switch only switches on ST frame boundaries, and it seems as if the CVI does not always end its current ST frame. I have done a couple of simulations of the CVI alone and have found the following puzzling behaviour:

1. If you take the vid_locked input low at an arbitrary point in the input stream, while the CVI is streaming video on its ST output, it simply takes the st_valid output line low, but does not end the current video frame. The video frame remains "open" indefinitely, until vid_locked goes high again. The CVI then re-syncs with the incoming video (while still keeping the previous video frame open) and, once synced again, starts to output new video data. But this data is still part of the previous video packet. The result is an invalid video packet. (See CVI_1.jpg attached.) Surely, if the input has lost lock, you would want to end the current video frame, resync and then start a new frame?

2. If you command the CVI to stop streaming video on its ST output (clear bit 0 of the control register), the CVI continues until the end of the current video frame and ends the frame correctly, BUT then outputs another control frame and starts a new video frame by outputting just the header and then nothing else. This video frame remains open as long as the CVI is stopped. (See attached CVI_2.jpg.)

A Switch component would sit forever in this situation, waiting for the current video frame to end. Both of these cases look like bugs to me. Or am I missing something?

Regards, Niki

PS: Forgot to mention that I was using Quartus 9.1SP2.
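To illustrate why an unterminated video packet wedges the pipeline, here is a toy behavioural model (my own Python sketch with made-up names, not Altera's code) of a switch that, like the VIP Switch, only changes inputs on packet boundaries:

```python
# Toy model of a boundary-only switch. A pending input change is only
# honoured on an endofpacket beat, so a packet that never ends leaves
# the switch stuck on its current input forever.

class BoundarySwitch:
    def __init__(self):
        self.selected = 0        # currently selected input
        self.pending = None      # requested input, applied at EOP
        self.packet_open = False

    def request(self, inp):
        """Ask for a different input; takes effect at the next EOP."""
        self.pending = inp

    def beat(self, sop, eop):
        """Consume one Avalon-ST beat from the selected input."""
        if sop:
            self.packet_open = True
        if eop:
            self.packet_open = False
            if self.pending is not None:
                self.selected = self.pending   # switch only here
                self.pending = None

sw = BoundarySwitch()
sw.beat(sop=True, eop=False)     # CVI starts a video packet...
sw.request(1)                    # ...lock is lost, we ask for the blue screen
for _ in range(1000):            # ...but the EOP beat never arrives
    sw.beat(sop=False, eop=False)
print(sw.selected)               # still 0: the switch never switches over
```

The pending request is only acted on at an EOP beat, which is exactly why a frame left open indefinitely blocks the switchover.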
Hello,

Is the simulation you have done a simulation of your whole system with the Switch? I am asking because the Switch behaviour can be tricky to understand, so it is better to connect a simpler component to the CVI when checking it in simulation.

BR,
Hi,

Thanks for the response! The simulation was with the switch, so yes, the switch could influence the results. But since the Avalon-ST interface gives a full "picture" of what the CVI sees, it does not really matter what is connected to the CVI. The switch may cause a certain pattern of back-pressure that causes the CVI to fail, but that is exactly what I would like to investigate.

If you look closely at the CVI_1 picture, you will see that just before the vid_locked signal goes low, the CVI is outputting valid data (you see the pulses on is_valid). The moment the vid_locked signal goes low, the CVI stops outputting valid data, while the ready line (is_ready) remains high - so there is no back-pressure from the downstream switch. The is_valid output remains low until the is_locked line goes high again. As soon as new video data enters the CVI (the first 2/3 of the data entering the CVI is vertical sync data - I have reduced the active data to 32 lines in order to make the simulation run faster), the CVI just continues outputting valid data (see the pulses on is_valid) again.

The error here is that the resulting video frame is much larger than the preceding control frame indicates, and contains data from two different video frames. And if there were no more input data, the output of the CVI would still be halfway through the video frame, and this stalls the switch (and basically the whole video pipeline). The reason I am investigating this is because I am actually seeing it in my hardware - very reproducible. I have already had to write my own switch so that I can implement work-arounds in it to force a switchover.

If anybody can verify (or contradict) this behaviour of the CVI in simulation I would be very grateful!

Regards, Niki
Hello again,

I did a quick simulation: CVI -> CVO, both at PAL. The result is different from yours: as the Lock signal goes low, there is an EOP. See image "eop". But I think that about a year ago I was working on a product and had the same problem you have. I could not use the Switch because the Lock signal going down ended in a deadlock. I do not remember which VIP version I was working on then, but this simulation is with 10.1. Can you try 10.1? If not, I can send you the *.vho of the CVI and CVO I am using for the simulation.

BR,
Niki,

I have seen this post, as well as one other from you. We are using the Altera VIP Suite to implement a video processing system. We accept several different video input formats, and the processor card that feeds us the video stream can change formats without notice, and mid-frame (we have no control over this). When we switch between formats, the CVI module hangs. This problem is fairly regular, and we have a script that can reproduce it by repeatedly changing the input source every few seconds.

The Avalon bus registers indicate that a FIFO overflow has occurred. After this condition occurs, we are unable to get any video through the pipeline, and the system sits there until a hard reset is applied. We have even tried resetting only the CVI module, but this does not seem to work. It seems to us that the issue is the transition from one format to another, which is abrupt and often accompanied by a frequency change, depending on the input video format.

I was wondering if you ever resolved the issue posted above, or your other related issue. We are looking for a work-around to this CVI issue, and any help would be greatly appreciated.

Thanks in advance, Thomas
Hi Thomas,

I was able to circumvent my problems in one way or another, but I was never able to solve the core problem. What I found was that if you remove the video clock input to the CVI, the CVI internal logic stops working correctly - the status register does not indicate a loss of lock, and it is impossible to disable the core (writing a 0 to the control register never caused bit 0 of the status register to be cleared). Also, if the input clock "glitched", the core could become stuck and I had to apply the global reset to recover. My guess is that there is significant logic (state machines) that runs off the video clock, and without a valid clock the core does not function.

I solved this by using the CVI core in "single clock mode" and adding my own dual-clocked FIFO between the video input and the CVI. One side of the FIFO runs from the video input clock and the other from the system clock. The system clock then becomes the CVI video clock input. The FIFO output goes directly into the CVI, and the inverted FIFO_EmptyFlag is basically the data_valid input to the CVI (add a small amount of logic to account for pipeline delay). The advantage is that the CVI always sees a clean clock. I then also added some logic to detect a "loss of clock" condition, which then drove the vid_locked input. I can send you the code if you want (VHDL). This solved most of my problems.

The one remaining problem was that if I interrupted the video stream, the CVI would not terminate the current video frame until I applied a new video stream and it had re-synced. In my case this was a problem, since I had the Switch component, which I wanted to switch from the video source to the blue screen in exactly such a case, and it would only switch on a frame boundary. So to solve this I wrote my own switch component which would normally wait for a frame boundary to switch, but if no end-of-frame was received within a timeout, would force a switch to the internally generated blue screen.
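For what it's worth, the "loss of clock" detection can be modelled like this (a quick Python behavioural sketch; names and the timeout value are my own, and the real logic is of course RTL): a flip-flop toggles on every video clock edge, and the system-clock side drops vid_locked if it sees no toggle activity for a number of system cycles.

```python
# Behavioural model of a clock-activity detector. A toggle bit flips on
# every video-clock edge; the system-clock domain samples it and counts
# cycles without a change. If the count reaches TIMEOUT, the video clock
# is assumed dead and "locked" (driving vid_locked) goes low.

TIMEOUT = 4  # made-up value; in RTL this would be a generic/parameter

class ClockActivityDetector:
    def __init__(self):
        self.prev = 0        # last sampled toggle value
        self.count = 0       # system cycles since last activity
        self.locked = False

    def sys_tick(self, toggle):
        """One system-clock cycle; `toggle` is the synchronised toggle bit."""
        if toggle != self.prev:      # video clock edge seen
            self.count = 0
            self.locked = True
        elif self.count >= TIMEOUT:  # no edge for TIMEOUT cycles
            self.locked = False
        else:
            self.count += 1
        self.prev = toggle

det = ClockActivityDetector()
t = 0
for _ in range(10):            # video clock running: toggle flips each cycle
    t ^= 1
    det.sys_tick(t)
assert det.locked              # activity seen -> locked
for _ in range(TIMEOUT + 2):   # video clock stops: toggle is frozen
    det.sys_tick(t)
print(det.locked)              # False -> drive vid_locked low
```

In the RTL version the toggle bit naturally needs a two-flop synchroniser into the system clock domain before the edge detect.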
This solved all my problems (although not very elegantly!). I must add that this was done on version 9.1SP2. I have since moved to version 11, but I have not checked whether the CVI behaviour is still the same. Also, the source code of the CVI (in Verilog) is available in the Quartus/IP/Altera folder, so it would be possible to fiddle with it (or at least look at it).

You do not say anything about the "downstream" path. Have you checked what state the Avalon-ST output of the CVI is in once it gets stuck? What are the levels of the Valid and Ready lines? I have written a simple "pass-through" component that you can instantiate to monitor the Avalon-ST link. The component has an Avalon-MM interface through which you can read back the current status and some statistics (like the number of video packets / control packets, the last resolution, etc.). I can send this to you as well if you think it will help.

Regards, Niki
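In case it is useful, the idea behind the pass-through monitor is roughly the following (a quick Python model of the counting logic only; names are my own, and the real component is RTL with the statistics exposed over Avalon-MM rather than as plain fields):

```python
# Model of a monitor that taps the Avalon-ST handshake without touching
# it. It counts completed packets and counts idle cycles while a packet
# is open, which is exactly the "stuck frame" symptom being discussed.

class StreamMonitor:
    def __init__(self):
        self.packets = 0          # completed packets seen
        self.packet_open = False  # SOP seen but no EOP yet
        self.idle_in_packet = 0   # cycles with no transfer while open

    def cycle(self, valid, ready, sop=False, eop=False):
        if valid and ready:       # a beat is transferred this cycle
            self.idle_in_packet = 0
            if sop:
                self.packet_open = True
            if eop:
                self.packet_open = False
                self.packets += 1
        elif self.packet_open:    # packet open, but no progress
            self.idle_in_packet += 1

mon = StreamMonitor()
mon.cycle(valid=True, ready=True, sop=True)   # header beat
mon.cycle(valid=True, ready=True)             # payload beat
mon.cycle(valid=True, ready=True, eop=True)   # packet ends cleanly
mon.cycle(valid=True, ready=True, sop=True)   # a new packet starts...
for _ in range(100):
    mon.cycle(valid=False, ready=True)        # ...and never finishes
print(mon.packet_open, mon.idle_in_packet)    # True 100 - stuck frame
```

Reading back a large idle count with Valid low and Ready high is a strong hint that the CVI has left a frame open, rather than the downstream path applying back-pressure.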
Hello Niki,

In my system the SDI IP is connected to the CVI. I had similar problems with source disconnection, but I suspected the SDI core and not the CVI. In the end I arrived at exactly the same solution as yours, "single clock mode", and it solved my problems (which I had thought were SDI core problems). Now that I have read your post I have a much better understanding, and I agree with your conclusion that this is a CVI problem. I was working on version 10.1. Thank you for your detailed explanation.

BR,
Niki,

I am responding now to say thank you for getting back to me and explaining your approach. Once I read it, it made perfect sense to me, and it seems like what they should have done in the CVI to begin with. We implemented this approach, and it solved our issues too. The input async FIFO shields the CVI from the input clock, and clk_vid (our equivalent of your system clock) is a constant clock that runs the CVI at a fixed rate.

I looked into the CVI code, and there is quite a bit of logic, and a state machine, running off the pixel clock that comes in with the input video. The CVI also uses synchronous resets, so it is not possible to reset the front-end logic without video (and the associated clock) present. Perhaps this is why the logic seems to be unresponsive to a reset after video loss. The way the CVI is implemented seems very poor to me from a fundamental design perspective, because all these clock-crossing and clock-loss scenarios are inherent to many video designs.

Also, in our system we do not transition to an output pattern in the event of loss of video. We essentially have a background pattern that is constantly fed into the chip through another dedicated path, and we switch to that on video loss. So the fact that the eof does not occur until new video is detected is not an issue for us. However, I believe that the eof is only written in once the FIFO overflows, which occurs on a video switch. The last old video line sits in the FIFO, and then the new full video line is written in, creating the overflow scenario we each observed.

At any rate, thanks for all the assistance; it was invaluable.

Thomas