Valued Contributor III

VIP suite - how can I make my Y channel into 3xY in parallel in SOPC?

hi,  

 

In my design I want to mix two streams with the alpha blending mixer.  

 

One stream, the background layer, is 800x600 represented in RGB, with three color planes in parallel.  

 

The other stream, which comes from a camera source (clipped and scaled down to 640x480), is represented by an 8-bit Y (luminance) value.  

 

Since the background layer is using three color planes in parallel, the Y values from the 640x480 stream also need to be presented as 3 * Y in parallel for the alpha blending mixer. 

 

How should I duplicate the Y values in SOPC Builder so that the mixer receives the correct stream?  

 

I tried using the color plane sequencer, but it says that the number of occurrences of channel Y on din0 and dout0 must be the same.
7 Replies

Valued Contributor III

Hi, 

 

The easy hack would be to install a custom block on the sequential stream to triplicate each luma sample inside image packets. 

...,Y_10, Y_11, Y_12,... -> ...,Y_10, Y_10, Y_10, Y_11, Y_11, Y_11, Y_12, Y_12, Y_12,... 

 

You can then use the color plane sequencer to do the transformation: 

Ya, Yb, Yc in sequence -> Ya:Yb:Yc in parallel 
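The two steps can be modeled in a few lines of Python; this is only a software sketch of the data transformation (the function names are made up), not HDL:

```python
def triplicate(luma):
    """Repeat each luma sample three times in sequence
    (what the custom block would do inside image packets)."""
    out = []
    for y in luma:
        out.extend([y, y, y])
    return out

def to_parallel(seq, planes=3):
    """Group a sequential stream into beats of `planes` symbols in
    parallel (what the color plane sequencer would then do)."""
    return [tuple(seq[i:i + planes]) for i in range(0, len(seq), planes)]

samples = [10, 11, 12]           # Y_10, Y_11, Y_12
tripled = triplicate(samples)    # [10, 10, 10, 11, 11, 11, 12, 12, 12]
pixels = to_parallel(tripled)    # [(10, 10, 10), (11, 11, 11), (12, 12, 12)]
```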

 

The reason I am suggesting this approach is to save you the trouble of switching the format of the control packets from sequential to parallel yourself, and to let the color plane sequencer handle that for you. 

 

The main issue with this suggestion is that it is only applicable if the input pixel rate is less than a third of the clock rate. Otherwise, your custom block will have to apply too much back-pressure and the input will overflow. In that case, writing your own custom color plane sequencer may be the easiest route.
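The feasibility condition above is simple arithmetic; a small check with illustrative numbers (the rates below are examples, not taken from any particular design):

```python
def triplication_feasible(pixel_rate_hz, clock_hz):
    """The custom block emits 3 samples per input pixel, so it only
    keeps up when the pixel rate is below a third of the clock rate."""
    return pixel_rate_hz < clock_hz / 3

# Illustrative: a 13.5 MHz pixel rate against a 40 MHz clock just misses.
print(triplication_feasible(13.5e6, 40e6))   # → False (13.5 > 40/3 ≈ 13.33)
print(triplication_feasible(10.0e6, 40e6))   # → True
```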
Valued Contributor III

hi again, 

 

I'm clocking the camera stream into the SOPC system with a 27 MHz clock, while the system itself uses a 40 MHz clock for the VIP chain, 

so I probably won't be able to solve this using your initial suggestion of triplication.  

 

Also, wouldn't I have to change the control data packet information if I were to change the width of the VIP frame from 640 pixels to 640 x 3 pixels in order to get the Y_10, Y_10, Y_10, Y_11, Y_11, Y_11, ... sequence?  

 

I'm not sure if this is the right solution, but what would happen if I created a custom module that receives the camera stream from a source (the deinterlacer) and places the 8-bit Y values in a 24-bit signal (23..16)(15..8)(7..0), which is then streamed along with the valid, sop and eop signals to the alpha blending mixer (sink)? Would this create a mismatch in the control data packet?
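The byte-lane packing described here is easy to model; a minimal sketch (the function name is illustrative):

```python
def pack_y_to_24bit(y):
    """Replicate an 8-bit luma value into all three byte lanes of a
    24-bit data word: bits (23..16), (15..8) and (7..0)."""
    assert 0 <= y <= 0xFF
    return (y << 16) | (y << 8) | y

print(hex(pack_y_to_24bit(0x80)))   # → 0x808080
```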
Valued Contributor III

> Also, wouldn't I have to change the control data packet information if I were to change the width of the VIP frame from 640 pixels to 640 x 3 pixels in order to get the Y_10, Y_10, Y_10, Y_11, Y_11, Y_11, ... sequence?  

 

No, the width in the control packets would still be 640 since the number of pixels stays the same. The transformation takes you from 1 channel in sequence to 3 channels in sequence, but this information is not carried in control packets; it is a compile-time parameter for most VIP cores. 

 

> I'm not sure if this is the right solution, but what would happen if I created a custom module that receives the camera stream from a source (the deinterlacer) and places the 8-bit Y values in a 24-bit signal (23..16)(15..8)(7..0), which is then streamed along with the valid, sop and eop signals to the alpha blending mixer (sink)? 

 

I am not sure I understood what your source is, but if you have control over the source and are already generating control packets yourself, then yes, this is by far the easiest solution. If your source is the deinterlacer VIP core working with 1 color sample per pixel (1 channel in parallel and 1 channel in sequence), then you have to modify the control packets on the fly as described in: 

http://www.altera.com/literature/ug/ug_vip.pdf (Figure 4.10 to Figure 4.8). 

The data contained in the control packet would still be the same but it is transmitted differently.
Valued Contributor III

hi again, 

 

>I am not sure I understood what your source is, but if you have control over the source and are already generating control packets yourself, then yes, this is by far the easiest solution. If your source is the deinterlacer VIP core working with 1 color sample per pixel (1 channel in parallel and 1 channel in sequence), then you have to modify the control packets on the fly as described in: 

http://www.altera.com/literature/ug/ug_vip.pdf (Figure 4.10 to Figure 4.8). 

The data contained in the control packet would still be the same but it is transmitted differently.  

 

My video source (8-bit Y) comes from a camera; this source is clocked into my system, clipped, scaled and then deinterlaced using VIP cores.  

 

You're right about the deinterlacer using a single color plane. I placed a custom module between the deinterlacer and the alpha blending mixer, and I tried changing the control data packet from a single color plane in sequence into the matching 3 color planes in parallel. When the "startofpacket" signal is received and the "data" signal indicates a control data packet, I deassert the "valid" signal going to the alpha blending mixer and take three samples from the control data payload (which is received in sequence from the deinterlacer). After that I assert "valid" and send the control packet payload as three color planes in parallel to the mixer. I do this three times to shift all the control data out to the mixer. For the last control data payload I assert "endofpacket", and after that I provide "startofpacket" for the video data and start shifting out the video data received from the deinterlacer as 3 color planes in parallel.  
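A software model of that regrouping (buffer three sequential payload symbols, emit one parallel beat, tag the first with sop and the last with eop) might look like this; it ignores the cycle-level valid handshaking and the field names are illustrative:

```python
def ctrl_seq_to_parallel(payload, planes=3):
    """Regroup a control-packet payload received one symbol per beat
    into beats of `planes` symbols in parallel, marking packet
    boundaries with sop/eop flags."""
    beats = []
    for i in range(0, len(payload), planes):
        beats.append({"data": tuple(payload[i:i + planes]),
                      "sop": i == 0,
                      "eop": i + planes >= len(payload)})
    return beats

# Nine sequential payload symbols become three parallel beats;
# eop is asserted only on the last one.
beats = ctrl_seq_to_parallel([10, 20, 30, 40, 50, 60, 70, 80, 90])
```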

 

What's happening now is that the output from the alpha blending mixer, which is clocked out to a screen, switches on and off every 1-2 seconds, and the sync signals seem to go down with the video data. Also, I sometimes get some sort of sparkling noise (often green in color) spread across the output video data. This noise seems light-sensitive, as it moves across the camera video data on my output screen. I would appreciate your thoughts on these matters.
Valued Contributor III

Hi, 

 

>What's happening now is that the output from the alpha blending mixer, which is clocked out to a screen, switches on and off every 1-2 seconds, and the sync signals seem to go down with the video data 

This looks like a throughput issue. Your source may provide either too much data or not enough data for the sink. You may have all your sources and sinks running at the same rate (e.g. 60 Hz), but this does not mean that they are perfectly in sync, since you could be using slightly different clocks. This kind of problem is typically fixed by adding a triple buffer at a strategic position in the system. Perhaps you can simply switch triple buffering on in the deinterlacer, but it is hard to say without seeing the full system. 
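To see how quickly two "same rate" clocks drift apart, a back-of-the-envelope calculation (the rates are purely illustrative):

```python
def frames_until_one_frame_slip(rate_a_hz, rate_b_hz):
    """With two nominally equal frame rates derived from different
    crystals, the streams drift apart by one whole frame after roughly
    this many frames - which is why a triple buffer is needed."""
    return rate_a_hz / abs(rate_a_hz - rate_b_hz)

# Illustrative: 60 Hz vs 60.01 Hz slips a full frame every ~6000 frames,
# i.e. about every 100 seconds.
print(round(frames_until_one_frame_slip(60.0, 60.01)))   # → 6000
```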

The mixer could also be the cause of this throughput issue but once again it is hard to say without a complete description of the system. 

 

> Also, I sometimes get some sort of sparkling noise (often green in color) spread across the output video data 

If the "noise" is consistent and stays with the light, then it could be a range issue. The VIP cores tend to use the full range available for color samples, from 0 to 2^(bits_per_sample) - 1. Some video sinks require a constrained range.
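If the sink does expect a constrained range, the fix is a clamp (or a proper full-to-limited range conversion) before the output. A minimal clamp sketch, assuming a BT.601-style limited luma range of roughly 16..235:

```python
def clamp_to_video_range(y, lo=16, hi=235):
    """Clamp a full-range (0..255) luma sample into the constrained
    'video' range some sinks expect (roughly 16..235 for luma)."""
    return max(lo, min(hi, y))

print(clamp_to_video_range(255))   # → 235
print(clamp_to_video_range(0))     # → 16
print(clamp_to_video_range(100))   # → 100 (already in range)
```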
Valued Contributor III

hi, 

 

I'm now able to change the single Y color plane into 3 * Y color planes in parallel without getting the on/off switching on the output screen. 

It looks like the on/off switching on the output screen was due to characteristics of the video stream. 

 

The deinterlacer (which provides the source to my custom module) is actually producing two control data packets (only the parameters from the last received control data packet are used by the VIP modules). One of the control data packets contains the width and height payload as expected, with width = 640, height = 480 and interlacing payload "0000". The other control data packet contains width = 640 but height = 240 and interlacing payload "1010"; this is the control packet that should be discarded. 
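Filtering out the unwanted field-sized control packet can be modeled as below; the dictionary field names are illustrative, only the width/height/interlacing values come from the post:

```python
def keep_progressive_ctrl(packets):
    """Drop control packets describing an interlaced field (flagged
    here by interlacing bits '1010'), keeping the progressive one."""
    return [p for p in packets if p["interlacing"] != "1010"]

packets = [{"width": 640, "height": 240, "interlacing": "1010"},
           {"width": 640, "height": 480, "interlacing": "0000"}]
kept = keep_progressive_ctrl(packets)
print(kept)   # → [{'width': 640, 'height': 480, 'interlacing': '0000'}]
```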

 

In my custom module, where I'm monitoring the input from the deinterlacer in order to detect control data packets, the 640x240 control data packet was causing a mismatch between what is received and what is sent from my custom module (mainly due to the different behavior of the valid signal during the control data packets; the startofpacket for the video data packet going out from my custom module was set at the wrong time).  

 

Regarding the noise, I still haven't been able to solve that issue. The noise is sometimes present after I compile my design, while other times it's not. Could this be timing related? 

I tried placing some assignments in Quartus II in order to improve timing characteristics. For instance, I used the "Fast Output Register" assignment for the video data and sync signals going to the output screen, and I also set "Fast Input Register" for the video stream received from the camera; however, this only increased the noise.
Valued Contributor III

hi again, 

 

It looks like I'm not quite finished with the throughput issue yet. I did manage to get a steady video stream out to my output screen when placing my custom module between the deinterlacer and the clocked video output (CVO) module. However, when placing the alpha blending mixer in front of the CVO, nothing is displayed on the output screen. It looks like I'm getting some sort of deadlock; I'll try using SignalTap to see what's happening.