FPGA Intellectual Property

CSC Settings - 24bit RGB - 16-bit RGB

Honored Contributor II


I am trying to use the CSC in VIP to convert a video data stream from 24-bit RGB (8 bits per colour) to 16-bit RGB (RGB565) for use with a custom display peripheral. I have set the CSC for run-time control so that I can manipulate the coefficients to adjust the amount of red/green/blue and fine-tune the output display.


My problem is that I cannot work out from the docs how to configure the coefficients and summands. Each setting uses a 32-bit register, but I don't understand how the fractional part is encoded. Basically, I want to scale red and blue by 0.125 and green by 0.25, and then adjust the summand for each channel to increase/decrease the amount of that colour in the output picture. This does not work the way I expected: the output changes for blue and red but not for green, and sometimes it doesn't change for any colour channel.


I have also noticed that the CSC does not stop when I use Stop(1) before trying to edit the registers. Is there something I need to do before issuing the stop() command?


Does anyone have a simple description of how the CSC registers work, i.e. how do you specify that the red output should be 0.125 times the red input?


I know the algorithm is Dout1 = A0 x Din1 + B0 x Din2 + C0 x Din3 +S0, etc. 


Any help or information would be greatly appreciated. 


2 Replies
Honored Contributor II

Why not drop the CSC and just use the upper 5 or 6 bits of each color component to drive the display? 


Multiplying by 0.125 or 0.25 is equivalent to shifting the bits right by 3 or 2. There is no need to perform a full multiplication.
Honored Contributor II

That was how I initially implemented the design, but the colours are not mixing correctly. I need to be able to adjust the levels of red, green and blue independently. That is why I considered the CSC, which gives me the opportunity to add a summand (positive or negative) to the end result and to control this at run time from SOPC.


I have figured out the format of the 32-bit summand and coefficient registers. In case it is helpful to others, here is my understanding:


The megafunction setup in SOPC defines 8 fractional bits by default; you can change this in the wizard. The register holds a fixed-point value: the fractional bits continue the ...8-4-2-1 weighting of the integer part, i.e. the MSB of the fraction is worth 1/2, the next bit 1/4, the next 1/8, and so on.


So, for example, 1.5 is written to the 32-bit register as 0x00000180.


Negative numbers are the two's complement of this value, computed over the whole 32-bit register.


For example, -1 would be written as 0xFFFFFF00. 


I still have a problem with not being able to stop the CSC which I now believe to be a problem with my system design. 


I am using pipeline bridges between the DDR2 RAM and the DIL/VFB cores. Should these use the core clock or the DDR2 clock?