Intel® FPGA University Program
University Program Material, Education Boards, and Laboratory Exercises

Ambiguous D5M camera demo

Honored Contributor II


I connected the D5M camera to the DE2-115 board and ran the demo. It worked fine, but when I looked into the code to understand what is happening, I found no documentation, and the code is not commented at all. I have several questions:


1 - Are there any docs explaining the code and the system design?

2 - Does this demo include the IP core? Where is the IP core?

3 - I don't understand what the line buffer is doing, especially this part. What are those sub-wires doing?

wire sub_wire0;
wire sub_wire5;
wire sub_wire3 = sub_wire0;
wire sub_wire4 = sub_wire0;
wire sub_wire2 = sub_wire0;
wire sub_wire1 = sub_wire2;
wire taps2x = sub_wire1;
wire taps1x = sub_wire3;
wire taps0x = sub_wire4;
wire shiftout = sub_wire5;

altshift_taps altshift_taps_component (
    .clken    (clken),
    .clock    (clock),
    .shiftin  (shiftin),
    .taps     (sub_wire0),
    .shiftout (sub_wire5)
    // synopsys translate_off
    ,
    .aclr ()
    // synopsys translate_on
);
2 Replies
Honored Contributor II

I bought the D5M camera and had the same problem you have...

The code has no comments and no doc explaining what was done; really a shame for Terasic.
Honored Contributor II

altshift_taps is an Altera IP core generated with the MegaWizard Plug-In Manager.

It's being used for demosaicing (look up "demosaicing" on Wikipedia): going from the raw Bayer pattern (R, G, G, B) to RGB.

In summary, every line coming from the sensor contains either RG (red/green) pixels only or GB (green/blue) pixels only.

The altshift_taps buffers a full line, so you can address the pixel at the same horizontal position on the previous line. (I don't know if they're still wasting two line buffers where one is enough.)
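The line-buffer behavior can be modeled in software (a sketch of the idea, not Terasic's actual code): a FIFO whose depth equals the line width, where the output tap is the sample that was pushed in exactly one line earlier.

```python
from collections import deque

def make_line_buffer(line_width):
    """Software model of one altshift_taps line buffer: each push
    returns the pixel that entered line_width samples earlier, i.e.
    the same column on the previous line (0 while still filling)."""
    buf = deque([0] * line_width, maxlen=line_width)
    def push(pixel):
        tap = buf[0]       # oldest sample = same column, previous line
        buf.append(pixel)  # shift the new pixel in
        return tap
    return push
```

For example, with a 4-pixel line, the tap starts echoing the input stream with a delay of exactly one line.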

Also, with this demosaicing IP, every 2x2 block of four 8-bit raw pixels {R, G1, B, G2} (4x8 bits) yields one RGB pixel of 1x24 bits (hint: G = (G1+G2)/2).

That's why, with the D5M configured for 1280x960 and one RGB pixel produced per 2x2 Bayer block (1/4 of the raw pixel count), we get VGA 640x480.
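The arithmetic above can be modeled like this (a sketch under the assumptions just stated, not the demo's actual pipeline): pack one 2x2 Bayer block into a single 24-bit pixel, averaging the two green samples.

```python
def demosaic_2x2(r, g1, b, g2):
    """Combine one 2x2 Bayer block of four 8-bit samples into a
    single 24-bit RGB word: R and B pass through, the two green
    samples are averaged."""
    g = (g1 + g2) // 2               # G = (G1 + G2) / 2
    return (r << 16) | (g << 8) | b  # pack as 0xRRGGBB

# One RGB pixel per 2x2 block: 1280x960 raw -> 640x480 VGA.
RAW_W, RAW_H = 1280, 960
VGA_W, VGA_H = RAW_W // 2, RAW_H // 2
```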

Terasic is good at hardware development, yet their SOPC compatibility and embedded system designs are not that polished.

I started some drafts, but a lot remains to be done:

- a C driver to configure the camera (size, clock frequency, gain, ...) through an Avalon I2C slave;

- a D5M IP with an Avalon Streaming interface, for intermediate Avalon-ST processing IPs;

- stream buffering (Avalon-ST <-> Avalon-MM) to interface external memories inside Qsys;

- the same for the VGA IP, to see the processed image on the VGA screen.