
IO discrepancy between CIC 10.1 and 13.1

Altera_Forum
Honored Contributor II

I'm upgrading IP and I see an I/O mismatch between the output data widths of the CIC core generated in 10.1 versus the one from 13.1. The documentation for both shows the same formula for calculating the output width:

 

Bout = Bin + N * log2(R * M) - log2(R)

 

For my parameters this comes out to 45.68. The 10.1 design generates 47 bits and the 13.1 design generates 46 bits. Although the 13.1 width seems more faithful to the formula, I've already verified the design with the 10.1 core. I've generated other cores and have noticed that 10.1 consistently adds one extra bit to the output. Can anyone tell me whether I need to drop a bit or shift a bit to match the previous behavior? Or did the algorithm get updated, so that I can no longer expect a bit-accurate match to my previous design?
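As a sanity check on the documented formula, here is a minimal Python sketch. The parameter values below are hypothetical placeholders for illustration, not this design's actual Bin/N/R/M:

```python
import math

def cic_interp_width(b_in, n, r, m):
    """Unrounded interpolator output width: Bin + N*log2(R*M) - log2(R)."""
    return b_in + n * math.log2(r * m) - math.log2(r)

# Hypothetical parameters, for illustration only
raw = cic_interp_width(b_in=18, n=5, r=10, m=1)
print(raw)                 # fractional growth, about 31.29 for these values
print(math.ceil(raw))      # rounding up gives the minimum integer width
print(math.ceil(raw) + 1)  # one more bit on top of the ceiling
```

On the numbers reported above, the same rounding would give 46 (the 13.1 width) from 45.68, with 10.1 adding one extra bit for 47.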

 

Alex
3 Replies
Altera_Forum
Honored Contributor II

Could it be that the bit width you are seeing depends on whether the CIC is a decimator or an interpolator?

The equation you have given is for an interpolator. For a decimator you don't subtract log2(R).
Altera_Forum
Honored Contributor II

The design is an interpolator for BOTH the 10.1 core and the 13.1 core.

Altera_Forum
Honored Contributor II

 

--- Quote Start ---  

The design is an interpolator for BOTH the 10.1 core and the 13.1 core.

--- Quote End ---  

In that case, 46 bits represents the maximum bit width required at the last integrator stage. Any extra bit can be ignored and should not affect the result, since you will truncate LSBs to get your final gain.
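If the 10.1 core's extra bit is only unused MSB headroom, dropping it cannot change the value, and the final LSB truncation then gives the same result from either width. A small Python sketch of that bit-level argument (the 47/46 widths are from this thread; the 16-bit final output width is a hypothetical example):

```python
W_10_1, W_13_1 = 47, 46   # output widths reported for the two core versions
OUT = 16                  # hypothetical final output width after truncation

x = (1 << 45) - 123       # any sample value that fits in 46 bits

# Masking to 46 bits (i.e. dropping the 47th, unused MSB) leaves x unchanged
assert x & ((1 << W_13_1) - 1) == x

# The final gain is set by truncating LSBs, which is the same either way
y_from_47 = (x & ((1 << W_10_1) - 1)) >> (W_13_1 - OUT)
y_from_46 = (x & ((1 << W_13_1) - 1)) >> (W_13_1 - OUT)
assert y_from_47 == y_from_46
```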