I'm upgrading IP and I see an I/O mismatch between the output data widths of the CIC core generated in 10.1 and the one generated in 13.1. The documentation for both shows the same formula for calculating the output width:

B_in + N*log2(R*M) - log2(R)

For me, this comes out to 45.68. The 10.1 design generates 47 bits and the 13.1 design generates 46 bits. Although the 13.1 result seems more accurate, I've already verified the design with the 10.1 core. I've generated other cores and have noticed that the 10.1 compiler consistently adds 1 bit to the output. Can anyone tell me whether I need to drop a bit or shift a bit to match the previous behavior? Or did the algorithm get updated, so that I can no longer expect a bit-accurate match to my previous design?

Alex
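A quick way to sanity-check the documented formula is to evaluate it and apply the ceiling, as in the Python sketch below. The parameter values (b_in, n, r, m) are placeholders, not the original design's settings; the point is that a fractional result such as the 45.68 above rounds up to 46 bits, matching the 13.1 core, while 10.1 apparently allocates one bit more than that.

```python
import math

def cic_interp_output_width(b_in, n, r, m):
    """Full-precision CIC interpolator output width:
    ceil(B_in + N*log2(R*M) - log2(R))."""
    return math.ceil(b_in + n * math.log2(r * m) - math.log2(r))

# Placeholder parameters -- substitute your own B_in, N, R, M.
# A fractional result (e.g. 45.68) rounds up, so 46 matches the formula;
# the 10.1 core adds one extra bit on top of this.
print(cic_interp_output_width(b_in=16, n=4, r=50, m=1))  # 32.93 -> 33 bits
```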
3 Replies
Could it be that the bit width you are referring to depends on whether the CIC is a decimator or an interpolator? The equation you have given is for an interpolator; for a decimator you don't subtract log2(R).
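For reference, a minimal Python sketch of the two width formulas being contrasted (the parameter values are arbitrary placeholders, not taken from either core):

```python
import math

def cic_decim_output_width(b_in, n, r, m):
    """Decimator: ceil(B_in + N*log2(R*M)) -- log2(R) is not subtracted."""
    return math.ceil(b_in + n * math.log2(r * m))

def cic_interp_output_width(b_in, n, r, m):
    """Interpolator: ceil(B_in + N*log2(R*M) - log2(R))."""
    return math.ceil(b_in + n * math.log2(r * m) - math.log2(r))

# Same placeholder parameters, two different widths:
print(cic_decim_output_width(16, 4, 50, 1))   # 38.58 -> 39 bits
print(cic_interp_output_width(16, 4, 50, 1))  # 32.93 -> 33 bits
```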
The design is an interpolator for BOTH the 10.1 core and the 13.1 core.
> The design is an interpolator for BOTH the 10.1 core and the 13.1 core.

In that case, 46 bits is the maximum bit width required at the last integrator stage. Any extra bit can be ignored and should not affect the result, as you will truncate LSBs to get your final gain.
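To make that concrete, here is a minimal sketch (the 18-bit final output width and the sample value are assumptions for illustration, and the example uses a non-negative sample; for negative two's-complement samples the extra bit is likewise just sign extension). Because the worst-case growth only needs 46 bits, the extra MSB of a 47-bit word carries no information, so slicing the same bit positions out of either word gives identical output samples:

```python
def slice_bits(value, msb, lsb):
    """Keep bits [msb:lsb] of an integer sample (i.e. truncate the LSBs below lsb)."""
    return (value >> lsb) & ((1 << (msb - lsb + 1)) - 1)

x = (1 << 45) - 98765            # any sample that fits in 46 bits
word_46 = x & ((1 << 46) - 1)    # sample held in a 46-bit (13.1-style) register
word_47 = x & ((1 << 47) - 1)    # same sample held in a 47-bit (10.1-style) register

# Keep the top 18 significant bits (bits 45 down to 28) in both cases:
print(slice_bits(word_46, 45, 28) == slice_bits(word_47, 45, 28))  # True
```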