Hello,
This is my first time using Intel's FIR filter IP and I'm having difficulty getting it to work.
I basically want to low-pass filter the data coming from the ADC (LTC2387) using the FIR II IP. The filter is a fractional-rate filter.
Here are some of the filter specs:
Input sampling rate: 15 MSPS
Filter clock: 60 MHz
Coefficients: 51 taps, 16-bit signed binary
Input: 16-bit signed binary
Output: 36-bit signed binary
The data coming from the ADC is in 2's complement, so I first convert it to signed binary (sign magnitude) and feed the converted data to the filter's input.
To make sure everything is working, I hold the filter input constant and observe the output, expecting it to stay constant as well, but it doesn't.
The input is a constant -256 (FF00h in 2's complement, 8100h in sign magnitude), and the output looks quite random, as you can see in the attachment.
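For illustration, the conversion stage looks roughly like this (a minimal sketch only; the entity and signal names are placeholders, not the exact ones in pre_process_20MSPS.vhd):

-- Minimal sketch of a 2's-complement to sign-magnitude conversion (16-bit).
-- Entity/signal names are illustrative, not the ones in pre_process_20MSPS.vhd.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity twos_to_signmag is
  port (
    clk      : in  std_logic;
    data_in  : in  std_logic_vector(15 downto 0);  -- 2's complement from the ADC
    data_out : out std_logic_vector(15 downto 0)   -- sign-magnitude
  );
end entity twos_to_signmag;

architecture rtl of twos_to_signmag is
begin
  process(clk)
  begin
    if rising_edge(clk) then
      if data_in(15) = '1' then
        -- Negative: keep the sign bit, magnitude is the negation of the lower 15 bits.
        -- e.g. FF00h (-256) becomes 8100h (sign = 1, magnitude = 256).
        data_out <= '1' & std_logic_vector(unsigned(not data_in(14 downto 0)) + 1);
      else
        -- Non-negative values are identical in both representations.
        data_out <= data_in;
      end if;
    end if;
  end process;
end architecture rtl;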
I have also attached the archive file, which includes the filter I am using. The filter is in the pre_process_20MSPS.vhd file, along with the 2's complement to sign-magnitude conversion process.
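In case it helps, the constant-input check is essentially the following (a simplified testbench sketch; the FIR II port names ast_sink_*/ast_source_* and the instance name fir_ii_0 are my assumptions and should be checked against the generated instance):

-- Sketch of the constant-input check. The component declaration below reflects
-- my assumption of the FIR II Avalon-ST interface; adjust to the generated core.
library ieee;
use ieee.std_logic_1164.all;

entity tb_const_input is
end entity tb_const_input;

architecture sim of tb_const_input is

  component fir_ii_0 is  -- placeholder name for the generated FIR II instance
    port (
      clk              : in  std_logic;
      reset_n          : in  std_logic;
      ast_sink_data    : in  std_logic_vector(15 downto 0);
      ast_sink_valid   : in  std_logic;
      ast_sink_error   : in  std_logic_vector(1 downto 0);
      ast_source_data  : out std_logic_vector(35 downto 0);
      ast_source_valid : out std_logic;
      ast_source_error : out std_logic_vector(1 downto 0)
    );
  end component;

  signal clk        : std_logic := '0';
  signal reset_n    : std_logic := '0';
  -- Constant -256: FF00h in 2's complement, 8100h in sign magnitude.
  signal sink_data  : std_logic_vector(15 downto 0) := x"8100";
  signal sink_valid : std_logic := '1';
  signal sink_error : std_logic_vector(1 downto 0) := "00";
  signal src_data   : std_logic_vector(35 downto 0);
  signal src_valid  : std_logic;
  signal src_error  : std_logic_vector(1 downto 0);

begin

  clk     <= not clk after 8.333 ns;  -- ~60 MHz filter clock
  reset_n <= '1' after 100 ns;

  dut : fir_ii_0
    port map (
      clk              => clk,
      reset_n          => reset_n,
      ast_sink_data    => sink_data,
      ast_sink_valid   => sink_valid,
      ast_sink_error   => sink_error,
      ast_source_data  => src_data,
      ast_source_valid => src_valid,
      ast_source_error => src_error
    );

  -- Expectation: once the filter pipeline has flushed, ast_source_data should
  -- settle to a constant value (the input times the filter's DC gain).

end architecture sim;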
What am I doing wrong?
Hi,
It has been some time since I last heard from you, so this thread will be transitioned to community support. If you have a new question, feel free to open a new thread to get support from Intel experts; otherwise, community users will continue to help you on this thread. Thank you.
Hello CheePin,
I was going to ask another question about this; should I ask it here or open a new thread?
Hi,
Would you mind opening a new thread for this? Thank you very much.