I am implementing continuous data decimation (FIR filtering followed by downsampling) using IPP primitives, and I just want to confirm that I am using the correct approach. What is the proper way to handle continuous data so that the decimated output is gapless and acceptable for FFT processing?
Conceptually I do the following:
1. get a block of data
2. set the delay line to values from the external delay-line buffer
3. initialize the FIR filter
4. filter the data
5. save the delay line into the external delay-line buffer
6. downsample the filtered data, passing the phase to the next iteration
7. copy the downsampled data to the external data buffer
8. go to the next block of data
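The per-block bookkeeping above can be sketched in plain C++ (no IPP calls, just to show the delay-line and phase carry-over; the filter coefficients and block sizes are arbitrary for illustration):

```cpp
#include <cstddef>
#include <vector>

// Toy per-block FIR decimator mirroring the steps above.
// "delay" holds the last taps.size()-1 input samples from the
// previous block; "phase" carries the decimation phase across
// block boundaries so the output stays gapless.
struct BlockDecimator {
    std::vector<double> taps;    // FIR impulse response
    std::vector<double> delay;   // external delay-line buffer
    std::size_t factor;          // downsampling factor
    std::size_t phase = 0;       // decimation phase, carried over

    BlockDecimator(std::vector<double> t, std::size_t f)
        : taps(std::move(t)), delay(taps.size() - 1, 0.0), factor(f) {}

    std::vector<double> process(const std::vector<double>& block) {
        // 1) prepend the delay line so the filter sees continuous history
        std::vector<double> x(delay);
        x.insert(x.end(), block.begin(), block.end());

        // 2) filter: y[n] = sum_k taps[k] * x[n + taps.size()-1 - k]
        std::vector<double> y(block.size());
        for (std::size_t n = 0; n < block.size(); ++n) {
            double acc = 0.0;
            for (std::size_t k = 0; k < taps.size(); ++k)
                acc += taps[k] * x[n + taps.size() - 1 - k];
            y[n] = acc;
        }

        // 3) save the tail of the input as the new delay line
        delay.assign(x.end() - static_cast<std::ptrdiff_t>(taps.size() - 1),
                     x.end());

        // 4) downsample, carrying the phase to the next block
        std::vector<double> out;
        for (std::size_t n = phase; n < y.size(); n += factor)
            out.push_back(y[n]);
        phase = phase + factor * out.size() - y.size();
        return out;
    }
};
```

If the delay line and phase are carried correctly, decimating a signal in arbitrary block splits gives bit-identical results to decimating it in one piece, which is a handy check for continuity problems like the one described below.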
The signal is a 100 Hz sine wave, and the FIR filter cutoff is at around 2 kHz.
The problem is that when I piece the decimated blocks together, the result doesn't always look continuous. What steps am I missing?
Take a look at the thread http://software.intel.com/en-us/forums/showthread.php?t=79151&o=a&s=lr. Also, I think you don't need to save the delay line to an external delay-line buffer for each block of data (if you are using an ippsFIR_ function with the internal state structure) - the FIR function automatically saves the delay line in the state and uses it for the next block of data. Your sequence is probably wrong here:
set a delay line to values from external delay line buffer
initialize FIR filter
The delay line must be set DURING FIR initialization or AFTER it, not before.
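The ordering point can be shown with a toy (non-IPP) state object: initialization allocates and zeroes the internal delay line, so any values written before initialization are simply discarded.

```cpp
#include <cstddef>
#include <vector>

// Toy illustration only (NOT the IPP API): init() zeroes the
// internal delay line, just as creating a fresh FIR state does,
// so a delay line written BEFORE init is wiped out.
struct ToyFIRState {
    std::vector<double> dly;

    void init(std::size_t tapsLen) {
        dly.assign(tapsLen - 1, 0.0);  // init resets the history to zeros
    }
    void setDlyLine(const std::vector<double>& d) {
        dly = d;                       // explicit restore; call AFTER init
    }
};
```

The same logic applies to the real IPP state: restore the saved delay line only once the filter state exists, never before.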
I had to save the delay-line values into an external buffer for each block of data because I am using a DLL wrapper with LabVIEW as my prototyping tool: after each pass I free the memory and destroy the filter state pointer, since I couldn't pass it in and out of LabVIEW. I realize this is not the best way to do it, but it gives me great data-graphing and debugging capability. In the final C++ code, initialization and destruction will obviously be done only once. The problem was in my DLL wrapper code; after I got it fixed, things started to look much better.