
Jones__David

Beginner


01-26-2015
01:58 PM


How to prevent multi-rate filter from scaling output

I'm using a lowpass filter created with ippsFIRMRInit to upsample a signal by a user-definable factor. The output of the filter is scaled roughly (though not exactly) in proportion to the upsample factor.

Is this expected behavior? If so, how do you determine the correction factor to make the upsampled data look like the original signal?

There's a "doNormal" option for ippsFIRGenLowpass, but the documentation doesn't explain what it does. I tried setting it to both true and false, with no noticeable difference in the output.


6 Replies

Igor_A_Intel

Employee


01-27-2015
02:11 AM


Hi David,

1) There is no dependency between the output signal amplitude and the upsample factor in the ippsFIRMR functionality - it works according to the manual: "The parameter upFactor is the factor by which the filtered signal is internally upsampled (see the description of the function SampleUp for more details). That is, upFactor-1 zeros are inserted between each sample of the input signal. The parameter upPhase determines where the non-zero sample lies within each upFactor-length block of the upsampled input signal. The parameter downFactor is the factor by which the FIR response obtained by filtering the upsampled input signal is internally downsampled (see the description of the function SampleDown for more details). That is, downFactor-1 output samples are discarded from each downFactor-length block of the upsampled filter response. The parameter downPhase determines where the non-discarded sample lies within each such block." The standard FIR algorithm - convolution with the taps vector - is then applied to the upsampled signal, so the upsample factor cannot influence the output signal amplitude; it depends on the filter coefficients only. You can check against Matlab (the de facto standard in this area) and you will see that its behavior is the same.
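
Igor's point - that zero insertion followed by an ordinary FIR convolution cannot by itself change the signal level - can be sketched in plain Python (no IPP; the helper names here are illustrative, not IPP APIs):

```python
def upsample_zero_insert(x, up_factor, up_phase=0):
    """Insert up_factor-1 zeros between samples (cf. ippsSampleUp)."""
    y = [0.0] * (len(x) * up_factor)
    for i, v in enumerate(x):
        y[i * up_factor + up_phase] = v
    return y

def fir(x, taps):
    """Plain FIR convolution with a zero initial delay line."""
    return [sum(t * x[n - k] for k, t in enumerate(taps) if n - k >= 0)
            for n in range(len(x))]

x = [1.0, 2.0, 3.0]
for L in (2, 4, 8):
    y = fir(upsample_zero_insert(x, L), [1.0])   # pass-through "filter"
    # The non-zero output samples are unchanged no matter how large L is:
    assert [v for v in y if v != 0.0] == x
```

With fixed taps, changing the upsample factor only changes how many zeros sit between the original samples; any level change therefore has to come from the filter coefficients.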

2) The doNormal parameter, if it is > 0, means that the FIRGen function generates normalized FIR coefficients - i.e., the sum of all coefficients equals 1.0 - which means the generated filter is a non-reinforcing (unity-DC-gain) filter.
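
What that normalization means can be shown with a minimal sketch (plain Python, not the IPP implementation; `normalize` is a hypothetical helper):

```python
def normalize(taps):
    """Scale taps so they sum to 1.0 (unity gain at DC)."""
    s = sum(taps)
    return [t / s for t in taps]

taps = [1.0, 2.0, 3.0, 2.0, 1.0]      # arbitrary un-normalized lowpass taps
norm = normalize(taps)
assert abs(sum(norm) - 1.0) < 1e-12
# A filter whose taps sum to 1 passes a constant (DC) signal through
# unchanged in steady state - it does not amplify ("non-reinforcing").
```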

3) There are several more functions in IPP that are intended for changing the sample rate of a signal: ResamplePolyphase and ResamplePolyphaseFixed.

regards, Igor

Jones__David

Beginner


01-27-2015
05:19 AM


Thank you for your reply.

So, it must be the cutoff frequency that's causing the attenuation in the output. I'm using 0.5/upsample_factor as the cutoff. I believe that's the standard cutoff for a lowpass interpolation filter, but perhaps I misunderstand how the IPP filter function works.

We are seeing an attenuation that increases with the upsample factor. Other than that, the input and output differ only in their discrete-time resolution.

Should the cutoff be something other than 0.5/upsample_factor?

Igor_A_Intel

Employee


01-27-2015
11:30 PM


Hi David,

I don't know why you use cutoff_f = 0.5/upsample_factor. Below is a link to the Matlab approach for multirate filter design:

regards, Igor

Jones__David

Beginner


03-23-2015
12:52 PM


The 0.5/L cutoff value is based on this:

http://en.wikipedia.org/wiki/Upsampling#Interpolation_filter_design

So if, for example, I have a 300 Hz tone, and I sample it at 1 kHz, I'll end up with a spike at 0.3 cycles/sample on the DFT magnitude plot. This assumes that 0.5 is the Nyquist frequency.

If I then insert a zero after every sample, the data rate increases by a factor of 2. The DFT magnitude plot of the resulting time series shows spikes at 0.15 and 0.35. The spike at 0.15 now corresponds to 300 Hz, since the Nyquist frequency has been doubled.

In order to get the original signal back, I need to remove the spike at 0.35 using a low-pass filter. To do this, I place the cutoff at 0.25, which corresponds to the Nyquist frequency (500 Hz) of the original data series. It's also halfway between the two peaks. So, in theory, everything within the passband is identical to the DFT of the original signal.
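
The worked example above can be checked numerically with a small brute-force DFT (a plain-Python sketch, not IPP; the 300 Hz / 1 kHz numbers are the ones from this post):

```python
import math

def dft_mag(x):
    """Brute-force DFT magnitudes."""
    N = len(x)
    return [abs(sum(x[n] * complex(math.cos(2 * math.pi * k * n / N),
                                   -math.sin(2 * math.pi * k * n / N))
                    for n in range(N))) for k in range(N)]

# 300 Hz tone sampled at 1 kHz -> 0.3 cycles/sample; 20 samples = 6 cycles.
N = 20
x = [math.cos(2 * math.pi * 0.3 * n) for n in range(N)]
mags = dft_mag(x)
assert max(range(N // 2 + 1), key=lambda k: mags[k]) == 6   # bin 6/20 = 0.3

# Insert one zero after every sample (L = 2): the spectrum now shows the
# tone at 0.15 plus its image at 0.35, as described above.
y = []
for v in x:
    y.extend([v, 0.0])
mags2 = dft_mag(y)
spikes = sorted(k / len(y) for k in range(len(y) // 2 + 1)
                if mags2[k] > 0.5 * max(mags2))
assert spikes == [0.15, 0.35]
```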

0.5/L will always correspond to the Nyquist frequency of the original signal in the DFT of the upsampled signal.

This works as expected in the frequency domain, but the resulting time series is attenuated.

It seems like this should be expected, though, because the DFT of the signal with inserted zeros has the same energy as the original DFT, distributed between two peaks. So you lose about half the energy after lowpass filtering. However, higher upsample factors don't result in a linear increase in the attenuation, so it's not as simple as scaling the output.

The documentation says that the cutoff must be between 0 and 0.5. I'm assuming this means that 0.5 is the Nyquist frequency, but it doesn't explicitly say so. Also, what does the cutoff represent? Is it the boundary between the passband and the transition band, or between the transition band and the stop band?

Jones__David

Beginner


03-24-2015
12:48 PM


The answer can be found here:

http://www.mathworks.com/help/dsp/examples/design-of-decimators-interpolators.html

"Notice that the filter has a gain of 6 dB. In general interpolators will have a gain equal to the interpolation factor. This is needed for the signal being interpolated to maintain the same range after interpolation."

For some reason, simply scaling the output by L didn't work before, but it does now. I'm not sure what I changed.
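
The gain-of-L conclusion is easy to reproduce with a toy interpolator (a plain-Python sketch, not IPP; the linear-interpolation taps are an illustrative choice):

```python
# Interpolate a constant (DC) signal by L = 2: zero insertion followed by
# a normalized lowpass (here a linear-interpolation kernel, taps sum = 1).
L = 2
taps = [0.25, 0.5, 0.25]

x = [1.0] * 8
up = []
for v in x:                            # zero insertion
    up.extend([v] + [0.0] * (L - 1))

y = [sum(t * up[n - k] for k, t in enumerate(taps) if n - k >= 0)
     for n in range(len(up))]

# Steady state sits at 1/L of the input level...
assert all(abs(v - 1.0 / L) < 1e-12 for v in y[4:])
# ...and scaling the output (or, equivalently, the taps) by L restores
# the original range, matching the MathWorks note quoted above.
assert all(abs(L * v - 1.0) < 1e-12 for v in y[4:])
```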

So how do you mark threads as solved in here?

Igor_A_Intel

Employee


04-01-2015
01:32 AM


Hi David,

I'm glad to see that you've dealt with the problem. There's no need to mark this thread as solved - this forum doesn't have such an option.

regards, Igor.
