Intel® Integrated Performance Primitives
Deliberate problems developing high-performance vision, signal, security, and storage applications.

Multilevel streaming 1D Wavelet transform

soundclashgmx_de
Beginner
438 Views
Hello,
I'm trying to implement a streaming multilevel 1D wavelet transform using the WTFwd functions, but I can't figure out how to set up the delay lines correctly for each decomposition level without getting boundary effects at every frame and level.

I have streaming data coming in blocks of 4096 samples and use the coefficients of the MATLAB dmey wavelet (102 coefficients). I want to decompose the signal into five levels. What I don't know is whether I have to use one IppsWTFwdState object per level so that each level keeps its own delay line across consecutive frames, or whether I have to use a single state object and set the delay lines manually for each decomposition level. The way I have implemented it right now gives me strong boundary effects from the filtering, and these effects get even stronger the deeper the decomposition level in the tree is.
Could someone please explain the right way to use the state objects, and how to set up the delay lines for this kind of streaming multilevel decomposition with 102 coefficients each for the low-pass and high-pass filters? Perfect reconstruction is not of interest; this is for analysis purposes only.

Thanks in advance
Joe
3 Replies
Mikhail_Kulikov__Int
New Contributor I

Joe,
you really do need a separate IppsWTFwdState_32f (or IppsWTInvState_32f) structure for each level of the transform. It keeps the delay-line data in its internal state between ippsWTFwd_32f (ippsWTInv_32f) calls. If you don't do this, you will get unpredictable effects at the boundaries of each processed block for any non-zero data.

Let me know whether this corrects your problem.

Regards,
Mikhail
soundclashgmx_de
Beginner
Hello Mikhail,
thanks for your quick reply. One state object per decomposition level does seem to be the right approach: I get far fewer boundary effects, but unfortunately some are still left. So my question is: do I have to pad the frame at the end, or is there something I have to set manually in the delay line? Right now I simply initialize each state object once with an offset of -1 and then call ippsWTFwd_32f on each frame, with no extra padding and without putting values into the delay line manually, using one state object per level. If I understand this correctly, it should work like an ordinary FIR filter whose delay-line values are updated automatically, so there shouldn't be any boundary effects at all? Maybe I missed the point.

For example, if I feed in blocks of 4096 samples containing a 0 dB sine wave at 800 Hz (44.1 kHz sampling rate) and process only one level with ippsWTFwd_32f, using the 102 coefficients of the digital Meyer wavelet taken from MATLAB and normalizing the result by 1/sqrt(2), I still get values of about -10 dB in the upper half (detail band above 11 kHz). In MATLAB the separation with the same coefficients is about -70 dB for the first detail band. Since the intention is to build a wavelet analyzer for audio signals, similar to an FFT, I need that separation. Do you have any idea what could be wrong with my use of the delay line?

By the way, if I don't need any reconstruction capability, is there a reason why ippsWTFwd_32f should need the coefficients of a wavelet function, or could I use an ordinary FIR low-pass/high-pass filter combination with a steeper roll-off in this case?

Thanks,
Joe
Mikhail_Kulikov__Int
New Contributor I

Hi,

it seems your model is right; I mean, you don't need to do anything with the delay lines between processing of blocks.
Note that the problem could be with the tone (sine) generation. Please check that first.
Then I would suggest you start from a very simple implementation like:

IppsWTFwdState_32f *state;

Ipp32f src[...];
Ipp32f dst[...];

... /* initialize src with some REFERENCE data, e.g. a runtime sin() */

ippsWTFwdInitAlloc_32f(...);

for (...)
{
    ippsWTFwd_32f(...);
}

... /* analyze the result to check what is wrong with it */

If the problem still persists in such a simple case, contact me here and we can discuss it.

Regards,
Mikhail

p.s. The FIRs in a WT are multi-rate (resampling). During the forward transform, the resampling stage "mirrors" half of the spectrum. For the low-pass component this is like aliasing of a small amount of the frequencies above the resulting sampling rate. But for the high-pass component the effect is dramatic: it actually mirrors and scales the entire upper part of the spectrum into the resulting (half-sample-rate) spectrum. E.g. the high-frequency signal 1 0 1 0 1 0 becomes a constant 0 or 1 at the output. This is what distinguishes a WT from just two FIR filters, and the key reason is the resampling.

p.p.s. I assume you are aware that the output data of each component (L and H) are half the size of the original signal for one level of the transform. If not, you do need to take that into account.