
Altera FFT vs. Matlab FFT: Strange Results

Altera_Forum
Honored Contributor II

Hi, 

 

I'm testing the Altera FFT core with the following parameters: 

1) Streaming or Burst architecture, 16K-point FFT 

2) 12/14/16-bit REAL input data (a pseudorandom code of {-1, 1}) 

3) the same width for the twiddle factors (12/14/16 bits) 

 

The input is a pseudorandom sequence of ONLY two symbols, -1 and 1. I keep the imaginary input at a constant 0. With 16 bits the output exponent is 0, with 14 bits it is -1, and with 12 bits it is -3. 

I tested the output both in the DSP Builder environment (Simulink) and with a pure Matlab script after generating the core. The results are completely wrong compared with the (correct) built-in Matlab FFT (FFTW). Attached are the input file, the Altera output, and the Matlab output. 

 

Thanks for the support 

 

Gabriele
18 Replies
Altera_Forum
Honored Contributor II

I suggest using a simpler source signal so you can debug more easily; several sources of error may be hidden here. 

Use a single- or multi-tone test signal to compare the results. 

That should be easier to analyze.
Altera_Forum
Honored Contributor II

Thanks for the reply, but this is my real input, not a test signal. My trade-off is between storing all the Matlab FFTs in memory and generating them each time with the FFT IP core. Better results can be achieved with the Variable Streaming fixed-point core, but the differences are still significant.

Altera_Forum
Honored Contributor II

OK, but what if you temporarily replace your real input with a test input, just for debugging purposes?

Altera_Forum
Honored Contributor II

Already done. 

 

My design has three IP cores: two FFTs and one IFFT. One FFT takes the pseudorandom code as input; the second FFT takes some BPSK signals. After some products in the frequency domain I go back to the time domain to check the data messages. If I use a LUT instead of the FFT for the pseudorandom codes (holding the Matlab FFTs quantized to 12-bit fixed point), everything is fine. If I replace the LUT with the Altera FFT for the pseudorandom codes, the results are not correct. The second FFT and the IFFT are Altera FFT IP cores and work perfectly. The problem is the FFT for this particular input (only {-1, 1}). 

 

An independent design check comes from the Matlab testbench generated along with the core (page 2-16 of the FFT MegaCore Function v8.0 User Guide). 

 

Thanks
Altera_Forum
Honored Contributor II

After plotting 

 

>> plot(real(ifft(out_Altera))) 

>> plot(imag(ifft(out_Altera))) 

 

it is evident that the imaginary part of the IFFT (which should be zero everywhere) is considerably different from zero.  

 

I also tried to understand what kind of disturbance is imposed on the ideal result by: 

 

>> plot(abs(out_Altera./out_Matlab)) 

 

or 

 

>> plot(log10(abs(out_Altera))-log10(abs(out_Matlab))) 

 

if you prefer the log plot. 

It is difficult to pin down the problem with this kind of pseudorandom signal. Try building a periodic signal (a square wave, etc.) using only +1 and -1 at the input and check the output; that should be easier to analyze.
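For example, a 16K-point +/-1 square wave is easy to generate and gives a reference spectrum that is easy to recognize (a 50% duty cycle puts energy only at the odd harmonics). A minimal Matlab sketch; the period of 32 samples is an arbitrary choice: 

x = repmat([ones(1,16) -ones(1,16)], 1, 512);   % 16384 samples of a +/-1 square wave, period 32
stem(abs(fft(x)));                              % reference spectrum: isolated lines at the odd harmonics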
Altera_Forum
Honored Contributor II

I have several suggestions and also want to point out that it is not quite fair to compare Altera's FFT to Matlab's FFT.  

 

First of all, Matlab's FFT implementation (the fft function) is actually a floating-point (double-precision) implementation, and we don't really have control over twiddle precision within the fft function, so it wouldn't be an exactly fair comparison... 

 

Also, one thing I found about Altera's FFT core (especially with the older architectures that use block floating-point representation, i.e. streaming, burst, and buffered burst) is that it is best to scale up your data so that you use the full dynamic range. I presume this is due to the block floating-point structure: the data is rescaled after each butterfly stage, so if you are not using enough precision, it is very possible that your data completely disappears somewhere inside the FFT core. 

 

Please see my attached .zip file for details. In the zip file you will find three files: 

- in.mat (your original input) 

- FFT_Altera_model.m -> the bit-accurate Matlab simulation model generated by Altera's FFT MegaCore, version 8.0 (here I am using a streaming FFT core) 

- FFT_Altera_tb.m -> my testbench comparing Altera's result with Matlab's fft result (using your input)... 

 

Three things to note: 

1. By scaling up the input before Altera's FFT and then scaling the output back, we see that Altera's FFT generates output similar to Matlab's fft function. The larger the data and twiddle resolution, the smaller the difference between the Altera and Matlab spectra. The remaining difference can be attributed to the fact that Matlab uses floating-point representation. 

2. Also, Altera's FFT output is really data_out * 2^(-exp_out) -> see the appendix in the FFT User Guide. Did you remember to apply this? 

3. Without properly scaling the input data before Altera's FFT, the difference is pretty big... I imagine this is what you're seeing... 
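To make points 1-3 concrete, here is a minimal Matlab sketch of the scale-up / scale-back flow. The model name FFT_Altera_model and its argument/output order are assumptions; substitute whatever bit-accurate .m model the MegaCore wizard generated for your core. The 2^(-exp_out) step is the formula from point 2. 

N     = 16384;                                 % 16K-point transform, as in this thread
nbits = 16;                                    % input and twiddle width
x     = 2*randi([0 1], N, 1) - 1;              % pseudorandom {-1, +1} real input, imaginary part = 0

scale = 2^(nbits-1) - 1;                       % push the input to (almost) full dynamic range
xs    = round(x * scale);

% Assumed call into the generated bit-accurate model; check the signature
% of the .m file the wizard produced for your core.
[re_out, im_out, exp_out] = FFT_Altera_model(xs, zeros(N,1), N);

% Undo the block-floating-point exponent (data_out * 2^(-exp_out)) and the
% input gain, then compare against Matlab's double-precision fft.
y_altera = (re_out + 1i*im_out) .* 2.^(-double(exp_out)) / scale;
y_ref    = fft(x);
plot(abs(y_altera - y_ref));                   % should sit at the quantization-noise floor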

 

Also if you are using the FFT core, you do need to pay attention to the sink_ready signal... make sure you don't input data while sink_ready is low... That gave me some trouble before...  

 

Anyhow, hope this helps...
Altera_Forum
Honored Contributor II

Thanks for the support. After studying the block floating-point engine I have found the right way to use it. Now I get only quantization error, with zero mean and a deviation of less than 1. Perfect. 

 

Gabriele
Altera_Forum
Honored Contributor II

I'm now studying the FFT core, but the output is still 0. Can anyone please tell me what's wrong?

Altera_Forum
Honored Contributor II

Two possible errors: 

 

1. You are feeding data into the FFT core when it isn't ready. You should wait for the sink_ready signal to be asserted. 

2. You are using a streaming FFT (I can't tell) and you are not using the full dynamic range for the input. As I explained in an earlier note in this same thread, you should really use the full dynamic range. Otherwise it is possible that your input disappears inside the core (due to the block floating point: data is rounded and truncated after each butterfly stage). 

 

Also... attach a bigger picture, so people can see what you're doing...
Altera_Forum
Honored Contributor II

Hi to all, 

 

I'm testing the FFT core v8.1 with the Quartus simulator and the output is always 0. I attach an image of my input signals. 

 


 

Thank you. 

Duarte Carona
Altera_Forum
Honored Contributor II

Sorry, I'm new here and I couldn't insert the image. 

 

Anyway, can anyone find a reason for the output to be 0? I think it is because of the input control signals; I'm looking at the FFT User Guide and following it from there, but the output doesn't change.
Altera_Forum
Honored Contributor II

Hi, 

I faced that problem before. In my case, I was able to see outputs after I decreased my clock period from 10 ns to 1 ns. Maybe you could try that.
Altera_Forum
Honored Contributor II

OK, I'll try that. 

 

Thank you
Altera_Forum
Honored Contributor II

 

--- Quote Start ---  

Hi, 

I faced that problem before. In my case, I was able to see outputs after I decreased my clock period from 10 ns to 1 ns. Maybe you could try that. 

--- Quote End ---  

 

 

A 1 ns clock period means 1000 MHz (1 GHz). That is too good to be true for any FPGA. I am afraid you are on the wrong path.
Altera_Forum
Honored Contributor II

Hi, 

 

It seems that I am having the same issue.  

 

My FFT sink_ready signal (called out_sink_ready) goes back to 0 a couple of cycles after being asserted to 1. I do not know why this is happening. Any help with this matter would be greatly appreciated. 

 

Furthermore, since I am aware of the scaling issue in the streaming implementation, I ensured that the real input is FFFF (16 bits) - the largest number possible. 

 

Thank you!
Altera_Forum
Honored Contributor II

 

--- Quote Start ---  

Hi, 

 

It seems that I am having the same issue.  

 

My FFT sink_ready signal (called out_sink_ready) goes back to 0 a couple of cycles after being asserted to 1. I do not know why this is happening. Any help with this matter would be greatly appreciated. 

 

 

--- Quote End ---  

 

 

I had this problem too. Go back to the wizard you created the IP core with and check the box that includes the global clock enable signal. Apply a logic-high level to that input; then the IP core should not stop working after the first few cycles.
Altera_Forum
Honored Contributor II

Hi,  

I am trying to use the FFT MegaCore function and I get a different result from MATLAB. 

Here is my code: 

t = 0:0.001:0.127; 
y = 2*sin(2*pi*8*t); 
Yfft = fft(y); 
stem(abs(Yfft)/128); 
ymega = MegacoreFFT_VHDL_model(y, 128, 0); 
ymega_change = abs(ymega)*2^2/128;   % exponent = -2 and I normalize to 128 
stem(ymega_change); 

Here are the two graphs: the first one is the MATLAB FFT, the second is the MegaCore FFT. 

Am I making any mistakes? 

http://i111.photobucket.com/albums/n126/leejongfan/FFT.jpg  

http://i111.photobucket.com/albums/n126/leejongfan/MegaFFT.jpg
Altera_Forum
Honored Contributor II

 

--- Quote Start ---  

I have several suggestions and also want to point out that it is not quite fair to compare Altera's FFT to Matlab's FFT.  

First of all, Matlab's FFT implementation (the fft function) is actually a floating-point (double-precision) implementation, and we don't really have control over twiddle precision within the fft function, so it wouldn't be an exactly fair comparison... 

Also, one thing I found about Altera's FFT core (especially with the older architectures that use block floating-point representation, i.e. streaming, burst, and buffered burst) is that it is best to scale up your data so that you use the full dynamic range. I presume this is due to the block floating-point structure: the data is rescaled after each butterfly stage, so if you are not using enough precision, it is very possible that your data completely disappears somewhere inside the FFT core. 

Please see my attached .zip file for details. In the zip file you will find three files: 

- in.mat (your original input) 

- FFT_Altera_model.m -> the bit-accurate Matlab simulation model generated by Altera's FFT MegaCore, version 8.0 (here I am using a streaming FFT core) 

- FFT_Altera_tb.m -> my testbench comparing Altera's result with Matlab's fft result (using your input)... 

Three things to note: 

1. By scaling up the input before Altera's FFT and then scaling the output back, we see that Altera's FFT generates output similar to Matlab's fft function. The larger the data and twiddle resolution, the smaller the difference between the Altera and Matlab spectra. The remaining difference can be attributed to the fact that Matlab uses floating-point representation. 

2. Also, Altera's FFT output is really data_out * 2^(-exp_out) -> see the appendix in the FFT User Guide. Did you remember to apply this? 

3. Without properly scaling the input data before Altera's FFT, the difference is pretty big... I imagine this is what you're seeing... 

Also if you are using the FFT core, you do need to pay attention to the sink_ready signal... make sure you don't input data while sink_ready is low... That gave me some trouble before...  

Anyhow, hope this helps... 

--- Quote End ---  

 

 

Hi, 

 

Could you please explain a bit more about the scaling part? I am not very clear on why it helps. 

In fact, it does! If I scale my input up by multiplying by 2^15-1, the difference between the MegaCore and MATLAB results is less than 10^-5. 
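A quick plain-Matlab illustration of why the scaling matters (no core involved; the signal is just the 8 Hz, 128-sample tone from the post above): with a fixed number of input bits, a small-amplitude signal only exercises a few quantization levels, so the rounding applied at the input and after every block-floating-point butterfly stage wipes out most of the information. 

nbits   = 16;
t       = 0:0.001:0.127;
y       = 2*sin(2*pi*8*t);                     % peak amplitude 2, far below full scale
y_small = round(y);                            % quantized as-is: only the integers -2..2 survive
g       = (2^(nbits-1) - 1) / max(abs(y));     % gain toward full scale
y_big   = round(y * g) / g;                    % scale up, quantize, scale back down
err     = @(ref, est) 20*log10(norm(ref - est) / norm(ref));
fprintf('relative error: %.1f dB unscaled, %.1f dB at full scale\n', ...
        err(y, y_small), err(y, y_big));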

 

Your support would be much appreciated! 

 

Dennis