Intel Community › FPGAs and Programmable Solutions › FPGA Intellectual Property

Altera_Forum

Honored Contributor I


07-11-2007
07:47 AM

help about fft code.

:) When working on an FFT project with code downloaded from altera.com, I found the following piece of code. I cannot understand why there is ">> PRESCALE". Can you explain it to me?

TempReal = (( CosReal * reversed_RealData[butterfly_index + loop_element_div2] )
         + ( SinReal * reversed_ImaginaryData[butterfly_index + loop_element_div2] )) >> PRESCALE;
TempImaginary = (( CosReal * reversed_ImaginaryData[butterfly_index + loop_element_div2] )
         - ( SinReal * reversed_RealData[butterfly_index + loop_element_div2] )) >> PRESCALE;

// Perform the butterfly calculation and scale result for alt_16 type
reversed_RealData[butterfly_index + loop_element_div2] = reversed_RealData[butterfly_index] - TempReal;
reversed_ImaginaryData[butterfly_index + loop_element_div2] = reversed_ImaginaryData[butterfly_index] - TempImaginary;
reversed_RealData[butterfly_index] += TempReal;
reversed_ImaginaryData[butterfly_index] += TempImaginary;

5 Replies

Altera_Forum

Honored Contributor I


08-03-2007
04:38 PM

I'm not familiar with this specific piece of code, but it looks like a common technique for fixed-point arithmetic.

Rather than using floating point, which is very expensive in gate count, values between zero and one (like the sine and cosine) are expressed as fractions. For convenience, all fractions have powers of two in the denominator. Also, since the denominator is the same for all fractions, adding and subtracting can just ignore it and use only the numerator. In floating point, that code would be: tmp = cos * data
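[Editor's sketch] The fixed-point idea described above can be shown in a few lines of C. Assuming a Q15 format (PRESCALE = 15, i.e. a denominator of 2^15 = 32768, which matches the 16-bit alt_16 data in the snippet; the helper names are illustrative, not from the Altera code):

```c
#include <stdint.h>

#define PRESCALE 15  /* assumed Q15 format: denominator 2^15 = 32768 */

/* Store a fraction in [-1, 1) as its numerator over 2^15. */
int16_t to_q15(double x)
{
    return (int16_t)(x * (1 << PRESCALE));
}

/* Multiply two Q15 values. The 32-bit product carries 2*PRESCALE
   fractional bits, so shifting right by PRESCALE restores Q15 --
   the same role the ">> PRESCALE" plays in the FFT code above. */
int16_t q15_mul(int16_t a, int16_t b)
{
    return (int16_t)(((int32_t)a * b) >> PRESCALE);
}
```

For example, q15_mul(to_q15(0.5), to_q15(0.5)) gives to_q15(0.25): the numerators are 16384 * 16384 = 268435456, and shifting right by 15 yields 8192, which is 0.25 in Q15.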
Altera_Forum

Honored Contributor I


08-20-2007
03:42 PM

Imagine a radix-2 butterfly. For each real and imaginary part you have

z0 = x0 + x1
z1 = x0 - x1

and then you do a complex multiplication of z * (twiddle factor). If you have an input signal for which you want to maintain full precision, then x0 and x1 are hopefully bounded between -1 and +1. Thus, the worst case is when x0 and x1 both equal one and the twiddle factor is 1+1j. In that case z0 will be 2 (and z1 will be 0), and z0 * twiddle factor could reach a magnitude of 2*sqrt(2). The 'growth' of this value would obviously overflow and cause an error. The scaling in FFTs essentially accounts for this effect, and is why the SNR of an FFT is generally well below the rule-of-thumb 6 dB per bit of word length. Some of the MegaCores have a variety of schemes to account for this, including block floating point. But that's a whole different beast.

blk
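[Editor's sketch] The per-stage scaling that counters this growth can be illustrated with one radix-2 stage on 16-bit data (names are my own, not from the Altera example): the sum of two int16_t values needs 17 bits in the worst case, so each stage computes in 32 bits and halves its outputs to fit back into 16 bits.

```c
#include <stdint.h>

/* One half of a radix-2 butterfly on 16-bit fixed-point data.
   Without the ">> 1", 30000 + 30000 = 60000 would overflow int16_t
   (max 32767); halving trades half an LSB of precision for headroom. */
int16_t butterfly_sum(int16_t x0, int16_t x1)
{
    return (int16_t)(((int32_t)x0 + x1) >> 1);
}

int16_t butterfly_diff(int16_t x0, int16_t x1)
{
    return (int16_t)(((int32_t)x0 - x1) >> 1);
}
```

An N-point FFT has log2(N) such stages, so scaling by one bit per stage divides the final output by N, which is where the overall FFT scaling convention comes from.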
Altera_Forum

Honored Contributor I


08-30-2007
01:32 PM

Thank you first.

And there is another question. You said the "denominator is a power of two", but I cannot understand this. For example, if the real value is 0.56, then my fraction form is 56/100, and 100 is a power of ten, not of two. How do you transfer 0.56 into a fraction with a power of two in the denominator? Does that mean the real data I calculate will be changed? Sorry for my English and sorry for my poor FFT knowledge ;)

--- Quote Start ---
I'm not familiar with this specific piece of code, but it looks like a common technique for fixed-point arithmetic. Rather than using floating point, which is very expensive in gate count, values between zero and one (like the sine and cosine) are expressed as fractions. For convenience, all fractions have powers of two in the denominator. Also, since the denominator is the same for all fractions, adding and subtracting can just ignore it and use only the numerator. In floating point, that code would be: tmp = cos * data
--- Quote End ---
Altera_Forum

Honored Contributor I


09-02-2007
02:08 AM

Altera_Forum

Honored Contributor I


09-05-2007
03:40 AM

Here is an example of an integer FFT that uses a pre-scale factor of 15 and has been accelerated using the Nios II C2H compiler (perhaps this is the example you are referring to):

http://www.altera.com/literature/tt/tt_nios2_c2h_accelerating_tutorial.pdf
http://www.altera.com/literature/tt/c2h_fft_cyclone_ii.zip
http://www.altera.com/literature/tt/c2h_fft_stratix_ii.zip

So in a nutshell, this example shows fixed-point math. The idea of a pre-scale operation is to increase the data resolution so that operations that degrade precision (like division, >>, or multiplication of small numbers) have as little effect as possible on the accuracy of the final result.

As Tom pointed out, there is no fundamental difference between using a denominator of 100 or 1024. That said, division (or multiplication) by 1024 is much better, since it is just a shift operation (hence the >> in your FFT code). Whenever you divide binary numbers, there is no uncertainty when the denominator is a power of two and the numerator contains a greater or equal power of two. So by pre-scaling the numbers by a power of two, performing the calculation, and then post-scaling by the same power of two, you minimize the error in your calculation (often to 0% if you reserve enough bits). Remember when I said "a greater or equal power than the denominator": the whole idea behind a pre-scaling factor is that you remove the pre-scale from the result after the calculation is done (i.e. the numerator carries the same power of two as the denominator).

To learn more, I recommend searching the web for information on "fixed point math". You should be able to find many examples that explain this concept.
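[Editor's sketch] The pre-scale idea in this reply fits in a few lines of C (the shift of 10 and the function name are arbitrary choices for illustration): plain integer division of 7 by 3 truncates to 2, but pre-scaling the numerator by 2^10 carries ten fractional bits through the divide, and the caller post-scales by the same power of two when it needs the real value.

```c
#include <stdint.h>

/* Divide with pre-scaling: the result is the true quotient times
   2^shift, so fractional precision survives the integer divide.
   The caller must ensure num << shift does not overflow int32_t. */
int32_t scaled_div(int32_t num, int32_t den, int shift)
{
    return (num << shift) / den;
}
```

For example, 7 / 3 in plain integer math gives 2, while scaled_div(7, 3, 10) gives 2389; post-scaling, 2389 / 1024.0 = 2.3330..., much closer to the true 2.3333....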
