Hi, everyone.
Referring to the code below, I am trying to replace floating-point arithmetic with fixed-point arithmetic in my application. Surprisingly, the DSP usage roughly doubles, from 30 DSP blocks (floating point) to 64 (fixed point), while all other resources (RAM, logic, etc.) are reduced.
// Note: variables A and B are 16-bit shorts, passed as arguments from the host
int sum = 0;                      // accumulator must be initialized
short C;
for (int i = 0; i < N; i++) {     // N is the loop trip count (not shown in the original post)
    sum += A + B;
}
C = 0xffff & (sum >> 8);          // scale down and truncate to 16 bits
Regarding the report, I have a few doubts:
- Does fixed-point arithmetic help to reduce DSP usage? As far as I know, DSP blocks are only consumed by floating-point operations (please correct me if I'm wrong).
- Am I implementing the fixed-point arithmetic correctly?
- What approach can I take to further reduce the DSP usage?
1 Reply
Technically, DSP usage can be reduced by using lower precision; however, the compiler will not necessarily be able to pack your lower-precision computation into DSPs correctly. The "Intel FPGA SDK for OpenCL Pro Edition Best Practices Guide", Section 3.3.1, "Floating-Point versus Fixed-Point Representations", explains how you can infer fixed-point computation.
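The idea in that section is to mask operands so the compiler can see how many bits actually carry data and infer narrower integer hardware. A minimal sketch of that masking idiom, assuming 17-bit unsigned data (the bit width, mask, and function name add17 are illustrative, not taken from your post or quoted from the guide):

// Sketch only: operands carry at most 17 bits of data, so the compiler
// can infer a 17-bit integer add instead of a full 32-bit one.
unsigned int add17(unsigned int a, unsigned int b)
{
    return ((a & 0x1FFFF) + (b & 0x1FFFF)) & 0x1FFFF;   // keep the result in 17 bits as well
}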
P.S. If you are using Arria 10, which supports one FP32 fused multiply-add per DSP, then even in the best case you will only be able to do one instance of (a * b) + (c * d) per DSP, provided all operands are smaller than 18 bits. That gives you one extra operation per DSP compared to FP32.
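To make that concrete, here is a hedged sketch (not your kernel) of such a multiply-add on 16-bit operands, which stay below the 18-bit limit and so may be packed into a single DSP block in the best case described above; the function name mul_add is only for illustration:

// Sketch: 16-bit operands (< 18 bits), so the compiler may map both
// multiplies and the addition onto one Arria 10 DSP block.
int mul_add(short a, short b, short c, short d)
{
    return (int)a * (int)b + (int)c * (int)d;
}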
