Altera_Forum
Honored Contributor I

Difference float calculation on FPGA and CPU

Hi, 

I use floating-point calculation in a convolutional neural network (OpenCL). 

The results of the calculation on the FPGA and the CPU are similar, but the low-order bits differ. 

How can I fix this problem? How can I get a result on the FPGA identical to the result on the CPU?
3 Replies
Altera_Forum
Honored Contributor I

As long as the floating-point operations are carried out in the same order as in the CPU code, the output will be bit-exact on the FPGA (a + b + c is not necessarily the same as a + c + b or any other permutation of adding these three numbers). Note that parallelizing a floating-point reduction in an NDRange kernel (including using SIMD), or optimizing such an operation in a single work-item kernel using a shift register (as outlined in Altera's documents), will produce a slightly different output compared to sequential execution on a CPU.

Altera_Forum
Honored Contributor I

 

--- Quote Start ---  

Hi, 

I use floating-point calculation in a convolutional neural network (OpenCL). 

The results of the calculation on the FPGA and the CPU are similar, but the low-order bits differ. 

How can I fix this problem? How can I get a result on the FPGA identical to the result on the CPU?

--- Quote End ---  
Unless the FP add/sub/mul/div algorithms are EXACTLY the same in each implementation (e.g., how rounding is handled), you can reasonably expect a few bits of difference in the mantissa of the final result. This is an age-old problem in software using floating point.

You should compare the expected and received values for equality within some error bound, and not expect them to be EXACTLY equal. I.e., use: if (abs(a - b) < max_error) ... as opposed to: if (a == b) ... max_error should be set to some small tolerance value, like 1e-6, or whatever makes sense for your application. If you go down the path of looking for EXACTLY EQUAL, you will be chasing your tail for years.