I am using the following hardware/software:
1. Intel GPU Iris Pro Graphics 5200
2. C++ (Visual Studio 2017) with Intel OpenCL SDK 2.0
3. MATLAB 2018
I have a question about the precision limits of this hardware. I know from its documentation that it supports only Compute Capability 1.2, which rounds floating-point results less accurately than later versions (e.g. 2.0).
When I compute a covariance matrix on the GPU (using C++/OpenCL) and compare it with the same computation, on the same data with the same equation, done on the CPU (using MATLAB), I get a mean error of around 10^(-9).
But when I compute a matrix inverse on the GPU and compare it with the same computation on the CPU, the error is around 10^(-2), which is too large for the final results of the full computation to agree.
I am using the Gauss-Jordan method to invert a matrix of around 10^4 cells.
Does anybody have experience with this situation and any advice on how to solve the floating-point precision problem?
thank you very much,
Joao V. Dornas
Thanks for the question and the interest. Your question can branch off in a few different directions, so I want to lay out some heuristic approaches and references in this response.
Floating point error can vary by:
I don't see an obvious order in which to address these bullet points, so I'll start from the top.