I use code very similar to the example provided at this link:
The optimisation works fine in 32-bit; in 64-bit, however, the results become unstable. Using a debugger, I found that the results are identical up to the second call of the line:
if (dtrnlspbc_solve (&handle, fvec, fjac, &RCI_Request) != TR_SUCCESS)
where fvec and fjac are the same, but the output is unstable (the result is returned in the vector passed to dtrnlspbc_init).
I have not been able to determine whether this is a bug in MKL or a problem with my configuration. Any ideas about the problem are welcome.
I have an issue that is very similar to this one. We build on one CentOS (5.5) machine and deploy on another CentOS (6.4) machine. We are using the dtrnlspbc_solve routine to fit image data. Our real program calls the "solve" routine thousands of times, and from run to run we get slightly different results.
I finally narrowed the issue down to a case where I call the "solve" routine twice, one right after the other. The first time through the loop the answer converges to one point; subsequent times through the loop it converges to another. A few things of interest:
What I would like to know is:
Our code is proprietary, so I cannot post it to a public forum, but I would be willing to work with Intel directly.
Here is a bit more information about my problem.
I took one of our machines with a Xeon processor, which was running CentOS 5.5, and upgraded it to CentOS 6.5. The code runs fine on the Xeon processor, and after the upgrade it still runs fine. It appears to be an issue only when we compile on a Xeon processor and run that code on an i7-2600 processor.
We just upgraded to the latest compiler, composer_xe_2013_sp1, and the issues I was having went away. Both the Xeon and the i7-2600 processors are now internally consistent. The results still vary from processor to processor, but well within what we assume is round-off error.