I just started learning IPP. Now I'm stuck because the ippiCrossCorrFull_NormLevel_32f_C1R function returns strange results.
I have the following test code:
-----------------------------------------------
Ipp32f pSrc[5*4] = { 0, 0, 0, 1, 0,
                     0, 0, 0, 1, 0,
                     0, 0, 0, 1, 0,
                     0, 0, 0, 1, 0 };
Ipp32f pTpl[3*4] = { 0, 1, 0,
                     0, 1, 0,
                     0, 1, 0,
                     0, 1, 0 };
// according to the manual the result size is (5+3-1) x (4+4-1)
IppiSize dstSize = {7, 7};
Ipp32f pDst[7*7];
IppiSize srcRoiSize = {5, 4};
IppiSize tplRoiSize = {3, 4};
int srcStep = 5*sizeof(Ipp32f);
int tplStep = 3*sizeof(Ipp32f);
int dstStep = 7*sizeof(Ipp32f);
IppStatus st = ippiCrossCorrFull_NormLevel_32f_C1R(pSrc, srcStep, srcRoiSize,
                                                   pTpl, tplStep, tplRoiSize,
                                                   pDst, dstStep);
printf("%s\n", ippGetStatusString(st));
for(int y = 0; y < dstSize.height; y++) {
    for(int x = 0; x < dstSize.width; x++) {
        printf("%f ", pDst[x + y*dstSize.width]);
    }
    printf("\n");
}
---------------------------------------------
the result is the following:
##########################
ippStsNoErr: No error, it's OK
0.153093 0.000000 0.000000 -0.213201 0.426401 -0.213201 0.000000
0.306186 0.000000 0.000000 -0.316228 0.632456 -0.316228 0.000000
0.000000 -0.306186 -0.153093 -0.408248 0.816496 -0.408248 -0.153093
0.612372 0.000000 0.000000 -0.500000 1.000000 -0.500000 0.000000
0.000000 -0.306186 -0.153093 -0.408248 0.816496 -0.408248 -0.153093
0.306186 0.000000 0.000000 -0.316228 0.632456 -0.316228 0.000000
0.000000 0.076547 0.000000 -0.213201 0.426401 -0.213201 0.000000
###########################
the result according to MATLAB should be:
**********************************
0 0 0 -0.2132 0.4264 -0.2132 0
0 0 0 -0.3162 0.6325 -0.3162 0
0 0 0 -0.4082 0.8165 -0.4082 0
0 0 0 -0.5000 1.0000 -0.5000 0
0 0 0 -0.4082 0.8165 -0.4082 0
0 0 0 -0.3162 0.6325 -0.3162 0
0 0 0 -0.2132 0.4264 -0.2132 0
**********************************
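For comparison, here is a brute-force double-precision sketch of the zero-normalized cross-correlation in the convention MATLAB appears to use for the "full" result (source zero-padded, statistics taken over the complete template window, and the degenerate 0/0 case of a constant window defined as 0). It is not IPP's implementation, and the helper name znccFull is made up for this sketch; it is only a standalone reference that reproduces the table above, including the all-zero columns:
-----------------------------------------------
#include <stdio.h>
#include <math.h>

/* ZNCC at one "full"-mode output position (x, y); source is zero-padded. */
static double znccFull(const float *src, int srcW, int srcH,
                       const float *tpl, int tplW, int tplH,
                       int x, int y)
{
    double sumF = 0, sumT = 0, n = (double)tplW * tplH;
    int ox = x - (tplW - 1), oy = y - (tplH - 1);   /* window top-left in source coords */
    for (int ty = 0; ty < tplH; ty++)
        for (int tx = 0; tx < tplW; tx++) {
            int sx = ox + tx, sy = oy + ty;
            double f = (sx >= 0 && sx < srcW && sy >= 0 && sy < srcH)
                         ? src[sx + sy * srcW] : 0.0;   /* zero padding */
            sumF += f;
            sumT += tpl[tx + ty * tplW];
        }
    double meanF = sumF / n, meanT = sumT / n;
    double num = 0, varF = 0, varT = 0;
    for (int ty = 0; ty < tplH; ty++)
        for (int tx = 0; tx < tplW; tx++) {
            int sx = ox + tx, sy = oy + ty;
            double f = (sx >= 0 && sx < srcW && sy >= 0 && sy < srcH)
                         ? src[sx + sy * srcW] : 0.0;
            double t = tpl[tx + ty * tplW];
            num  += (f - meanF) * (t - meanT);
            varF += (f - meanF) * (f - meanF);
            varT += (t - meanT) * (t - meanT);
        }
    double den = sqrt(varF * varT);
    return (den > 0.0) ? num / den : 0.0;   /* constant window: define 0/0 as 0 */
}

int main(void)
{
    const float src[5*4] = { 0,0,0,1,0,  0,0,0,1,0,  0,0,0,1,0,  0,0,0,1,0 };
    const float tpl[3*4] = { 0,1,0,  0,1,0,  0,1,0,  0,1,0 };
    for (int y = 0; y < 4+4-1; y++) {
        for (int x = 0; x < 5+3-1; x++)
            printf("%9.4f ", znccFull(src, 5, 4, tpl, 3, 4, x, y));
        printf("\n");
    }
    return 0;
}
-----------------------------------------------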
What's puzzling me is why the entries in the first three columns and the last column are not all 0. The three remaining columns hold the correct numbers.
The result from MATLAB is what I was expecting, and I'm wondering why IPP produces non-zero entries there. Is this a bug, or am I using the IPP function wrongly?
The samples from the manual work as expected. The results match what MATLAB computes.
FYI: I'm running 64-bit Arch Linux with an Intel i5 750 CPU and IPP 6.1.2.051.
Greetings and happy Easter,
michael
Hello Michael,
Thanks for letting us know. The function owner has checked this function. The problem comes from the limited accuracy of calculations in the 32f data type. We are planning to include the fix in the upcoming IPP 7.0 beta release.
By the way, the function does not require pDst to be initialized.
Thanks,
Chao
You are missing an important step:
Ipp32f pDst[7*7];
memset(pDst, 0, 7 * 7 * sizeof(Ipp32f));
In other words, you are using an uninitialized array, but otherwise your results are identical to MATLAB's.
Happy coding :)
You mentioned that this should be fixed in the IPP 7 beta release. I was wondering what the release schedule for IPP 7 is.
Furthermore, I'm wondering where the rounding issue comes from. I'd like to compute the full cross-correlation of two images larger than 500x500. I heard that the IPP cross-correlation function may use an FFT internally. So my question is: does the rounding issue come from the FFT? If not, I'm going to implement my own cross-correlation.
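A side note on the FFT question (a generic illustration only, not IPP's code): the unnormalized cross-correlation can be computed through the DFT as corr = IDFT(DFT(src) * conj(DFT(tpl))) after zero-padding both inputs to length srcLen + tplLen - 1. The 1-D sketch below uses a naive O(n^2) DFT just to show that identity on one row of the test data; the NormLevel normalization would be a separate step:
-----------------------------------------------
#include <stdio.h>
#include <complex.h>
#include <math.h>

/* Naive O(n^2) DFT; forward when inverse == 0, inverse (with 1/n) otherwise. */
static void dft(const double complex *in, double complex *out, int n, int inverse)
{
    const double pi = acos(-1.0);
    double sign = inverse ? +1.0 : -1.0;
    for (int k = 0; k < n; k++) {
        double complex acc = 0;
        for (int j = 0; j < n; j++)
            acc += in[j] * cexp(sign * 2.0 * pi * I * j * k / n);
        out[k] = inverse ? acc / n : acc;
    }
}

int main(void)
{
    enum { NA = 5, NB = 3, N = NA + NB - 1 };     /* full correlation length */
    const double a[NA] = { 0, 0, 0, 1, 0 };       /* one "image" row         */
    const double b[NB] = { 0, 1, 0 };             /* one "template" row      */

    double complex fa[N] = { 0 }, fb[N] = { 0 };  /* zero-padded copies      */
    double complex Fa[N], Fb[N], prod[N], corr[N];
    for (int i = 0; i < NA; i++) fa[i] = a[i];
    for (int i = 0; i < NB; i++) fb[i] = b[i];

    dft(fa, Fa, N, 0);
    dft(fb, Fb, N, 0);
    for (int k = 0; k < N; k++) prod[k] = Fa[k] * conj(Fb[k]);
    dft(prod, corr, N, 1);

    /* corr[m] holds the unnormalized linear cross-correlation: non-negative
       lags at m = 0..NA-1, negative lags wrapped to the end of the array.  */
    for (int k = 0; k < N; k++) printf("%8.4f ", creal(corr[k]));
    printf("\n");
    return 0;
}
-----------------------------------------------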
Many thanks
michael
Hello,
I don't see this fix in http://software.intel.com/en-us/articles/intel-ipp-70-library-bug-fixes/
Will it be included in the first non-beta 7.0 release?
Thank you
Ciro
Ciro,
This was fixed in the beta package. You can go to the beta registration page to download the IPP 7.0 beta package.
Thanks,
Chao
I am seeing an issue with the cross-correlation computation in IPP 7.0 update 5. The function is ippiCrossCorrValid_NormLevel_32f_C1R. However, my results are not bounded; that is, I sometimes get values like 22.07, 59.49, etc., while the results should be no greater than 1 in magnitude. I did try initializing the destination buffer, but the problem remains.
The image patches I am correlating are attached. In this particular case, the maximum correlation I am getting is 22.07.
Is this a known bug? Are there any workarounds?
Thanks!
H
You are missing an important step:
Ipp32f pDst[7*7];
memset(pDst, 0, 7 * 7 * sizeof(Ipp32f));
...
I confirm that it doesn't help.
Now, I'd like to understand a couple more things. It has just been confirmed that this is a bug.
But how is that possible?
The function ippiCrossCorrFull_NormLevel_32f_C1R is almost 10 years old! I just tested the
test case with IPP v3.0, and here is my output:
0.0000000149011612 0.0000000000000000 0.0000000000000000 -0.2132006883621216 0.4264013767242432 -0.2132006883621216 0.0000000000000000
0.0000000298023224 0.0000000000000000 0.0000000000000000 -0.3162277638912201 0.6324555277824402 -0.3162277638912201 0.0000000000000000
0.0000000000000000 -0.0000000298023224 -0.0000000149011612 -0.4082482457160950 0.8164964318275452 -0.4082482457160950 -0.0000000149011612
0.0000000596046448 0.0000000000000000 0.0000000000000000 -0.5000000000000000 1.0000000000000000 -0.5000000000000000 0.0000000000000000
0.0000000000000000 -0.0000000298023224 -0.0000000149011612 -0.4082482457160950 0.8164964318275452 -0.4082482457160950 -0.0000000149011612
0.0000000298023224 0.0000000000000000 0.0000000000000000 -0.3162277638912201 0.6324555277824402 -0.3162277638912201 0.0000000000000000
0.0000000000000000 0.0000000074505806 0.0000000000000000 -0.2132006883621216 0.4264013767242432 -0.2132006883621216 0.0000000000000000
Best regards,
Sergey
Hi Sergey,
When I look at the IPP v3.0 data, it looks fine. Single-precision data has only a 23-bit mantissa fraction: http://en.wikipedia.org/wiki/Single-precision_floating-point_format
Typically its relative accuracy is about 1e-7, so the following value is fine for single precision:
0.0000000149011612
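As a quick standalone check of those limits (plain C and <float.h>, nothing IPP-specific), the 32f type has a 24-bit mantissa and its epsilon is about 1.19e-7:
-----------------------------------------------
#include <stdio.h>
#include <float.h>

int main(void)
{
    /* Single precision: 24-bit mantissa (23 stored fraction bits), so one
       correctly rounded operation has a relative error of at most ~6e-8,
       and the spacing of floats near 1.0 is FLT_EPSILON ~ 1.19e-7.        */
    printf("FLT_MANT_DIG = %d\n", FLT_MANT_DIG);    /* prints 24           */
    printf("FLT_EPSILON  = %.9g\n", FLT_EPSILON);   /* ~1.19209290e-07     */
    return 0;
}
-----------------------------------------------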
But the problem reported in the original post does not look good. It creates incorrect data:
0.153093
Thanks,
Chao
Take a look at all non-zero values in the "left zero region". Even if the non-zero values do not exceed IPP's epsilon, there is a strange relation between them. Here is the 1st column, for example:
0.0000000149011612 - let's say this is A
0.0000000298023224 - A * 2
0.0000000000000000
0.0000000596046448 - A * 4
0.0000000000000000
0.0000000298023224 - A * 2
0.0000000000000000
IPP's epsilon for a single-precision value is defined as follows:
...
#define IPP_EPS_32F 1.192092890e-07f
...
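Incidentally, those leftover values are exact single-bit numbers just below the quoted epsilon, which a small standalone check (plain C, not IPP) makes visible:
-----------------------------------------------
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* The non-zero "leftover" values in the IPP v3.0 output are single set
       bits: A = 2^-26, A*2 = 2^-25, A*4 = 2^-24.                           */
    printf("2^-26 = %.16f\n", ldexp(1.0, -26));   /* 0.0000000149011612 */
    printf("2^-25 = %.16f\n", ldexp(1.0, -25));   /* 0.0000000298023224 */
    printf("2^-24 = %.16f\n", ldexp(1.0, -24));   /* 0.0000000596046448 */
    printf("2^-23 = %.10g\n", ldexp(1.0, -23));   /* ~1.192092896e-07,
                                                     compare with IPP_EPS_32F above */
    return 0;
}
-----------------------------------------------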
Best regards,
Sergey
Hello Sergey,
For the relation in the 1st column, the values were most likely caused by differences in the last bit. In binary these are single-bit numbers:
0.0000000149011612 --> 2^-26 (a single mantissa bit set)
0.0000000298023224 --> 2^-25
.....
As discussed before, single-precision floating point has limited precision and may produce small differences due to rounding error. For complex computations such errors may also accumulate. The following article has some related discussion:
http://software.intel.com/en-us/articles/getting-reproducible-results-with-intel-mkl/
If high precision is important, double-precision floating point may be a better choice for such computations.
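To illustrate the accumulation point with a generic example (unrelated to IPP's internals), repeatedly adding a value that is not exactly representable drifts much further in float than in double:
-----------------------------------------------
#include <stdio.h>

int main(void)
{
    /* 0.1 is not exactly representable in binary; every addition rounds the
       running sum to 24 bits (float) or 53 bits (double), so the float
       accumulator drifts far from the exact result of 1e6.                 */
    float  sf = 0.0f;
    double sd = 0.0;
    for (int i = 0; i < 10000000; i++) {
        sf += 0.1f;
        sd += 0.1;
    }
    printf("float  sum: %f (exact would be 1000000)\n", sf);
    printf("double sum: %f\n", sd);
    return 0;
}
-----------------------------------------------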
Thanks,
Chao
the result according to MATLAB should be:
*******************************************
0 0 0 -0.2132 0.4264 -0.2132 0
0 0 0 -0.3162 0.6325 -0.3162 0
0 0 0 -0.4082 0.8165 -0.4082 0
0 0 0 -0.5000 1.0000 -0.5000 0
0 0 0 -0.4082 0.8165 -0.4082 0
0 0 0 -0.3162 0.6325 -0.3162 0
0 0 0 -0.2132 0.4264 -0.2132 0
*******************************************
...
Regarding MATLAB results:
1. It is not clear if a single-precision or a double-precision data type was used;
2. The results are rounded to 4 digits after the point, and it would be nice to see results with at least 16 digits after the point.
Best regards,
Sergey
Thanks,
Hari
It is not clear. You've submitted two bmp files (in Post #7), but you have not submitted a simple C/C++ test case to reproduce your problem.
Best regards,
Sergey
Take a look at the CrossCorr formula in the manual - it's quite complex. EPS_32f is the weight of the LSB (the least significant bit of the mantissa), so it is the accuracy of a single floating-point operation. Consider one fp multiplication: each floating-point number has a 24-bit mantissa, so the intermediate result has 48 bits that must be rounded back to 24 bits... Therefore there is no "strange" relation - it's just a rough reflection of the number of fp operations per output pixel. Also take into account that for rather large images a "frame" algorithm based on FFT is used ("frame" means that the order of the FFT depends on the template size, not on the image size).
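A tiny standalone example of that single-multiplication rounding (again generic, not IPP code): the exact product of two 24-bit mantissas needs up to 48 bits, and the part discarded when rounding back to float is exactly the kind of 2^-26-sized residue seen in the IPP v3.0 output above:
-----------------------------------------------
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* a = 1 + 2^-13 is exactly representable in float (13 fraction bits). */
    float  a  = 1.0f + (float)ldexp(1.0, -13);
    float  pf = a * a;                 /* product rounded back to a 24-bit mantissa */
    double pd = (double)a * (double)a; /* 48-bit product, exact in double           */

    printf("float  product : %.17g\n", (double)pf);
    printf("double product : %.17g\n", pd);
    printf("rounding error : %.17g (2^-26 = %.17g)\n",
           pd - (double)pf, ldexp(1.0, -26));
    return 0;
}
-----------------------------------------------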
Regards,
Igor
I've tested your image/template with IPP 7.0.5(6) - I don't see any correlation values greater than 1.0:
Intel Integrated Performance Primitives
CrossCorr test:
Library IppIP
CPU : p8
Name : ippip8_l.lib
Version : 7.0 build 205.85
Build date: Nov 25 2011
Image info: 59 x 59, number of channels = 1
-0.01 0.06 0.05 0.02 -0.01 -0.00 0.02 0.01 -0.02 -0.06 -0.04 0.02 0.01 -0.08 -0.09 -0.06 0.00 0.03 0.05 0.03 -0.02 -0.02 -0.01 -0.03 -0.07 -0.08 -0.04 0.04 0.08 0.04 -0.01 0.00 0.04 0.07 0.07
0.00 -0.01 -0.01 0.05 0.04 0.04 0.05 0.03 -0.01 -0.05 -0.05 0.04 0.03 -0.04 -0.07 -0.06 -0.01 -0.01 0.01 0.02 -0.00 0.01 0.00 -0.03 -0.07 -0.09 -0.04 0.04 -0.02 -0.03 -0.04 -0.01 0.02 0.04 0.03
0.00 0.00 0.00 0.08 0.06 0.04 0.03 0.01 -0.05 -0.06 -0.03 0.02 0.02 -0.04 -0.06 -0.02 0.00 0.00 0.00 -0.01 0.00 0.05 0.02 -0.03 -0.07 -0.06 -0.02 0.04 -0.02 -0.06 -0.05 0.01 0.02 0.04 0.03
0.00 0.00 0.00 0.12 0.07 0.05 0.05 0.00 -0.07 -0.08 -0.02 0.05 0.03 -0.02 -0.04 0.02 0.03 0.03 0.02 -0.03 -0.04 -0.05 -0.06 -0.07 -0.07 -0.06 -0.01 0.02 0.01 -0.05 -0.04 0.04 0.01 0.01 0.03
0.00 0.00 0.00 -0.01 0.05 0.06 0.06 0.02 -0.04 -0.07 -0.03 0.03 0.02 -0.01 -0.01 0.01 0.00 0.01 0.01 -0.00 0.00 -0.04 -0.08 -0.08 -0.07 -0.04 0.02 0.03 0.02 0.00 0.00 0.03 0.01 -0.05 -0.04
0.00 0.00 0.00 0.00 0.01 0.05 0.07 0.03 -0.03 -0.06 -0.05 -0.04 -0.03 -0.03 -0.00 0.03 0.00 -0.03 -0.01 0.06 0.10 0.03 -0.06 -0.10 -0.08 -0.05 0.00 -0.02 -0.05 -0.01 0.04 0.03 -0.02 -0.10 -0.09
0.00 0.00 0.00 0.00 0.02 0.05 0.05 0.02 -0.02 -0.06 -0.05 -0.05 -0.06 -0.07 -0.03 0.00 0.01 -0.03 -0.05 0.02 0.06 0.03 -0.04 -0.09 -0.07 -0.04 -0.02 -0.04 -0.04 -0.01 0.04 0.03 -0.00 -0.05 -0.04
0.00 0.00 0.00 0.00 0.03 0.07 0.07 0.04 -0.01 -0.05 -0.06 -0.01 -0.03 -0.08 -0.06 -0.04 -0.00 0.00 -0.01 0.02 0.04 0.03 -0.03 -0.07 -0.06 -0.04 -0.03 -0.04 -0.04 -0.01 0.02 0.01 -0.00 -0.00 0.02
0.00 0.00 0.00 0.00 0.04 0.05 0.06 0.03 0.00 -0.02 -0.04 -0.03 -0.05 -0.08 -0.07 -0.05 -0.05 -0.06 -0.05 -0.02 0.03 0.02 -0.02 -0.04 -0.03 -0.05 -0.06 -0.06 -0.06 -0.00 0.02 -0.01 -0.03 -0.03 0.00
0.00 0.00 0.00 0.00 0.05 0.06 0.05 0.01 -0.04 -0.05 -0.03 -0.02 -0.03 -0.06 -0.04 0.02 -0.01 -0.08 -0.08 -0.06 -0.01 0.02 0.03 0.08 0.06 0.00 -0.03 -0.03 -0.01 0.02 0.03 0.01 -0.04 -0.05 -0.01
0.00 0.00 0.00 0.00 0.06 0.08 0.06 -0.01 -0.05 -0.04 0.02 0.03 0.02 0.00 0.00 0.04 0.00 -0.08 -0.09 -0.07 -0.07 -0.06 -0.02 0.08 0.10 0.04 0.00 -0.02 -0.02 -0.04 -0.04 -0.05 -0.07 -0.06 -0.01
0.00 0.00 0.00 0.00 0.08 0.11 0.11 0.03 -0.04 -0.02 0.04 0.05 0.01 -0.01 0.03 0.06 0.06 -0.02 -0.05 -0.03 -0.05 -0.08 -0.06 -0.01 0.02 0.03 0.02 -0.01 -0.03 -0.04 -0.04 -0.04 -0.05 -0.02 0.02
0.00 0.00 0.00 0.00 0.12 0.15 0.18 0.15 0.03 0.00 0.01 -0.02 -0.04 -0.04 -0.00 0.03 0.02 -0.02 -0.04 -0.01 -0.03 -0.07 -0.06 -0.05 -0.05 0.02 0.03 -0.02 -0.04 -0.02 -0.02 -0.04 -0.04 -0.03 -0.01
0.00 0.00 0.00 0.00 -0.01 0.06 0.08 0.10 0.08 0.04 -0.00 -0.04 -0.09 -0.07 -0.03 0.03 0.04 0.01 -0.01 -0.01 -0.02 -0.07 -0.04 -0.02 -0.05 0.02 0.06 0.02 -0.04 -0.04 -0.01 -0.01 -0.02 -0.02 -0.01
0.00 0.00 0.00 0.00 0.00 -0.01 -0.01 0.05 0.11 0.12 0.07 -0.00 -0.07 -0.07 -0.03 -0.00 0.01 0.03 0.05 0.03 -0.04 -0.06 -0.04 -0.01 0.03 0.07 0.10 0.03 -0.05 -0.09 -0.08 -0.06 -0.03 -0.01 -0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 -0.01 0.07 0.12 0.13 0.06 -0.02 -0.03 -0.01 -0.03 -0.04 -0.02 0.02 0.04 0.01 -0.01 -0.01 -0.04 -0.00 0.05 0.06 0.01 -0.05 -0.08 -0.07 -0.05 -0.02 -0.03 -0.03
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.06 0.08 0.09 0.09 0.04 -0.01 -0.02 -0.02 -0.05 -0.07 -0.03 -0.01 -0.01 0.00 0.06 0.04 0.01 0.01 0.01 0.02 0.03 0.02 0.03 0.01 -0.02 -0.06 -0.08
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.12 0.11 0.05 0.05 0.02 -0.04 -0.06 -0.05 -0.06 -0.06 -0.03 -0.03 -0.05 -0.05 -0.03 0.00 0.02 0.03 0.05 0.06 0.07 0.06 0.05 0.03 -0.04 -0.06 -0.07
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 -0.01 0.09 0.08 0.05 0.05 0.00 -0.03 -0.04 -0.05 -0.06 -0.01 0.01 -0.03 -0.05 -0.04 -0.00 0.05 0.06 0.08 0.08 0.02 -0.02 0.00 0.01 0.00 -0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 -0.01 0.03 0.06 0.07 0.05 0.03 -0.02 -0.05 -0.06 -0.02 -0.00 -0.04 -0.05 -0.01 -0.01 -0.03 -0.05 -0.03 -0.01 -0.03 -0.05 0.00 0.01 -0.01 -0.01 0.01
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.05 0.07 0.05 0.06 0.02 -0.01 -0.05 -0.04 -0.03 -0.02 -0.05 -0.05 -0.01 -0.01 -0.04 -0.05 -0.06 -0.08 -0.07 -0.03 0.05 0.05 -0.00 -0.02 0.02
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.06 0.08 0.02 0.03 0.03 -0.02 -0.04 -0.03 0.00 -0.02 -0.04 -0.04 -0.04 -0.03 -0.03 -0.04 -0.05 -0.10 -0.12 -0.10 -0.02 -0.01 -0.04 -0.03 0.02
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.08 0.11 0.08 0.05 0.04 -0.02 -0.08 -0.06 -0.04 -0.03 -0.02 0.00 -0.02 -0.04 -0.01 -0.01 -0.05 -0.09 -0.12 -0.09 -0.02 -0.01 0.03 0.09 0.15
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.12 0.14 0.10 0.07 0.04 -0.01 -0.07 -0.09 -0.07 -0.06 -0.02 0.02 0.04 -0.02 -0.03 -0.02 -0.05 -0.09 -0.11 -0.10 -0.02 -0.01 0.01 0.04 0.09
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 -0.01 0.09 0.06 0.06 0.08 0.06 -0.00 -0.06 -0.07 -0.08 -0.05 -0.02 -0.00 -0.01 -0.02 0.03 0.02 -0.04 -0.09 -0.08 -0.05 -0.03 -0.02 -0.01 0.02
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 -0.01 0.02 0.07 0.08 0.09 0.08 0.06 0.03 -0.01 -0.03 -0.03 -0.01 -0.02 0.01 0.04 0.07 0.03 -0.03 -0.06 -0.07 -0.08 -0.11 -0.12 -0.09
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.04 0.05 0.05 0.01 0.01 0.04 0.04 0.03 -0.00 -0.02 -0.02 -0.05 -0.03 -0.02 0.02 0.02 0.00 -0.02 -0.04 -0.07 -0.09 -0.11 -0.11
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.05 0.06 0.04 -0.01 -0.06 -0.04 -0.01 0.02 0.00 -0.01 -0.02 -0.04 -0.07 -0.06 -0.05 -0.04 -0.04 -0.04 -0.02 -0.02 -0.05 -0.09 -0.11
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.06 0.07 0.05 -0.02 -0.06 -0.07 -0.03 0.02 0.02 -0.02 -0.04 -0.01 -0.04 -0.05 -0.03 -0.02 -0.02 -0.04 -0.03 -0.02 -0.02 -0.04 -0.07
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.08 0.14 0.11 -0.01 -0.07 -0.06 -0.04 0.02 0.05 0.03 -0.02 -0.01 -0.01 -0.01 0.00 0.01 -0.02 -0.04 -0.06 -0.08 -0.08 -0.06 -0.06
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.12 0.11 0.13 0.06 -0.01 -0.04 -0.04 0.01 0.04 0.01 -0.00 -0.00 -0.03 -0.03 0.01 0.03 0.01 -0.01 -0.03 -0.04 -0.05 -0.07 -0.05
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 -0.01 0.09 0.10 0.07 0.01 -0.02 -0.03 -0.02 0.03 0.02 -0.03 -0.01 -0.01 -0.04 0.00 0.05 0.03 0.02 0.00 -0.03 -0.06 -0.09 -0.09
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 -0.01 0.09 0.09 0.05 -0.01 -0.06 -0.05 -0.02 -0.00 -0.01 0.00 -0.00 -0.01 0.02 0.06 0.04 0.02 -0.00 -0.02 -0.06 -0.09 -0.06
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 -0.01 0.06 0.06 -0.00 -0.05 -0.04 -0.02 -0.01 0.01 -0.02 -0.01 -0.01 0.00 0.03 0.05 0.03 0.01 -0.00 -0.02 -0.04 -0.03
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.03 0.04 0.01 -0.02 -0.05 -0.06 -0.04 -0.03 -0.04 -0.05 -0.06 -0.01 0.02 0.02 0.02 0.04 0.04 0.02 0.00 0.01
Regards,
Igor
Regards,
Hari