I did some performance comparisons using the Intel C++ Compiler 9.1 for Windows.
ippsCopy_32s vs. memcpy: exactly the same speed.
ippsConvert_16s32f is slower than a standard C type cast (float) in a loop.
The ipps_zlib sample is slower than the standard zlib library.
Do you have an explanation for this?
What am I doing wrong? (I used Dynamic Linkage.)
What performance gains could be expected usually?
Thanks,
Martin
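For reference, a minimal sketch of the 16s->32f conversion comparison described above (N and NITER are illustrative values, not the exact test code; buffers are assumed to come from ippsMalloc, timing from ippGetCpuClocks):

#include <stdio.h>
#include <ipps.h>
#include <ippcore.h>

int main(void)
{
    const int N = 1000, NITER = 30000;      /* illustrative sizes */
    Ipp16s *pSrc = ippsMalloc_16s(N);
    Ipp32f *pDst = ippsMalloc_32f(N);
    Ipp64u t1, t2;
    int i, j;

    for (j = 0; j < N; ++j)
        pSrc[j] = (Ipp16s)j;                /* fill input with some data */

    t1 = ippGetCpuClocks();
    for (i = 0; i < NITER; ++i)
        ippsConvert_16s32f(pSrc, pDst, N);  /* IPP conversion */
    t2 = ippGetCpuClocks();
    printf("ippsConvert_16s32f: %llu clocks\n", (unsigned long long)(t2 - t1));

    t1 = ippGetCpuClocks();
    for (i = 0; i < NITER; ++i)
        for (j = 0; j < N; ++j)
            pDst[j] = (float)pSrc[j];       /* plain C cast loop */
    t2 = ippGetCpuClocks();
    printf("C cast loop:        %llu clocks\n", (unsigned long long)(t2 - t1));

    ippsFree(pSrc);
    ippsFree(pDst);
    return 0;
}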
Hi, could you tell us how you actually compared the performance? Did you use the Windows GetTickCount function or some other method?
Ipp64u t1, t2;
Ipp32s *pSrc[Nbuffers], *pDst[Nbuffers];
int i;
// ... (the Nbuffers source/destination buffers are allocated and filled here)
t1 = ippGetCpuClocks();
for (i = 0; i < Nbuffers; ++i)
    ippsCopy_32s(pSrc[i], pDst[i], N);
t2 = ippGetCpuClocks();
printf("Clocks for ippsCopy (scaled): %llu\n", (unsigned long long)((t2 - t1) / 100000));
t1 = ippGetCpuClocks();
for (i = 0; i < Nbuffers; ++i)
    memcpy(pDst[i], pSrc[i], N * sizeof(pSrc[0][0]));
t2 = ippGetCpuClocks();
printf("Clocks for memcpy (scaled):   %llu\n", (unsigned long long)((t2 - t1) / 100000));
If your buffers are small, then memcpy (which is inlined by the compiler) will be faster than a call into the DLL, won't it?
Vladimir
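A quick way to check the small-buffer point above would be to sweep the copy length and time both calls; a sketch (lengths and iteration count are illustrative):

#include <stdio.h>
#include <string.h>
#include <ipps.h>
#include <ippcore.h>

int main(void)
{
    const int sizes[] = { 16, 64, 256, 1024, 16384 };    /* illustrative lengths */
    const int ITERS = 100000;
    int s, i;

    for (s = 0; s < (int)(sizeof(sizes) / sizeof(sizes[0])); ++s) {
        int n = sizes[s];
        Ipp32s *pSrc = ippsMalloc_32s(n);
        Ipp32s *pDst = ippsMalloc_32s(n);
        Ipp64u t1, t2, ippClocks, memClocks;

        ippsZero_32s(pSrc, n);                            /* defined input data */

        t1 = ippGetCpuClocks();
        for (i = 0; i < ITERS; ++i)
            ippsCopy_32s(pSrc, pDst, n);                  /* call into the IPP DLL */
        t2 = ippGetCpuClocks();
        ippClocks = t2 - t1;

        t1 = ippGetCpuClocks();
        for (i = 0; i < ITERS; ++i)
            memcpy(pDst, pSrc, n * sizeof(Ipp32s));       /* typically inlined */
        t2 = ippGetCpuClocks();
        memClocks = t2 - t1;

        printf("n=%6d  ippsCopy=%12llu  memcpy=%12llu clocks\n",
               n, (unsigned long long)ippClocks, (unsigned long long)memClocks);

        ippsFree(pSrc);
        ippsFree(pDst);
    }
    return 0;
}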
Unfortunately, I cannot reproduce this.
The buffer sizes I used are:
N=1000; (this is the array length)
Nbuffers=30000; (so the test was repeated in a loop 30000 times to provide reliable timings).
I tried several other values for N and Nbuffers, but memcpy and ippsCopy are always equally fast.
Cheers, Martin
If you compile your code with the Intel C compiler, then for memcpy/ippsCopy you are comparing essentially the same code, because the Intel compiler uses the same optimized kernel as ippsCopy.
Regarding your note on zlib: could you please provide more details about what you actually compare, under what conditions, which IPP version, and what data you work with?
Our testing shows performance benefits for the IPP-optimized zlib over the standard zlib; that is why it is stated in the IPP book. You may have hit some specific conditions where this is not the case, and we would like to understand what actually happens in your test.
Regards,
Vladimir
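To make the zlib comparison concrete, a minimal harness along these lines (data size, data pattern, and compression level are assumptions, not the poster's actual test) could be linked once against the standard zlib and once against the IPP zlib sample, and the reported timings compared:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <zlib.h>

int main(void)
{
    const uLong srcLen = (uLong)1 << 20;           /* 1 MB of test data (assumed size) */
    const int ITERS = 50;
    Bytef *src = (Bytef *)malloc(srcLen);
    uLong boundLen = compressBound(srcLen);
    Bytef *dst = (Bytef *)malloc(boundLen);
    uLong i;
    int iter;
    clock_t t0, t1;

    for (i = 0; i < srcLen; ++i)
        src[i] = (Bytef)(i % 251);                 /* mildly compressible pattern */

    t0 = clock();
    for (iter = 0; iter < ITERS; ++iter) {
        uLongf dstLen = boundLen;
        compress2(dst, &dstLen, src, srcLen, Z_DEFAULT_COMPRESSION);
    }
    t1 = clock();
    printf("%d x compress2 of %lu bytes: %.3f s\n",
           ITERS, (unsigned long)srcLen, (double)(t1 - t0) / CLOCKS_PER_SEC);

    free(src);
    free(dst);
    return 0;
}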
I found that for ippAffine, the release version (IPP 5.2) is slower than the standard implementation.
But the debug version is faster than the standard implementation when run in debug mode.
What might be the problem?
Can you please tell me what I have to do to make the release version faster?
I am using dynamic linking. My processor is a P4 3 GHz, and ippi loads the ippit7 DLL when I run my application.
Hello,
there is no debug version of IPP (it was never released). If you see a performance difference between the debug and release builds of your application, you need to check what is wrong in your application, because in both cases you are using the release IPP binaries.
Regards,
Vladimir
I found this post because I searched for "ippsConvert_16s32f is slower".
I found the same using ippIP AVX (e9) 2020.0.1.
What is interesting is that I found ippsConvert_16s32f to be 3-4 times faster if m7_ippsConvert_16s32f is used instead of e9_ippsConvert_16s32f.
I set this by calling ippSetCpuFeatures(PX_FM). Other IPP procedures perform faster when set to E9, as expected, but it seems that the M7 short->float conversion has some advantage.
The CPU I am testing on is an AMD A10-7800 (Radeon R7).
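For reference, a sketch of restricting the dispatched code path before timing the conversion, along the lines of the post above (buffer length and iteration count are illustrative; PX_FM is the feature mask mentioned above, and whether it actually ends up selecting the m7 kernel depends on the CPU and IPP version):

#include <stdio.h>
#include <ipps.h>
#include <ippcore.h>

static Ipp64u timeConvert(const Ipp16s *pSrc, Ipp32f *pDst, int n, int iters)
{
    int i;
    Ipp64u t1 = ippGetCpuClocks();
    for (i = 0; i < iters; ++i)
        ippsConvert_16s32f(pSrc, pDst, n);
    return ippGetCpuClocks() - t1;
}

int main(void)
{
    const int N = 4096, ITERS = 10000;          /* illustrative sizes */
    Ipp16s *pSrc = ippsMalloc_16s(N);
    Ipp32f *pDst = ippsMalloc_32f(N);
    int j;

    for (j = 0; j < N; ++j)
        pSrc[j] = (Ipp16s)j;

    /* default (automatic) dispatch first */
    printf("default dispatch: %llu clocks\n",
           (unsigned long long)timeConvert(pSrc, pDst, N, ITERS));

    /* restrict the dispatched code path; PX_FM is the mask from the post above */
    ippSetCpuFeatures(PX_FM);
    printf("PX_FM dispatch:   %llu clocks\n",
           (unsigned long long)timeConvert(pSrc, pDst, N, ITERS));

    ippsFree(pSrc);
    ippsFree(pDst);
    return 0;
}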
