Intel® IPP 2018 Beta is now available as part of the Parallel Studio XE 2018 Beta.
Check the Join the Intel® Parallel Studio XE 2018 Beta program post to learn how to join the Beta program and provide your feedback.
What's New in Intel® IPP 2018 Beta Update 1:
- Added new functions to support LZ4 data compression and decompression.
- Improved compatibility of the patch files for the GraphicsMagick source in the grayscale functions.
- Improved threading scalability of the patch files for the GraphicsMagick source in the Gaussian functions.
- Added new color conversion functions ippiDemosaicVNG that support the demosaicing algorithm with VNG interpolation.
- The Integration Wrappers APIs are now part of the Intel® IPP packages.
- Fixed some known problems in the patch files for the supported zlib versions.
What's New in Intel® IPP 2018 Beta:
- Introduced the patch files for the GraphicsMagick source to provide drop-in optimization with the Intel® IPP functions:
- The patches support GraphicsMagick version 1.3.25, and provide optimization for the following GraphicsMagick APIs: ResizeImage, ScaleImage, GaussianBlurImage, FlipImage, and FlopImage.
- The patches improve API performance by up to 4x, depending on the functionality, input parameters, and processors.
- Removed the cryptography code dependency on the main package. The cryptography functions are now provided as standalone packages that do not require installation of the main Intel® IPP packages.
- Improved performance of LZO data compression functions on Intel® Advanced Vector Extensions 2 (Intel® AVX2) and Intel® Streaming SIMD Extensions 4.2 (Intel® SSE4.2).
- Added the 64-bit data length support for Canny edge detection functions (ippiCanny_32f8u_C1R_L).
- Added the Elliptic Curves key generation and Elliptic Curves based Diffie-Hellman shared secret functionality.
- Added the Elliptic Curves sign generation and verification functionalities for the DSA, NR, and SM2 algorithms.
- Extended optimization for the Intel® Advanced Vector Extensions 512 (Intel® AVX-512) and Intel® Advanced Vector Extensions 2 (Intel® AVX2) instruction sets.
- Removed support for Intel® Pentium® III processor. The minimal supported instruction set is Intel® Streaming SIMD Extensions 2 (Intel® SSE2).
- Removed support for Intel® Xeon Phi™ Coprocessor x100.
Check the Intel® IPP 2018 Beta release notes for more information.
I'm using them myself.
From this release:
"Threading Notes: Though Intel® IPP threaded libraries are not installed by default, these threaded libraries are available by the custom installation, so the code written with these libraries will still work as before. However, the multi-threaded libraries are deprecated and moving to external threading is recommended. Your feedback on this is welcome."
So, my feedback is basically as follows: I'm using IPP from a scripting language where I declare library calls to IPP, and in which I cannot do any external threading myself. That's why I need them.
As I wrote before, the reasoning as to why internal threading is deprecated now doesn't make sense. If the current implementation of internal threading is lacking, it should be fixed, not removed.
Also, if I read this mailing list, the most asked question is: my favorite function has been removed from IPP, please help me. I fail to understand such a policy. Is software there for the sake of change? Or to help developers? One often gets the impression that our – once intelligent – profession is all about fashion nowadays – the latest summer fashion dictates that we do things differently this season, we are not allowed to wear pink anymore, we must wear purple now.
Linus Torvalds believes the technology industry's celebration of innovation is "smug, self-congratulatory, and self-serving" (https://www.theregister.co.uk/2017/02/15/think_different_shut_up_and_work_harder_says_linus_torvalds...). He thinks that real innovation is paying attention to all the tiny little details and getting the work done. I agree.
Dilbert agrees too http://dilbert.com/strip/1998-07-10
Adriaan van Os
2 Feature Requests:
- Please revive the Multi Threaded library.
There are use cases where application-level MT doesn't make sense.
- Add a "Multi-Threaded Template" for:
- Border filters, as in the threading example for Gaussian blur. This could be either a function that takes a function pointer as input, or a new MT library that is basically a wrapper around the current functions.
- Pixel-wise filters (as above, just with no need for borders).
In this case the overhead must be minimal.
Just a question, what is the new "Platform Aware" library?
What are the benefits? What has changed?
Hi Bruno Martinez, thanks for the feedback. I see both uint8_t and Ipp8u are defined as "unsigned char", so if the code uses uint8_t, I expect it can use the Ipp8u functions. Besides the consistent naming for the type uint8_t, is there any other reason this data type would benefit your code? If it is changed now, it will impact existing code.
Yes, mixing uint8_t and Ipp8u works. I already do that. I think you should leave the typedefs in place to keep old code compiling, but change the function prototypes and documentation to the standard types. The advantage is that int32_t could be long while Ipp32s could be int (under Windows, where long and int are both 32 bits), and then you cannot mix them so freely.
The Integration Wrappers were a technical preview package, provided as a standalone download (not part of the IPP main package). Now, based on users' feedback, we have added them to the IPP main package and deliver them as part of the product.
What is the Integration Wrapper? What does it do?
Any chance for a guide on how to efficiently utilize the functions in multi-threaded scenarios?
With some test cases (Gaussian Blur, General Convolution, Median Filter, etc...).
IWs wrap IPP functions to provide a more user-friendly interface for both C and C++. They are also designed specifically to improve the tiling and threading experience by removing complex border and buffer manipulations.
Here is dev guide for IWs, it describes API and features: https://software.intel.com/en-us/ippiw-dev-guide-and-reference
In the IPP 2018 Beta Update, IW headers are located in IPPROOT/include/iw and IPPROOT/include/iw++. IW sources and examples can be found in IPPROOT/components/<archive>/interfaces/iw.
This is great.
Yet I wish there were more guides and example cases online that people can discuss and comment on.
There are performance gains left on the table, and we, the users of IPP and the Intel Compiler, want to know how to pick them up.