Intel® IPP 2017 Update 3 is now available. This release:
Improved ZLIB compression performance and added new functions in ZLIB to support user-defined Huffman tables.
Fixed some known problems in Intel® IPP Cryptography functions.
Added support for Microsoft Visual Studio* 2017 on Windows*.
Added support for installation from Conda* repositories.
What's New in Intel® IPP 2017 Update 2:
Some issues were identified in the Intel® IPP Cryptography XTS-AES, GFp, and HMAC functions. These issues will be fixed in future versions of Intel® IPP. See the known problems in Intel® IPP Cryptography for more information.
What's New in Intel® IPP 2017 Update 1:
Added support for the Intel® Xeon Phi™ processor x200 (formerly Knights Landing) leveraged boot mode on Windows*
Added the following new functions in the cryptography domain:
Fixed a number of internal and external defects. See the Intel® IPP 2017 bug fixes list for more information.
What's New in Intel® IPP 2017:
Added Intel® IPP Platform-Aware APIs, which support 64-bit parameters for image dimensions and vector lengths on 64-bit platforms and 64-bit operating systems.
Introduced new Integration Wrappers APIs for some image processing and computer vision functions as a technical preview. The wrappers provide easy-to-use C and C++ APIs for Intel® IPP functions, and they are available as a separate download in the form of source and pre-built binaries.
Performance and Optimization:
See the Intel® IPP release notes for more information.
After running "install.sh" from l_ipp_2017.0.098.tgz, I get:
The IA-32 architecture host installation is no longer supported.
The product cannot be installed on this system.
Please refer to product documentation for more information.
Thanks for checking. As I understand it, you are installing on a 32-bit Linux host, right?
Installation on IA-32 hosts is no longer supported by Intel® Parallel Studio XE or any of its components:
The Intel IPP 32-bit libraries continue to be provided. If you install the product on a 64-bit Linux host, it will include both the 32-bit and the 64-bit libraries.
The multi-threaded libraries are available through the "custom" installation option: when installing Intel IPP, choose the "custom" option and you will be able to select the threaded libraries for the different architectures.
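As a sketch of what that custom installation leaves on disk and how to link against it, the lines below assume a typical Intel IPP 2017 Linux layout, where the internally threaded libraries land in a "threaded" subdirectory with the same library names as the single-threaded ones. The exact `$IPPROOT` path and directory names are assumptions; verify them against your own install before copying.

```shell
# Single-threaded (default) linkage:
g++ app.cpp -I"$IPPROOT/include" \
    -L"$IPPROOT/lib/intel64" -lippi -lipps -lippcore

# Internally threaded linkage: same library names, different directory,
# plus the OpenMP runtime the threaded libraries depend on:
g++ app.cpp -I"$IPPROOT/include" \
    -L"$IPPROOT/lib/intel64/threaded" -lippi -lipps -lippcore -liomp5
```

Note that `-liomp5` (the Intel OpenMP runtime) must be resolvable from your library search path; where it lives depends on how the suite was installed.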
Hi, I'm new to using IPP. I noticed that when using ippiDivC_32f_C1IR, I get a residual fraction when I expect an integer.
Let a be a 1 x 4 matrix.
Then I set each element of a to 7 * 65535.
Then I use ippiDivC_32f_C1IR to divide matrix a by 7. The result I get is 65535.0039.
Any value less than 65535 seems to have a trailing 0.0039 in the result, while values above 65535 come out as integers.
Why is this? Am I using the wrong function to do my matrix division?
The "What's New in Intel® IPP 2017" part in the post are the new features introduced in the 2017 release. These are not available in the IPP 9.0, and its update releases.
Can you post the test code that shows the problem? Also, what are the steps you used to find the problem? That will help us check the code further.
The trailing decimal affects thresholding for image layers and filters.
Here is the C++ test code I'm stepping through to see the problem (cleaned up so it compiles; the forum stripped the array subscripts from my original post):

#include <ipp.h>

int main() {
    float a[4], b[4], c[4];
    IppiSize roi;
    IppStatus aa = ippStsNoErr;
    roi.height = 2;
    roi.width = 2;
    // Fill each 2 x 2 image with a constant value
    for (int i = 0; i < 2 * 2; ++i) {
        a[i] = 64535.0f * 7;
        b[i] = 65530.0f * 7;
        c[i] = 75536.0f * 7;
    }
    aa = ippiMulC_32f_C1IR(7, a, 2 * sizeof(float), roi); // a *= 7 in place
    aa = ippiDivC_32f_C1IR(7, a, 2 * sizeof(float), roi); // a /= 7 in place
    aa = ippiDivC_32f_C1IR(7, b, 2 * sizeof(float), roi); // b /= 7 in place
    aa = ippiDivC_32f_C1IR(7, c, 2 * sizeof(float), roi); // c /= 7 in place
    return aa;
}
By the way, the ThreadedFunctionsList.txt file in the 2017.0.102 release is unchanged from the 9.0 release file. The top line of the file says
"Threaded Functions' list in IPP 9.0 Gold (in alphabetical order"
So, either this is the wrong file or the top line wasn't updated.
Adriaan van Os
We checked the code further. There is actually no problem here. The code uses single-precision floating point, which carries 23 bits in the fraction part of the significand:
so the relative accuracy is typically 1e-6 to 1e-7. The residual in this code is at about the 1e-7 relative level, which is an expected result.
Is there access to the old multi-threaded versions of the IPP functions in this release?
We need access to the fastest multi-threaded Gaussian blur, and we haven't updated (from 2015) because you stated you are deprecating the multi-threaded functions.
I really think you should offer multi-threaded versions of your functions, not only an API for threading at a higher level.
Many of your clients need a single function at a time to be as fast as possible.
We talked about it here;
The tiling approach doesn't work well in many cases (when spatial operations are stacked one on top of the other, tiling duplicates the border work at a scale that makes it an unreasonable solution).
Please give us both options and let us decide what to use.