Intel® Integrated Performance Primitives
Deliberate problems developing high-performance vision, signal, security, and storage applications.

Check out the bzip2 functions from Intel IPP 5.2 beta

Ying_S_Intel
Employee

Dear Customers,
I'd like to ask whether anyone here has had a chance to check out the new bzip2 functions we have developed.

If you are familiar with the bzip2 algorithm, you know it is well regarded for its high compression quality. With the Intel IPP 5.2 beta, we introduced 12 new bzip2 APIs, listed below:

bzip2:
Main primitives:
ippsDecodeHuff_BZ2_8u16u
ippsUnpackHuffContext_BZ2_8u16u
ippsEncodeHuff_BZ2_16u8u
ippsPackHuffContext_BZ2_16u8u

Service primitives:
ippsDecodeHuffGetSize_BZ2_8u16u
ippsDecodeHuffInitAlloc_BZ2_8u16u
ippsDecodeHuffInit_BZ2_8u16u
ippsDecodeHuffFree_BZ2_8u16u
ippsEncodeHuffGetSize_BZ2_16u8u
ippsEncodeHuffInitAlloc_BZ2_16u8u
ippsEncodeHuffInit_BZ2_16u8u
ippsEncodeHuffFree_BZ2_16u8u
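For readers less familiar with this stage of bzip2: the Encode/DecodeHuff primitives above implement Huffman entropy coding of the symbol stream produced by the earlier RLE/BWT/MTF stages. As a purely illustrative sketch of what Huffman coding does (plain Python, not the IPP API, and deliberately simpler than bzip2's multi-table scheme with selectors):

```python
import heapq
from collections import Counter

def build_code_table(data: bytes) -> dict:
    """Build a Huffman code table (symbol -> bit string) from symbol frequencies."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate case: only one distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, unique tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

def huff_encode(data: bytes, table: dict) -> str:
    return "".join(table[b] for b in data)

def huff_decode(bits: str, table: dict) -> bytes:
    # Greedy matching works because Huffman codes are prefix-free.
    rev = {code: sym for sym, code in table.items()}
    out, cur = [], ""
    for bit in bits:
        cur += bit
        if cur in rev:
            out.append(rev[cur])
            cur = ""
    return bytes(out)

data = b"abracadabra"
table = build_code_table(data)
bits = huff_encode(data, table)
```

Frequent symbols get short codes, so the bit stream is shorter than 8 bits per input byte; the IPP primitives do the same job with optimized, table-driven code paths.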

Using these Intel IPP bzip2 functions, you can follow the chain of building blocks defined by the bzip2 algorithm and create a solution for lossless data compression:

libbzip2 function -> corresponding Intel IPP bzip2 functions:

BZ2_bzCompressInit:
ippsRLEGetSize_BZ2_8u, ippsEncodeRLEInit_BZ2_8u, ippsEncodeRLEInitAlloc_BZ2_8u,
ippsBWTFwdGetSize_8u, ippsMTFGetSize_8u, ippsMTFInit_8u, ippsMTFInitAlloc_8u,
ippsEncodeHuffGetSize_BZ2_16u8u, ippsEncodeHuffInit_BZ2_16u8u, ippsEncodeHuffInitAlloc_BZ2_16u8u

BZ2_bzCompress:
ippsEncodeRLE_BZ2_8u, ippsEncodeRLEFlush_BZ2_8u, ippsRLEGetInUseTable_8u,
ippsReduceDictionary_8u_I, ippsBWTFwd_8u, ippsMTFFwd_8u,
ippsEncodeZ1Z2_BZ2_8u16u, ippsPackHuffContext_BZ2_16u8u, ippsEncodeHuff_BZ2_16u8u,
ippsCRC32_BZ2_8u

BZ2_bzCompressEnd:
ippsRLEFree_BZ2_8u, ippsEncodeHuffFree_BZ2_16u8u, ippsMTFFree_8u

BZ2_bzDecompressInit:
ippsBWTInvGetSize_8u, ippsMTFGetSize_8u, ippsMTFInit_8u, ippsMTFInitAlloc_8u,
ippsDecodeHuffGetSize_BZ2_8u16u, ippsDecodeHuffInit_BZ2_8u16u, ippsDecodeHuffInitAlloc_BZ2_8u16u

BZ2_bzDecompress:
ippsDecodeRLE_BZ2_8u, ippsBWTInv_8u, ippsMTFInv_8u,
ippsDecodeZ1Z2_BZ2_16u8u, ippsExpandDictionary_8u_I,
ippsUnpackHuffContext_BZ2_8u16u, ippsDecodeHuff_BZ2_8u16u, ippsCRC32_BZ2_8u

BZ2_bzDecompressEnd:
ippsMTFFree_8u, ippsDecodeHuffFree_BZ2_8u16u
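On the libbzip2 side of this mapping, Python's standard bz2 module wraps the reference library, so the Init/Compress/End lifecycle can be tried without writing any C. A quick round-trip sketch (this exercises the reference bzip2, not the IPP primitives):

```python
import bz2

# BZ2_bzCompressInit: create a compressor state (block size 9, i.e. 900 KB, the default).
comp = bz2.BZ2Compressor(9)

# BZ2_bzCompress: feed input; output may be buffered until a block is complete.
payload = b"to be or not to be, " * 1000
compressed = comp.compress(payload)

# BZ2_bzCompressEnd / flush: emit any remaining buffered blocks and the stream trailer.
compressed += comp.flush()

# Decompression mirrors BZ2_bzDecompressInit / BZ2_bzDecompress / BZ2_bzDecompressEnd.
decomp = bz2.BZ2Decompressor()
restored = decomp.decompress(compressed)
```

An IPP-based bzip2 would follow the same three-phase shape, with each phase composed from the primitives listed in the table above.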


Please try these out if you are interested, and let us know your feedback on these functions.
We also plan to integrate these bzip2 features into our data compression sample in a future release, so stay tuned.

For the Intel IPP 5.2 beta program, please visit http://www.intel.com/software/products/ipp/beta.htm

Hope it helps.
Thanks,
Ying
Intel Corp.

mb79
Beginner
This is a very interesting feature!
Are there any benchmarks available to see what performance acceleration could be reached?

Marco

Vladimir_Dudnik
Employee

Hi Marco,

our expert has provided the following brief info on that:

Single-threaded ipp_bzip2 performance gain is ~1.3x for both compression and decompression on the Calgary Corpus.

Multi-threaded ipp_bzip2 performance gain on the Large Corpus is:

~2.45x for compression and ~1.45x for decompression on a Core 2 Duo;

~3.2x for compression and ~1.65x for decompression on a 2x Core 2 Quad system.

All performance numbers were obtained at the default compression level 9.
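If you want a comparable single-threaded baseline on your own hardware, here is a small timing sketch against the reference bzip2, via Python's stdlib bz2, at level 9 (the default level mentioned above). It measures the reference implementation only, so the IPP gains quoted here will not appear in its numbers; the data set is also synthetic, not the Calgary or Large Corpus.

```python
import bz2
import time

# Synthetic, highly repetitive input; real corpora will compress less well.
data = b"The quick brown fox jumps over the lazy dog. " * 20000

t0 = time.perf_counter()
compressed = bz2.compress(data, compresslevel=9)  # level 9 = 900 KB blocks
t_comp = time.perf_counter() - t0

t0 = time.perf_counter()
restored = bz2.decompress(compressed)
t_dec = time.perf_counter() - t0

mb = len(data) / 1e6
print(f"compress:   {mb / t_comp:.1f} MB/s")
print(f"decompress: {mb / t_dec:.1f} MB/s")
print(f"ratio:      {len(data) / len(compressed):.1f}x")
```

Running the same harness against an IPP-based bzip2 build would give an apples-to-apples speedup figure for your machine.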

Regards,
Vladimir

mb79
Beginner
Thank you, Vladimir, for these values! I will discuss using IPP with my colleagues now.

Marco
Vladimir_Dudnik
Employee

Hi Marco,

please note that these numbers are for the latest IPP version, which is currently 5.3. We are now finalizing the IPP 5.3 update 2 release. The next version, the IPP 6.0 beta, will become available in several weeks.

Regards,
Vladimir
