Intel® Integrated Performance Primitives
Deliberate problems developing high-performance vision, signal, security, and storage applications.

Building Intel® IPP 7.1 Samples --> libippsc.a not found

Basawaraj_J_
Beginner

Hi,

I'm building the Intel® IPP 7.1 samples, following the steps from:

https://software.intel.com/en-us/articles/intel-ipp-71-samples-build/

 

After running:

perl build.pl --cmake=speech-codecs,intel64,make,d,mt,debug --build --clean

 

I get:

/usr/bin/ld: cannot find -lippsc
collect2: error: ld returned 1 exit status
gmake[2]: *** [__bin/debug/usc_ec] Error 1
gmake[2]: Leaving directory `/home/hcl/IPP_downloads/ipp-samples.7.1.1.013/__cmake/speech-codecs.intel64.make.d.st.debug'
gmake[1]: *** [application/usc_ec/CMakeFiles/usc_ec.dir/all] Error 2
gmake[1]: Leaving directory `/home/hcl/IPP_downloads/ipp-samples.7.1.1.013/__cmake/speech-codecs.intel64.make.d.st.debug'
gmake: *** [all] Error 2

[ speech-codecs.intel64.make.d.st.debug                     State: FAIL ]

 

My setup:

CentOS 7, a Broadwell board, composer_xe_2015.3.187, parallel_studio_xe_2015, system_studio_2015.3.055, and the speech codecs obtained from:

https://int-software.intel.com/en-us/articles/code-samples-for-intel-integrated-performance-primitives-intel-ipp-library-71

 

(I know the speech codecs are a deprecated component, but I'd still appreciate support for them.)

Let me know if you need more info.

Thank you,

 

7 Replies
Sergey_K_Intel
Employee

Hi Basawaraj!

You need to re-install (modify the installation of) your Composer XE and select the "Customize" button on the IPP page. There you need to add the non-default libraries (i.e. ippsc, the speech coding library). It is not installed by default.
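Once the reinstall finishes, a quick way to confirm the library landed on disk before re-running the build is to search the Composer XE install tree (the path below assumes the default /opt/intel location for composer_xe_2015.3.187; adjust it to your install root):

find /opt/intel/composer_xe_2015.3.187 -name "libippsc*"

If the ippsc libraries show up under ipp/lib/intel64, the -lippsc link error should go away.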

Basawaraj_J_
Beginner

Thank you for the reply.

I re-installed and got it working.

Now, could you please tell me why the command:

=====
./usc_speech_codec -format IPP_G711U 16bit_8kHz_PCM.wav outputfile.wav
=====

generates a file other than a mu-law encoded one?

(I see that the command "file outputfile.wav" reports "RIFF (little-endian) data, WAVE audio, mono 8000 Hz" instead of "RIFF (little-endian) data, WAVE audio, ITU G.711 mu-law, mono 8000 Hz"...)

How can I use ./usc_speech_codec as an encoder, decoder, and transcoder? Is that possible? Is there another application that works as an encoder, decoder, and transcoder? Where is the documentation for either case?

I'm trying to understand how the application works. This is my first hands-on experience with IPP.

Thank you,

Basawaraj.

Basawaraj_J_
Beginner

Also, could you please tell me the meaning of the "-timing[CodecName]" option? Is it just the time taken to execute the "./usc_speech_codec" command?

Are there any other options to enable the encoder (or decoder) to process concurrent streams?

Thank you,

Basawaraj

 

Ying_H_Intel
Employee

Hi Basawaraj,

1) The timing is just the codec execution time; it does not include other time such as I/O.

You can read the code in timing.c to see exactly how the time and performance results are computed:

            /* Seconds of speech processed = bytes processed / bytes per sample / sample rate */
            spSeconds = (duration/(Ipp32f)(uscENCParams.pInfo->params.pcmType.bitPerSample>>3))/
                        (Ipp32f)uscENCParams.pInfo->params.pcmType.sample_frequency;

            if(f_log) {
               fprintf(f_log,"Processed %g sec of speech\n",spSeconds);
            } else {
               printf("Processed %g sec of speech\n",spSeconds);
            }
            /* Per-channel load: codec CPU time divided by seconds of speech,
               scaled by the CPU frequency (MHz) */
            if(clParams->puttologfile) {
               fprintf(f_log,"%4.2f MHz per channel\n",(dTimeENC/spSeconds)*lCPUFreq);
               fprintf(f_log,"%4.2f MHz per channel\n",(dTimeDEC/spSeconds)*lCPUFreq);
            } else {
               printf("%4.2f MHz per channel\n",(dTimeENC/spSeconds)*lCPUFreq);
               printf("%4.2f MHz per channel\n",(dTimeDEC/spSeconds)*lCPUFreq);
            }
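As an illustrative example (the numbers are made up, not from the sample output): if dTimeENC is 0.05 seconds of CPU time spent encoding 10 seconds of speech on a 2400 MHz CPU, the reported encoder load would be (0.05 / 10) * 2400 = 12 MHz per channel.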

 

2) You can use ./usc_speech_codec as either an encoder or a decoder, but not as a transcoder.

You may read the readme.html or speech-codecs.html file, which explains how the codecs work.

 

Running the Sample

To run the sample for encode or for decode, use the following command line:

usc_speech_rtp_codec.exe [options] <infile> <outfile>

Depending on whether the input file <infile> is in WAVE or RTPDump format, either an encode or a decode operation will be performed, respectively. For the encode operation the output file <outfile> is stored in RTPDump format; for the decode operation the output file is stored in WAVE format.

Here is one encoder command for your reference:

 ./usc_speech_codec -format IPP_GSMAMR -r7400 common.wav common.amr  

 

but please note that there is a requirement on the input:

input audio WAV file: mono 8000 Hz (G711, G723, G726, G728, G729, GSMAMR, GSMFR, AMRWBE), mono 16000 Hz (G722, G722SB, AMRWB, AMRWBE), stereo 8000 Hz (AMRWBE), or stereo 16000 Hz (AMRWBE)
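For the decode direction, the sample is driven the same way with the compressed file as input and a WAVE file as output. A sketch of such a call (the output name is illustrative, and the exact options accepted in decode mode may differ, so check the usage text the sample prints):

./usc_speech_codec -format IPP_GSMAMR common.amr common_decoded.wav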

3) As for the encoded file format:

The sample is a command-line application which processes the data in a single input file and stores the result in an output file.

In encoding mode, the sample consumes either 16-bit narrowband 8000 Hz or 16-bit wideband 16000 Hz raw PCM data, or 8-bit narrowband 8000 Hz A-law/mu-law PCM or 8-bit wideband 16000 Hz A-law/mu-law PCM data, stored in WAVE file format, and produces a compressed USC bitstream stored in a compressed WAVE file. The USC bitstream can be decompressed with the sample, and a lossy copy of the original PCM file will be created in the same band and format.

The header of the USC file format is similar to the header of the WAVE file format for non-PCM data. Bitstream frames are stored consecutively in the data section of the USC file. Each bitstream frame is stored with a small header followed by the bitstream data. The bitstream header format is depicted below:

 0               1               2               3               4               5
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7
 -------------------------------------------------------------------------------------------------
 |                    Bitrate                    |  Frame type   |         Frame length          |
 -------------------------------------------------------------------------------------------------

A header comprises six octets: the first three octets represent the bitrate in bps, the fourth octet represents the frame type, and the last two octets contain the pure bitstream data length, not including the header length. The bitstream data follows the header. In some cases, depending on the silence compression scheme used, the codec can produce an empty bitstream, and so a frame header with zero data length is created.
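To make the layout concrete, here is a minimal C sketch of reading one such frame header (this code is not part of the sample, and the little-endian field order is an assumption consistent with the RIFF-style container described above):

#include <stdio.h>

/* One USC bitstream frame header: six octets in total */
typedef struct {
    unsigned int   bitrate;      /* octets 0-2: bitrate in bps                  */
    unsigned char  frameType;    /* octet  3  : frame type                      */
    unsigned short frameLength;  /* octets 4-5: bitstream data length,
                                    not counting this 6-octet header            */
} UscFrameHeader;

/* Reads one frame header from a file positioned at a frame boundary.
   Returns 0 on success, -1 at the end of the data section.
   Little-endian byte order within each field is assumed. */
static int readUscFrameHeader(FILE *f, UscFrameHeader *hdr)
{
    unsigned char raw[6];
    if (fread(raw, 1, sizeof(raw), f) != sizeof(raw))
        return -1;
    hdr->bitrate     = raw[0] | ((unsigned int)raw[1] << 8) | ((unsigned int)raw[2] << 16);
    hdr->frameType   = raw[3];
    hdr->frameLength = (unsigned short)(raw[4] | (raw[5] << 8));
    return 0;
}

The frameLength field then tells you how many bytes of bitstream data to read (or skip) before the next frame header.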

4) The sample can't process concurrent streams, but a developer can create processes to encode multiple concurrent streams, for example as sketched below.
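A minimal sketch of that multi-process approach (the file names and the IPP_G711U format are illustrative; this code is not part of the sample) launches one encoder process per stream and waits for them all:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Illustrative input/output names; one independent stream per pair */
    const char *in[]  = { "stream1.wav", "stream2.wav", "stream3.wav" };
    const char *out[] = { "stream1_enc.wav", "stream2_enc.wav", "stream3_enc.wav" };
    const int n = 3;

    for (int i = 0; i < n; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: run one usc_speech_codec instance on its own stream */
            execlp("./usc_speech_codec", "usc_speech_codec",
                   "-format", "IPP_G711U", in[i], out[i], (char *)NULL);
            perror("execlp");        /* reached only if exec fails */
            _exit(1);
        } else if (pid < 0) {
            perror("fork");
            return 1;
        }
    }

    /* Parent: wait for every encoder process to finish */
    for (int i = 0; i < n; i++)
        wait(NULL);

    return 0;
}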

Not sure if you already know that the speech codec sample has been deprecated; it is not available in the current IPP version, and there is no further support or development for it.

Best Regards,

Ying

Intel IPP Support

 

Basawaraj_J_
Beginner

Thank you very much.

I knew that the speech codec is a deprecated component, so no more development; but I didn't know that support had also been withdrawn.

Thanks once again. I'll be working on this speech codec for a couple of weeks or so; any help you can offer would be highly appreciated.

-Basawaraj 

 

Basawaraj_J_
Beginner

From "Ying H (Intel) Tue, 08/18/2015 - 18:29" post above:

2)

I don't see readme.html or speech-codecs.html in "ipp-samples.7.1.1.013". Could you please tell me where I can find them? That would be very helpful. (Or attach them, if you can.)

The AMR reference encode command is working as you mentioned (thank you for generously providing it). I tried the decode command as well. It's good that I can play the decoded WAVE output in VLC and listen to it, but the encoded common.amr file doesn't play in VLC. Is that because of the RTPDump format? How can I get a proper AMR format file that can be played in VLC? What is the significance of the RTPDump format, after all?

 

3)

The point mentioned here is from the document "ipp-samples.html", under the heading "Speech Sample". But there is some confusion: when the input file is already 8-bit A-law compressed PCM, why is it compressed again into a "compressed USC bitstream" and stored as a WAVE file? Shouldn't it rather be decompressed into 16-bit PCM (i.e., A-law decoding)? What is the significance of the USC header format, after all?

Thank you for going the extra mile with your support,

Basawaraj

 

Basawaraj_J_
Beginner
Can you please reply?