i'm interested in using IPP in a VoIP application. are there any stats on the performance of G.729a in IPP v5.3 (or another recent release)? any 'scaling' stats would be interesting...e.g., how CPU usage increases when using G.729a for any task (converting to/from PCM, etc.) as the number of streams increases.
i'm trying to get a sense of how many simultaneous G.729a RTP streams can be handled by a single server (e.g., dual quad-core, etc.) for common IVR-style tasks (playing/recording audio, etc.).
i know this question isn't very specific, but any 'scaling' info would be helpful.
- 5 ms of CPU time to encode, and
- 1.4 ms to decode
one second of 8 kHz signed linear audio.
These figures also include the (small but reasonable) overhead imposed by the surrounding code.
The test systems these numbers come from are an Athlon XP 3200+ (32-bit, running the a6 SSE core) and a Pentium 4 Xeon 3.2 GHz (64-bit, running the m7 SSE3 core).
I believe it scales linearly, but I don't run any sizeable installs. The ratio of cache size to channel count may have some influence, though.
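As a back-of-envelope check on those numbers (a sketch, not a benchmark): 5 ms to encode plus 1.4 ms to decode one second of audio is 6.4 ms of CPU per second per full-duplex stream, so a single core pegged at 100% tops out around 156 streams, assuming the linear scaling holds:

```c
#include <assert.h>

/* Theoretical full-duplex G.729a streams one core can sustain,
 * given the ms of CPU needed to encode and to decode one second
 * of audio (numbers from the post above; assumes linear scaling
 * and ignores RTP, jitter-buffer and OS overhead). */
static double streams_per_core(double encode_ms, double decode_ms)
{
    /* one core offers 1000 ms of CPU time per wall-clock second */
    return 1000.0 / (encode_ms + decode_ms);
}
```

In theory a dual quad-core box would multiply that by eight, but the cache/channels effect mentioned above, plus per-packet RTP and scheduling overhead, will eat into it well before then.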
On another note, do you guys know how to form an RFC 3551-conformant G.729 bit stream from what the IPP sample produces? The RFC says the G.729 frames go out in big-endian (most-significant-bit-first) order, but I'm not sure what bit order the IPP sample emits.
As long as the receivers were written by me everything was fine, but now I need to send to other gateways.
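I can't speak for what the IPP sample emits, but if it hands you each frame as an array of codec parameters rather than a packed 10-octet frame, RFC 3551 wants the 80 bits of a G.729 frame packed most-significant-bit first, in parameter transmission order. A minimal sketch (the `g729_params_to_payload` glue and the one-16-bit-word-per-parameter input format are my assumptions about the sample's output; the bit widths come from ITU-T G.729):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Append the `nbits` low-order bits of `value` to `buf`, most
 * significant bit first, starting at bit offset *bitpos
 * (bit 0 = MSB of buf[0]) -- the bit order RFC 3551 specifies
 * for G.729 payloads. */
static void pack_msb_first(uint8_t *buf, size_t *bitpos,
                           uint32_t value, unsigned nbits)
{
    while (nbits--) {
        if ((value >> nbits) & 1u)
            buf[*bitpos >> 3] |= (uint8_t)(0x80u >> (*bitpos & 7));
        (*bitpos)++;
    }
}

/* Bit widths of the 15 G.729 parameters in transmission order
 * (L0,L1,L2,L3, P1,P0,C1,S1,GA1,GB1, P2,C2,S2,GA2,GB2):
 * 80 bits total, i.e. one 10-octet frame. */
static const unsigned g729_bits[15] =
    { 1, 7, 5, 5,  8, 1, 13, 4, 3, 4,  5, 13, 4, 3, 4 };

/* Hypothetical glue: pack one frame's parameters (one 16-bit word
 * per parameter, which is one plausible form of encoder output)
 * into a 10-octet RTP payload frame. */
static void g729_params_to_payload(const uint16_t params[15],
                                   uint8_t frame[10])
{
    size_t bitpos = 0;
    memset(frame, 0, 10);
    for (int i = 0; i < 15; i++)
        pack_msb_first(frame, &bitpos, params[i], g729_bits[i]);
    assert(bitpos == 80);
}
```

If the sample already emits packed 10-octet frames, they may well be in the right order already; the easiest check is to compare one frame against a known-good capture from a working gateway before adding any repacking.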