I have a G729 voice call stored as a pcap file. I filtered out one side of it and stored it in rtpdump format.
I then fed that dump file to the umc_speech_rtp_codec sample, using the command line
umc_speech_rtp_codec -format IPP_G729 test.rtp test.wav
This appears to process ~90 frames of the file, creating a wav file ~1s long, even
though there are ~1400 frames in the file.
There are no error messages or diagnostics, even when run under the debugger.
I have attached the pcap.
Any ideas what's up?
TIA
Paul
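For reference on the numbers above: G.729 produces one frame per 10 ms, so ~90 decoded frames is about 0.9 s of audio, while ~1400 frames should give roughly 14 s. A trivial check (plain C++, nothing specific to the IPP sample):

```cpp
#include <cstdio>

int main() {
    // G.729 encodes speech in fixed 10 ms frames.
    const double kFrameDurationSec = 0.010;
    std::printf("90 frames   -> %.1f s\n", 90 * kFrameDurationSec);    // ~0.9 s (what the sample produced)
    std::printf("1400 frames -> %.1f s\n", 1400 * kFrameDurationSec);  // ~14 s (what was expected)
    return 0;
}
```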
1 Solution
Hi Paul,
I investigated your file. The problem is related to the presence of SID frames. Currently the umc_speech_rtp codec doesn't fully support such streams because it works in offline mode, while the jitter buffer it uses is designed for real-time mode; it therefore produces a lot of lost frames instead of a couple of untransmitted frames and gets unsynchronized. We will mention this in the documentation in the next version.
However, you can change the G729DePacketizer::SetPacket method so that it inserts a sufficient number of untransmitted frames based on information from the RTP header: the marker field and the differences in timestamps and sequence numbers. Alternatively, disable VAD in your softphone.
Igor S. Belyakov
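For illustration, here is a minimal sketch of the kind of change described in the reply above: deriving the number of untransmitted frames from the RTP marker bit, sequence numbers and timestamps. This is not the actual IPP sample code; the types, names and the erasure-frame handling are assumptions, and the real logic would live inside G729DePacketizer::SetPacket.

```cpp
// Sketch only: the structures and helper names below are assumptions, not part
// of the IPP sample. G.729 runs on an 8 kHz RTP clock with 10 ms frames, so
// each frame advances the RTP timestamp by 80 units.
#include <cstddef>
#include <cstdint>
#include <vector>

struct RtpHeaderInfo {
    uint16_t sequenceNumber;  // RTP sequence number (detects genuine packet loss)
    uint32_t timestamp;       // RTP timestamp, 8 kHz clock for G.729
    bool     marker;          // marker bit: set on the first packet of a talkspurt
};

static const uint32_t kTsUnitsPerFrame = 80;  // 10 ms at 8 kHz

// How many frames went untransmitted (DTX silence) between the previous packet
// and the current one. During DTX the sequence numbers usually stay consecutive
// while the timestamp jumps, so the timestamp delta is the reliable measure.
std::size_t CountUntransmittedFrames(const RtpHeaderInfo& prev,
                                     std::size_t framesInPrevPacket,
                                     const RtpHeaderInfo& cur)
{
    // Unsigned arithmetic handles timestamp wrap-around at 2^32.
    const uint32_t tsDelta  = cur.timestamp - prev.timestamp;
    const uint32_t expected = static_cast<uint32_t>(framesInPrevPacket) * kTsUnitsPerFrame;
    if (tsDelta <= expected)
        return 0;                               // contiguous packets, no gap
    // cur.marker being set would confirm this is the start of a new talkspurt.
    return (tsDelta - expected) / kTsUnitsPerFrame;
}

// Pad the decoder input with "untransmitted" placeholders so the output WAV
// keeps the original timeline instead of collapsing the silence.
void PadWithUntransmittedFrames(std::vector<int>& frameTypes, std::size_t missing)
{
    const int kUntransmitted = -1;  // placeholder tag; the real codec has its own frame type
    frameTypes.insert(frameTypes.end(), missing, kUntransmitted);
}
```

With something along these lines in the depacketizer, the decoder would receive one untransmitted frame per missing 10 ms interval, so the decoded audio should stay in sync across the silence gaps instead of stopping early.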
