Intel® Integrated Performance Primitives
Deliberate problems developing high-performance vision, signal, security, and storage applications.

How to send live video over network

Tomas_Kot
Beginner

Hello,

I'm trying to do some improvements on a software made by my colleague (who left our team) and I would welcome some tips from you. Let me explain it:

What we have now (simplified explanation):

The system consists of a server and a client application. The server receives images from a camera, compresses each individual image (frame) into JPEG using UIC (IPP) and sends the resulting compressed buffer over the network (using TCP) to the client. The client receives the data, decompresses each frame (again using UIC) and displays it (as a texture in a Direct3D application made by me). After a whole frame is received, the client sends a confirmation to the server, and only then does the server start sending the next frame.
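In a simplified sketch the server side of this exchange looks roughly like the following (not our actual code, just the idea; sock is assumed to be an already connected WinSock TCP socket and jpegBuffer the output of the UIC-based encoder):

```cpp
// Simplified sketch of the current per-frame exchange (server side).
// Assumptions: 'sock' is an already connected WinSock TCP socket,
// 'jpegBuffer' is the output of the UIC-based JPEG encoder.
#include <winsock2.h>
#include <cstdint>
#include <vector>

void sendFrame(SOCKET sock, const std::vector<uint8_t>& jpegBuffer)
{
    // Send the compressed size first so the client knows how much to read.
    uint32_t size = htonl(static_cast<uint32_t>(jpegBuffer.size()));
    send(sock, reinterpret_cast<const char*>(&size), sizeof(size), 0);

    // Send the whole JPEG buffer; TCP retransmits until everything arrives,
    // which is exactly what kills the framerate on a weak wi-fi link.
    size_t sent = 0;
    while (sent < jpegBuffer.size()) {
        int n = send(sock,
                     reinterpret_cast<const char*>(jpegBuffer.data()) + sent,
                     static_cast<int>(jpegBuffer.size() - sent), 0);
        if (n <= 0) return; // connection problem
        sent += static_cast<size_t>(n);
    }

    // Block until the client confirms the frame; only then grab the next one.
    char ack = 0;
    recv(sock, &ack, 1, 0);
}
```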

What I would like to try:

I imagine that compressing a video into a "stream" of individual JPEG pictures isn't the best approach as far as data size (bit rate) is concerned. I'm really not an expert at all, but wouldn't it be better to compress it as a video? So for example using something like H.264. But the question is - is it possible to get live camera frames, compress them as a video (using UMC), send this data manually via TCP or UDP (WinSock) and then somehow get the individual uncompressed frames in the client application? If you think it is possible, could you please describe the basic implementation? (I don't want the whole code of course, just the idea.)

Conditions:

The point is that we are using wi-fi and we need to transmit as little data as possible, in real time (or with as low latency as possible). We also have to deal with cases when the wi-fi signal is quite low, which in the current implementation means incredibly low framerates, because TCP tries to transfer the whole frame at all costs :) I would prefer a decrease in video quality over a decrease in framerate.

I know the problem is quite wide and maybe the explanation isn't the best (also sorry for my English). But I will be grateful for any feedback from you, experts ;)

Thanks.

Tom

1 Solution
Pavel_V_Intel
Employee
Good day.
"I'm really not an expert at all, but wouldn't it be better to compress it as a video?"
This can certainly give a better compression ratio.
"So for example using something like H.264."
I don't know your hardware resources, but software H.264 compression is not fast; you may not be able to sustain it in real time. It is reasonable to consider hardware H.264 encoding (e.g. Intel Media SDK).
"But the question is - is it possible to get live camera frames, compress them as a video (using UMC), send this data manually via TCP or UDP (WinSock) and then somehow get the individual uncompressed frames in the client application?"
You can send and receive data any way you like; you just need to ensure its order and consistency on the far end. A common way to transmit video streams over a network is by means of RTP (http://tools.ietf.org/html/rfc3550): camera -> encoder -> RTP packetization -> network transfer -> depacketization -> decoding -> rendering.
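As a rough illustration only (my sketch, not library code): the fixed 12-byte RTP header from RFC 3550 can be built by hand and the packet sent over a plain UDP socket. The socket setup and payload type 96 (dynamic range, commonly used for H.264) are assumptions here:

```cpp
// Minimal RTP packetization sketch; the 12-byte header layout follows RFC 3550.
// Assumptions: 'sock' is a plain UDP WinSock socket, 'dest' is the client
// address, payload type 96 (dynamic range) carries the H.264 payload.
#include <winsock2.h>
#include <cstdint>
#include <cstring>
#include <vector>

void sendRtpPacket(SOCKET sock, const sockaddr_in& dest,
                   const uint8_t* payload, size_t payloadSize,
                   uint16_t seq, uint32_t timestamp, uint32_t ssrc, bool lastOfFrame)
{
    std::vector<uint8_t> packet(12 + payloadSize);

    packet[0] = 0x80;                             // V=2, no padding, no extension, CC=0
    packet[1] = (lastOfFrame ? 0x80 : 0x00) | 96; // marker bit on last packet of a frame, PT=96
    packet[2] = static_cast<uint8_t>(seq >> 8);   // 16-bit sequence number, network order
    packet[3] = static_cast<uint8_t>(seq);
    packet[4] = static_cast<uint8_t>(timestamp >> 24); // 32-bit timestamp (90 kHz clock for video)
    packet[5] = static_cast<uint8_t>(timestamp >> 16);
    packet[6] = static_cast<uint8_t>(timestamp >> 8);
    packet[7] = static_cast<uint8_t>(timestamp);
    packet[8]  = static_cast<uint8_t>(ssrc >> 24);     // 32-bit stream identifier
    packet[9]  = static_cast<uint8_t>(ssrc >> 16);
    packet[10] = static_cast<uint8_t>(ssrc >> 8);
    packet[11] = static_cast<uint8_t>(ssrc);

    std::memcpy(packet.data() + 12, payload, payloadSize);

    sendto(sock, reinterpret_cast<const char*>(packet.data()),
           static_cast<int>(packet.size()), 0,
           reinterpret_cast<const sockaddr*>(&dest), sizeof(dest));
}
```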
"The point is that we are using wi-fi and we need to transmit as little data as possible, in real time (or with as low latency as possible). We also have to deal with cases when the wi-fi signal is quite low, which in the current implementation means incredibly low framerates, because TCP tries to transfer the whole frame at all costs :) I would prefer a decrease in video quality over a decrease in framerate."
You can split the stream into chunks; usually it is better to keep them no larger than the MTU of the current network, to avoid packet fragmentation during transfer. The bit rate on the server side can be changed depending on packet losses and channel bandwidth. Latency during encoding and decoding usually depends on overall encoder speed, the number of reference frames, restrictions on frame reordering and additional features such as threading.
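For example (a naive sketch under assumed values - the 1400-byte payload limit, the 2% loss threshold and the bit-rate cap are illustrations, not recommendations; sendRtpPacket is the hypothetical helper from the sketch above):

```cpp
// Sketch: split one encoded frame into MTU-sized chunks and adapt the bit rate.
// Assumptions: kMaxPayload leaves room for IP/UDP/RTP headers on a typical
// Ethernet/wi-fi MTU; loss feedback would come from the client (e.g. RTCP
// receiver reports); sendRtpPacket() is the hypothetical helper sketched above.
#include <winsock2.h>
#include <algorithm>
#include <cstddef>
#include <cstdint>

void sendRtpPacket(SOCKET sock, const sockaddr_in& dest,
                   const uint8_t* payload, size_t payloadSize,
                   uint16_t seq, uint32_t timestamp, uint32_t ssrc, bool lastOfFrame);

const size_t kMaxPayload = 1400; // assumed payload size per datagram

void sendEncodedFrame(SOCKET sock, const sockaddr_in& dest,
                      const uint8_t* data, size_t size,
                      uint16_t& seq, uint32_t timestamp, uint32_t ssrc)
{
    // One encoded frame becomes several RTP packets, all sharing one timestamp;
    // the marker bit on the last packet tells the receiver the frame is complete.
    for (size_t offset = 0; offset < size; offset += kMaxPayload) {
        size_t chunk = std::min(kMaxPayload, size - offset);
        bool last = (offset + chunk == size);
        sendRtpPacket(sock, dest, data + offset, chunk, seq++, timestamp, ssrc, last);
    }
}

// Very naive rate control: back off when the client reports packet loss,
// creep back up slowly when the channel looks healthy.
uint32_t adaptBitrate(uint32_t currentBps, double lossFraction)
{
    if (lossFraction > 0.02)                            // more than ~2% loss
        return static_cast<uint32_t>(currentBps * 0.8); // drop quality, keep framerate
    return std::min<uint32_t>(currentBps + currentBps / 20, 4000000u); // capped ramp-up
}
```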
