Hello, I'm wondering how, in an OpenCL pipeline, we can send GPU-resident OpenCL images or buffers to the OpenVINO clDNN plugin for inference. We are trying to build an end-to-end zero-copy pipeline with all images staying on the GPU. If this is possible, is input resizing also supported in such a pipeline?
I have one piece of advice about loading data into the Inference Engine that applies to all plugins: don't use the SetBlob function of InferRequest; use GetBlob only. There are no clDNN-specific tricks.
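To make the GetBlob suggestion concrete, here is a minimal sketch of the pattern: instead of wrapping user memory with SetBlob, fetch the blob the plugin has already allocated for the input and write into it in place. The function and input names are placeholders, and this assumes the classic Inference Engine C++ API with an FP32 input.

```cpp
// Sketch only: fill the plugin-allocated input blob in place.
// "input_name" and the float source buffer are hypothetical.
#include <algorithm>
#include <string>
#include <inference_engine.hpp>

void fill_input(InferenceEngine::InferRequest &request,
                const std::string &input_name,
                const float *src, size_t count) {
    // GetBlob returns the blob the plugin allocated for this input,
    // so writing into it avoids the extra copy SetBlob can introduce.
    InferenceEngine::Blob::Ptr blob = request.GetBlob(input_name);
    auto mblob = InferenceEngine::as<InferenceEngine::MemoryBlob>(blob);
    auto holder = mblob->wmap();          // writable mapping of the blob
    float *dst = holder.as<float *>();
    std::copy(src, src + count, dst);     // fill in place, then Infer()
}
```

Note this still involves one host-side write into the plugin's buffer; it removes the extra copy SetBlob can cause, not the CPU-to-GPU transfer itself.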
Input resizing can be done beforehand, for example with OpenCV.
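A short sketch of the pre-resize step with OpenCV, for reference; `net_w` and `net_h` stand in for the network's input width and height:

```cpp
// Sketch only: resize a frame to the network input size on the host
// before filling the input blob. net_w/net_h are placeholders.
#include <opencv2/imgproc.hpp>

cv::Mat resize_for_network(const cv::Mat &src, int net_w, int net_h) {
    cv::Mat resized;
    cv::resize(src, resized, cv::Size(net_w, net_h), 0, 0, cv::INTER_LINEAR);
    return resized;
}
```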
> use GetBlob only
Thank you for your advice. We currently use SetBlob in our applications, but we can refactor to use GetBlob (I believe that would be more asynchronous, so it could run faster). Still, this will require copies from CPU to GPU, correct? Ideally, we would pass our GPU OpenCL image or buffer directly to the clDNN input when the GPU device is selected, for true zero-copy.