Can anyone help me understand the TensorDesc constructor?
I installed OpenVINO 2020.1 and found the following in the C++ sample code in /opt/intel/openvino_2020.1.023/inference_engine/samples/cpp/common/samples/ocv_common.hpp. The SizeVector in the TensorDesc is initialized in NCHW order, but the layout is NHWC.
static UNUSED InferenceEngine::Blob::Ptr wrapMat2Blob(const cv::Mat &mat) {
    size_t channels = mat.channels();
    size_t height = mat.size().height;
    size_t width = mat.size().width;

    size_t strideH = mat.step.buf[0];
    size_t strideW = mat.step.buf[1];

    bool is_dense = strideW == channels && strideH == channels * width;

    if (!is_dense) THROW_IE_EXCEPTION << "Doesn't support conversion from not dense cv::Mat";

    InferenceEngine::TensorDesc tDesc(InferenceEngine::Precision::U8,
                                      {1, channels, height, width},
                                      InferenceEngine::Layout::NHWC);

    return InferenceEngine::make_shared_blob<uint8_t>(tDesc, mat.data);
}
If I grep for "NHWC", it looks like this mismatch exists in several places:
ubuntu@ubuntu:/opt/intel/openvino/inference_engine/include$ grep -nr . -e "NHWC"
./gpu/gpu_context_api_ocl.hpp:184: TensorDesc ydesc(Precision::U8, { 1, 1, height, width }, Layout::NHWC);
./gpu/gpu_context_api_ocl.hpp:192: TensorDesc uvdesc(Precision::U8, { 1, 2, height / 2, width / 2 }, Layout::NHWC);
./gpu/gpu_context_api_va.hpp:98: TensorDesc ydesc(Precision::U8, { 1, 1, height, width }, Layout::NHWC);
./gpu/gpu_context_api_va.hpp:106: TensorDesc uvdesc(Precision::U8, { 1, 2, height / 2, width / 2 }, Layout::NHWC);
./gpu/gpu_context_api_dx.hpp:129: TensorDesc desc(Precision::U8, { 1, 1, height, width }, Layout::NHWC);
./gpu/gpu_context_api_dx.hpp:138: TensorDesc uvdesc(Precision::U8, { 1, 2, height / 2, width / 2 }, Layout::NHWC);
./vpu/vpu_plugin_config.hpp:97: * VPU_CONFIG_VALUE(NHWC) executable network forced to use NHWC input/output layouts
However, in the OpenVINO documentation at https://docs.openvinotoolkit.org/latest/_docs_optimization_guide_dldt_optimization_guide.html, the SizeVector order matches the layout passed to the TensorDesc constructor.
InferenceEngine::SizeVector dims_src = {
    1 /* batch, N */,
    (size_t) frame_in->Info.Height /* Height */,
    (size_t) frame_in->Info.Width  /* Width */,
    3 /* Channels */,
};
TensorDesc desc(InferenceEngine::Precision::U8, dims_src, InferenceEngine::NHWC);
/* wrapping the surface data; as RGB is interleaved, we need to pass only a
   pointer to the R component. Notice that this wouldn't work with planar
   formats, as these are 3 separate planes/pointers */
InferenceEngine::TBlob<uint8_t>::Ptr p =
    InferenceEngine::make_shared_blob<uint8_t>(desc, (uint8_t*) frame_in->Data.R);
inferRequest.SetBlob("input", p);
inferRequest.Infer();
// Make sure to unlock the surface upon inference completion,
// to return the ownership back to the Intel MSS
pAlloc->Unlock(pAlloc->pthis, frame_in->Data.MemId, &frame_in->Data);
This is a great question; it's a shame it was never answered. I am stuck on the dense-matrix issue as well.