Following this link, Convert ONNX* Mask R-CNN Model to the Intermediate Representation, I got mask_rcnn_R_50_FPN_1x.xml and then chose DetectionOutput as the detection_output_name.
https://docs.openvinotoolkit.org/latest/omz_models_model_yolact_resnet50_fpn_pytorch.html
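(For reference, the conversion step on that page uses a Model Optimizer command roughly of this form; this is only a sketch, and the exact --input_shape / --mean_values arguments and the transformations config path should be taken from the documentation page itself:)
python mo.py --input_model mask_rcnn_R_50_FPN_1x.onnx --transformations_config extensions/front/onnx/mask_rcnn.json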
But when I run the demo with DetectionOutput, I get an error:
mask_rcnn_demo.exe -i D:/hqx/yolact/test_Color.jpg -m D:/hqx/mask_rcnn_R_50_FPN_1x.xml -detection_output_name=DetectionOutput
InferenceEngine: API version ......... 2.1
Build ........... 2021.3.0-2787-60059f2c755-releases/2021/3
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] D:/hqx/yolact/test_Color.jpg
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
[ INFO ] CPU
MKLDNNPlugin version ......... 2.1
Build ........... 2021.3.0-2787-60059f2c755-releases/2021/3
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ INFO ] Network batch size is 1
[ INFO ] Prepare image D:/hqx/yolact/test_Color.jpg
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
[ INFO ] Create infer request
[ INFO ] Setting input data to the blobs
[ INFO:0] global C:\jenkins\workspace\OpenCV\OpenVINO\2021.3\build\windows\opencv\modules\core\src\parallel\registry_parallel.impl.hpp (90) cv::parallel::ParallelBackendRegistry::ParallelBackendRegistry core(parallel): Enabled backends(3, sorted by priority): ONETBB(1000); TBB(990); OPENMP(980)
[ INFO:0] global C:\jenkins\workspace\OpenCV\OpenVINO\2021.3\build\windows\opencv\modules\core\src\utils\plugin_loader.impl.hpp (67) cv::plugin::impl::DynamicLib::libraryLoad load C:\Program Files (x86)\Intel\openvino_2021.3.394\opencv\bin\opencv_core_parallel_onetbb452_64d.dll => FAILED
[ INFO:0] global C:\jenkins\workspace\OpenCV\OpenVINO\2021.3\build\windows\opencv\modules\core\src\utils\plugin_loader.impl.hpp (67) cv::plugin::impl::DynamicLib::libraryLoad load opencv_core_parallel_onetbb452_64d.dll => FAILED
[ INFO:0] global C:\jenkins\workspace\OpenCV\OpenVINO\2021.3\build\windows\opencv\modules\core\src\utils\plugin_loader.impl.hpp (67) cv::plugin::impl::DynamicLib::libraryLoad load C:\Program Files (x86)\Intel\openvino_2021.3.394\opencv\bin\opencv_core_parallel_tbb452_64d.dll => OK
[ INFO:0] global C:\jenkins\workspace\OpenCV\OpenVINO\2021.3\build\windows\opencv\modules\core\src\parallel\plugin_parallel_wrapper.impl.hpp (48) cv::impl::PluginParallelBackend::initPluginAPI core(parallel): plugin is ready to use 'TBB (interface 9107) OpenCV parallel plugin'
[ INFO:0] global C:\jenkins\workspace\OpenCV\OpenVINO\2021.3\build\windows\opencv\modules\core\include\opencv2/core/parallel/backend/parallel_for.tbb.hpp (54) cv::parallel::tbb::ParallelForBackend::ParallelForBackend Initializing TBB parallel backend: TBB_INTERFACE_VERSION=9107
[ INFO:0] global C:\jenkins\workspace\OpenCV\OpenVINO\2021.3\build\windows\opencv\modules\core\src\parallel\parallel.cpp (73) cv::parallel::createParallelForAPI core(parallel): using backend: TBB (priority=990)
[ INFO ] Start inference
[ INFO ] Processing output blobs
[ ERROR ] Cannot find blob with name: DetectionOutput
C:\j\workspace\private-ci\ie\build-windows-vs2019@2\b\repos\openvino\inference-engine\src\mkldnn_plugin\mkldnn_infer_request.cpp:293
C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\inference_engine\include\details/ie_exception_conversion.hpp:66
[ INFO:1] global C:\jenkins\workspace\OpenCV\OpenVINO\2021.3\build\windows\opencv\modules\core\src\utils\plugin_loader.impl.hpp (74) cv::plugin::impl::DynamicLib::libraryRelease unload C:\Program Files (x86)\Intel\openvino_2021.3.394\opencv\bin\opencv_core_parallel_tbb452_64d.dll
Running the demo with 6849/sink_port_0, I still get an error:
mask_rcnn_demo.exe -i D:/hqx/yolact/test_Color.jpg -m D:/hqx/mask_rcnn_R_50_FPN_1x.xml -detection_output_name=6849/sink_port_0
InferenceEngine: API version ......... 2.1
Build ........... 2021.3.0-2787-60059f2c755-releases/2021/3
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] D:/hqx/yolact/test_Color.jpg
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
[ INFO ] CPU
MKLDNNPlugin version ......... 2.1
Build ........... 2021.3.0-2787-60059f2c755-releases/2021/3
[ INFO ] Loading network files
[ ERROR ] Cannot add output! Layer 6849/sink_port_0 wasn't found!
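For reference, the output blob names actually present in the IR can be listed with a short script along these lines (a sketch using the 2021.x Inference Engine Python API; the .xml path is the one from my command above):
from openvino.inference_engine import IECore
ie = IECore()
net = ie.read_network(model="D:/hqx/mask_rcnn_R_50_FPN_1x.xml")
# the keys of net.outputs are the blob names that -detection_output_name must match
print(list(net.outputs.keys()))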
What is the problem? What should I do?
(The attached file is the mask_rcnn_demo source code.)
Hello,
Please note that every Open Model Zoo demo contains a models.lst file, which lists the Open Model Zoo models supported by that demo. Additionally, you can pass this file as a parameter to the Open Model Zoo Model Downloader to download (and convert to IR, if necessary) the models supported by the demo.
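For example, from the demo directory you could run something along these lines (a sketch; the output directory is illustrative, and downloader.py / converter.py are located under deployment_tools/open_model_zoo/tools/downloader in the 2021 release):
python downloader.py --list models.lst -o D:/hqx/models
python converter.py --list models.lst -d D:/hqx/models
Models obtained this way should expose the output layer names the demo looks for.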
Regards,
Vladimir
Hi Hqx627,
This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Regards,
Wan
