Using the latest OpenVINO release (openvino_2019.1.144), I used the Model Optimizer to convert the following TensorFlow model: ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03
It converted successfully and works fine with the object_detection_sample_ssd sample application and the CPU plugin (--data_type FP32).
However, it fails when used with the NCS2 and the MYRIAD plugin (--data_type FP16):
[ INFO ] InferenceEngine:
    API version ............ 1.6
    Build .................. custom_releases/2019/R1_c9b66a26e4d65bb986bb740e73f58c6e9e84c7c2
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] /home/user/dogcat_big.bmp
[ INFO ] Loading plugin
    API version ............ 1.6
    Build .................. 22443
    Description ....... myriadPlugin
[ INFO ] Loading network files:
    /home/user/ssd_fpn_16/frozen_inference_graph.xml
    /home/user/ssd_fpn_16/frozen_inference_graph.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
E: [xLink] [ 370309] dispatcherEventSend:955 Write failed event -1
E: [xLink] [ 370309] handleIncomingEvent:300 handleIncomingEvent() Read failed -1
E: [xLink] [ 370314] dispatcherEventReceive:368 dispatcherEventReceive() Read failed -1 | event 0x7fb5b306eea0 XLINK_WRITE_REQ
E: [xLink] [ 370314] eventReader:230 eventReader stopped
E: [watchdog] [ 370314] sendPingMessage:132 Failed send ping message: X_LINK_ERROR
E: [xLink] [ 370314] XLinkReadDataWithTimeOut:1377 Event data is invalid
E: [ncAPI] [ 370314] ncGraphAllocate:1784 Can't read input tensor descriptors of the graph, rc: X_LINK_ERROR
[ ERROR ] Failed to allocate graph: NC_ERROR
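For reference, the sample was invoked along these lines (illustrative; paths match the log above):

./object_detection_sample_ssd -i /home/user/dogcat_big.bmp -m /home/user/ssd_fpn_16/frozen_inference_graph.xml -d MYRIAD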
However, it works fine (although slow) with the original NCS:
[ INFO ] InferenceEngine:
    API version ............ 1.6
    Build .................. custom_releases/2019/R1_c9b66a26e4d65bb986bb740e73f58c6e9e84c7c2
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] /home/user/dogcat_big.bmp
[ INFO ] Loading plugin
    API version ............ 1.6
    Build .................. 22443
    Description ....... myriadPlugin
[ INFO ] Loading network files:
    /home/user/ssd_fpn_16/frozen_inference_graph.xml
    /home/user/ssd_fpn_16/frozen_inference_graph.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Batch size is 1
[ INFO ] Start inference (1 iterations)
[ INFO ] Processing output blobs
[0,17] element, prob = 0.847168 (347,296)-(648,643) batch id : 0 WILL BE PRINTED!
[1,18] element, prob = 0.747559 (9,67)-(411,597) batch id : 0 WILL BE PRINTED!
[ INFO ] Image out_0.bmp created!
total inference time: 2885.51
Average running time of one iteration: 2885.51 ms
Throughput: 0.346559 FPS
[ INFO ] Execution successful
If this model is not yet supported by the NCS2, is there a plan to support it?
The same goes for the other TensorFlow SSD FPN variants.
Thanks,
Dearest Jones, Acer,
Yes, you are right: this is broken for MYRIAD. I also just tried ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03 and attempted to run the C++ object_detection_sample_ssd sample. I will file a bug against this on your behalf.
Sorry for the trouble and thanks for your patience.
Shubha
Hi Shubha,
Thanks for the reply.
I think 'ssd_resnet_50_fpn_coco' is also broken for MYRIAD.
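For reference, it was converted the same way as the MobileNet variant; the analogous command would be something like this (a sketch, assuming the coco14 resnet50 FPN checkpoint from the TensorFlow model zoo; paths illustrative):

sudo python3 mo_tf.py --input_model ./ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config ./ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/pipeline.config --reverse_input_channels --data_type FP16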
Thanks,
Acer
Dearest Jones, Acer,
I wouldn't doubt that 'ssd_resnet_50_fpn_coco' is also broken, but I think one SSD bug report is enough. The OpenVINO devs will make it work for all SSD (Single Shot Detector) model types.
But thank you for your patience!
Stay tuned on this forum for the next OpenVINO release!
Shubha
Hi,
This doesn't appear to have been fixed in the latest release (openvino_2019.2.242).
Or are there different instructions to get this working now?
Thanks,
Acer
Dear Jones, Acer,
Can you post the specific error you are getting on OpenVINO 2019 R2?
Thanks,
Shubha
Hi Shubha,
Model converted using:
sudo python3 mo_tf.py --input_model ./ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config ./ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/pipeline.config --reverse_input_channels --data_type FP16
Note - I copied frozen_inference_graph.xml, frozen_inference_graph.bin and frozen_inference_graph.mapping to ./ssd_fpn
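For clarity, the copy step was along these lines (assuming the IR files were generated in the directory where mo_tf.py was run):

mkdir -p ./ssd_fpn
cp frozen_inference_graph.xml frozen_inference_graph.bin frozen_inference_graph.mapping ./ssd_fpn/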
Example executed using:
./object_detection_sample_ssd -i ./dogcat.jpeg -m ./ssd_fpn/frozen_inference_graph.xml -d MYRIAD
[ INFO ] InferenceEngine:
    API version ............ 2.0
    Build .................. custom_releases/2019/R2_f5827d4773ebbe727c9acac5f007f7d94dd4be4e
    Description ....... API
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] ./dogcat.jpeg
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
    MYRIAD
    myriadPlugin version ......... 2.0
    Build ........... 27579
[ INFO ] Loading network files:
    ./ssd_fpn/frozen_inference_graph.xml
    ./ssd_fpn/frozen_inference_graph.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
E: [xLink] [ 494677] [EventRead00Thr] dispatcherEventReceive:336 dispatcherEventReceive() Read failed (err -1) | event 0x7faf73ffee60 XLINK_WRITE_RESP
E: [xLink] [ 494678] [EventRead00Thr] eventReader:223 eventReader thread stopped (err -1)
E: [xLink] [ 494678] [Scheduler00Thr] dispatcherEventSend:913 Write failed (header) (err -1) | event XLINK_WRITE_REQ
E: [xLink] [ 494678] [Scheduler00Thr] eventSchedulerRun:588 Event sending failed
E: [xLink] [ 494678] [object_detectio] XLinkReadDataWithTimeOut:1323 Event data is invalid
E: [ncAPI] [ 494678] [object_detectio] ncGraphAllocate:1917 Can't read output tensor descriptors of the graph, rc: X_LINK_ERROR
[ ERROR ] Failed to allocate graph: NC_ERROR
Thanks,
Kevin
Do you require any further information to debug this?
Thanks,
Acer
Dear Jones, Acer,
Nope. You've done well; I have all the information I need. Thanks for your patience. It actually looks to me like the NCS stick ran out of memory. If I can reproduce it, I will file a bug.
I will report back here on this forum.
Thanks,
Shubha
Dear Jones, Acer,
I wanted to tell you that I reproduced your bug on OpenVINO 2019 R2.01 and I have filed a bug on your behalf. Sorry for the trouble and thank you so much for your patience!
Shubha
Just repeated this on the latest OpenVINO release (openvino_2019.3.334; Model Optimizer 2019.3.0-375-g332562022).
It still does not work for me with the NCS2 but works with the NCS1.
Model converted using:
sudo python3 mo_tf.py --input_model ./ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config ./ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/pipeline.config --reverse_input_channels --data_type FP16

/home/user/.local/lib/python3.5/site-packages/scipy/__init__.py:115: UserWarning: Numpy 1.13.3 or above is required for this version of scipy (detected version 1.13.1)
  UserWarning)
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model: /opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/./ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/frozen_inference_graph.pb
    - Path for generated IR: /opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/.
    - IR output name: frozen_inference_graph
    - Log level: ERROR
    - Batch: Not specified, inherited from the model
    - Input layers: Not specified, inherited from the model
    - Output layers: Not specified, inherited from the model
    - Input shapes: Not specified, inherited from the model
    - Mean values: Not specified
    - Scale values: Not specified
    - Scale factor: Not specified
    - Precision of IR: FP16
    - Enable fusing: True
    - Enable grouped convolutions fusing: True
    - Move mean values to preprocess section: False
    - Reverse input channels: True
TensorFlow specific parameters:
    - Input model in text protobuf format: False
    - Path to model dump for TensorBoard: None
    - List of shared libraries with TensorFlow custom layers implementation: None
    - Update the configuration file with input/output node names: None
    - Use configuration file used to generate the model with Object Detection API: /opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/./ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/pipeline.config
    - Operations to offload: None
    - Patterns to offload: None
    - Use the config file: /opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json
Model Optimizer version: 2019.3.0-375-g332562022
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/./frozen_inference_graph.xml
[ SUCCESS ] BIN file: /opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/./frozen_inference_graph.bin
Note - I copied frozen_inference_graph.xml, frozen_inference_graph.bin and frozen_inference_graph.mapping to ~/model_test
Executed on the NCS2 (Broken):
user@user-Desktop:~/inference_engine_samples_build/intel64/Release$ ./object_detection_sample_ssd -i ./dogcat.bmp -m ~/model_test/frozen_inference_graph.xml -d MYRIAD
[ INFO ] InferenceEngine:
    API version ............ 2.1
    Build .................. custom_releases/2019/R3_cb6cad9663aea3d282e0e8b3e0bf359df665d5d0
    Description ....... API
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] ./dogcat.bmp
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
    MYRIAD
    myriadPlugin version ......... 2.1
    Build ........... 30677
[ INFO ] Loading network files:
    /home/user/fpn_v8/frozen_inference_graph.xml
    /home/user/fpn_v8/frozen_inference_graph.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
E: [xLink] [ 421234] [EventRead00Thr] eventReader:218 eventReader thread stopped (err -1)
E: [global] [ 421234] [Scheduler00Thr] dispatcherEventSend:1004 Write failed (header) (err -1) | event XLINK_WRITE_REQ
E: [xLink] [ 421234] [Scheduler00Thr] eventSchedulerRun:626 Event sending failed
E: [xLink] [ 421234] [Scheduler00Thr] eventSchedulerRun:576 Dispatcher received NULL event!
E: [global] [ 421234] [object_detectio] XLinkReadDataWithTimeOut:1494 Event data is invalid
E: [ncAPI] [ 421234] [object_detectio] ncGraphAllocate:1947 Can't read output tensor descriptors of the graph, rc: X_LINK_ERROR
Executed on the NCS1 (Works):
user@user-Desktop:~/inference_engine_samples_build/intel64/Release$ ./object_detection_sample_ssd -i ./dogcat.bmp -m ~/model_test/frozen_inference_graph.xml -d MYRIAD
[ INFO ] InferenceEngine:
    API version ............ 2.1
    Build .................. custom_releases/2019/R3_cb6cad9663aea3d282e0e8b3e0bf359df665d5d0
    Description ....... API
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] ./dogcat.bmp
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
    MYRIAD
    myriadPlugin version ......... 2.1
    Build ........... 30677
[ INFO ] Loading network files:
    /home/user/fpn_v8/frozen_inference_graph.xml
    /home/user/fpn_v8/frozen_inference_graph.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
[ INFO ] Create infer request
[ WARNING ] Image is resized from (300, 300) to (640, 640)
[ INFO ] Batch size is 1
[ INFO ] Start inference
[ INFO ] Processing output blobs
[0,17] element, prob = 0.844727 (162,138)-(303,301) batch id : 0 WILL BE PRINTED!
[1,18] element, prob = 0.744141 (4,31)-(191,278) batch id : 0 WILL BE PRINTED!
[ INFO ] Image out_0.bmp created!
[ INFO ] Execution successful
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
Please advise if I'm still missing a step.
If it's still broken, do you think this model will ever work?
If not, is there an alternative that works at a decent FPS for small-scale object detection?
Dear Jones, Acer:
You are using the R3 version of Model Optimizer but still the R2 version of the sample. Please rebuild the R3 samples.
Model Optimizer version: 2019.3.0-375-g332562022
API version ............ 2.1
You can safely delete (or rename) these directories:
/home/<user>/inference_engine_samples_build
/home/<user>/openvino_models
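After deleting them, rebuilding would be along these lines (a sketch assuming a default R3 install; the script location can vary between versions):

source /opt/intel/openvino_2019.3.334/bin/setupvars.sh
/opt/intel/openvino_2019.3.334/deployment_tools/inference_engine/samples/build_samples.sh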
Hope it helps,
Thanks,
Shubha
Hi Shubha,
Thanks for the tip.
I've tried doing as suggested above; however, the sample in the newly built folder still reports version 2.1 of the API, i.e.
~/inference_engine_samples_build/intel64/Release$ ./object_detection_sample_ssd -i ~/dogcat.bmp -m ~/test_model/frozen_inference_graph.xml -d MYRIAD
[ INFO ] InferenceEngine:
    API version ............ 2.1
    Build .................. custom_releases/2019/R3_cb6cad9663aea3d282e0e8b3e0bf359df665d5d0
    Description ....... API
I've tried running make clean and make in this newly built directory, but I still get the same result.
Any ideas on what to try next?
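For what it's worth, a quick way to check which Inference Engine library the binary actually links against would be something like:

ldd ./object_detection_sample_ssd | grep inference_engine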
Thanks again,
Acer
Hi, I'm getting the same error:
E: [xLink] [ 990513] [EventRead00Thr] eventReader:218 eventReader thread stopped (err -1)
E: [xLink] [ 990513] [Scheduler00Thr] eventSchedulerRun:576 Dispatcher received NULL event!
E: [watchdog] [ 990515] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [global] [ 990515] [python3] XLinkReadDataWithTimeOut:1494 Event data is invalid
E: [ncAPI] [ 990516] [python3] ncGraphAllocate:1947 Can't read output tensor descriptors of the graph, rc: X_LINK_ERROR
Are there any updates on the SSD FPN networks for the NCS2?
Thanks