Hi,
I'm trying to run the `SSD ResNet50 FPN COCO` model (`ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03`) on an NCS2 with the MYRIAD plugin and the Python API, but it gets stuck when loading the IR to the plugin, with the following error:
E: [xLink] [ 80143] handleIncomingEvent:240 handleIncomingEvent() Read failed -4
E: [xLink] [ 80143] dispatcherEventReceive:308 dispatcherEventReceive() Read failed -4 | event 0x7f35137fde80 USB_WRITE_REQ
E: [xLink] [ 80143] eventReader:256 eventReader stopped
E: [xLink] [ 80144] dispatcherEventSend:908 Write failed event -4
E: [watchdog] [ 81144] sendPingMessage:164 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 82144] sendPingMessage:164 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 83144] sendPingMessage:164 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 84145] sendPingMessage:164 Failed send ping message: X_LINK_ERROR
...
The `Failed send ping message: X_LINK_ERROR` line keeps printing until I press Ctrl+C to kill the script. I noticed the `USB_WRITE_REQ` in the error, so I thought it had something to do with the USB 3 port, but when I tried a lighter model, `ssd_mobilenet_v2_coco`, it worked like a charm.
This is the command used to generate the IR (the IR was generated successfully):
python mo_tf.py --input_model ~/workspace/pi/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/frozen_inference_graph.pb --output_dir ~/workspace/pi/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/openvino_model/FP16 --tensorflow_use_custom_operations_config ~/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config ~/workspace/pi/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/pipeline.config --data_type FP16
This is the command I used to test:
python test.py -m ~/workspace/pi/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/openvino_model/FP16/frozen_inference_graph.xml -i ~/workspace/object-detection/test_images/image.jpg -d MYRIAD
Here's the relevant snippet of the Python test script (argument parsing omitted):
import os
import sys
import logging as log

from openvino.inference_engine import IENetwork, IEPlugin

log.basicConfig(format="[ %(levelname)s ] %(message)s", level=log.INFO, stream=sys.stdout)

# model_xml comes from the -m argument; the .bin weights file sits next to the .xml
model_xml = args.model
model_bin = os.path.splitext(model_xml)[0] + ".bin"

plugin = IEPlugin(device=args.device, plugin_dirs=args.plugin_dir)
if args.cpu_extension and 'CPU' in args.device:
    plugin.add_cpu_extension(args.cpu_extension)

# Read IR
log.info("Reading IR...")
net = IENetwork(model=model_xml, weights=model_bin)

if plugin.device == "CPU":
    supported_layers = plugin.get_supported_layers(net)
    not_supported_layers = [l for l in net.layers.keys() if l not in supported_layers]
    if len(not_supported_layers) != 0:
        log.error("Following layers are not supported by the plugin for specified device {}:\n {}".
                  format(plugin.device, ', '.join(not_supported_layers)))
        log.error("Please try to specify cpu extensions library path in demo's command line parameters using -l "
                  "or --cpu_extension command line argument")
        sys.exit(1)

assert len(net.inputs.keys()) == 1, "Demo supports only single input topologies"
assert len(net.outputs) == 1, "Demo supports only single output topologies"

input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
n, c, h, w = net.inputs[input_blob].shape  # NCHW input layout

log.info("Loading IR to the plugin...")
exec_net = plugin.load(network=net)  # <== stuck at this line
The only reason I can think of why `ssd_mobilenet_v2_coco_2018_03_29` works and `ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03` does not is the model size: about 33 MB for the former and about 100 MB for the latter. I suspect the SSD ResNet50 model may be hitting a resource limit on my laptop. If this is the cause, how can I work around it? I'm using `l_openvino_toolkit_p_2018.5.455` on Ubuntu 18.04.
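To check whether host RAM is actually the bottleneck, I could log the process memory right around the call that hangs. A rough sketch of what I have in mind, applied to the snippet above (psutil is an assumption here, it is not part of my original script):

import psutil  # assumption: installed separately with `pip install psutil`

proc = psutil.Process()  # the current Python process
log.info("Host RSS before load: %d MB", proc.memory_info().rss // (1024 * 1024))
exec_net = plugin.load(network=net)  # <== the call that hangs
log.info("Host RSS after load: %d MB", proc.memory_info().rss // (1024 * 1024))

Even if the script never reaches the second log line, the reading taken just before the load would still be a useful data point.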
The `SSD ResNet50 FPN COCO` model is from the TensorFlow Object Detection Model Zoo and is listed as supported by the OpenVINO toolkit (https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow).
thanks
Peeranat F.
Dear Peeranat, does it work with `-d CPU`? With `-d GPU`? Please report the results.
Thanks kindly,
Shubha
Hi Shubha,
Thanks for your reply. I tested with `-d CPU` but got the following error:
[ ERROR ] Following layers are not supported by the plugin for specified device CPU:
PriorBoxClustered_2, Resample_6859, PriorBoxClustered_3, PriorBoxClustered_4, Resample_, PriorBoxClustered_1, PriorBoxClustered_0, DetectionOutput
[ ERROR ] Please try to specify cpu extensions library path in demo's command line parameters using -l or --cpu_extension command line argument
So the CPU plugin doesn't support these layers for this model, at least not without the extension library. I don't have a GPU, sorry. My end goal is to run this model on a Raspberry Pi anyway.
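For reference, I believe the -l option the error message mentions would be passed like this; the extension library path below is only a guess based on my 2018 R5 install, so the exact file name and location may differ:

python test.py -m ~/workspace/pi/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/openvino_model/FP16/frozen_inference_graph.xml -i ~/workspace/object-detection/test_images/image.jpg -d CPU -l ~/intel/computer_vision_sdk/deployment_tools/inference_engine/lib/ubuntu_18.04/intel64/libcpu_extension_avx2.so  # <- extension library path is a guess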
Thanks
Peeranat F.
Dear Peeranat:
I think your issue is that you're hitting a memory limit on your computer at this line: `exec_net = plugin.load(network=net)  # <== stuck at this line`
and you want to know whether there's a workaround? OK, got it. Let me check and get back to you.
Hi Shubha,
Any update on this?
Thanks
Peeranat F.