I am transferring a detection model to the NCS 2; the .bin file is about 100 MB and the input image size is 640x640. It works fine on my Core i7 7th-gen CPU (FP32), but when I send it to the NCS 2 (FP16), it gives me the following error:
E: [xLink] [ 941796] dispatcherEventSend:955 Write failed event -4
E: [xLink] [ 941796] handleIncomingEvent:300 handleIncomingEvent() Read failed -4
E: [xLink] [ 941797] dispatcherEventReceive:368 dispatcherEventReceive() Read failed -4 | event 0x7f260b7fdee0 XLINK_WRITE_REQ
E: [xLink] [ 941797] eventReader:230 eventReader stopped
E: [xLink] [ 941797] XLinkReadDataWithTimeOut:1377 Event data is invalid
E: [ncAPI] [ 941797] ncGraphAllocate:1784 Can't read input tensor descriptors of the graph, rc: X_LINK_ERROR
E: [watchdog] [ 941797] sendPingMessage:132 Failed send ping message: X_LINK_ERROR
Traceback (most recent call last):
File "object_detection_demo_ssd_async.py", line 194, in <module>
sys.exit(main() or 0)
File "object_detection_demo_ssd_async.py", line 81, in main
exec_net = plugin.load(network=net, num_requests=2)
File "ie_api.pyx", line 395, in openvino.inference_engine.ie_api.IEPlugin.load
File "ie_api.pyx", line 406, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: Failed to allocate graph: NC_ERROR
I suspect it runs out of memory during graph allocation (hence the ncGraphAllocate error), since the device works fine with one of the provided SSD detection models, which is smaller.
In addition, I couldn't find any parameters related to the NCS 2's memory in the official documents. Some people (https://ncsforum.movidius.com/discussion/1351/how-much-memory-does-the-movidius-neural-compute-stick...) say it has 320 MB, but apparently I cannot run a model that is 100 MB large. Or is it because this is a detection problem, so the input image itself occupies a lot of memory as well?
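For scale, here is a quick back-of-the-envelope sketch (the .bin path is a placeholder):

import os

weights_mb = os.path.getsize("frozen_inference_graph.bin") / 2**20
# One 1x3x640x640 input blob in FP16 (2 bytes per element)
input_mb = 1 * 3 * 640 * 640 * 2 / 2**20
print("weights: {:.1f} MB, input blob: {:.2f} MB".format(weights_mb, input_mb))

The input blob itself is only about 2.3 MB, so if memory is the problem, it would have to be the weights plus the intermediate activations rather than the image.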
The .xml, .bin, and .mapping files I used can be found here (data_type = FP16, the plugin is NCS 2); feel free to try:
It is an SSD-based detector.
Thanks for reaching out! Could you please provide some additional information?
The model I converted can be downloaded through this link.
My command for converting the model is:
sudo python3 ./model_optimizer/mo_tf.py \
    --input_model=<Dir>/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config ./model_optimizer/extensions/front/tf/ssd_v2_support.json \
    --tensorflow_object_detection_api_pipeline_config <Dir>/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/pipeline.config \
    --reverse_input_channels \
    --data_type FP16
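As a quick sanity check (a sketch assuming the standard IR layout; the path is a placeholder), one can confirm the generated IR really is FP16, which the MYRIAD plugin requires:

import xml.etree.ElementTree as ET

# Placeholder path to the generated IR
root = ET.parse("frozen_inference_graph.xml").getroot()
precisions = {layer.get("precision") for layer in root.iter("layer")}
print(precisions)  # expect {'FP16'} for an NCS 2-ready model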
And I test it using a modified object_detection_demo_ssd_async.py on a single image:
python3 object_detection_demo_ssd_async.py -m <Dir>/ssd_v2_for_retina_FP16/frozen_inference_graph.xml -i <dir>/car_1.bmp -d MYRIAD
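The modification is essentially the following (a rough sketch of the single-image path, not the exact demo code; file names are placeholders):

import cv2
from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model="frozen_inference_graph.xml",
                weights="frozen_inference_graph.bin")
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
n, c, h, w = net.inputs[input_blob].shape  # 1, 3, 640, 640 for this model

plugin = IEPlugin(device="MYRIAD")
exec_net = plugin.load(network=net, num_requests=2)  # fails here on the NCS 2

image = cv2.resize(cv2.imread("car_1.bmp"), (w, h))
image = image.transpose((2, 0, 1))  # HWC -> CHW, as the IR expects
res = exec_net.infer(inputs={input_blob: image})
detections = res[out_blob]  # SSD-style output of shape [1, 1, N, 7]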
The network you are trying to use is not currently supported by the Myriad plugin. You can find a list of the supported networks here: https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_MYRIAD.html.
The OpenVINO™ 2018 R5 release notes say that you support the RetinaNet I mentioned above.
Or is this RetinaNet only supported by the CPU?
Will RetinaNet be supported in a future release?