Hi,
I'm trying to use a Faster R-CNN model pre-trained on COCO and fine-tuned for 100 steps on COCO itself with TensorFlow 1.13.1.
Inference is done on a Neural Compute Stick 2.
Model optimization seems to run fine with:
python "C:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\mo_tf.py" --input_model="H:\Code\tensorflow-models\research\object_detection\exported_models\frozen_inference_graph.pb" --tensorflow_use_custom_operations_config "C:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support.json" --tensorflow_object_detection_api_pipeline_config "H:\Code\tensorflow-models\research\object_detection\exported_models\pipeline.config" --reverse_input_channels --data_type FP16 --model_name "frcnn_resnet50_equipment_fp16" Model Optimizer arguments: Common parameters: - Path to the Input Model: H:\Code\tensorflow-models\research\object_detection\exported_models\frozen_inference_graph.pb - Path for generated IR: C:\Users\cs5807\Documents\Intel\OpenVINO\samples\build\intel64\Release\. - IR output name: frcnn_resnet50_equipment_fp16 - Log level: ERROR - Batch: Not specified, inherited from the model - Input layers: Not specified, inherited from the model - Output layers: Not specified, inherited from the model - Input shapes: Not specified, inherited from the model - Mean values: Not specified - Scale values: Not specified - Scale factor: Not specified - Precision of IR: FP16 - Enable fusing: True - Enable grouped convolutions fusing: True - Move mean values to preprocess section: False - Reverse input channels: True TensorFlow specific parameters: - Input model in text protobuf format: False - Path to model dump for TensorBoard: None - List of shared libraries with TensorFlow custom layers implementation: None - Update the configuration file with input/output node names: None - Use configuration file used to generate the model with Object Detection API: H:\Code\tensorflow-models\research\object_detection\exported_models\pipeline.config - Operations to offload: None - Patterns to offload: None - Use the config file: C:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support.json Model Optimizer version: 2019.1.1-83-g28dfbfd [ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size. Specify the "--input_shape" command line parameter to override the default shape which is equal to (600, 600). The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept. The graph output nodes "num_detections", "detection_boxes", "detection_classes", "detection_scores" have been replaced with a single layer of type "Detection Output". Refer to IR catalogue in the documentation for information about this layer. [ SUCCESS ] Generated IR model. [ SUCCESS ] XML file: C:\Users\cs5807\Documents\Intel\OpenVINO\samples\build\intel64\Release\.\frcnn_resnet50_equipment_fp16.xml [ SUCCESS ] BIN file: C:\Users\cs5807\Documents\Intel\OpenVINO\samples\build\intel64\Release\.\frcnn_resnet50_equipment_fp16.bin [ SUCCESS ] Total execution time: 36.84 seconds.
However, at inference, I get the following error:
object_detection_sample_ssd -i "C:\Users\cs5807\Pictures\IMG_2932.bmp" -m frcnn_resnet50_equipment_fp16.xml -d MYRIAD -ni 10

[ INFO ] InferenceEngine:
    API version ............ 1.6
    Build .................. 23780
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     C:\Users\cs5807\Pictures\IMG_2932.bmp
[ INFO ] Loading plugin
    API version ............ 1.6
    Build .................. 23780
    Description ....... myriadPlugin
[ INFO ] Loading network files:
    frcnn_resnet50_equipment_fp16.xml
    frcnn_resnet50_equipment_fp16.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ WARNING ] Image is resized from (640, 480) to (600, 600)
[ INFO ] Batch size is 1
[ INFO ] Start inference (10 iterations)
E: [xLink] [    0] dispatcherEventSend:934    Write failed event -1
E: [xLink] [    0] dispatcherWaitEventComplete:708    waiting is timeout, sending reset remote event
E: [xLink] [    0] dispatcherEventSend:924    Write failed header -1 | event XLINK_RESET_REQ
E: [xLink] [    0] eventSchedulerRun:584    Event sending failed
F: [xLink] [    0] dispatcherEventReceive:355
E: [watchdog] [    0] sendPingMessage:132    Duplicate id detected. Failed send ping message: X_LINK_TIMEOUT
E: [xLink] [    0] XLinkReadDataWithTimeOut:1343    Event data is invalid
E: [ncAPI] [    0] ncFifoReadElem:3353    Packet reading is failed.
E: [watchdog] [    0] sendPingMessage:132    Failed send ping message: X_LINK_ERROR
E: [watchdog] [    0] watchdog_routine:327    [0000020DEC652170] device, not respond, removing from watchdog
W: [xLink] [    0] isAvailableScheduler:441    Scheduler has already been reset or cleaned
W: [xLink] [    0] eventSchedulerRun:610    Failed to reset
E: [ncAPI] [    0] ncFifoDestroy:3176    Failed to write to fifo before deleting it!
E: [ncAPI] [    0] ncDeviceClose:1617    Device didn't appear after reboot
[ ERROR ] Failed to read output from FIFO: NC_ERROR
Any idea what is going wrong?
Dear Miralles, Francois,
Instead of object_detection_sample_ssd, can you try object_detection_demo?
Let me know what happens,
Thanks,
Shubha
Running the command:
object_detection_demo -i "C:\Users\cs5807\Pictures\IMG_2932.bmp" -m "C:\Users\cs5807\Documents\Intel\OpenVINO\samples\build\intel64\Release\frcnn_resnet50_equipment_fp16.xml" -d MYRIAD
log:
[ INFO ] InferenceEngine:
    API version ............ 1.6
    Build .................. 23780
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     C:\Users\cs5807\Pictures\IMG_2932.bmp
[ INFO ] Loading plugin
    API version ............ 1.6
    Build .................. 23780
    Description ....... myriadPlugin
[ INFO ] Loading network files:
    C:\Users\cs5807\Documents\Intel\OpenVINO\samples\build\intel64\Release\frcnn_resnet50_equipment_fp16.xml
    C:\Users\cs5807\Documents\Intel\OpenVINO\samples\build\intel64\Release\frcnn_resnet50_equipment_fp16.bin
but the demo crashes at line:
int inputWidth = network.getInputsInfo().begin()->second->getTensorDesc().getDims()[3];
because of a "vector subscript out of range" error: getDims() returns only 2 dimensions for that input.
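For what it's worth, a Faster R-CNN IR produced from the TF Object Detection API typically exposes two inputs (the 4-D image tensor plus a 2-D "image_info" blob), so getInputsInfo().begin() may well be landing on the 2-D one. A minimal sketch that selects the 4-D input explicitly, assuming the 2019 R1 Inference Engine C++ API and an already-loaded CNNNetwork named network (the helper name is hypothetical):

#include <string>
#include <inference_engine.hpp>

// Sketch only: pick the 4-D NCHW image input instead of relying on begin().
static std::string findImageInput(const InferenceEngine::CNNNetwork &network,
                                  size_t &inputWidth, size_t &inputHeight) {
    std::string imageInputName;
    for (const auto &input : network.getInputsInfo()) {
        const InferenceEngine::SizeVector dims =
            input.second->getTensorDesc().getDims();
        if (dims.size() == 4) {          // 4-D image tensor: [N, C, H, W]
            imageInputName = input.first;
            inputHeight = dims[2];
            inputWidth  = dims[3];
        }
        // A 2-D input here would be the Faster R-CNN "image_info" blob,
        // which has to be filled separately before inference.
    }
    return imageInputName;
}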
Dear Miralles, Francois,
I'm terribly sorry for the inconvenience, but these bugs should be fixed in the upcoming OpenVINO 2019 R2 release. It should be out very soon, though I am not at liberty to tell you the exact date.
Please be patient with us!
Thanks,
Shubha
OK, I understand. This is all very new and cutting edge, I must say ;-)
Dear Miralles, Francois,
You are right! AI is constantly changing, and for any company in this field, keeping abreast of the latest research is a never-ending challenge. Thanks for your flexibility. I promise R2 is "just around the corner"!
Shubha