I used the TensorFlow Object Detection API and fine-tuned the model on my own dataset. After converting the model to an IR graph and quantizing to FP16, I noticed a drop in accuracy when running the XML and BIN files on MYRIAD compared to CPU.
sudo python3 mo_tf.py \
    --input_model $BASE_FOLDER$INFERENCE_FOLDER"/frozen_inference_graph.pb" \
    --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.14.json \
    --tensorflow_object_detection_api_pipeline_config $CONFIG_FILE \
    --data_type FP16 \
    --output_dir $OUTPUT_DIR \
    --reverse_input_channels > $BASE_FOLDER'/debug.log'
This is the command I used to convert the model to IR.
So could it be that the CPU runs at higher precision, and hence detects better than the MYRIAD, which only supports FP16?
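The precision gap in question can be sketched with NumPy alone (the values below are illustrative, not taken from the model above): FP16 keeps only a 10-bit mantissa and saturates near 65504, so both rounding and overflow behave very differently from FP32.

```python
# Sketch: the gap between FP32 and FP16 precision, which can explain
# part of a CPU-vs-MYRIAD accuracy difference. Illustrative values only.
import numpy as np

x = np.float32(0.1234567)   # a typical small activation value
x_fp16 = np.float16(x)      # what an FP16-only device effectively computes with

# FP16 has ~3 decimal digits of precision (10-bit mantissa), so the
# rounding error is several orders of magnitude larger than FP32's.
err = abs(float(x) - float(x_fp16))
print(f"FP32 value: {float(x):.7f}")
print(f"FP16 value: {float(x_fp16):.7f}")
print(f"rounding error: {err:.2e}")

# FP16 also overflows much earlier: max finite value is ~65504,
# versus ~3.4e38 for FP32.
print(np.float16(70000.0))  # overflows to inf
```

Small per-layer rounding like this normally costs only a point or two of mAP, though, so a 40-point drop usually points at a broken layer rather than precision alone.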
Or could it be that the MultiScaleAnchorGenerator had some trouble? I am not able to understand it, as the output should have been the same, be it CPU or MYRIAD.
Please reply at the earliest. Thanks in advance.
Hi Pansari, Akshay,
Thanks for reaching out. You mentioned there is a drop in accuracy; do you know what the percentage looks like? If possible, it would be great if you could share your custom model for us to try converting and replicating the issue. Please also include a test image/sample to compare in both plugins.
Regards,
Luis
Hey Luis,
So, the model is the RetinaNet available in the TensorFlow Object Detection model zoo as 'ssd_resnet_50_fpn_coco'. The link is https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md .
The accuracy (mAP) on CPU comes out to be 84, and on NCS2 approximately 43. Upon further analysis, I found that the problem lies in the FPN. When I ran a different model without the FPN, the accuracy was good, but with the FPN included it decreased.
Please find the BIN and XML files attached.
Upon further analysis, I found that the expand-dims operation during upsampling in the FPN is not working properly. This has been a problem for the NCS. I am not sure if that is the actual cause, so please look into it. The CPU output is as expected.
You can compare the output values: for one image they should be almost the same, and for the other they will be different.
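A minimal way to quantify that per-image divergence, assuming the application dumps each plugin's detection tensor to disk (the arrays below are synthetic stand-ins, not real dumps; file names like `cpu_out.npy` would be whatever your application saves):

```python
# Sketch: comparing detection tensors dumped from the two plugins.
# The tensors here are synthetic stand-ins for the real dumps.
import numpy as np

def compare_outputs(cpu_out: np.ndarray, myriad_out: np.ndarray,
                    atol: float = 1e-2) -> dict:
    """Report how far two detection tensors diverge."""
    diff = np.abs(cpu_out.astype(np.float32) - myriad_out.astype(np.float32))
    return {
        "max_abs_diff": float(diff.max()),
        "mean_abs_diff": float(diff.mean()),
        "fraction_close": float(np.mean(diff <= atol)),
    }

# Synthetic example: one SSD-style detection row
# [image_id, label, confidence, xmin, ymin, xmax, ymax]
cpu = np.array([[0.0, 1.0, 0.92, 0.10, 0.20, 0.55, 0.70]], dtype=np.float32)
ncs = cpu + np.float32(0.004)  # small FP16-style perturbation
report = compare_outputs(cpu, ncs)
print(report)
```

A healthy FP16 port should show `fraction_close` near 1.0; large `max_abs_diff` on specific images is a sign of a miscomputed layer rather than uniform quantization noise.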
Hi Akshay,
Which version of the OpenVINO toolkit are you using? Do you see the same accuracy loss when using the pre-trained model from the TensorFlow Object Detection model zoo?
Regards,
Jesus
Hey Jesus,
I am using the openvino_2019.3.334 version on Ubuntu 16.04.
No; with other models in which an FPN was not present, the drop in accuracy is not much. I also saw a drop in accuracy with SSD MobileNet V1 FPN, but SSD without the FPN was working fine. I would like to add that I am using the Pascal VOC metric, and that I convert the IR to FP16 for NCS2 compatibility. The same IR graph works fine on CPU.
http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz
Also, I tried converting a Faster R-CNN model but could not convert it to an IR graph. Could you tell me how to convert Faster R-CNN to IR? My input image size is 640x480 for all images.
Hi Akshay,
I will have to take a look at the other FPN models you linked.
Also, I tried converting a Faster R-CNN model but could not convert it to an IR graph. Could you tell me how to convert Faster R-CNN to IR? My input image size is 640x480 for all images.
I converted the faster_rcnn_inception_v2_coco with the following command:
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
    --input_model frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json \
    --tensorflow_object_detection_api_pipeline_config pipeline.config \
    --reverse_input_channels \
    --batch 1
Regards,
Jesus
We used modified XML files to check the output of each layer. The outputs of all layers that had more than 4 dimensions were produced by the CPU but not by the MYRIAD.
The error was:
"/home/jenkins/workspace/OpenCV/OpenVINO/build/opencv/modules/dnn/src/op_inf_engine.cpp:704: error: (-215:Assertion failed) Failed to initialize Inference Engine backend: AssertionFailed: newDims[newPerm] == 1 in function 'initPlugin'"
I will attach the files in case you need to run the code to see it.
I am not sure if this is the main error, but on CPU it was working perfectly while on NCS2 it gave this error.
This is the log of the error:
"""
/frozen_inference_graph126.xml
ERROR OpenCV(4.1.2-openvino) /home/jenkins/workspace/OpenCV/OpenVINO/build/opencv/modules/dnn/src/op_inf_engine.cpp:704: error: (-215:Assertion failed) Failed to initialize Inference Engine backend: data [FeatureExtractor/resnet_v1_50/fpn/top_down/nearest_neighbor_upsampling/stack/ExpandDims_/Dims/Output_0/Data__const] doesn't exist in function 'initPlugin'
/frozen_inference_graph127.xml
CPU Done!
ERROR OpenCV(4.1.2-openvino) /home/jenkins/workspace/OpenCV/OpenVINO/build/opencv/modules/dnn/src/op_inf_engine.cpp:704: error: (-215:Assertion failed) Failed to initialize Inference Engine backend: AssertionFailed: newDims[newPerm] == 1 in function 'initPlugin'
"""
Hi Akshay,
Which model and command did you use to convert the frozen_inference_graph126.xml and frozen_inference_graph127.xml? Also, are you using our sample/demo applications or are you writing your own?
By the way, openvino_2019.3.334 is not the latest version. Could you try with the latest release, 2019.3.376?
Regards,
Jesus
I am attaching the code. I am using my own application to detect the dogs, but the code has been taken from the demo applications only.
I will update you after trying the latest version. The model was attached in previous messages as "intel_forum_upload.zip", which contains the BIN and XML files.
Hi, Akshay Pansari,
Were you able to run inference on the SSD FPN models? Because in this post
https://software.intel.com/en-us/forums/intel-distribution-of-openvino-toolkit/topic/815924
it is said that there is a bug for SSD FPN models on the NCS2.
I'm getting this error when I try to infer:
E: [xLink] [ 990513] [EventRead00Thr] eventReader:218 eventReader thread stopped (err -1)
E: [xLink] [ 990513] [Scheduler00Thr] eventSchedulerRun:576 Dispatcher received NULL event!
E: [watchdog] [ 990515] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [global] [ 990515] [python3] XLinkReadDataWithTimeOut:1494 Event data is invalid
E: [ncAPI] [ 990516] [python3] ncGraphAllocate:1947 Can't read output tensor descriptors of the graph, rc: X_LINK_ERROR
Are you using NCS 1 or 2?
Thanks,
Manu
