I am running an SSD MobileNet V2 model, trained on a custom dataset using the TensorFlow Object Detection API.
I have attached the output for the same image with and without the NCS2. The model's accuracy appears to degrade after the conversion, and I am not sure what the problem is.
The OpenVINO version I am using is 2019 R3.
Below is the command I am using to convert the TensorFlow model to IR format.
sudo python3 mo.py --framework tf --input_model Catering_v2_3C.pb --batch 12 --reverse_input_channels --tensorflow_object_detection_api_pipeline_config pipeline.config --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support_api_v1.14.json --output=detection_classes,detection_scores,detection_boxes,num_detections --data_type FP32
I am completely open to any suggestions.
Kindly help me get past this hurdle.
Thank you and regards.
Thanks for reaching out.
When developing for the Intel® Neural Compute Stick 2 (Intel® NCS2), you want to make sure that you use a model with FP16 precision, since the NCS2 runs inference natively in FP16. Hence, I would suggest you try the --data_type FP16 parameter in your Model Optimizer conversion command.
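As a sketch, your conversion command with FP16 output would look like this (the model, pipeline, and JSON paths are carried over from your original command; adjust them to your setup):

```shell
# Convert the frozen TensorFlow graph to IR with FP16 weights for the NCS2.
# Paths and flags below are taken from the original command.
sudo python3 mo.py \
    --framework tf \
    --input_model Catering_v2_3C.pb \
    --batch 12 \
    --reverse_input_channels \
    --tensorflow_object_detection_api_pipeline_config pipeline.config \
    --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support_api_v1.14.json \
    --output detection_classes,detection_scores,detection_boxes,num_detections \
    --data_type FP16
```

Note that even if you feed the MYRIAD plugin an FP32 IR, it is converted internally, so generating FP16 IR directly avoids an unnecessary precision mismatch in your comparison.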
Also, I would encourage you to try out Intel® Distribution of OpenVINO™ Toolkit version 2020.4, which is a vastly improved version with the latest features and leading performance.