Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

NCS2 with custom TensorFlow model

idata
Employee

Hello,

 

I used the TensorFlow object detection API to train a custom dataset. I used mobilenet_v2.

 

I tested the model on a PC and now I want to run the inference on the NCS2. I tried to follow this guide

 

https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html

 

but the Model Optimizer throws the following errors:

 

[ ERROR ] Shape is not defined for output 0 of "SecondStagePostprocessor/map/TensorArrayUnstack_1/Shape".

[ ERROR ] Cannot infer shapes or values for node "SecondStagePostprocessor/map/TensorArrayUnstack_1/Shape".

[ ERROR ] Not all output shapes were inferred or fully defined for node "SecondStagePostprocessor/map/TensorArrayUnstack_1/Shape".

For more information please refer to Model Optimizer FAQ (/deployment_tools/documentation/docs/MO_FAQ.html), question #40.

[ ERROR ]

[ ERROR ] It can happen due to bug in custom shape infer function .

[ ERROR ] Or because the node inputs have incorrect values/shapes.

[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).

[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.

[ ERROR ] Stopped shape/value propagation at "SecondStagePostprocessor/map/TensorArrayUnstack_1/Shape" node.

For more information please refer to Model Optimizer FAQ (/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

 

As this error suggests a wrong input shape, I tried to adjust --input_shape, but the error stays the same. The command I use is the following:

 

python mo_tf.py ^
    --input_model C:\KI\Object_Detection\inference_graph\frozen_inference_graph.pb ^
    --tensorflow_use_custom_operations_config C:\Intel\computer_vision_sdk\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json ^
    --tensorflow_object_detection_api_pipeline_config C:\KI\Object_Detection\inference_graph\pipeline.config ^
    --reverse_input_channels ^
    --data_type FP16 ^
    --input_shape [1,640,480,1]

 

Thank you in advance.

idata
Employee

Hi @daoedad

 

Could you provide a link to your model?

 

Make sure the information in the "ssd_v2_support.json" file matches the parameters in your pipeline.config file. The ssd_v2_support.json file was written for the pretrained models. If you use a custom data set, you will have to edit the parameters in the JSON file.
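For example, the SSD postprocessor section of ssd_v2_support.json carries NMS parameters that should mirror the post_processing block of your pipeline.config. The snippet below is a sketch based on the stock file shipped with the toolkit of that era; the exact values and the replacement id in your copy may differ, so check your local file:

```json
{
    "custom_attributes": {
        "code_type": "caffe.PriorBoxParameter.CENTER_SIZE",
        "confidence_threshold": 0.01,
        "keep_top_k": 200,
        "nms_threshold": 0.6,
        "pad_mode": "caffe.ResizeParameter.CONSTANT",
        "resize_mode": "caffe.ResizeParameter.WARP"
    },
    "id": "ObjectDetectionAPISSDPostprocessorReplacement",
    "match_kind": "scope"
}
```

Roughly, nms_threshold corresponds to iou_threshold, confidence_threshold to score_threshold, and keep_top_k to max_total_detections in the batch_non_max_suppression section of your pipeline.config.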

 

Regards,

 

Aroop
idata
Employee

Hi @Aroop_at_Intel,

 

here is a link to my model:

 

https://drive.google.com/open?id=12YHB0Bes6egGSR0ml9QjLilioCpgrbyl

 

I tried to edit the "ssd_v2_support.json", but I don't really know what to change or how to match the .json file to my model.

 

Is there a guide for editing or something?

 

Thank you for your help.

idata
Employee

Hi @daoedad,

 

Thanks for sharing your model and json file. Try to make the following change to line 57 of ssd_v2_support.json.

 

Change:

 

"Postprocessor/ToFloat"

 

To:

 

"Postprocessor/Cast"

 

After the change is made, try converting the model again.
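In context, the edit lands in the start_points list of the postprocessor replacement. The surrounding keys below are an assumption based on the stock ssd_v2_support.json of that toolkit version; the line number and exact layout in your copy may differ:

```json
"instances": {
    "end_points": [
        "detection_boxes",
        "detection_scores",
        "num_detections"
    ],
    "start_points": [
        "Postprocessor/Shape",
        "Postprocessor/Cast"
    ]
}
```

The rename is needed because newer TensorFlow versions export this node as Postprocessor/Cast (in some exports Postprocessor/Cast_1) instead of Postprocessor/ToFloat.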

 

Regards,

 

Aroop