
How to convert a trained model based on TensorFlow ssd_mobilenet_v1

I trained a model on my own data with ssd_mobilenet_v1_coco and tried to convert it using this command:

./mo_tf.py --input_model=/tmp/tr3/frozen_inference_graph.pb \
--input=1:Preprocessor/mul \
--input_shape="(1,300,300,3)" \
--tensorflow_use_custom_operations_config extensions/front/tf/ssd_support.json \
--output="detection_boxes,detection_scores,num_detections"

Then I got this error:

Model Optimizer arguments
Batch: 1
Precision of IR: FP32
Enable fusing: True
Enable gfusing: True
Names of input layers: 1:Preprocessor/mul
Path to the Input Model: /tmp/tr3/frozen_inference_graph.pb
Input shapes: (1,300,300,3)
Log level: ERROR
Mean values: ()
IR output name: inherited from the model
Names of output layers: detection_boxes,detection_scores,num_detections
Path for generated IR: /opt/intel/computer_vision_sdk_2018.1.249/deployment_tools/model_optimizer
Reverse input channels: False
Scale factor: None
Scale values: ()
Version: 0.3.75.d6bae621
Input model in text protobuf format: False
Offload unsupported operations: False
Path to model dump for TensorBoard: None
Update the configuration file with input/output node names: None
Operations to offload: None
Patterns to offload: None
Use the config file: extensions/front/tf/ssd_support.json
[ ERROR ]  --input parameter was provided. Other inputs are needed for output computation. Provide more inputs or choose another place to cut the net. For more information please refer to Model Optimizer FAQ, question #27.
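(Per FAQ #27, this error means that cutting the graph at `1:Preprocessor/mul` leaves other tensors that the requested outputs still depend on without any values. One untested thing to try, sketched from the same paths and flags as the command above, is to drop the manual `--input`/`--input_shape` cut and let `ssd_support.json` handle the Preprocessor subgraph on its own:)

```shell
# Untested sketch: same model, config, and outputs as the original
# command, but without the manual --input / --input_shape graph cut,
# so Model Optimizer decides the cut points itself.
./mo_tf.py --input_model=/tmp/tr3/frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support.json \
    --output="detection_boxes,detection_scores,num_detections"
```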

What should I do?

Thanks,

Xiaoqi



Hi Xiaoqi,

I have exactly the same problem. Have you managed to solve this?

Thanks,
Martin Peniak


I have the same problem. Have you solved it?
