Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Error when trying to optimize vgg_16 model

Q__Y
Beginner

Hi,

 

I downloaded a VGG-16 model as a .ckpt file and used the freeze_graph tool to convert it to a .pb file. Then I ran "python mo_tf.py --input_model frozen_model_vgg_16.pb --output_dir \opmodel --mean_values=[103.939,116.779,123.68]" to optimize it and generate the .xml and .bin files, but the program stopped with this error:

[ ERROR ]  Elementwise operation vgg_16/dropout6/dropout/mul has inputs of different data types: float32 and int32
[ ERROR ]  Elementwise operation vgg_16/dropout7/dropout/mul has inputs of different data types: float32 and int32
[ ERROR ]  List of operations that cannot be converted to Inference Engine IR:
[ ERROR ]      RandomUniform (2)
[ ERROR ]          vgg_16/dropout6/dropout/random_uniform/RandomUniform
[ ERROR ]          vgg_16/dropout7/dropout/random_uniform/RandomUniform
[ ERROR ]      Floor (2)
[ ERROR ]          vgg_16/dropout6/dropout/Floor
[ ERROR ]          vgg_16/dropout7/dropout/Floor
[ ERROR ]  Part of the nodes was not converted to IR. Stopped.
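
For reference, the freeze_graph command I ran was roughly like the one below (the graph file name and output node name here are approximate, not exact):

# output node name is a guess from the Slim vgg_16 definition; yours may differ
python -m tensorflow.python.tools.freeze_graph \
    --input_graph vgg_16_graph.pb \
    --input_binary \
    --input_checkpoint vgg_16.ckpt \
    --output_node_names vgg_16/fc8/squeezed \
    --output_graph frozen_model_vgg_16.pb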

 

I don't know why this happens. Did I pass the wrong parameters, or did I make a mistake when freezing the model? I would appreciate it if someone could help.

Max_L_Intel
Moderator

Hello Q__Y.

Please note that only the non-frozen version of the VGG-16 model is officially supported by the OpenVINO toolkit.

I've tested this on my end, and it works fine. Steps to follow:

1) Download the non-frozen VGG-16 model from here: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#Convert_From_TF
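
For example, the checkpoint can be fetched and unpacked on the command line (the tarball name below is the one listed in the TF-Slim model zoo; please double-check it against the page above):

wget http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz
tar -xzf vgg_16_2016_08_28.tar.gz    # should extract vgg_16.ckpt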

2) Convert it into .pb format per this example: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Slim_Library_Models.html

The export_inference_graph.py command should be slightly changed as follows:

python3 tf_models/research/slim/export_inference_graph.py --labels_offset 1 \
    --model_name vgg_16 \
    --output_file vgg_16_inference_graph.pb
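
If you do not have the Slim scripts locally yet, they live in the tensorflow/models repository; cloning it under the name tf_models matches the path used in the command above:

git clone https://github.com/tensorflow/models.git tf_models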

3) Follow the further steps from the guide above, and once you reach the mo_tf.py command, change it as follows:

<MODEL_OPTIMIZER_INSTALL_DIR>/mo_tf.py --input_model ./vgg_16_inference_graph.pb --input_checkpoint ./vgg_16.ckpt -b 1 --mean_values [103.94,116.78,123.68] --scale 1

As you can see, here you need to provide the additional arguments -b (batch size) and --scale.
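
Once conversion succeeds, a quick sanity check is to read the generated IR back with the Inference Engine Python API (a minimal sketch; the .xml/.bin file names follow from the input model name):

python3 -c "from openvino.inference_engine import IECore; IECore().read_network(model='vgg_16_inference_graph.xml', weights='vgg_16_inference_graph.bin'); print('IR loaded OK')"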

If you face a missing pywrap_tensorflow error during mo_tf.py command execution, try downgrading TensorFlow to version 1.5 with:

sudo pip3 install tensorflow==1.5

Hope this helps.
