
Error when trying to optimize vgg_16 model



I downloaded a VGG-16 model as a .ckpt file and used the freeze_graph tool to convert it to a .pb file. Then I ran "python mo_tf.py --input_model frozen_model_vgg_16.pb --output_dir \opmodel --mean_values=[103.939,116.779,123.68]" to optimize it and generate the .xml and .bin files, but the program stops with this error:

[ ERROR ]  Elementwise operation vgg_16/dropout6/dropout/mul has inputs of different data types: float32 and int32
[ ERROR ]  Elementwise operation vgg_16/dropout7/dropout/mul has inputs of different data types: float32 and int32
[ ERROR ]  List of operations that cannot be converted to Inference Engine IR:
[ ERROR ]      RandomUniform (2)
[ ERROR ]          vgg_16/dropout6/dropout/random_uniform/RandomUniform
[ ERROR ]          vgg_16/dropout7/dropout/random_uniform/RandomUniform
[ ERROR ]      Floor (2)
[ ERROR ]          vgg_16/dropout6/dropout/Floor
[ ERROR ]          vgg_16/dropout7/dropout/Floor
[ ERROR ]  Part of the nodes was not converted to IR. Stopped.


I don't know why. Are my parameters wrong, or did I make a mistake when freezing the model? I would appreciate it if someone could provide assistance.


Hello Q, Y.

Please note that only the non-frozen version of the VGG-16 model is officially supported by the OpenVINO toolkit.
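For background on why those particular nodes appear in your log: TensorFlow's dropout layer is typically built from exactly the ops listed (RandomUniform, an add of the keep probability, Floor, and Mul), and they only matter at training time, so the Model Optimizer does not support them. A rough plain-Python sketch of that inverted-dropout construction (illustrative names, not TensorFlow's actual implementation):

```python
import math
import random

def dropout_mask(n, keep_prob):
    # RandomUniform: n samples in [0, 1)
    uniform = [random.random() for _ in range(n)]
    # Floor(uniform + keep_prob) is 1.0 with probability keep_prob, else 0.0
    return [float(math.floor(u + keep_prob)) for u in uniform]

def apply_dropout(x, keep_prob):
    mask = dropout_mask(len(x), keep_prob)
    # Mul: zero out dropped units; scale survivors by 1/keep_prob
    return [v * m / keep_prob for v, m in zip(x, mask)]
```

Since dropout is an identity at inference time, a graph prepared for inference (as in the steps below) simply does not contain these nodes.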

I've tested this on my end, and it works fine. Steps to follow:

1) Download the non-frozen VGG-16 model from here

2) Convert it into .pb format per this example

The command should be slightly changed as follows:

python3 tf_models/research/slim/export_inference_graph.py --labels_offset 1 \
    --model_name vgg_16 \
    --output_file vgg_16_inference_graph.pb

3) Follow the remaining steps from the guide above; once you reach the Model Optimizer command, change it as follows:

<MODEL_OPTIMIZER_INSTALL_DIR>/mo_tf.py --input_model ./vgg_16_inference_graph.pb --input_checkpoint ./vgg_16.ckpt -b 1 --mean_value [103.94,116.78,123.68] --scale 1

As you can see, you need to provide the additional arguments -b and --scale here.
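For reference, the mean/scale arguments bake per-channel input preprocessing into the generated IR, roughly (pixel - mean) / scale per channel. A minimal sketch of that arithmetic (assuming the usual VGG convention where these means correspond to BGR channel order):

```python
# Channel means from the command above; SCALE is the --scale argument.
# Channel order (BGR) is an assumption based on the common VGG convention.
MEAN_VALUES = (103.94, 116.78, 123.68)
SCALE = 1.0

def preprocess_pixel(bgr):
    """Per-channel preprocessing applied at inference: (value - mean) / scale."""
    return tuple((c - m) / SCALE for c, m in zip(bgr, MEAN_VALUES))
```

With --scale 1, this is just mean subtraction, which matches how the original Caffe VGG models were trained.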

If you face a missing pywrap_tensorflow error during mo_tf.py execution, try downgrading TensorFlow to version 1.5 with

sudo pip3 install tensorflow==1.5

Hope this helps.