Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Model Optimizer: .pb created using transfer learning with ResNet50: TensorFlow cannot read the .pb: incorrect TF model file

CBell1
New Contributor II

Hello,

I generated a .pb model with Keras and TensorFlow (version 1.14.0-rc1), using the transfer learning method with ResNet50.

Below is the command used to generate the .pb model:

#saved_model_path = tf.contrib.saved_model.save_keras_model(model, "checkpoint/Flowers_saved_models")

Using a different virtual machine with OpenVINO R1.1 (the latest version, which includes TF v1.13.1), I ran the following OpenVINO Model Optimizer command to convert the mentioned .pb model to OpenVINO IR (.xml & .bin):

#sudo python3 mo_tf.py --input_model /home/datavolume_ovc2/Flowers_saved_model.pb --model_name Flowers_RSN50_FP16 --data_type FP16

This Model Optimizer run generated the following error:

[ FRAMEWORK ERROR ]  Cannot load input model: TensorFlow cannot read the model file: "/home/datavolume_ovc2/Flowers_saved_model.pb" is incorrect TensorFlow model file.

The complete error message is below.

Where is the problem?

Is it the mismatch between the TF version used for training and the TF version used with OpenVINO?

Do you have suggestions to solve this kind of problem?

Thank you

 

Complete error message from the OpenVINO Model Optimizer:

root@05e96c575d0a:/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer# python3 mo_tf.py --input_model /home/datavolume_ovc2/Flowers_saved_model.pb --model_name Flowers_RSN50_FP16 --data_type FP16
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     /home/datavolume_ovc2/Flowers_saved_model.pb
    - Path for generated IR:     /opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/.
    - IR output name:     Flowers_RSN50_FP16
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP16
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     None
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     None
Model Optimizer version:     2019.1.1-83-g28dfbfd
[ FRAMEWORK ERROR ]  Cannot load input model: TensorFlow cannot read the model file: "/home/datavolume_ovc2/Flowers_saved_model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph

Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #43.
root@05e96c575d0a:/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer#

 

Shubha_R_Intel
Employee

Dear Cosma,

Please downgrade to Tensorflow 1.12. Model Optimizer does not yet support Tensorflow 1.13.

Thanks,

Shubha

CBell1
New Contributor II

Dear Shubha,

I downgraded TF to 1.12, but the error is still the same.

Do I also have to downgrade TF to 1.12 on the machine used to create the model and run the training?

Additional open questions:

A - What do you suggest for saving a TF model?

1 - Saving the model directly (like I did, see the command below), which generates a .pb?

#saved_model_path = tf.contrib.saved_model.save_keras_model(model, "checkpoint/Flowers_saved_models")

2 - Using the checkpoint method?

3 - Or saving the model as an .h5?

B - Which tool does Intel suggest for converting a TF .h5 file into a .pb usable with the OpenVINO Model Optimizer?

Thank you

Regards

Cosma

 

 

 

Shubha_R_Intel
Employee

Dear Cosma,

Why not choose one of the models from the Tensorflow MO supported list?

ResidualNet-50 v1 is one of the supported models.

However, I think your error can be solved by adding the --input_model_is_text switch to your MO command. See the error given:

Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message.

 

Thanks,

Shubha

CBell1
New Contributor II

Dear Shubha,

I ran the following command, but the error is still the same.

#sudo python3 mo_tf.py --input_model /home/cosma/Downloads/CodeLab/Flowers_saved_model.pb --input_model_is_text --output_dir /home/cosma/Downloads/CodeLab/  --model_name Flowers_RN50_FP16  --data_type FP16

Error:

"[ FRAMEWORK ERROR ]  Cannot load input model: TensorFlow cannot read the model file: "/home/cosma/Downloads/CodeLab/Flowers_saved_model.pb" is incorrect TensorFlow model file.

Make sure that --input_model_is_text is provided for a model in text format."

 

I trained on a dedicated dataset using a transfer learning model with ResNet50 and a customized final hidden layer.

Using a model from the MO supported list, how can I train it on a dedicated dataset?

Thank you again for your support!

 

 

Shubha_R_Intel
Employee

Dear Cosma,

Since ResNet50 is an image classification model (as opposed to Object Detection), I think this Tensorflow document will help you. And sure, you can retrain one of those supported models; Model Optimizer doesn't care. Just please make sure that if you do any pre-processing on your images, you let Model Optimizer know about it in your mo command, otherwise your accuracy will be off. Run mo_tf.py --help and you will see these pre-processing command-line switches. The most important of these is --input_shape; make sure you are using the same image size as you used during training.
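For illustration only, a command using those pre-processing switches might look like the line below; the shape, mean values and channel order are just the usual ResNet50/ImageNet defaults, so treat them as assumptions and substitute whatever you actually used during training:

python3 mo_tf.py --input_model frozen_model.pb --input_shape [1,224,224,3] --mean_values [123.68,116.78,103.94] --reverse_input_channels --data_type FP16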

Thanks,

Shubha

CBell1
New Contributor II

Dear Shubha,

Questions:

1 - Do you have some examples of image pre-processing command lines and the related MO --input_shape?

2 - Is it necessary to use TF v1.12 also on the machine used to train the data and save the model?

Thank you

Cosma

 

Shubha_R_Intel
Employee

Dearest Cosma,

For question 1) unfortunately we don't have samples or examples of pre-processing at the MO command-line level. There is a distinct reason for this: OpenVINO is not involved with training. In fact, it really doesn't care how your model was trained. But the model builder should be well aware of the pre-processing details, and (s)he should note those carefully before using Model Optimizer (OpenVINO). I would say that passing wrong pre-processing switches to Model Optimizer is the single most common reason for accuracy loss at inference.

For question 2) This is a tricky question. The answer is I really don't know, but at least for starters, wouldn't it be better to just stick with Tensorflow 1.12 even to train the model and save the data? Tensorflow 1.13 should be officially supported by Model Optimizer soon, but in the meantime, going down to a slightly lower version shouldn't inconvenience you much.

Hope it helps.

Thanks for your patience!

Shubha

 

Shubha_R_Intel
Employee

Dearest Cosma,

Since Keras is a "wrapper" around Tensorflow, before Model Optimizer can support it the Keras model must be converted to a TensorFlow frozen .pb. It's very easy to perform this conversion. In fact, I answered a post before on how to perform a Keras to Tensorflow conversion. The conversion script you used must be incorrect, since you're getting the error "is incorrect TensorFlow model file.". Please use the script I posted and try again.
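For reference, a minimal TF 1.x Keras-to-frozen-pb sketch along those lines might look like the code below. This is not the exact script from the linked post; the file names and the assumption of a standard tf.keras .h5 model are illustrative only:

import tensorflow as tf
from tensorflow.python.framework import graph_util
from tensorflow.python.keras import backend as K

K.set_learning_phase(0)                                  # inference mode, drops training-only ops
model = tf.keras.models.load_model("model.h5")           # illustrative path to the Keras model

# Freeze all variables into constants and write a binary frozen graph.
sess = K.get_session()
output_names = [out.op.name for out in model.outputs]
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names)
tf.train.write_graph(frozen, ".", "frozen_model.pb", as_text=False)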

Thanks!

Shubha

Shubha_R_Intel
Employee

Dear Cosma,

I have uploaded my script. Call it as python keras_to_tf.py. Run it with --h and you'll get:

optional arguments:
  -h, --help            show this help message and exit
  --input_model INPUT_MODEL, -m INPUT_MODEL
                        Path to Keras model.
  --num_outputs NUM_OUTPUTS, -no NUM_OUTPUTS
                        Number of outputs. 1 by default.

Hope it helps. I've used this script often to convert Keras models to a frozen TensorFlow .pb.
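For example, an invocation would look like this (the .h5 filename here is only a placeholder):

python keras_to_tf.py --input_model Flowers_saved_model.h5 --num_outputs 1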

Thanks,

Shubha

Shubha_R_Intel
Employee

Dear Cosma, here's another Keras to TF converter you can try, which I got from this dldt GitHub post. Rather than dump one output, this version dumps all outputs. Can you kindly try it? If it doesn't work for you, then I'm afraid something is wrong with your Keras model.

Please try it and report back on this forum,

Thanks,

Shubha

 

CBell1
New Contributor II

Many Thanks Shubha!

Anyway, I cannot convert the .h5.

Now I'm sure the problem is in the code used to build the model and train it.

Attached is the code I used.

Can the Keras optimizer used when compiling the model be the problem?

Please, let me know your opinion.

Thank you

Best Regards

CBell1
New Contributor II

Dear Shubha,

no luck; the problem is my code. I suppose the Keras optimizer creates an issue, but I don't know why.

Below is the model, built with the transfer learning method:

In []:

pretrained_model = tf.keras.applications.ResNet50(weights='imagenet', include_top=False, input_shape=[image_height, image_width, 3])

model = tf.keras.Sequential([
    pretrained_model,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(38, activation='softmax')
])

model.compile(
    loss = tf.keras.losses.categorical_crossentropy,
    optimizer=tf.keras.optimizers.Adam(),
    metrics=['accuracy']
)

model.summary()

And below is how I save the .h5:

In []:

model.save('cnn_disesaes_Tran_Full_ResNet50.h5')

When I save the model using the following command, there is a warning about the Keras optimizer:

In []:

saved_model_path = tf.contrib.saved_model.save_keras_model(model, "checkpoint/Diseases_Tr_RN50_saved_models")

Output:

WARNING:tensorflow:This model was compiled with a Keras optimizer (<tensorflow.python.keras.optimizers.Adam object at 0x7f0006871208>) but is being saved in TensorFlow format with `save_weights`. The model's weights will be saved, but unlike with TensorFlow optimizers in the TensorFlow format the optimizer's state will not be saved. Consider using a TensorFlow optimizer from `tf.train`.
WARNING:tensorflow:Model was compiled with an optimizer, but the optimizer is not from `tf.train` (e.g. `tf.train.AdagradOptimizer`). Only the serving graph was exported. The train and evaluate graphs were not added to the SavedModel.
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['serving_default']
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Eval: None
INFO:tensorflow:No assets to save.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: checkpoint/Diseases_Tr_RN50_saved_models/temp-b'1561540568'/saved_model.pb

The saved_model.pb is not usable with the OpenVINO Model Optimizer due to errors.

Please, let me know your opinion.

Thank you

 

 

 

 

Shubha_R_Intel
Employee

Dear Cosma,

According to this Tensorflow document, the Keras optimizer WARNING you are getting is normal. So are you saying that using https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer instead of https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam is preferred by Model Optimizer? If so, then it is a bug. Model Optimizer should definitely support the Keras optimizer. Put another way, Model Optimizer SHOULD NOT CARE about how the model was trained.
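For clarity, the two compile variants being compared would look roughly like this; this is only a sketch based on the model code Cosma posted above, not new guidance:

import tensorflow as tf

# 'model' is the tf.keras Sequential model from the earlier post.
model.compile(loss=tf.keras.losses.categorical_crossentropy,
              optimizer=tf.keras.optimizers.Adam(),      # Keras optimizer: triggers the save_keras_model warning
              metrics=['accuracy'])

model.compile(loss=tf.keras.losses.categorical_crossentropy,
              optimizer=tf.train.AdamOptimizer(),        # tf.train optimizer: what the warning suggests instead
              metrics=['accuracy'])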

Please confirm if I am understanding the situation correctly,

Thanks,

Shubha

 

CBell1
New Contributor II

Hi Shubha,

thanks for your note. Yes, the Keras optimizer works fine.

Anyway, there is something in the code I use that is not right.

There are no errors running this code; the problem is when I try to convert the .h5 to .pb.

What does the OV Model Optimizer care about in particular?

The answer could help me review and rewrite the model in a different way, again using Keras and TF.

Thank you!

 

Shubha_R_Intel
Employee

Dear Cosma,

Glad to hear it, because if Model Optimizer somehow croaked on the Keras optimizer, that would surely be an MO bug. Ahh... you ask a great question. The Model Optimizer documentation is online. But I would say that your issue seems a bit more complicated - in other words, it may not be answerable by perusing the documentation. But every ounce and fiber of the Model Optimizer Python code is freely available to you - why not step through the code with your PyCharm debugger and see what's happening? That's another option for you.

Let me know if there's anything more I can do to help.

Thanks

Shubha

CBell1
New Contributor II

Dear Shubha,

please, do you have any suggestions?

Thank you

Cuenza__Jonathan
Beginner

Dear Cosma,

     Your problem is also my problem. This is because OpenVINO currently does not support Flatten for TensorFlow; hope they'll fix this in the next release. Maybe you can try to convert your .pb file to Caffe to use it. Thanks and God bless...
P.S. Also, their script in the .bat files doesn't handle a space in your username; it causes problems.

CBell1
New Contributor II

Dear Shubha,

I solved the problem!

In any case, many thanks for your support and time.

Regards

CBell1
New Contributor II

Dear Shubha,

we found the issue: "model.save" does not seem to be the right way to save the TF model.

It seems we should save the model using:

sess = keras.backend.get_session()
keras.backend.get_session().run(tf.global_variables_initializer())

saver = tf.train.Saver()
save_path = saver.save(sess, export_path+"_saved.ckpt")

etc. Please see the attached .ipynb file for all details.

Thanks to this method, I save an inference_graph.pb and a model_1.pb.
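For completeness, a hedged sketch of how such an inference graph can be written next to the checkpoint is below; the exact code is in the attached notebook, and the paths and the Model Optimizer invocation shown here are only illustrative:

import tensorflow as tf

# 'sess' is the Keras session obtained above; write the (unfrozen) inference graph
# next to the checkpoint saved with tf.train.Saver.
tf.train.write_graph(sess.graph.as_graph_def(), ".",
                     "inference_graph.pb", as_text=False)

# Model Optimizer can then freeze graph + checkpoint together, for example:
#   python3 mo_tf.py --input_model inference_graph.pb \
#       --input_checkpoint <export_path>_saved.ckpt --data_type FP16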

 

Unfortunately, when I try to convert these files to IR using the OpenVINO (latest version) Model Optimizer, I get the following errors:

Using inference_graph.pb:

"Model Optimizer version:     2019.1.1-83-g28dfbfd

[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  Exception occurred during running replacer "None (<class 'extensions.front.no_op_eraser.NoOpEraser'>)": The node training/group_deps must have just one input."

All details in the attached file.

Using model_1.pb:

"Model Optimizer version:     2019.1.1-83-g28dfbfd
[ ERROR ]  MatMul wasn't able to infer shape because input dimensions are not compatible
[ ERROR ]  Shape is not defined for output 0 of "dense_1/MatMul".
[ ERROR ]  Cannot infer shapes or values for node "dense_1/MatMul".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "dense_1/MatMul".

All details in the attached file.
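For context, "Shape is not defined" errors from Model Optimizer are often worked around by pinning the input shape explicitly on the command line; an illustrative invocation is below, where the [1,224,224,3] shape is only an assumption based on ResNet50's usual input size, not something confirmed here:

python3 mo_tf.py --input_model model_1.pb --input_shape [1,224,224,3] --model_name Flowers_RN50_FP16 --data_type FP16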

Can you help me solve these Model Optimizer errors?

Thank you

 

CBell1
New Contributor II

Dear Shubha,

over the last few days I have tried several versions of the code without success.

The last one is as simple a case as possible: I created a model using TF with only 3 layers, using MNIST as the dataset, and I saved the model as an .h5.

Please see the simple TF code attached.

I tried to convert the .h5 to .pb using the Python tool "k2tf.py" you kindly attached and also "keras_to_tensorflow-master".

Below is the error after running both tools:

ValueError: Unknown initializer: GlorotUniform

Where is the error?

How is it possible to use OpenVINO if it's not possible to convert even a simple model to .pb?
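For reference, "Unknown initializer: GlorotUniform" is commonly reported when an .h5 saved with tf.keras is loaded through standalone Keras inside the conversion script. A hedged workaround sketch is below; the filename and the use of tf.keras.models.load_model with custom_objects are assumptions, not taken from the attached code:

import tensorflow as tf
from tensorflow.keras.initializers import glorot_uniform

# Map the serialized initializer name explicitly when loading the .h5 (hypothetical filename).
model = tf.keras.models.load_model(
    "mnist_3layer_model.h5",
    custom_objects={'GlorotUniform': glorot_uniform})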

Many thanks again for your support.

 
