Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Running the model optimizer with a newer version of tensorflow

Paresa__Nathaniel

Hello,

I recently updated my TensorFlow version to 1.11.0. I updated because of the error message "Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary." that I received when trying to convert a frozen model on the machine running OpenVINO (TensorFlow 1.9.0); the model was trained on a machine running TensorFlow 1.11.0. (Full error message at the bottom.)

After updating my machine running OpenVINO to tensorflow 1.11.0, when I run mo_tf.py, I get

"Model Optimizer version:     1.5.12.49d067a0
[ ERROR ]  
Detected not satisfied dependencies:
    tensorflow: not installed, required: 1.2.0

Please install required versions of components or use install_prerequisites script
/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_tf.sh
Note that install_prerequisites scripts may install additional components."

 

Does the Model Optimizer not work with newer versions of TensorFlow? Must I downgrade my other machine and retrain my model on a lower version of TensorFlow to get it to convert? If so, that may be an issue, because some other software I'm using relies on the newer version.
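As a quick sanity check (just a minimal sketch, assuming it is run with the same python3 used to invoke mo_tf.py), this prints which interpreter is running and which TensorFlow version it can import:

import sys
import tensorflow as tf

# Show which Python interpreter is running this script and which
# TensorFlow it imports; both should match the environment running mo_tf.py.
print(sys.executable)
print(tf.__version__)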

This is the command/output that made me think to upgrade. Specifying --input_shape only gets rid of the one line about it in the warning.

Command:

python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py --input_model ~/kai/pipeline/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support_api_v1.7.json --tensorflow_object_detection_api_pipeline_config ~/kai/pipeline/pipeline.config --reverse_input_channels --data_type=FP16

Output:

Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     /home/warrior/kai/pipeline/frozen_inference_graph.pb
    - Path for generated IR:     /opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/inference_engine/models/inception/.
    - IR output name:     frozen_inference_graph
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP16
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     True
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Offload unsupported operations:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     /home/warrior/kai/pipeline/pipeline.config
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support_api_v1.7.json
Model Optimizer version:     1.5.12.49d067a0
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (1080, 1080).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
The graph output nodes "num_detections", "detection_boxes", "detection_classes", "detection_scores" have been replaced with a single layer of type "Detection Output". Refer to IR catalogue in the documentation for information about this layer.
[ ERROR ]  Cannot infer shapes or values for node "ToFloat_3".
[ ERROR ]  NodeDef mentions attr 'Truncate' not in Op<name=Cast; signature=x:SrcT -> y:DstT; attr=SrcT:type; attr=DstT:type>; NodeDef: ToFloat_3 = Cast[DstT=DT_FLOAT, SrcT=DT_UINT8, Truncate=false](image_tensor_port_0_ie_placeholder). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7f784f75fae8>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Stopped shape/value propagation at "ToFloat_3" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

I'm also wondering if there is something I'm missing, such as specifying the version of the protobuf file, similar to the --input_proto flag that exists for mo_caffe.py.

Thanks for any ideas/help!

EDIT: Following advice in another thread, I used the following code to try to view a text version of my graph:

import tensorflow as tf

def load_graph(frozen_graph_filename):
    # We load the protobuf file from the disk and parse it to retrieve the
    # unserialized graph_def
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Then, we import the graph_def into a new Graph and return it
    with tf.Graph().as_default() as graph:
        # The name var will prefix every op/nodes in your graph
        # Since we load everything in a new graph, this is not needed
        tf.import_graph_def(graph_def, name="prefix")
    return graph


if __name__ == '__main__':
    mygraph = load_graph("C:\\<PATH>\\frozen_inference_graph.pb")
    tf.train.write_graph(mygraph, "./", "graph.txt")

The output of which is:

Traceback (most recent call last):
  File "/home/warrior/.local/lib/python3.5/site-packages/tensorflow/python/framework/importer.py", line 418, in import_graph_def
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'Truncate' not in Op<name=Cast; signature=x:SrcT -> y:DstT; attr=SrcT:type; attr=DstT:type>; NodeDef: prefix/ToFloat = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, Truncate=false](prefix/Const). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "export_pb_to_txt.py", line 19, in <module>
    mygraph = load_graph("/home/warrior/kai/pipeline/frozen_inference_graph.pb")
  File "export_pb_to_txt.py", line 14, in load_graph
    tf.import_graph_def(graph_def, name="prefix")
  File "/home/warrior/.local/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 432, in new_func
    return func(*args, **kwargs)
  File "/home/warrior/.local/lib/python3.5/site-packages/tensorflow/python/framework/importer.py", line 422, in import_graph_def
    raise ValueError(str(e))
ValueError: NodeDef mentions attr 'Truncate' not in Op<name=Cast; signature=x:SrcT -> y:DstT; attr=SrcT:type; attr=DstT:type>; NodeDef: prefix/ToFloat = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, Truncate=false](prefix/Const). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
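The 'Truncate' attribute on Cast only appears in graphs written by newer TensorFlow releases, so this failure suggests the frozen graph was produced by a newer TF than the one parsing it. A possible workaround (just a sketch using TF 1.x APIs, not an official fix; the file names are placeholders) would be to strip the unrecognized attribute from every Cast node before conversion:

import tensorflow as tf

# Workaround sketch (an assumption, not an official fix): delete the 'Truncate'
# attribute, which older TensorFlow builds do not recognize, from all Cast
# nodes in the frozen graph and write the result to a new file.
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == "Cast" and "Truncate" in node.attr:
        del node.attr["Truncate"]

with tf.gfile.GFile("frozen_inference_graph_stripped.pb", "wb") as f:
    f.write(graph_def.SerializeToString())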

 

 

Severine_H_Intel
Employee

Dear Nathaniel, 

You are doing the correct thing (upgrading to TF 1.11 to avoid the GraphDef error), and OpenVINO does work with TF 1.11. I would like to make sure that you did not upgrade your TF inside a conda / virtual environment, which could explain your issue.

Best, 

Severine

Payette__Mathieu
Beginner

I'm getting the same problem on TF 1.12, trying to run the Model Optimizer on a trained Faster R-CNN Inception v2 from TF's Object Detection API. The ToFloat_3 node is causing an InvalidArgumentError:

InvalidArgumentError (see above for traceback): NodeDef mentions attr 'Truncate' not in Op<name=Cast; signature=x:SrcT -> y:DstT; attr=SrcT:type; attr=DstT:type>; NodeDef: ToFloat_3 = Cast[DstT=DT_FLOAT, SrcT=DT_UINT8, Truncate=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_image_tensor_port_0_ie_placeholder_0_0). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
         [[Node: ToFloat_3 = Cast[DstT=DT_FLOAT, SrcT=DT_UINT8, Truncate=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_image_tensor_port_0_ie_placeholder_0_0)]]

Any fixes?

Paresa__Nathaniel

I am not running it inside any virtual environment. I am running it on an Ubuntu 16.04 host. Thanks for the response!
