Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision-related on Intel® platforms.

Model Optimizer: module 'tensorflow' has no attribute 'NodeDef'

ShivSD
Beginner

Hi,

I'm trying to convert a Keras (TensorFlow) model to OpenVINO IR format using the Model Optimizer and get the error below. Inference on the same model works in TensorFlow 2.3, so the model itself appears to be fine. Can you help me debug this?

Python version: 3.6

OpenVINO: openvino_2019.3.334

OS: Ubuntu 18.04

TensorFlow version: 2.3.1

sudo python3 mo_tf.py --input_model /home/models/deconv_fin_munet.h5
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/models/deconv_fin_munet.h5
- Path for generated IR: /opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/.
- IR output name: deconv_fin_munet
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 2019.3.0-375-g332562022
2020-09-29 16:39:38.295426: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2020-09-29 16:39:38.295444: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
[ WARNING ]
Detected not satisfied dependencies:
tensorflow: installed: 2.3.1, required: 2.0.0

Please install required versions of components or use install_prerequisites script
/opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_tf.sh
Note that install_prerequisites scripts may install additional components.
[ ERROR ] -------------------------------------------------
[ ERROR ] ----------------- INTERNAL ERROR ----------------
[ ERROR ] Unexpected exception happened.
[ ERROR ] Please contact Model Optimizer developers and forward the following information:
[ ERROR ] module 'tensorflow' has no attribute 'NodeDef'
[ ERROR ] Traceback (most recent call last):
File "/opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/mo/main.py", line 302, in main
return driver(argv)
File "/opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/mo/main.py", line 247, in driver
import mo.pipeline.tf as mo_tf
File "/opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 33, in <module>
from mo.front.tf.extractor import get_tf_edges, tf_op_extractor, tf_op_extractors
File "/opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/mo/front/tf/extractor.py", line 26, in <module>
from mo.front.tf.extractors.native_tf import native_tf_node_extractor
File "/opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/mo/front/tf/extractors/native_tf.py", line 17, in <module>
from mo.front.tf.partial_infer.tf import tf_native_tf_node_infer
File "/opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/mo/front/tf/partial_infer/tf.py", line 148, in <module>
is_input: bool = False):
AttributeError: module 'tensorflow' has no attribute 'NodeDef'

[ ERROR ] ---------------- END OF BUG REPORT --------------
[ ERROR ] -------------------------------------------------

  

ShivSD
Beginner

I downgraded TensorFlow from 2.3 to 1.14 and now get a different error:

 

Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/models/deconv_fin_munet.h5
- Path for generated IR: /opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/.
- IR output name: deconv_fin_munet
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 2019.3.0-375-g332562022
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/mo/front/tf/loader.py:120: RuntimeWarning: Unexpected end-group tag: Not all data was converted
graph_def.ParseFromString(f.read())
[ ERROR ] Graph contains 0 node after executing protobuf2nx. It may happen due to problems with loaded model. It considered as error because resulting IR will be empty which is not usual

IntelSupport
Community Manager

Hi Shiv,

 

Thanks for reaching out. Currently, direct conversion from a model in HDF5 (.h5) file format into the OpenVINO Intermediate Representation (IR) format is not available. First, install the latest version of OpenVINO, which is version 2020.4.

 

Then load the model using TensorFlow* 2 and serialize it in the SavedModel format:

https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#keras_h5
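The re-serialization step in the linked guide can be sketched as below. This is only an illustration, assuming TensorFlow 2.x is installed; `h5_to_saved_model` is a hypothetical helper name, and the paths in the usage comment are the ones from this thread.

```python
import tensorflow as tf

def h5_to_saved_model(h5_path: str, export_dir: str) -> str:
    """Load a Keras .h5 model and re-serialize it as a SavedModel directory."""
    model = tf.keras.models.load_model(h5_path)
    # Keras 3 exposes Model.export() for SavedModel export; older TF 2.x
    # releases (such as 2.3 in this thread) use tf.saved_model.save instead.
    if hasattr(model, "export"):
        model.export(export_dir)
    else:
        tf.saved_model.save(model, export_dir)
    return export_dir

# e.g. h5_to_saved_model("/home/models/deconv_fin_munet.h5",
#                        "/home/models/deconv_fin_munet_saved")
```

The resulting directory should contain a `saved_model.pb` file plus a `variables/` subfolder, which is what the Model Optimizer expects as a SavedModel input.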

 

From there you can convert the resulting SavedModel with the Model Optimizer:

https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#Convert_From_TF
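Under those assumptions, the Model Optimizer invocation would look roughly like the following sketch. `--saved_model_dir` is the documented flag for SavedModel inputs; the install path and model paths shown here are examples and should be adjusted to your setup.

```shell
# Sketch only: adjust paths to your OpenVINO install and model locations.
cd /opt/intel/openvino/deployment_tools/model_optimizer
python3 mo_tf.py \
    --saved_model_dir /home/models/deconv_fin_munet_saved \
    --output_dir /home/models/ir
```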

 

 

Regards,

Adli.

 

IntelSupport
Community Manager

This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.

