Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

mo_tf.py error from TensorFlow model

Maeda__Aki
Beginner

Hi,
I'm training the TensorFlow model from this repository:

https://github.com/stanfordmlgroup/tf-models/tree/master/video_prediction
To download the robot data, run the following:
./download_data.sh

To train the model, run the prediction_train.py file.
python3 prediction_train.py


I'm trying to convert the TensorFlow model to OpenVINO IR files by following this article:

https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow#inpage-nav-3
When I followed the steps in section "2. MetaGraph", I got the following error message:

$ python3 /opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo_tf.py --input_meta_graph model.meta
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      None
        - Path for generated IR:        /home/work/tf-models/video_prediction/data/.
        - IR output name:       model
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Offload unsupported operations:       False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  None
Model Optimizer version:        1.5.12.49d067a0
WARNING: Logging before flag parsing goes to stderr.
E0410 16:17:23.769401 140581425035008 main.py:330] Unexpected exception happened during extracting attributes for node val_model/model/state3_8/mul.
Original exception message: int() argument must be a string, a bytes-like object or a number, not 'Dim'


Environment
OS : Ubuntu 16.04.6 LTS
Processor : Intel(R) Xeon(R) CPU E3-1275 v3 @ 3.50GHz
Memory : 16 GB 1600 MHz DDR3


Could you tell me what is wrong here?

I look forward to hearing from you soon.
Yours sincerely,

Aki Maeda

Shubha_R_Intel
Employee

Dear Aki, 

The model you are using doesn't seem to be one of our validated and tested TensorFlow models; for the list of supported models, please refer to the document below. That said, it's sometimes possible for mo_tf.py to work on unvalidated models. To debug deeper, you'd have to run the Model Optimizer with the switch --log_level DEBUG.
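For example, adding the switch to the command from your post:

$ python3 /opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo_tf.py --input_meta_graph model.meta --log_level DEBUG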

https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html
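If the MetaGraph route keeps failing, one workaround that sometimes helps with unvalidated models is to freeze the trained checkpoint into a single .pb file and convert that instead, passing an explicit --input_shape. The following is only a rough sketch, assuming a TensorFlow 1.x environment; the checkpoint prefix ("model") and the output node name ("gen_images") are placeholders that you would need to replace with the real names used by prediction_train.py.

import tensorflow as tf

# Rebuild the graph from the meta file and restore the trained variables.
saver = tf.train.import_meta_graph("model.meta")
with tf.Session() as sess:
    saver.restore(sess, "model")  # placeholder checkpoint prefix
    # Fold the trained variables into constants so the graph is self-contained.
    # "gen_images" is a placeholder; use the real output node name of the model.
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["gen_images"])
    with tf.gfile.GFile("frozen_model.pb", "wb") as f:
        f.write(frozen_graph_def.SerializeToString())

The frozen graph can then be passed to mo_tf.py with --input_model and an explicit --input_shape; the shape below is only an illustration, not the model's real input shape:

$ python3 /opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_model.pb --input_shape [1,64,64,3]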

Thanks,

Shubha
