I'm training the TensorFlow video_prediction model: after downloading the robot data, I train it by running prediction_train.py.
I'm now trying to convert the trained TensorFlow model to OpenVINO IR files, but I got the following error message when I ran the command below.
$ python3 /opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo_tf.py --input_meta_graph model.meta
Model Optimizer arguments:
- Path to the Input Model: None
- Path for generated IR: /home/work/tf-models/video_prediction/data/.
- IR output name: model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 22.214.171.124d067a0
WARNING: Logging before flag parsing goes to stderr.
E0410 16:17:23.769401 140581425035008 main.py:330] Unexpected exception happened during extracting attributes for node val_model/model/state3_8/mul.
Original exception message: int() argument must be a string, a bytes-like object or a number, not 'Dim'
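For what it's worth, the 'Dim' in the message suggests a TensorFlow Dimension object reached Model Optimizer code that expected a plain int, which is often a sign of a TensorFlow version mismatch with this Model Optimizer release. Here is a minimal sketch of that failure mode; the Dim class below is a hypothetical stand-in, not the real TensorFlow type:

```python
# Hypothetical sketch of the failure mode: "Dim" is a stand-in for
# TensorFlow's Dimension wrapper, not the real class.
class Dim:
    def __init__(self, value):
        self.value = value
    # no __int__/__index__, so int(Dim(3)) raises TypeError, as in the log

def to_int(d):
    """Unwrap a Dimension-like object before converting to int."""
    return int(getattr(d, "value", d))

# int(Dim(8)) would raise:
#   TypeError: int() argument must be a string, a bytes-like object
#   or a number, not 'Dim'
# whereas unwrapping the .value first works:
print(to_int(Dim(8)))   # 8
print(to_int(8))        # 8
```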
OS : Ubuntu 16.04.6 LTS
Processor : Intel(R) Xeon(R) CPU E3-1275 v3 @ 3.50GHz
Memory : 16 GB 1600 MHz DDR3
Could you tell me what is wrong here?
I look forward to hearing from you soon.
The model you are using doesn't seem to be one of our validated and tested TensorFlow models; for the supported list, please refer to the document below. That said, it's sometimes possible for mo_tf.py to work on unvalidated models. To debug deeper, you'd have to run the Model Optimizer with the switch --log_level DEBUG.
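Concretely, that means re-running the same command from your post with the extra switch appended (the install path below is the one from your log):

$ python3 /opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo_tf.py --input_meta_graph model.meta --log_level DEBUG

The DEBUG log will show which node and attribute the converter was processing when the exception was raised, which should narrow down the unsupported operation.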