Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Problem in model optimizer

Anju_P_Intel
Employee
614 Views

This thread has been raised on behalf of Jack Lee regarding his question on OpenVINO. Please find the issue he faced below:

"

I tried to run the Model Optimizer on my TensorFlow SavedModel, but it failed.

 

The following is my SavedModel signature:

(base) D:\tmp\export\1536028618>saved_model_cli show --dir . --tag_set serve --signature_def serving_default

c:\programdata\anaconda3\lib\site-packages\h5py\__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.

  from ._conv import register_converters as _register_converters

The given SavedModel SignatureDef contains the following input(s):

  inputs['image'] tensor_info:

      dtype: DT_FLOAT

      shape: (-1, 28, 28)

      name: Placeholder:0

The given SavedModel SignatureDef contains the following output(s):

  outputs['classes'] tensor_info:

      dtype: DT_INT64

      shape: (-1)

      name: ArgMax:0

  outputs['probabilities'] tensor_info:

      dtype: DT_FLOAT

      shape: (-1, 10)

      name: Softmax:0

Method name is: tensorflow/serving/predict

 

And the following is the failure message I got from the Model Optimizer:

(base) C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\model_optimizer>python mo_tf.py --saved_model_dir D:\tmp\export\1536028618 --input_shape [1,28,28]

Model Optimizer arguments:

Common parameters:

        - Path to the Input Model:      None

        - Path for generated IR:        C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\model_optimizer\.

        - IR output name:       saved_model

        - Log level:    ERROR

        - Batch:        Not specified, inherited from the model

        - Input layers:         Not specified, inherited from the model

        - Output layers:        Not specified, inherited from the model

        - Input shapes:         [1,28,28]

        - Mean values:  Not specified

        - Scale values:         Not specified

        - Scale factor:         Not specified

        - Precision of IR:      FP32

        - Enable fusing:        True

        - Enable grouped convolutions fusing:   True

        - Move mean values to preprocess section:       False

        - Reverse input channels:       False

TensorFlow specific parameters:

        - Input model in text protobuf format:  False

        - Offload unsupported operations:       False

        - Path to model dump for TensorBoard:   None

        - Update the configuration file with input/output node names:   None

        - Use configuration file used to generate the model with Object Detection API:  None

        - Operations to offload:        None

        - Patterns to offload:  None

        - Use the config file:  None

Model Optimizer version:        1.2.185.5335e231

C:\ProgramData\Anaconda3\lib\site-packages\h5py\__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.

  from ._conv import register_converters as _register_converters

[ ERROR ]  No or multiple placeholders in the model, but only one shape is provided, cannot set it.

For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #32.

 

I checked the MO_FAQ.html doc, but I couldn't find enough information there to solve the problem.

 

Can someone give me some guidance?"

Please check https://communities.intel.com/thread/128736 for more details.

Please help.
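For reference, the error suggests the Model Optimizer could not determine which placeholder the single shape applies to. One possible workaround (a sketch only, not verified against this model) is to name the input node explicitly via the --input flag, using the "Placeholder" name reported in the saved_model_cli output above:

```shell
python mo_tf.py --saved_model_dir D:\tmp\export\1536028618 --input Placeholder --input_shape [1,28,28]
```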

3 Replies
Monique_J_Intel
Employee

Hi Anju,

Can you provide the Model Optimizer command line with your model so I can reproduce the issue on my side?

Kind Regards,

Monique Jones

Anju_P_Intel
Employee

Hi Jones,

Not sure if you are referring to the following command:

python mo_tf.py --saved_model_dir D:\tmp\export\1536028618 --input_shape [1,28,28]

The details of the issue are available in the thread: https://communities.intel.com/thread/128736

 

Please note that this thread was raised on behalf of the user, Jack Lee. I have sent his contact details over email.

Regards,

Anju

Monique_J_Intel
Employee

Hi Anju,

I've read through the thread. For the Model Optimizer to convert the model via the --saved_model_dir flag, the directory you provide must have a specific structure. That structure is mentioned here. If you don't have that structure, that's totally fine; there are other ways to convert the TF model. Do you have any of the following: a frozen .pb file, a ckpt.meta file, or a .pb file plus a checkpoint file? If so, can you specify which?
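For anyone hitting the same wall: the layout in question is the standard TensorFlow SavedModel export, a saved_model.pb graph file next to a variables/ subdirectory. The sketch below recreates that typical structure with empty placeholder files (the directory name matches the original post; the variable file names are the usual TF defaults) and then checks for the two pieces --saved_model_dir expects:

```shell
# Recreate the typical SavedModel export layout (illustrative names only).
mkdir -p 1536028618/variables
touch 1536028618/saved_model.pb
touch 1536028618/variables/variables.data-00000-of-00001
touch 1536028618/variables/variables.index

# Verify the two pieces --saved_model_dir expects: a saved_model.pb
# graph file next to a variables/ subdirectory.
[ -f 1536028618/saved_model.pb ] && [ -d 1536028618/variables ] && echo "layout OK"
```

If your export directory is missing either piece, the --saved_model_dir path will not work and you would fall back to one of the other input formats mentioned above.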

Kind Regards,

Monique Jones
