Karol_D_Intel
Employee
148 Views

TensorFlow's MatMul operation not supported by Model Optimizer?

Hi All, 

   I've been trying to convert a very basic model (implementing a matrix multiplication) to IR representation, but I keep getting the following error. I'm confused right now because, AFAIK, matrix multiplication should be supported by the Model Optimizer.

I'm attaching my model - if someone wants to give it a try on their side.

The command I'm running is: python mo_tf.py --input_model matrix_mul.pb --input_shape (1,72)
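For context, a model like this presumably reduces to a single matrix product: the (1, 72) input from --input_shape times a constant weight matrix. A minimal pure-Python sketch of that computation follows; the (72, 4) weight shape is a made-up example, not taken from the attached matrix_mul.pb:

```python
# Naive matrix multiply, mirroring the single mat_mul/MatMul node in the model.
# The (72, 4) weight shape is hypothetical -- the real second operand's shape
# is not shown anywhere in this thread.

def matmul(a, b):
    """Multiply a (rows x inner) matrix by an (inner x cols) matrix."""
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner, "inner dimensions must match"
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

x = [[1.0] * 72]                    # input of shape (1, 72), as in --input_shape
w = [[0.5] * 4 for _ in range(72)]  # hypothetical (72, 4) constant weights
y = matmul(x, w)

print(len(y), len(y[0]))  # output shape: 1 4
```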

Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\matrix_mul.pb
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\.
        - IR output name:       matrix_mul
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         (1,72)
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  None
Model Optimizer version:        2019.2.0-436-gf5827d4
[ ERROR ]  List of operations that cannot be converted to Inference Engine IR:
[ ERROR ]      MatMul (1)
[ ERROR ]          mat_mul/MatMul
[ ERROR ]  Part of the nodes was not converted to IR. Stopped.

 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #24.

7 Replies
Sahira_Intel
Moderator

Hi Karol,

What version of OpenVINO are you using?

I ran your model using OpenVINO R2 and got a similar error. Let me look into this for you. 

Best Regards,

Sahira 

Karol_D_Intel
Employee

I'm also using OpenVINO R2.

Model Optimizer version is: 2019.2.0-436-gf5827d4

Shubha_R_Intel
Employee

Karol Duzinkiewicz,

MatMul should be supported for TensorFlow in OpenVINO R2, but it gets transformed into a FullyConnected layer according to the Supported Framework Layers document.
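That transformation is easy to check outside OpenVINO: a MatMul against a constant weight matrix computes exactly what a FullyConnected layer with zero bias computes, which is why the rewrite is legal. A small pure-Python sketch, with made-up (1, 3) input and (3, 2) weights for illustration:

```python
# A FullyConnected (dense) layer computes y = x @ W + b.  With b = 0 it is
# exactly the MatMul node the Model Optimizer reports.  All shapes and
# values here are illustrative, not taken from the attached model.

def matmul(x, w):
    return [[sum(xi * w[k][j] for k, xi in enumerate(row))
             for j in range(len(w[0]))] for row in x]

def fully_connected(x, w, b):
    return [[v + b[j] for j, v in enumerate(row)] for row in matmul(x, w)]

x = [[1.0, 2.0, 3.0]]                     # (1, 3) input
w = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # (3, 2) constant weights

assert fully_connected(x, w, [0.0, 0.0]) == matmul(x, w)
print(matmul(x, w))  # [[4.0, 5.0]]
```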

It very well could be a bug.

Since you work for Intel, I'm going to forward you a link so that you can try it on R3 (which external customers don't have access to).

Shubha

Karol_D_Intel
Employee

Hi All, 

   after installing OpenVINO R3, I still see the same error. Any other advice?

 

c:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer>python mo_tf.py --input_model matrix_mul.pb --input_shape (1,72)
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      c:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\matrix_mul.pb
        - Path for generated IR:        c:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\.
        - IR output name:       matrix_mul
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         (1,72)
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  None
Model Optimizer version:        2019.3.0-313-g5404e5d36
[ ERROR ]  List of operations that cannot be converted to Inference Engine IR:
[ ERROR ]      MatMul (1)
[ ERROR ]          mat_mul/MatMul
[ ERROR ]  Part of the nodes was not converted to IR. Stopped.

 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #24.

 

Shubha_R_Intel
Employee

Dear Karol Duzinkiewicz,

This looks like a bug. Sorry about that. I will file it on your behalf. PS, you have access to R3 because you are an Intel employee, but most OpenVINO community members don't have it yet. :)

Shubha

Karol_D_Intel
Employee

Thanks Shubha, 

   how will I know that this bug has been fixed?

Regards, 

Karol

Shubha_R_Intel
Employee

Dear Karol Duzinkiewicz,

Unfortunately, MatMul for TensorFlow won't be available until OpenVINO 2019 R4. Keep in mind today we are on 2.01. A bug ticket has been filed.

Shubha
