226 Views

Error while converting TensorFlow graph to IR using OpenVINO

Hello,

I am trying to convert a TensorFlow model to IR representation and I receive the following error:

 

E0910 14:01:26.942649 140444966237952 main.py:317] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.user_data_repack.UserDataRepack'>): No or multiple placeholders in the model, but only one shape is provided, cannot set it.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #32.

 

The command line I used:

python3 /opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo_tf.py --input_meta_graph model.meta --input_shape=[1,64,64,1] --data_type FP16

 

And this is the log output:

Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     None
    - Path for generated IR:     /media/sangathamilan/483BD4A1546D88D2/from_c/Project/Complete_test/old/.
    - IR output name:     model
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     [1,64,64,1]
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP16
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     None
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     None
Model Optimizer version:     2019.1.0-341-gc9b66a2
WARNING: Logging before flag parsing goes to stderr.
E0910 14:01:26.942649 140444966237952 main.py:317] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.user_data_repack.UserDataRepack'>): No or multiple placeholders in the model, but only one shape is provided, cannot set it.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #32.
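For reference, FAQ #32 ties this error to the Model Optimizer finding no (or several) Placeholder nodes in the graph while only a single --input_shape is supplied. A sketch of the usual workaround, naming the input node explicitly with --input, where `input_1` is a hypothetical placeholder name (the real one can be read from the graph, e.g. in TensorBoard):

```shell
# Pin the provided shape to a named input node instead of relying on
# the Model Optimizer to find a single Placeholder on its own.
# NOTE: "input_1" is an assumed name; substitute the model's actual input node.
python3 /opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo_tf.py \
    --input_meta_graph model.meta \
    --input input_1 \
    --input_shape [1,64,64,1] \
    --data_type FP16
```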

 

Can anyone please help?

19 Replies
Shubha_R_Intel
Employee

Dear Ravichandran, Sangathamilan,

Did you follow the instructions to freeze the TensorFlow model first?

Let me know.

Thanks,

shubha


Hello,

Yes, I followed the instructions from the Intel site, but I tried with the meta graph, as described in the "Loading Non-Frozen Models" section here: http://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#freeze-the-tensorflow-model

 

Best,

Sanga


I also tried the conversion with a frozen .pb model, and that worked. To fix an issue with the Python sample I had to update OpenVINO from 2019 R1 to R2; after updating, the device works as expected with the Python classification sample. But when I use the C++ classification sample async code (provided by Intel in the samples folder) for inference, I get wrong results. Please find the details below.

With Python and TensorFlow on a local GPU:

1       0.99524504  
0       0.00475495

 

With the converted IR but on CPU (FP32):

1       0.9952450  
0       0.0047549

And with the Python classification_sample.py with NCS2 (FP16):

1       0.9952450  
0       0.0047549

All three of the above work as expected. But with the C++ classification_sample_async built with NCS2 (FP16):

classid probability
------- -----------
1       0.8398438  
0       0.1599121 

I am more interested in running inference with the C++ classification sample async. Could you help me figure out why I get correct results with Python but wrong results with the C++ inference?
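To quantify the gap: a small NumPy sketch (the logits below are made up, chosen only to reproduce the 0.9952 figure) suggests FP16 rounding alone cannot explain the drop to ~0.84, so I suspect a preprocessing difference rather than precision:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical logits chosen so that softmax reproduces the ~0.9952 result:
logits = np.array([5.344, 0.0], dtype=np.float32)
p = softmax(logits)

# Rounding the logits to FP16 barely moves the probabilities...
p_fp16 = softmax(logits.astype(np.float16).astype(np.float32))

# ...while it takes a large logit shift (e.g. from a preprocessing
# mismatch) to land near the ~0.84 the C++ sample reports:
p_shifted = softmax(np.array([1.66, 0.0], dtype=np.float32))
```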

P.S.: I am using grayscale images, and I already tried --reverse_input_channels during conversion, which doesn't help. The issue exists only with the C++ code, irrespective of whether I use the GPU, CPU, or MYRIAD device.
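For completeness, here is why I believe --reverse_input_channels cannot help with grayscale input: once a grayscale image is expanded to three identical channels, reversing the channel order is a no-op. A tiny NumPy illustration:

```python
import numpy as np

# A stand-in for a 64x64 grayscale image (pixel values are arbitrary):
gray = (np.arange(64 * 64) % 256).astype(np.uint8).reshape(64, 64)

# Image readers typically replicate a grayscale image into 3 identical
# BGR channels, so reversing the channel order changes nothing:
bgr = np.stack([gray, gray, gray], axis=-1)
rgb = bgr[..., ::-1]
assert np.array_equal(bgr, rgb)
```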

Best,

Sanga

Shubha_R_Intel
Employee

Dear Ravichandran, Sangathamilan,

Indeed, the inconsistent results you're getting between the C++ and Python versions of classification sample async should not happen. I'm glad you updated to OpenVINO 2019 R2!

Can you kindly attach your model as a *.zip to this ticket (along with the MO command you used)? If you'd rather send it to me privately, that's OK too. Just let me know and I will PM you.

thanks,

Shubha

 


Hello Shubha,

Thanks for the reply. Sure, I will attach the model files here. The commands I used are as follows.

To convert from .pb to IR representation:

python mo_tf.py --input_model trained_model\inference_graph.pb  --input_shape=[1,64,64,1]  --output softmax --data_type FP16

I also tried:

python mo_tf.py --input_model trained_model\inference_graph.pb  --input_shape=[1,64,64,1]  --output softmax --data_type FP16 --reverse_input_channels

To predict, I use (after building the project):

./classification_sample_async -i image.PNG -m inference_graph.xml -nt 2 -d MYRIAD

Best,

Sanga

Shubha_R_Intel
Employee

Dear Sanga, 

Thanks for your collaboration. I will surely debug your files shortly and report back on this forum.

thanks for using OpenVino !

Shubha

Shubha_R_Intel
Employee

Dear Sanga, 

I may need your original frozen pb in order to go further on this issue. Please attach it as a *.zip if you can. But I will file a bug on your behalf anyway because I did reproduce your issue.

Thanks,

Shubha


Hello Shubha,

Please find the frozen model attached. 

Best regards,

Sanga

Shubha_R_Intel
Employee

Dear Ravichandran, Sangathamilan

Thanks for following through and attaching the frozen model !

Shubha

 


Hello Shubha,

Thanks for the reply. Kindly follow up once you have an update on the bug you raised.

Best,

Sanga

Shubha_R_Intel
Employee

Dear Ravichandran, Sangathamilan

Absolutely. I will post on the forum once I have an update. Also, keep an eye peeled for R3, which should be released soon (in the next 3 weeks).

Thanks,

Shubha


Hello Shubha,

Can you provide me an update on this issue please?

Best,

Sanga

Shubha_R_Intel
Employee

Dear Ravichandran, Sangathamilan,

Please try this case again in OpenVino 2019R3 which should be released very shortly. Let me know if the issue persists in R3.

Thanks for your patience,

Shubha


Hello Shubha,

OK, sure. I will try it and let you know.

Best,

Sanga


Hello Shubha,

I installed OpenVINO 2019 R3 and I am getting the same results as with R2. Can you please help?

Best,

Sanga

Shubha_R_Intel
Employee

Dear Ravichandran, Sangathamilan,

Sure. I will investigate it. Thanks for retrying on 2019R3.

Shubha

 


Hi Shubha,

Is there an update on this yet?

Best,

Sanga


Hello Shubha,

Could you provide me an update on this?

 

Best,

Sanga


Hello Shubha,

Is there an update on this yet? If not, could you tell me how to pursue this issue further, or give me a contact to proceed with?

 

Best,

Sanga
