Community
seroussi__rafael
Beginner

Openvino Model Optimizer incorrect conversion

When I try to create an Intermediate Representation from a frozen TensorFlow model/graph (.pb), the Model Optimizer outputs a bad .xml file and probably a bad .bin file as well. When I run the IR through the classification_sample, it fails with an error because it cannot parse the .xml file. However, if I "correct" the .xml file by changing the layer shapes to match my actual model, it runs, but the output does not match what the same model produces in TensorFlow. I have attached the .pb model, the .xml and .bin files, as well as the "fixed" .xml.
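One way to compare the layer shapes the Model Optimizer wrote into the IR against what you expect is to parse the .xml with Python's standard library. This is a minimal sketch, assuming the usual IR layout (<layer> elements with <port>/<dim> children); the inline `ir_xml` string is a hypothetical stand-in for a real IR file.

```python
import xml.etree.ElementTree as ET

# Hypothetical stand-in for an IR .xml; a real file comes from the Model Optimizer.
ir_xml = """
<net name="example" version="4">
  <layers>
    <layer id="0" name="input" type="Input">
      <output>
        <port id="0">
          <dim>1</dim><dim>3</dim><dim>48</dim><dim>48</dim>
        </port>
      </output>
    </layer>
  </layers>
</net>
"""

def layer_shapes(xml_text):
    """Map each layer name to the shapes of its output ports."""
    root = ET.fromstring(xml_text)
    shapes = {}
    for layer in root.iter("layer"):
        ports = layer.findall("./output/port")
        shapes[layer.get("name")] = [
            [int(d.text) for d in port.findall("dim")] for port in ports
        ]
    return shapes

print(layer_shapes(ir_xml))  # {'input': [[1, 3, 48, 48]]}
```

Comparing this output for the generated .xml and the hand-"fixed" .xml would show exactly which shapes the Model Optimizer got wrong.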

This is the command I used to convert the model:

python3 /opt/intel/computer_vision_sdk_fpga_2018.4.420/deployment_tools/model_optimizer/mo.py --output_dir ~/models --input_model ~/model.pb --offload_unsupported_operations_to_tf --input_shape [1,3,48,48] --framework tf

Why are the .xml and .bin outputs wrong for my model?

Severine_H_Intel
Employee

Dear Rafael, 

I tried your model, first through the Model Optimizer and then through the Inference Engine. What I realized is that your model uses the NCHW format, while OpenVINO is expecting NHWC. However, we have a command-line argument to support NCHW: add --disable_nhwc_to_nchw to the Model Optimizer command.
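For reference, the two layouts differ only in axis order: NHWC stores (batch, height, width, channels) while NCHW stores (batch, channels, height, width). A quick NumPy sketch of the reordering (illustrative only, not part of the Model Optimizer itself):

```python
import numpy as np

# An input tensor in NCHW layout, matching the --input_shape [1,3,48,48] above.
nchw = np.zeros((1, 3, 48, 48))

# Reorder axes to NHWC: (batch, channels, H, W) -> (batch, H, W, channels).
nhwc = nchw.transpose(0, 2, 3, 1)

print(nhwc.shape)  # (1, 48, 48, 3)
```

If the converter assumes the wrong layout, the 3 (channels) and 48 (height) axes get swapped, which is consistent with the shape mismatch you saw in the generated .xml.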

Also, you don't need --offload_unsupported_operations_to_tf in the command line; your model converts successfully without it.

So try: python mo_tf.py -m model.pb -b 1 --output_dir <output_dir> --disable_nhwc_to_nchw 

After this, could you tell me whether the classification sample now gives you the correct results?

Best, 

Severine
