Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

DeepLab v3 custom-trained model conversion error

Tinon__John
Beginner

python3 mo.py --input_model ${MODEL} --output ArgMax --input 1:mul_1 --input_shape "(1,513,513,3)" --log_level=DEBUG

This works without any issues on the Xception model (OS=8/16, "xception65_coco_voc_trainaug") from the model zoo:
https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md

However, after using that checkpoint to train new classes, my newly exported "frozen_inference_graph.pb" throws the following error:

mo.utils.error.Error: Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.user_data_repack.UserDataRepack'>): No node with name mul_1.

I found some similar issues with custom-trained Mask R-CNN, though I'm not quite sure how to fix this for DeepLab:
https://software.intel.com/en-us/forums/computer-vision/topic/809407

Any help would be appreciated. I'm using OpenVINO 2019.1.144; thanks in advance.

HemanthKum_G_Intel

Hi John,

Shubha R. (Intel) wrote:

Ryan, for the TensorFlow DeepLab model please try the following command (it works for me):

python mo.py --scale 1 --model_name test_deeplabv3_mobilenet_v2_argmax_tfFP32CPU1Truesync --input_shape "(1,513,513,3)" --input 1:mul_1 --input_model  "c:\Users\sdramani\Downloads\deeplabv3_mnv2_pascal_train_aug\frozen_inference_graph.pb" --framework tf --output_dir c:\Users\sdramani\Downloads\out_dir --data_type FP32 --output ArgMax

In the output_dir you should see the following 3 files created:

test_deeplabv3_mobilenet_v2_argmax_tfFP32CPU1Truesync.xml

test_deeplabv3_mobilenet_v2_argmax_tfFP32CPU1Truesync.mapping

test_deeplabv3_mobilenet_v2_argmax_tfFP32CPU1Truesync.bin

As for Fast Style Transfer, we are still investigating this. Stay tuned.

Thanks for using OpenVINO!

Shubha

Tinon__John
Beginner

Thanks for your reply.

Unfortunately my problem persists: the node `mul_1` doesn't seem to exist anymore after I did transfer learning and exported a new "frozen_inference_graph.pb". I'm just curious whether the --input is now something other than mul_1? If so, does OpenVINO provide any tool to inspect a graph and find whatever the new input name is?
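
One way to hunt for the input name without any OpenVINO-specific tooling is to dump the operation names straight from the frozen graph with TensorFlow. A minimal sketch, assuming the TensorFlow 1.x API that the DeepLab export script uses (the file name is a placeholder for your exported model):

import tensorflow as tf

# Load the frozen graph (path is a placeholder)
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Print the graph inputs (Placeholder ops) and any node whose name
# contains "mul", which is where the preprocessing multiply usually
# sits in this model
for node in graph_def.node:
    if node.op == "Placeholder" or "mul" in node.name.lower():
        print(node.op, node.name)

Whatever name shows up in place of mul_1 is what should go into the --input argument.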

HemanthKum_G_Intel

Hi John,

The Netron tool (a third-party model viewer) can be used to visualize both the original model and the converted IR.

For example, the frozen_inference_graph.pb converted to IR has a node with the following properties:

type: Power
name: mul_1/FusedPower_
input: id 0:0
output: id 1:1
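
If Netron is not at hand, the IR .xml topology file is plain XML, so the layer names and types can also be listed with nothing but the Python standard library. A minimal sketch (the file name is a placeholder):

import xml.etree.ElementTree as ET

# The IR topology is plain XML: <net><layers><layer id=... name=... type=...>
tree = ET.parse("frozen_inference_graph.xml")
for layer in tree.getroot().iter("layer"):
    print(layer.get("id"), layer.get("type"), layer.get("name"))

Searching that output for "Power" or "mul" should surface the fused node named above.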

Deng__Fucheng
Beginner

I found the same error with my own trained model. I trained it at 513x513 (all the training images are 720x1280). If I just use "python mo_tf.py --input_model path/to/model --input_shape [1,513,513,3] --output ArgMax", the conversion succeeds, but an error occurs at inference time when the input image is 720x1280: it reports a dimension mismatch. If the input image is 513x513 there is no problem at inference time, but the result is not what I expected.

I fixed the problem by using "python mo_tf.py --input_model path/to/model --input_shape [1,720,1280,3] --output ArgMax" instead. The result is as expected when the input image is 720x1280.
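
The mismatch follows from the IR carrying the fixed shape baked in by --input_shape, so the input blob at runtime must match it exactly. A minimal sketch of checking and satisfying that shape, assuming the 2019.x Python API current when this thread was written (IR file names and the test image are placeholders):

import cv2
import numpy as np
from openvino.inference_engine import IECore, IENetwork

ie = IECore()
# Placeholder IR file names produced by mo_tf.py
net = IENetwork(model="deeplab.xml", weights="deeplab.bin")

input_blob = next(iter(net.inputs))
output_blob = next(iter(net.outputs))
n, c, h, w = net.inputs[input_blob].shape  # the NCHW shape fixed at conversion time
print("IR expects input of shape:", (n, c, h, w))

image = cv2.imread("frame.jpg")                               # placeholder test image
resized = cv2.resize(image, (w, h))                           # must match the converted shape exactly
blob = resized.transpose(2, 0, 1)[np.newaxis].astype(np.float32)  # HWC -> NCHW

exec_net = ie.load_network(network=net, device_name="CPU")
result = exec_net.infer(inputs={input_blob: blob})
seg_map = result[output_blob]  # ArgMax output: one class id per pixel

Converting with --input_shape set to the deployment resolution, as done above, also removes the need to resize at runtime.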

Shubha_R_Intel
Employee

Dear Deng, Fucheng,

Thanks for reporting your solution back to the OpenVINO community! We appreciate it!

Thanks,

Shubha
