<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic DeepLab v3 custom trained model convert error in Intel® Distribution of OpenVINO™ Toolkit</title>
    <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/DeepLab-v3-custom-trained-model-convert-error/m-p/1178352#M17683</link>
    <description>&lt;P&gt;Dear Deng, Fucheng,&lt;/P&gt;&lt;P&gt;Thanks for reporting your solution back to the OpenVINO community! We appreciate it!&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Shubha&lt;/P&gt;</description>
    <pubDate>Tue, 30 Jul 2019 19:07:21 GMT</pubDate>
    <dc:creator>Shubha_R_Intel</dc:creator>
    <dc:date>2019-07-30T19:07:21Z</dc:date>
    <item>
      <title>DeepLab v3 custom trained model convert error</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/DeepLab-v3-custom-trained-model-convert-error/m-p/1178347#M17678</link>
      <description>&lt;P&gt;python3 mo.py --input_model ${MODEL} --output ArgMax --input 1:mul_1 --input_shape "(1,513,513,3)" --log_level=DEBUG&lt;BR /&gt;&lt;BR /&gt;This works without any issues on the Xception model (OS=8/16, "xception65_coco_voc_trainaug") from the model zoo:&lt;BR /&gt;&lt;A href="https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md" target="_blank"&gt;https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md&lt;/A&gt;&lt;/P&gt;&lt;P&gt;However, when I use that checkpoint to train new classes, my newly exported "frozen_inference" graph throws the following error:&lt;BR /&gt;&lt;BR /&gt;mo.utils.error.Error: Exception occurred during running replacer "REPLACEMENT_ID" (&amp;lt;class 'extensions.front.user_data_repack.UserDataRepack'&amp;gt;): No node with name mul_1.&lt;BR /&gt;&lt;BR /&gt;I found some similar issues with custom-trained Mask R-CNN, though I'm not quite sure how to fix it for DeepLab:&lt;BR /&gt;&lt;A href="https://software.intel.com/en-us/forums/computer-vision/topic/809407" target="_blank"&gt;https://software.intel.com/en-us/forums/computer-vision/topic/809407&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Any help would be appreciated. I'm using 2019.1.144. Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Sun, 07 Jul 2019 22:26:44 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/DeepLab-v3-custom-trained-model-convert-error/m-p/1178347#M17678</guid>
      <dc:creator>Tinon__John</dc:creator>
      <dc:date>2019-07-07T22:26:44Z</dc:date>
    </item>
    <item>
      <title>Hi John,</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/DeepLab-v3-custom-trained-model-convert-error/m-p/1178348#M17679</link>
      <description>&lt;P&gt;Hi John,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;Shubha R. (Intel) wrote:&lt;BR /&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Ryan, for the TensorFlow DeepLab model, please try the following command (it works for me):&lt;/P&gt;&lt;P&gt;python mo.py --scale 1 --model_name test_deeplabv3_mobilenet_v2_argmax_tfFP32CPU1Truesync --input_shape "(1,513,513,3)" --input 1:mul_1 --input_model "c:\Users\sdramani\Downloads\deeplabv3_mnv2_pascal_train_aug\frozen_inference_graph.pb" --framework tf --output_dir c:\Users\sdramani\Downloads\out_dir --data_type FP32 --output ArgMax&lt;/P&gt;&lt;P&gt;In the output_dir you should see the following three files created:&lt;/P&gt;&lt;P&gt;test_deeplabv3_mobilenet_v2_argmax_tfFP32CPU1Truesync.xml&lt;/P&gt;&lt;P&gt;test_deeplabv3_mobilenet_v2_argmax_tfFP32CPU1Truesync.mapping&lt;/P&gt;&lt;P&gt;test_deeplabv3_mobilenet_v2_argmax_tfFP32CPU1Truesync.bin&lt;/P&gt;&lt;P&gt;As for Fast Style Transfer, we are still investigating this. Stay tuned.&lt;/P&gt;&lt;P&gt;Thanks for using OpenVINO!&lt;/P&gt;&lt;P&gt;Shubha&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 09 Jul 2019 13:01:14 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/DeepLab-v3-custom-trained-model-convert-error/m-p/1178348#M17679</guid>
      <dc:creator>HemanthKum_G_Intel</dc:creator>
      <dc:date>2019-07-09T13:01:14Z</dc:date>
    </item>
    <item>
      <title>Thanks for your reply.</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/DeepLab-v3-custom-trained-model-convert-error/m-p/1178349#M17680</link>
      <description>&lt;P&gt;Thanks for your reply.&lt;BR /&gt;&lt;BR /&gt;Unfortunately my problem persists, as the node `mul_1` doesn't seem to exist anymore after I've done transfer learning and exported a new "frozen_inference_graph.pb". Could the --input be something other than mul_1? If not, does OpenVINO provide any tool to debug a graph and find whatever the new input name is?&lt;/P&gt;</description>
      <pubDate>Tue, 09 Jul 2019 13:55:51 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/DeepLab-v3-custom-trained-model-convert-error/m-p/1178349#M17680</guid>
      <dc:creator>Tinon__John</dc:creator>
      <dc:date>2019-07-09T13:55:51Z</dc:date>
    </item>
    <item>
      <title>Hi John,</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/DeepLab-v3-custom-trained-model-convert-error/m-p/1178350#M17681</link>
      <description>&lt;P&gt;Hi John,&lt;/P&gt;&lt;P&gt;The third-party Netron tool can be used to visualize both the original model and the IR model.&lt;/P&gt;&lt;P&gt;For example, the frozen_inference_graph.pb converted to IR has a node with the following properties:&lt;/P&gt;&lt;P&gt;type: Power&lt;/P&gt;&lt;P&gt;name: mul_1/FusedPower_&lt;/P&gt;&lt;P&gt;Input: id:0:0&lt;/P&gt;&lt;P&gt;Output: id: 1:1&lt;/P&gt;</description>
      <pubDate>Tue, 09 Jul 2019 14:43:12 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/DeepLab-v3-custom-trained-model-convert-error/m-p/1178350#M17681</guid>
      <dc:creator>HemanthKum_G_Intel</dc:creator>
      <dc:date>2019-07-09T14:43:12Z</dc:date>
    </item>
    <item>
      <title>found the same error with my</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/DeepLab-v3-custom-trained-model-convert-error/m-p/1178351#M17682</link>
      <description>&lt;P&gt;I found the same error with my own trained model. I trained my model at 513x513 (all the training images are 720x1280). If I just use "python mo_tf.py --input_model path/to/model --input_shape [1,513,513,3] --output ArgMax", the conversion succeeds, but an error occurs at inference time when the input image is 720x1280; it reports a dimension mismatch. If the input image is 513x513 instead, inference runs without problems, but the result is not as expected.&lt;/P&gt;&lt;P&gt;I fixed the problem by using "python mo_tf.py --input_model path/to/model --input_shape [1,720,1280,3] --output ArgMax". The result is as expected when the input image is 720x1280.&lt;/P&gt;</description>
      <pubDate>Mon, 29 Jul 2019 10:41:58 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/DeepLab-v3-custom-trained-model-convert-error/m-p/1178351#M17682</guid>
      <dc:creator>Deng__Fucheng</dc:creator>
      <dc:date>2019-07-29T10:41:58Z</dc:date>
    </item>
    <item>
      <title>Dear Deng, Fucheng,</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/DeepLab-v3-custom-trained-model-convert-error/m-p/1178352#M17683</link>
      <description>&lt;P&gt;Dear Deng, Fucheng,&lt;/P&gt;&lt;P&gt;Thanks for reporting your solution back to the OpenVINO community! We appreciate it!&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Shubha&lt;/P&gt;</description>
      <pubDate>Tue, 30 Jul 2019 19:07:21 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/DeepLab-v3-custom-trained-model-convert-error/m-p/1178352#M17683</guid>
      <dc:creator>Shubha_R_Intel</dc:creator>
      <dc:date>2019-07-30T19:07:21Z</dc:date>
    </item>
  </channel>
</rss>