<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Unable to convert frozen_model to IR in Intel® Distribution of OpenVINO™ Toolkit</title>
    <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Unable-to-convert-frozen-model-to-IR/m-p/1661392#M31776</link>
    <description>&lt;P&gt;&lt;SPAN&gt;Hi Tuhnu,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Thanks for reaching out. To resolve the IR conversion failure, try the following steps:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;1. Set the Correct Input Shape. Your model expects a dynamic batch size (-1,4), but OpenVINO may require a fixed size. Try:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;--input_shape "[1,4]" for a static batch size of 1.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;If your OpenVINO version supports dynamic shapes (2022.1 and later), use --input_shape "[?,4]".&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;2. Ensure Input/Output Names Match. TensorFlow assigns specific tensor names to inputs and outputs; use the names from your model's SignatureDef in the command:&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;--input "serving_default_input_1"
--output "StatefulPartitionedCall"&lt;/LI-CODE&gt;
&lt;P&gt;&lt;SPAN&gt;3. Verify the Model Is Frozen. Ensure the graph is properly frozen and check the tensor names with:&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;print([tensor.name for tensor in frozen_func.inputs])
print([tensor.name for tensor in frozen_func.outputs])&lt;/LI-CODE&gt;
&lt;P&gt;&lt;SPAN&gt;4. Use the --saved_model_dir Option. If possible, convert the entire SavedModel directory instead of a single .pb file.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;OpenVINO 2021.4 has limited support for TensorFlow 2.4. Upgrading to OpenVINO 2023.x may improve compatibility. If the issue persists, share the exact error message for further assistance.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Aznie&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Wed, 29 Jan 2025 02:22:00 GMT</pubDate>
    <dc:creator>Aznie_Intel</dc:creator>
    <dc:date>2025-01-29T02:22:00Z</dc:date>
    <item>
      <title>Unable to convert frozen_model to IR</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Unable-to-convert-frozen-model-to-IR/m-p/1661259#M31774</link>
      <description>&lt;P&gt;Hi!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am using OpenVINO version 2021.4. I have created a simple model with TensorFlow version 2.4 that has four inputs (input_shape 1,4). I have converted it to a frozen model, but the conversion to IR format fails.&lt;/P&gt;&lt;P&gt;This is the command:&lt;/P&gt;&lt;P&gt;python "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\model_optimizer\mo.py"&lt;BR /&gt;--input_model "C:\Users\tuhnus\pipari_model\frozen_model.pb" ^&lt;BR /&gt;--input "input_1" ^&lt;BR /&gt;--output "Identity" ^&lt;BR /&gt;--input_shape "[1,4]" ^&lt;BR /&gt;--data_type FP16 ^&lt;BR /&gt;--output_dir "D:\AI_Model\trained_models\openvino_ir"&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This is the signature:&lt;/P&gt;&lt;P&gt;The given SavedModel SignatureDef contains the following input(s):&lt;BR /&gt;inputs['input_1'] tensor_info:&lt;BR /&gt;dtype: DT_FLOAT&lt;BR /&gt;shape: (-1, 4)&lt;BR /&gt;name: serving_default_input_1:0&lt;BR /&gt;The given SavedModel SignatureDef contains the following output(s):&lt;BR /&gt;outputs['dense_2'] tensor_info:&lt;BR /&gt;dtype: DT_FLOAT&lt;BR /&gt;shape: (-1, 1)&lt;BR /&gt;name: StatefulPartitionedCall:0&lt;BR /&gt;Method name is: tensorflow/serving/predict&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;What do I have to change so that the model can be converted?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;@model_optimizer&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Tuhnu&lt;/P&gt;</description>
      <pubDate>Tue, 28 Jan 2025 16:43:04 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Unable-to-convert-frozen-model-to-IR/m-p/1661259#M31774</guid>
      <dc:creator>Tuhnu</dc:creator>
      <dc:date>2025-01-28T16:43:04Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to convert frozen_model to IR</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Unable-to-convert-frozen-model-to-IR/m-p/1661392#M31776</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi Tuhnu,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Thanks for reaching out. To resolve the IR conversion failure, try the following steps:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;1. Set the Correct Input Shape. Your model expects a dynamic batch size (-1,4), but OpenVINO may require a fixed size. Try:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;--input_shape "[1,4]" for a static batch size of 1.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;If your OpenVINO version supports dynamic shapes (2022.1 and later), use --input_shape "[?,4]".&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
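&lt;P&gt;&lt;SPAN&gt;For example, the shape flag would look like this in the full conversion command (the model path here is a placeholder; substitute your own):&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;:: Static batch of 1
python mo.py --input_model frozen_model.pb --input_shape "[1,4]"
:: Dynamic batch (OpenVINO 2022.1 and later)
python mo.py --input_model frozen_model.pb --input_shape "[?,4]"&lt;/LI-CODE&gt;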
&lt;P&gt;&lt;SPAN&gt;2. Ensure Input/Output Names Match. TensorFlow assigns specific tensor names to inputs and outputs; use the names from your model's SignatureDef in the command:&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;--input "serving_default_input_1"
--output "StatefulPartitionedCall"&lt;/LI-CODE&gt;
&lt;P&gt;&lt;SPAN&gt;3. Verify the Model Is Frozen. Ensure the graph is properly frozen and check the tensor names with:&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;print([tensor.name for tensor in frozen_func.inputs])
print([tensor.name for tensor in frozen_func.outputs])&lt;/LI-CODE&gt;
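&lt;P&gt;&lt;SPAN&gt;Here, frozen_func refers to the concrete function returned by TensorFlow's convert_variables_to_constants_v2 when freezing a SavedModel. A minimal sketch of obtaining it (the SavedModel path is a placeholder):&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Load the SavedModel and take its serving signature (path is a placeholder)
model = tf.saved_model.load("saved_model_dir")
concrete_func = model.signatures["serving_default"]

# Fold variables into constants to produce a frozen graph function
frozen_func = convert_variables_to_constants_v2(concrete_func)
print([tensor.name for tensor in frozen_func.inputs])
print([tensor.name for tensor in frozen_func.outputs])&lt;/LI-CODE&gt;
&lt;P&gt;&lt;SPAN&gt;The printed names are the ones to pass to --input and --output.&lt;/SPAN&gt;&lt;/P&gt;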
&lt;P&gt;&lt;SPAN&gt;4. Use the --saved_model_dir Option. If possible, convert the entire SavedModel directory instead of a single .pb file.&lt;/SPAN&gt;&lt;/P&gt;
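&lt;P&gt;&lt;SPAN&gt;For instance (directory paths are placeholders; adjust them to your setup):&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;python mo.py --saved_model_dir "C:\path\to\saved_model" ^
--input_shape "[1,4]" ^
--data_type FP16 ^
--output_dir "C:\path\to\openvino_ir"&lt;/LI-CODE&gt;
&lt;P&gt;&lt;SPAN&gt;Converting from the SavedModel directory lets Model Optimizer read the signature directly, which avoids mismatched tensor names.&lt;/SPAN&gt;&lt;/P&gt;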
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;OpenVINO 2021.4 has limited support for TensorFlow 2.4. Upgrading to OpenVINO 2023.x may improve compatibility. If the issue persists, share the exact error message for further assistance.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Aznie&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 29 Jan 2025 02:22:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Unable-to-convert-frozen-model-to-IR/m-p/1661392#M31776</guid>
      <dc:creator>Aznie_Intel</dc:creator>
      <dc:date>2025-01-29T02:22:00Z</dc:date>
    </item>
    <item>
      &lt;title&gt;Re: Unable to convert frozen_model to IR&lt;/title&gt;
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Unable-to-convert-frozen-model-to-IR/m-p/1663568#M31789</link>
      <description>&lt;P&gt;Hi Tuhnu,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;This thread will no longer be monitored since we have provided a solution.&amp;nbsp;If you need any additional information from Intel, please submit a new question.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Aznie&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 06 Feb 2025 03:03:48 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Unable-to-convert-frozen-model-to-IR/m-p/1663568#M31789</guid>
      <dc:creator>Aznie_Intel</dc:creator>
      <dc:date>2025-02-06T03:03:48Z</dc:date>
    </item>
  </channel>
</rss>

