<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: The quantized INT8 ONNX model fails to load with an invalid model error in AI Tools from Intel</title>
    <link>https://community.intel.com/t5/AI-Tools-from-Intel/The-quantized-INT8-onnx-models-fails-to-load-with-invalid-model/m-p/1412974#M354</link>
    <description>&lt;P&gt;Hi,&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We have not heard back from you.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Could you please share the sample reproducer code used for inferencing, along with the exact steps followed, so that we can investigate the issue from our end?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards,&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Diya&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Tue, 06 Sep 2022 06:05:35 GMT</pubDate>
    <dc:creator>DiyaN_Intel</dc:creator>
    <dc:date>2022-09-06T06:05:35Z</dc:date>
    <item>
      <title>The quantized INT8 ONNX model fails to load with an invalid model error</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/The-quantized-INT8-onnx-models-fails-to-load-with-invalid-model/m-p/1409121#M339</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I have quantized my ONNX FP32 model to ONNX INT8 model using Intel's Neural Compressor.&lt;/P&gt;
&lt;P&gt;When I try to load the model to run inference, it fails with an invalid model error.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Error:&lt;/P&gt;
&lt;P&gt;File "testOnnxModel.py", line 178, in inference&lt;BR /&gt;session = ort.InferenceSession(onnx_file, sess_options=sess_options, providers=['CPUExecutionProvider'])&lt;BR /&gt;File "/root/anaconda3/envs/versa_benchmark/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 335, in __init__&lt;BR /&gt;self._create_inference_session(providers, provider_options, disabled_optimizers)&lt;BR /&gt;File "/root/anaconda3/envs/versa_benchmark/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 370, in _create_inference_session&lt;BR /&gt;sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)&lt;BR /&gt;onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from rappnet2_int8.onnx failed:This is an invalid model. Error: two nodes with same node name (model/rt_block/layer_normalization/moments/SquaredDifference:0_QuantizeLinear).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I am able to quantize and run the TensorFlow version of the same model without issues.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Please help with this.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;TIA,&lt;/P&gt;
&lt;P&gt;Anand Viswanath A&lt;/P&gt;</description>
      <pubDate>Thu, 18 Aug 2022 11:52:35 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/The-quantized-INT8-onnx-models-fails-to-load-with-invalid-model/m-p/1409121#M339</guid>
      <dc:creator>Anand_Viswanath</dc:creator>
      <dc:date>2022-08-18T11:52:35Z</dc:date>
    </item>
    <item>
      <title>Re: The quantized INT8 ONNX model fails to load with an invalid model error</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/The-quantized-INT8-onnx-models-fails-to-load-with-invalid-model/m-p/1409419#M341</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Thank you for posting in Intel Communities.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Could you please share the following details?&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;1. Sample reproducer code&lt;/P&gt;&lt;P&gt;2. exact steps and the commands used&lt;/P&gt;&lt;P&gt;3. OS details.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Diya&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 19 Aug 2022 12:40:21 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/The-quantized-INT8-onnx-models-fails-to-load-with-invalid-model/m-p/1409419#M341</guid>
      <dc:creator>DiyaN_Intel</dc:creator>
      <dc:date>2022-08-19T12:40:21Z</dc:date>
    </item>
    <item>
      <title>Re: Re: The quantized INT8 ONNX model fails to load with an invalid model error</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/The-quantized-INT8-onnx-models-fails-to-load-with-invalid-model/m-p/1409828#M343</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Please find the details below,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Code :&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Anand_Viswanath_0-1661162129630.png" style="width: 400px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/32784iD4A5664DCFE09F78/image-size/medium?v=v2&amp;amp;px=400&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="Anand_Viswanath_0-1661162129630.png" alt="Anand_Viswanath_0-1661162129630.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Config :&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Anand_Viswanath_1-1661162214736.png" style="width: 400px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/32785i35843C3B4CDD697B/image-size/medium?v=v2&amp;amp;px=400&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="Anand_Viswanath_1-1661162214736.png" alt="Anand_Viswanath_1-1661162214736.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Steps:&lt;/P&gt;
&lt;P&gt;Execute the Python script containing the quantization code.&lt;/P&gt;
&lt;P&gt;The script takes an ONNX FP32 model that was converted from a TensorFlow frozen FP32 model using tf2onnx.&lt;/P&gt;
&lt;P&gt;The output INT8 ONNX model, when loaded for inferencing, gives the error above.&lt;/P&gt;
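Since the config itself is only visible as a screenshot above, here is a minimal sketch of what a Neural Compressor quantization config of that era (the 1.x YAML schema) typically looks like. The model name, backend, and tuning criteria are illustrative assumptions, not the poster's actual settings.

```yaml
# Hypothetical Neural Compressor 1.x config sketch; field names follow the
# 1.x YAML schema, values are illustrative only.
model:
  name: rappnet2
  framework: onnxrt_qlinearops      # ONNX Runtime QLinear-ops backend

quantization:
  approach: post_training_static_quant

tuning:
  accuracy_criterion:
    relative: 0.01                  # tolerate up to 1% relative accuracy loss
  exit_policy:
    timeout: 0                      # accept the first tuning result
```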
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;OS:&lt;/P&gt;
&lt;P&gt;Linux sdp 5.4.0-110-generic #124-Ubuntu&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards,&lt;/P&gt;
&lt;P&gt;Anand Viswanath A&lt;/P&gt;</description>
      <pubDate>Mon, 22 Aug 2022 10:00:24 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/The-quantized-INT8-onnx-models-fails-to-load-with-invalid-model/m-p/1409828#M343</guid>
      <dc:creator>Anand_Viswanath</dc:creator>
      <dc:date>2022-08-22T10:00:24Z</dc:date>
    </item>
    <item>
      <title>Re: The quantized INT8 ONNX model fails to load with an invalid model error</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/The-quantized-INT8-onnx-models-fails-to-load-with-invalid-model/m-p/1411318#M349</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi ,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;As discussed privately, please share the sample reproducer code used for inferencing, along with the exact steps followed.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Regards, &lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Diya&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 29 Aug 2022 06:53:49 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/The-quantized-INT8-onnx-models-fails-to-load-with-invalid-model/m-p/1411318#M349</guid>
      <dc:creator>DiyaN_Intel</dc:creator>
      <dc:date>2022-08-29T06:53:49Z</dc:date>
    </item>
    <item>
      <title>Re: The quantized INT8 ONNX model fails to load with an invalid model error</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/The-quantized-INT8-onnx-models-fails-to-load-with-invalid-model/m-p/1412974#M354</link>
      <description>&lt;P&gt;Hi,&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We have not heard back from you.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Could you please share the sample reproducer code used for inferencing, along with the exact steps followed, so that we can investigate the issue from our end?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards,&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Diya&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 06 Sep 2022 06:05:35 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/The-quantized-INT8-onnx-models-fails-to-load-with-invalid-model/m-p/1412974#M354</guid>
      <dc:creator>DiyaN_Intel</dc:creator>
      <dc:date>2022-09-06T06:05:35Z</dc:date>
    </item>
    <item>
      <title>Re: The quantized INT8 ONNX model fails to load with an invalid model error</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/The-quantized-INT8-onnx-models-fails-to-load-with-invalid-model/m-p/1414385#M357</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;We have not heard back from you. This thread will no longer be monitored by Intel. If you need further assistance, please post a new question.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Thanks and Regards,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Diya&amp;nbsp;&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 13 Sep 2022 07:04:33 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/The-quantized-INT8-onnx-models-fails-to-load-with-invalid-model/m-p/1414385#M357</guid>
      <dc:creator>DiyaN_Intel</dc:creator>
      <dc:date>2022-09-13T07:04:33Z</dc:date>
    </item>
  </channel>
</rss>

