<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Failed to convert Neural Compressor quantized INT8 TF model to onnx in AI Tools from Intel</title>
    <link>https://community.intel.com/t5/AI-Tools-from-Intel/Failed-to-convert-Neural-Compressor-quantized-INT8-TF-model-to/m-p/1361559#M286</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thank you for posting in Intel Communities.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;TF2ONNX was built to translate TensorFlow models to ONNX. To convert a TensorFlow model (a frozen graph *.pb or a SavedModel) to ONNX, you can try tf2onnx.&lt;BR /&gt;However, tf2onnx has a known limitation: it does not support quantized TensorFlow models, which is discussed in the GitHub issue below.&lt;BR /&gt;&lt;A tabindex="-1" title="https://github.com/onnx/tensorflow-onnx/issues/686" href="https://github.com/onnx/tensorflow-onnx/issues/686" target="_blank" rel="noopener noreferrer" aria-label="Link https://github.com/onnx/tensorflow-onnx/issues/686"&gt;https://github.com/onnx/tensorflow-onnx/issues/686&lt;/A&gt;&lt;BR /&gt;You can first convert your FP32 TensorFlow model to ONNX and then quantize the ONNX model to INT8.&lt;BR /&gt;Alternatively, you can use TFLite2ONNX, which was created to convert TFLite models to ONNX. As of v0.3, TFLite2ONNX supports TensorFlow 2.0 and quantization conversion.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To install it via pip: &lt;CODE&gt;pip install tflite2onnx&lt;/CODE&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;We hope this response clarifies your doubts!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Thanks&lt;/P&gt;</description>
    <pubDate>Fri, 18 Feb 2022 10:22:08 GMT</pubDate>
    <dc:creator>Rahila_T_Intel</dc:creator>
    <dc:date>2022-02-18T10:22:08Z</dc:date>
    <item>
      <title>Failed to convert Neural Compressor quantized INT8 TF model to onnx</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/Failed-to-convert-Neural-Compressor-quantized-INT8-TF-model-to/m-p/1361136#M285</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;I have quantized a frozen FP32 model to an INT8 model using Neural Compressor. I am trying to convert these models to ONNX. I am able to convert the FP32 model to ONNX using&amp;nbsp;&lt;STRONG&gt;tf2onnx.convert&lt;/STRONG&gt;, but the conversion fails for the quantized INT8 model. Any help would be much appreciated. Thank you.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Error Trace:&lt;/P&gt;
&lt;P&gt;2021-09-26 10:46:32,113 - INFO - Using tensorflow=2.7.0, onnx=1.10.2, tf2onnx=1.9.3/1190aa&lt;BR /&gt;2021-09-26 10:46:32,114 - INFO - Using opset &amp;lt;onnx, 9&amp;gt;&lt;BR /&gt;Traceback (most recent call last):&lt;BR /&gt;File "/root/anaconda3/lib/python3.8/runpy.py", line 194, in _run_module_as_main&lt;BR /&gt;return _run_code(code, main_globals, None,&lt;BR /&gt;File "/root/anaconda3/lib/python3.8/runpy.py", line 87, in _run_code&lt;BR /&gt;exec(code, run_globals)&lt;BR /&gt;File "/root/anaconda3/lib/python3.8/site-packages/tf2onnx/convert.py", line 633, in &amp;lt;module&amp;gt;&lt;BR /&gt;main()&lt;BR /&gt;File "/root/anaconda3/lib/python3.8/site-packages/tf2onnx/convert.py", line 264, in main&lt;BR /&gt;model_proto, _ = _convert_common(&lt;BR /&gt;File "/root/anaconda3/lib/python3.8/site-packages/tf2onnx/convert.py", line 162, in _convert_common&lt;BR /&gt;g = process_tf_graph(tf_graph, const_node_values=const_node_values,&lt;BR /&gt;File "/root/anaconda3/lib/python3.8/site-packages/tf2onnx/tfonnx.py", line 433, in process_tf_graph&lt;BR /&gt;main_g, subgraphs = graphs_from_tf(tf_graph, input_names, output_names, shape_override, const_node_values,&lt;BR /&gt;File "/root/anaconda3/lib/python3.8/site-packages/tf2onnx/tfonnx.py", line 448, in graphs_from_tf&lt;BR /&gt;ordered_func = resolve_functions(tf_graph)&lt;BR /&gt;File "/root/anaconda3/lib/python3.8/site-packages/tf2onnx/tf_loader.py", line 759, in resolve_functions&lt;BR /&gt;_, _, _, _, _, functions = tflist_to_onnx(tf_graph, {})&lt;BR /&gt;File "/root/anaconda3/lib/python3.8/site-packages/tf2onnx/tf_utils.py", line 416, in tflist_to_onnx&lt;BR /&gt;dtypes[out.name] = map_tf_dtype(out.dtype)&lt;BR /&gt;File "/root/anaconda3/lib/python3.8/site-packages/tf2onnx/tf_utils.py", line 112, in map_tf_dtype&lt;BR /&gt;dtype = TF_TO_ONNX_DTYPE[dtype]&lt;BR /&gt;KeyError: tf.qint8&lt;/P&gt;</description>
      <pubDate>Thu, 17 Feb 2022 07:00:10 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/Failed-to-convert-Neural-Compressor-quantized-INT8-TF-model-to/m-p/1361136#M285</guid>
      <dc:creator>Anand_Viswanath</dc:creator>
      <dc:date>2022-02-17T07:00:10Z</dc:date>
    </item>
    <item>
      <title>Re: Failed to convert Neural Compressor quantized INT8 TF model to onnx</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/Failed-to-convert-Neural-Compressor-quantized-INT8-TF-model-to/m-p/1361559#M286</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thank you for posting in Intel Communities.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;TF2ONNX was built to translate TensorFlow models to ONNX. To convert a TensorFlow model (a frozen graph *.pb or a SavedModel) to ONNX, you can try tf2onnx.&lt;BR /&gt;However, tf2onnx has a known limitation: it does not support quantized TensorFlow models, which is discussed in the GitHub issue below.&lt;BR /&gt;&lt;A tabindex="-1" title="https://github.com/onnx/tensorflow-onnx/issues/686" href="https://github.com/onnx/tensorflow-onnx/issues/686" target="_blank" rel="noopener noreferrer" aria-label="Link https://github.com/onnx/tensorflow-onnx/issues/686"&gt;https://github.com/onnx/tensorflow-onnx/issues/686&lt;/A&gt;&lt;BR /&gt;You can first convert your FP32 TensorFlow model to ONNX and then quantize the ONNX model to INT8.&lt;BR /&gt;Alternatively, you can use TFLite2ONNX, which was created to convert TFLite models to ONNX. As of v0.3, TFLite2ONNX supports TensorFlow 2.0 and quantization conversion.&lt;/P&gt;
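A minimal sketch of the suggested FP32 conversion step, assuming a SavedModel directory and hypothetical file names (tf2onnx also accepts a frozen graph via --graphdef):

```shell
# Hypothetical paths; adjust to your model.
# Convert an FP32 TensorFlow SavedModel to ONNX with tf2onnx:
python -m tf2onnx.convert --saved-model ./fp32_saved_model --output model_fp32.onnx --opset 13

# For a frozen graph (*.pb), the graph inputs and outputs must be named explicitly:
python -m tf2onnx.convert --graphdef frozen_fp32.pb --inputs input:0 --outputs output:0 --output model_fp32.onnx
```

The resulting model_fp32.onnx can then be quantized on the ONNX side, which avoids the unsupported tf.qint8 dtype entirely.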
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To install it via pip: &lt;CODE&gt;pip install tflite2onnx&lt;/CODE&gt;&lt;/P&gt;
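As a minimal usage sketch of the TFLite2ONNX alternative (hypothetical file names):

```python
# tflite2onnx converts a TFLite model file to an ONNX model file.
import tflite2onnx

# Hypothetical paths: a quantized TFLite model in, an ONNX model out.
tflite2onnx.convert("model_int8.tflite", "model_int8.onnx")
```

This path assumes you can first export your TensorFlow model to TFLite format.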
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;We hope this response clarifies your doubts!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Fri, 18 Feb 2022 10:22:08 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/Failed-to-convert-Neural-Compressor-quantized-INT8-TF-model-to/m-p/1361559#M286</guid>
      <dc:creator>Rahila_T_Intel</dc:creator>
      <dc:date>2022-02-18T10:22:08Z</dc:date>
    </item>
    <item>
      <title>Re: Failed to convert Neural Compressor quantized INT8 TF model to onnx</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/Failed-to-convert-Neural-Compressor-quantized-INT8-TF-model-to/m-p/1362235#M288</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thank you for your response. I was able to convert the TF FP32 model to ONNX using tf2onnx and then quantize the model using ONNX quantization and Intel Neural Compressor.&lt;/P&gt;
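The quantization half of this flow can be sketched with ONNX Runtime's quantization API (hypothetical file names; quantize_dynamic is one of several entry points, and Intel Neural Compressor offers its own ONNX quantization path):

```python
# Hypothetical paths: quantize the tf2onnx output to INT8 with ONNX Runtime.
from onnxruntime.quantization import QuantType, quantize_dynamic

quantize_dynamic(
    model_input="model_fp32.onnx",   # FP32 model produced by tf2onnx
    model_output="model_int8.onnx",  # INT8-quantized result
    weight_type=QuantType.QInt8,
)
```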
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards,&lt;/P&gt;
&lt;P&gt;Anand&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 21 Feb 2022 14:49:40 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/Failed-to-convert-Neural-Compressor-quantized-INT8-TF-model-to/m-p/1362235#M288</guid>
      <dc:creator>Anand_Viswanath</dc:creator>
      <dc:date>2022-02-21T14:49:40Z</dc:date>
    </item>
    <item>
      <title>Re: Failed to convert Neural Compressor quantized INT8 TF model to onnx</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/Failed-to-convert-Neural-Compressor-quantized-INT8-TF-model-to/m-p/1362425#M289</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Glad to know that your issue is resolved. If you need any additional information, please post a new question, as this thread will no longer be monitored by Intel.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;Rahila T&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 22 Feb 2022 05:49:41 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/Failed-to-convert-Neural-Compressor-quantized-INT8-TF-model-to/m-p/1362425#M289</guid>
      <dc:creator>Rahila_T_Intel</dc:creator>
      <dc:date>2022-02-22T05:49:41Z</dc:date>
    </item>
    <item>
      <title>Re: Failed to convert Neural Compressor quantized INT8 TF model to onnx</title>
      <link>https://community.intel.com/t5/AI-Tools-from-Intel/Failed-to-convert-Neural-Compressor-quantized-INT8-TF-model-to/m-p/1386740#M309</link>
      <description>&lt;P&gt;Can you share more in-depth details on how the issue was resolved? It might be useful for me too.&lt;/P&gt;</description>
      <pubDate>Mon, 23 May 2022 22:14:59 GMT</pubDate>
      <guid>https://community.intel.com/t5/AI-Tools-from-Intel/Failed-to-convert-Neural-Compressor-quantized-INT8-TF-model-to/m-p/1386740#M309</guid>
      <dc:creator>Abhishek81</dc:creator>
      <dc:date>2022-05-23T22:14:59Z</dc:date>
    </item>
  </channel>
</rss>

