<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Re: INT8 optimization for YOLOv4 in Intel® Distribution of OpenVINO™ Toolkit</title>
    <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/INT8-optimization-for-YOLOv4/m-p/1235462#M21836</link>
    <description>&lt;P&gt;Hi Adli,&lt;/P&gt;
&lt;P&gt;I have fixed the issue &lt;LI-EMOJI id="lia_smiling-face-with-smiling-eyes" title=":smiling_face_with_smiling_eyes:"&gt;&lt;/LI-EMOJI&gt;&lt;/P&gt;
&lt;P&gt;I created a new INT8 model by referring to the link below.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/How-to-enable-the-Post-Training-Optimization-Tool/m-p/1180921?profile.language=ko" target="_blank"&gt;https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/How-to-enable-the-Post-Training-Optimization-Tool/m-p/1180921?profile.language=ko&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Now I am able to run the benchmark app without any errors.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards,&lt;/P&gt;
&lt;P&gt;Rahila T&lt;/P&gt;</description>
    <pubDate>Wed, 09 Dec 2020 05:13:50 GMT</pubDate>
    <dc:creator>Rahila_T_Intel</dc:creator>
    <dc:date>2020-12-09T05:13:50Z</dc:date>
    <item>
      <title>INT8 optimization for YOLOv4</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/INT8-optimization-for-YOLOv4/m-p/1234748#M21796</link>
      <description>&lt;P&gt;I was trying to do POT optimization with custom-trained YOLOv4 weights.&lt;/P&gt;
&lt;P&gt;Initially, I converted the YOLOv4 weights to a .pb file and then created IR files using OpenVINO. I am also able to optimize the model using POT.&lt;/P&gt;
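For reference, a POT run with the DefaultQuantization algorithm is typically driven by a JSON configuration along the lines of the sketch below. The model name, file paths, engine config file, and stat_subset_size here are placeholders for illustration, not the exact values used in this thread:

```json
{
    "model": {
        "model_name": "TFYOLOV4",
        "model": "/path/to/ir/TFYOLOV4.xml",
        "weights": "/path/to/ir/TFYOLOV4.bin"
    },
    "engine": {
        "config": "/path/to/engine_config.yml"
    },
    "compression": {
        "target_device": "CPU",
        "algorithms": [
            {
                "name": "DefaultQuantization",
                "params": {
                    "preset": "performance",
                    "stat_subset_size": 300
                }
            }
        ]
    }
}
```

A configuration like this is passed to the tool with pot -c config.json, which writes the quantized IR to an optimized subdirectory of the results folder.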
&lt;P&gt;I have checked the benchmark app with both IR FP32 model and INT8 model.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;BENCHMARK APP RESULT :&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;------------------Using FP32 IR model----------------&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Count:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 860 iterations&lt;/P&gt;
&lt;P&gt;Duration:&amp;nbsp;&amp;nbsp; 61260.79 ms&lt;/P&gt;
&lt;P&gt;Latency:&amp;nbsp;&amp;nbsp;&amp;nbsp; 695.84 ms&lt;/P&gt;
&lt;P&gt;Throughput: 14.04 FPS&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;----------------Using INT8 model----------------------&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Command tried :&amp;nbsp;&lt;/P&gt;
&lt;P&gt;python3 /openvino_2021.1.110/deployment_tools/tools/benchmark_tool/benchmark_app.py -m /results/TFYOLOV4_DefaultQuantization/2020-12-02_22-57-00/optimized/TFYOLOV4.xml -i cam --api_type async --number_iterations 10&lt;/P&gt;
&lt;P&gt;[Step 1/11] Parsing and validating input arguments&lt;/P&gt;
&lt;P&gt;/openvino_2021.1.110/python/python3.6/openvino/tools/benchmark/main.py:29: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead&lt;/P&gt;
&lt;P&gt;&amp;nbsp; logger.warn(" -nstreams default value is determined automatically for a device. "&lt;/P&gt;
&lt;P&gt;[ WARNING ]&amp;nbsp; -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README.&lt;/P&gt;
&lt;P&gt;[Step 2/11] Loading Inference Engine&lt;/P&gt;
&lt;P&gt;[ INFO ] InferenceEngine:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; API version............. 2.1.2021.1.0-1237-bece22ac675-releases/2021/1&lt;/P&gt;
&lt;P&gt;[ INFO ] Device info&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; CPU&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; MKLDNNPlugin............ version 2.1&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Build................... 2021.1.0-1237-bece22ac675-releases/2021/1&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;[Step 3/11] Setting device configuration&lt;/P&gt;
&lt;P&gt;[ WARNING ] -nstreams default value is determined automatically for CPU device. Although the automatic selection usually provides a reasonable performance,but it still may be non-optimal for some cases, for more information look at README.&lt;/P&gt;
&lt;P&gt;[Step 4/11] Reading network files&lt;/P&gt;
&lt;P&gt;[ INFO ] Read network took 377.16 ms&lt;/P&gt;
&lt;P&gt;[Step 5/11] Resizing network to match image sizes and given batch&lt;/P&gt;
&lt;P&gt;[ INFO ] Network batch size: 1&lt;/P&gt;
&lt;P&gt;[Step 6/11] Configuring input of the model&lt;/P&gt;
&lt;P&gt;[Step 7/11] Loading the model to the device&lt;/P&gt;
&lt;P&gt;[ ERROR ] Quantize layer StatefulPartitionedCall/functional_7/up_sampling2d_2/resize/ResizeNearestNeighbor/fq_input_0 supports only parameter levels &amp;gt; 1&lt;/P&gt;
&lt;P&gt;Traceback (most recent call last):&lt;/P&gt;
&lt;P&gt;&amp;nbsp; File "/openvino_2021.1.110/python/python3.6/openvino/tools/benchmark/main.py", line 218, in run&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; exe_network = benchmark.load_network(ie_network)&lt;/P&gt;
&lt;P&gt;&amp;nbsp; File "/openvino_2021.1.110/python/python3.6/openvino/tools/benchmark/benchmark.py", line 73, in load_network&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; num_requests=1 if self.api_type == 'sync' else self.nireq or 0)&lt;/P&gt;
&lt;P&gt;&amp;nbsp; File "ie_api.pyx", line 311, in openvino.inference_engine.ie_api.IECore.load_network&lt;/P&gt;
&lt;P&gt;&amp;nbsp; File "ie_api.pyx", line 320, in openvino.inference_engine.ie_api.IECore.load_network&lt;/P&gt;
&lt;P&gt;RuntimeError: Quantize layer StatefulPartitionedCall/functional_7/up_sampling2d_2/resize/ResizeNearestNeighbor/fq_input_0 supports only parameter levels &amp;gt; 1&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I tried the same procedure with the publicly available YOLOv4 weights file, and in that case I am able to run the benchmark app without any errors.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 07 Dec 2020 13:14:34 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/INT8-optimization-for-YOLOv4/m-p/1234748#M21796</guid>
      <dc:creator>Rahila_T_Intel</dc:creator>
      <dc:date>2020-12-07T13:14:34Z</dc:date>
    </item>
    <item>
      <title>Re:INT8 optimization for YOLOv4</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/INT8-optimization-for-YOLOv4/m-p/1235442#M21833</link>
      <description>&lt;P&gt;Hi Rahila,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Thank you for reaching out to us. The current release of the OpenVINO toolkit does not officially support YOLOv4. We encourage our users to try and explore YOLOv4 on the OpenVINO toolkit.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;If possible, could you set a single image as the benchmark app input and then run the benchmark app? Please share the outcome here.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Adli&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 09 Dec 2020 02:57:11 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/INT8-optimization-for-YOLOv4/m-p/1235442#M21833</guid>
      <dc:creator>Adli</dc:creator>
      <dc:date>2020-12-09T02:57:11Z</dc:date>
    </item>
    <item>
      <title>Re: Re:INT8 optimization for YOLOv4</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/INT8-optimization-for-YOLOv4/m-p/1235458#M21835</link>
      <description>&lt;P&gt;Hi Adli,&lt;/P&gt;
&lt;P&gt;There is no change with a&amp;nbsp;&lt;SPAN&gt;single image as the benchmark app input.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Command&lt;/STRONG&gt; : python3 /openvino_2021.1.110/deployment_tools/tools/benchmark_tool/benchmark_app.py -m /results1/TFYOLOV4_DefaultQuantization/2020-12-02_22-57-00/optimized//TFYOLOV4.xml -i /models/6dogs.jpg&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Output&lt;/STRONG&gt; :&amp;nbsp;&lt;/P&gt;
&lt;P&gt;[Step 1/11] Parsing and validating input arguments&lt;BR /&gt;/openvino_2021.1.110/python/python3.6/openvino/tools/benchmark/main.py:29: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead&lt;BR /&gt;logger.warn(" -nstreams default value is determined automatically for a device. "&lt;BR /&gt;[ WARNING ] -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README.&lt;BR /&gt;[Step 2/11] Loading Inference Engine&lt;BR /&gt;[ INFO ] InferenceEngine:&lt;BR /&gt;API version............. 2.1.2021.1.0-1237-bece22ac675-releases/2021/1&lt;BR /&gt;[ INFO ] Device info&lt;BR /&gt;CPU&lt;BR /&gt;MKLDNNPlugin............ version 2.1&lt;BR /&gt;Build................... 2021.1.0-1237-bece22ac675-releases/2021/1&lt;/P&gt;
&lt;P&gt;[Step 3/11] Setting device configuration&lt;BR /&gt;[ WARNING ] -nstreams default value is determined automatically for CPU device. Although the automatic selection usually provides a reasonable performance,but it still may be non-optimal for some cases, for more information look at README.&lt;BR /&gt;[Step 4/11] Reading network files&lt;BR /&gt;[ INFO ] Read network took 392.63 ms&lt;BR /&gt;[Step 5/11] Resizing network to match image sizes and given batch&lt;BR /&gt;[ INFO ] Network batch size: 1&lt;BR /&gt;[Step 6/11] Configuring input of the model&lt;BR /&gt;[Step 7/11] Loading the model to the device&lt;BR /&gt;[ ERROR ] Quantize layer StatefulPartitionedCall/functional_7/up_sampling2d_2/resize/ResizeNearestNeighbor/fq_input_0 supports only parameter levels &amp;gt; 1&lt;BR /&gt;Traceback (most recent call last):&lt;BR /&gt;File "/openvino_2021.1.110/python/python3.6/openvino/tools/benchmark/main.py", line 218, in run&lt;BR /&gt;exe_network = benchmark.load_network(ie_network)&lt;BR /&gt;File "/openvino_2021.1.110/python/python3.6/openvino/tools/benchmark/benchmark.py", line 73, in load_network&lt;BR /&gt;num_requests=1 if self.api_type == 'sync' else self.nireq or 0)&lt;BR /&gt;File "ie_api.pyx", line 311, in openvino.inference_engine.ie_api.IECore.load_network&lt;BR /&gt;File "ie_api.pyx", line 320, in openvino.inference_engine.ie_api.IECore.load_network&lt;BR /&gt;RuntimeError: Quantize layer StatefulPartitionedCall/functional_7/up_sampling2d_2/resize/ResizeNearestNeighbor/fq_input_0 supports only parameter levels &amp;gt; 1&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks,&lt;/P&gt;
&lt;P&gt;Rahila&lt;/P&gt;</description>
      <pubDate>Wed, 09 Dec 2020 04:33:38 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/INT8-optimization-for-YOLOv4/m-p/1235458#M21835</guid>
      <dc:creator>Rahila_T_Intel</dc:creator>
      <dc:date>2020-12-09T04:33:38Z</dc:date>
    </item>
    <item>
      <title>Re: Re:INT8 optimization for YOLOv4</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/INT8-optimization-for-YOLOv4/m-p/1235462#M21836</link>
      <description>&lt;P&gt;Hi Adli,&lt;/P&gt;
&lt;P&gt;I have fixed the issue &lt;LI-EMOJI id="lia_smiling-face-with-smiling-eyes" title=":smiling_face_with_smiling_eyes:"&gt;&lt;/LI-EMOJI&gt;&lt;/P&gt;
&lt;P&gt;I created a new INT8 model by referring to the link below.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/How-to-enable-the-Post-Training-Optimization-Tool/m-p/1180921?profile.language=ko" target="_blank"&gt;https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/How-to-enable-the-Post-Training-Optimization-Tool/m-p/1180921?profile.language=ko&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Now I am able to run the benchmark app without any errors.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards,&lt;/P&gt;
&lt;P&gt;Rahila T&lt;/P&gt;</description>
      <pubDate>Wed, 09 Dec 2020 05:13:50 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/INT8-optimization-for-YOLOv4/m-p/1235462#M21836</guid>
      <dc:creator>Rahila_T_Intel</dc:creator>
      <dc:date>2020-12-09T05:13:50Z</dc:date>
    </item>
    <item>
      <title>Re:INT8 optimization for YOLOv4</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/INT8-optimization-for-YOLOv4/m-p/1235503#M21841</link>
      <description>&lt;P&gt;Hi Rahila,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Adli&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 09 Dec 2020 06:45:27 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/INT8-optimization-for-YOLOv4/m-p/1235503#M21841</guid>
      <dc:creator>Adli</dc:creator>
      <dc:date>2020-12-09T06:45:27Z</dc:date>
    </item>
  </channel>
</rss>

