Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

INT8 optimization for YOLOv4

Rahila_T_Intel
Moderator

I was trying to run POT optimization on custom-trained YOLOv4 weights.

I first converted the YOLOv4 weights to a .pb file and then created the IR files using OpenVINO. I was also able to optimize the model using POT.

I then checked the benchmark app with both the FP32 IR model and the INT8 model.
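For reference, the DefaultQuantization step in POT is driven by a JSON configuration; a minimal example of its general shape is below. The model name, paths, and calibration data source here are placeholders, not the actual files used in this run:

```json
{
    "model": {
        "model_name": "TFYOLOV4",
        "model": "/models/TFYOLOV4.xml",
        "weights": "/models/TFYOLOV4.bin"
    },
    "engine": {
        "type": "simplified",
        "data_source": "/data/calibration_images"
    },
    "compression": {
        "target_device": "CPU",
        "algorithms": [
            {
                "name": "DefaultQuantization",
                "params": {
                    "preset": "performance",
                    "stat_subset_size": 300
                }
            }
        ]
    }
}
```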

 

BENCHMARK APP RESULT :

------------------Using FP32 IR model----------------

Count:      860 iterations

Duration:   61260.79 ms

Latency:    695.84 ms

Throughput: 14.04 FPS
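As a sanity check, benchmark_app's reported throughput is just the iteration count divided by the total duration; a quick calculation reproduces the FP32 figure above:

```python
# Reproduce benchmark_app's Throughput line from its Count and Duration lines.
count = 860             # iterations reported by benchmark_app
duration_ms = 61260.79  # total duration reported by benchmark_app

throughput_fps = count / (duration_ms / 1000.0)
print(f"{throughput_fps:.2f} FPS")  # matches the reported 14.04 FPS
```

Note that in async mode the per-request latency (695.84 ms here) can exceed 1000 / throughput, because several infer requests are in flight concurrently.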

----------------Using INT8 model----------------------

Command tried:

python3 /openvino_2021.1.110/deployment_tools/tools/benchmark_tool/benchmark_app.py -m /results/TFYOLOV4_DefaultQuantization/2020-12-02_22-57-00/optimized/TFYOLOV4.xml -i cam --api_type async --number_iterations 10

[Step 1/11] Parsing and validating input arguments

/openvino_2021.1.110/python/python3.6/openvino/tools/benchmark/main.py:29: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead

  logger.warn(" -nstreams default value is determined automatically for a device. "

[ WARNING ]  -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README.

[Step 2/11] Loading Inference Engine

[ INFO ] InferenceEngine:

         API version............. 2.1.2021.1.0-1237-bece22ac675-releases/2021/1

[ INFO ] Device info

         CPU

         MKLDNNPlugin............ version 2.1

         Build................... 2021.1.0-1237-bece22ac675-releases/2021/1

 

[Step 3/11] Setting device configuration

[ WARNING ] -nstreams default value is determined automatically for CPU device. Although the automatic selection usually provides a reasonable performance,but it still may be non-optimal for some cases, for more information look at README.

[Step 4/11] Reading network files

[ INFO ] Read network took 377.16 ms

[Step 5/11] Resizing network to match image sizes and given batch

[ INFO ] Network batch size: 1

[Step 6/11] Configuring input of the model

[Step 7/11] Loading the model to the device

[ ERROR ] Quantize layer StatefulPartitionedCall/functional_7/up_sampling2d_2/resize/ResizeNearestNeighbor/fq_input_0 supports only parameter levels > 1

Traceback (most recent call last):

  File "/openvino_2021.1.110/python/python3.6/openvino/tools/benchmark/main.py", line 218, in run

    exe_network = benchmark.load_network(ie_network)

  File "/openvino_2021.1.110/python/python3.6/openvino/tools/benchmark/benchmark.py", line 73, in load_network

    num_requests=1 if self.api_type == 'sync' else self.nireq or 0)

  File "ie_api.pyx", line 311, in openvino.inference_engine.ie_api.IECore.load_network

  File "ie_api.pyx", line 320, in openvino.inference_engine.ie_api.IECore.load_network

RuntimeError: Quantize layer StatefulPartitionedCall/functional_7/up_sampling2d_2/resize/ResizeNearestNeighbor/fq_input_0 supports only parameter levels > 1

 

I tried the same procedure with the publicly available YOLOv4 weights file, and in that case I was able to run the benchmark app without any errors.
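The RuntimeError above comes from the CPU plugin rejecting a FakeQuantize operation whose levels attribute is not greater than 1. As a rough illustration of why that constraint exists (this is a generic uniform fake-quantization sketch, not OpenVINO's implementation), levels is the number of discrete values the input range is mapped onto, so levels = 1 would make the quantization step undefined:

```python
def fake_quantize(x, in_low, in_high, levels):
    """Uniform fake quantization: clamp x to [in_low, in_high] and
    snap it to one of `levels` evenly spaced values."""
    if levels <= 1:
        # With a single level the step (range / (levels - 1)) is
        # undefined -- the same constraint the plugin enforces.
        raise ValueError("levels must be > 1")
    step = (in_high - in_low) / (levels - 1)
    clamped = min(max(x, in_low), in_high)
    return round((clamped - in_low) / step) * step + in_low

print(fake_quantize(0.5, 0.0, 1.0, 256))  # close to 0.5 (step is 1/255)
```

A FakeQuantize node with levels = 1 in the IR therefore suggests the quantization statistics for that layer (here, the input of a ResizeNearestNeighbor) collapsed during POT, rather than a benchmark_app problem.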

 

1 Solution
Rahila_T_Intel
Moderator

Hi Adli,

I have fixed the issue.

I created a new INT8 model by referring to the link below.

https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/How-to-enable-the-Post-Training-Optimi...

Now I am able to run the benchmark app without any errors.

 

Regards,

Rahila T


Adli
Moderator

Hi Rahila,


Thank you for reaching out to us. The current release of the OpenVINO toolkit does not officially support YOLOv4. However, we encourage users to try and explore YOLOv4 on the OpenVINO toolkit.


If possible, could you set a single image as the benchmark app input and then run the benchmark app? Please share the outcome here.


Regards,

Adli


Rahila_T_Intel
Moderator

Hi Adli,

There is no change with a single image as the benchmark app input.

Command: python3 /openvino_2021.1.110/deployment_tools/tools/benchmark_tool/benchmark_app.py -m /results1/TFYOLOV4_DefaultQuantization/2020-12-02_22-57-00/optimized/TFYOLOV4.xml -i /models/6dogs.jpg

Output:

[Step 1/11] Parsing and validating input arguments
/openvino_2021.1.110/python/python3.6/openvino/tools/benchmark/main.py:29: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
logger.warn(" -nstreams default value is determined automatically for a device. "
[ WARNING ] -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README.
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
API version............. 2.1.2021.1.0-1237-bece22ac675-releases/2021/1
[ INFO ] Device info
CPU
MKLDNNPlugin............ version 2.1
Build................... 2021.1.0-1237-bece22ac675-releases/2021/1

[Step 3/11] Setting device configuration
[ WARNING ] -nstreams default value is determined automatically for CPU device. Although the automatic selection usually provides a reasonable performance,but it still may be non-optimal for some cases, for more information look at README.
[Step 4/11] Reading network files
[ INFO ] Read network took 392.63 ms
[Step 5/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1
[Step 6/11] Configuring input of the model
[Step 7/11] Loading the model to the device
[ ERROR ] Quantize layer StatefulPartitionedCall/functional_7/up_sampling2d_2/resize/ResizeNearestNeighbor/fq_input_0 supports only parameter levels > 1
Traceback (most recent call last):
File "/openvino_2021.1.110/python/python3.6/openvino/tools/benchmark/main.py", line 218, in run
exe_network = benchmark.load_network(ie_network)
File "/openvino_2021.1.110/python/python3.6/openvino/tools/benchmark/benchmark.py", line 73, in load_network
num_requests=1 if self.api_type == 'sync' else self.nireq or 0)
File "ie_api.pyx", line 311, in openvino.inference_engine.ie_api.IECore.load_network
File "ie_api.pyx", line 320, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: Quantize layer StatefulPartitionedCall/functional_7/up_sampling2d_2/resize/ResizeNearestNeighbor/fq_input_0 supports only parameter levels > 1

 

Thanks,

Rahila


Adli
Moderator

Hi Rahila,


This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.


Regards,

Adli

