Intel® Distribution of OpenVINO™ Toolkit
Community assistance with the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.
6493 Discussions

Difference between sparsity and pruning

timosy
New Contributor I
1,113 Views

I found an explanation of the Neural Network Compression Framework here:
https://docs.openvino.ai/latest/docs_nncf_introduction.html


I'm confused about the difference between pruning and sparsity. I know there are unstructured and structured pruning; is this the "sparsity" in NNCF?

Is it possible to perform structured pruning to make inference faster? I checked these pages, but I'm not sure which page gives the corresponding information:
https://github.com/openvinotoolkit/nncf/tree/develop/examples/torch/classification/configs/sparsity
https://github.com/openvinotoolkit/nncf/blob/develop/docs/compression_algorithms/Sparsity.md

Labels (3)
0 Kudos
1 Solution

4 Replies
Zulkifli_Intel
Moderator
1,083 Views

Hi Timosy,

Thank you for reaching out to us.

 

Sparsity is an approach to compressing CNNs. There are two types of sparsification methods (see the sketch after this list):

  1. Structured sparsification (also known as pruning). As a result of structured sparsity, we get a new neural network that is smaller than the original network (fewer channels, filters, etc.).
  2. Unstructured sparsity. As a result of unstructured sparsity, we get a new network of the same size as the original one, but the weight tensors are now sparse. Using unstructured sparsification, we can remove more weights than via pruning.
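
To make the difference concrete, here is a minimal NumPy sketch (illustrative only, not NNCF code) that applies both kinds of sparsity to a toy convolution weight tensor:

import numpy as np

rng = np.random.default_rng(0)
# Toy convolution weight tensor: 8 output filters, 4 input channels, 3x3 kernels.
w = rng.normal(size=(8, 4, 3, 3))

# Unstructured sparsity: zero out the 50% of individual weights with the
# smallest magnitude. The tensor keeps its original shape; it is just sparse now.
threshold = np.quantile(np.abs(w), 0.5)
w_unstructured = np.where(np.abs(w) < threshold, 0.0, w)
print(w_unstructured.shape)  # (8, 4, 3, 3) -- same shape as the original

# Structured sparsity (pruning): remove the 4 whole filters with the smallest
# L2 norm. The result is a genuinely smaller network, so ordinary dense
# arithmetic gets faster without any sparse-aware kernels.
keep = np.argsort(np.linalg.norm(w.reshape(8, -1), axis=1))[4:]
w_structured = w[np.sort(keep)]
print(w_structured.shape)  # (4, 4, 3, 3) -- fewer filters than the original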

 

The main idea of all sparsification algorithms is based on the fact that many modern DNNs are over-parameterized: a DNN contains more weights than are needed to solve the problem (or more than we can effectively train). Thus the target of any sparsification algorithm is to find the subset of weights that contributes most to accuracy and to remove all the other weights. The contributions of sparsity algorithms are as follows (a sketch of the first point follows this list):

  • Minimizing the physical size of the weights, using sparse data representation methods.
  • Improving inference time, using an implementation of sparse arithmetic (software or hardware).
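
As an illustration of the first point, a compressed sparse representation stores only the non-zero weights plus their indices. Here is a minimal sketch using SciPy's CSR format (illustrative only; the representation an actual runtime uses may differ):

import numpy as np
from scipy import sparse

# A 1000x1000 weight matrix with ~90% of its entries zeroed out
# (i.e., unstructured sparsity at roughly a 0.9 sparsity level).
rng = np.random.default_rng(0)
dense = rng.normal(size=(1000, 1000))
dense[rng.random(dense.shape) < 0.9] = 0.0

csr = sparse.csr_matrix(dense)  # keep only the non-zero values and indices
print(dense.nbytes)  # 8000000 bytes for the dense array
print(csr.data.nbytes + csr.indices.nbytes + csr.indptr.nbytes)  # ~1.2 MB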


The objective of NNCF is to prepare the model for accelerated inference by simulating the compression at training time. You can refer to Introducing a Training Add-on for OpenVINO™ toolkit: Neural Network Compression Framework, in the Sparsity section, for more details.
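
For reference, here is a minimal sketch of how a compression algorithm is selected in an NNCF config for a PyTorch model. The parameter values are hypothetical, chosen for illustration; please check the NNCF docs linked above for the exact schema of each algorithm:

from nncf import NNCFConfig
from nncf.torch import create_compressed_model
import torchvision

model = torchvision.models.resnet18(pretrained=True)

# "filter_pruning" performs structured pruning (whole filters are removed);
# "magnitude_sparsity" performs unstructured weight sparsification.
# The 0.4 target below is a hypothetical value for illustration.
nncf_config = NNCFConfig.from_dict({
    "input_info": {"sample_size": [1, 3, 224, 224]},
    "compression": {
        "algorithm": "filter_pruning",       # or "magnitude_sparsity"
        "params": {"pruning_target": 0.4},   # prune ~40% of the filters
    },
})

# Wrap the model so that compression is simulated during fine-tuning;
# the fine-tuned model can then be exported for OpenVINO inference.
compression_ctrl, compressed_model = create_compressed_model(model, nncf_config)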

 


Sincerely,

Zulkifli


timosy
New Contributor I
1,049 Views

Dear Zulkifli_Intel,

Thanks for your explanation.

I understand that structured sparsification here is the same as the structured pruning that is mentioned on several web pages as one of the compression methods, and that unstructured sparsity is the same as unstructured pruning.

timosy
New Contributor I
1,010 Views
Zulkifli_Intel
Moderator
1,038 Views

Hi Timosy,


This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question. 


Sincerely,

Zulkifli

