Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

OpenVINO sparse matrix operations

Andrii
Beginner

I want to check whether OpenVINO has any support for sparse matrix operations.

I've seen a similar question for Myriad X, but I'm more interested in the general case. I tried to test this with sparse matrices on a c4 AWS instance, but failed to notice any speed improvements.

Tests that I've performed:

- run inference on dense neural networks

- run inference on neural networks with weights close to 0 (but not actually 0)

- run inference on neural networks with 95% of the weights being 0

In all of these cases the speed remained the same. Is this intended?
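For reference, here is a minimal sketch of the kind of check I mean. It is hypothetical (a single NumPy matrix-vector product stands in for one dense layer) rather than the actual test code; a dense kernel performs the same multiply-adds regardless of how many weights are zero, which matches what I observed.

```python
# Hypothetical sketch: time one dense "layer" (matrix-vector product) with
# different fractions of zero weights. Dense BLAS kernels do the same work
# no matter how many of the stored values happen to be zero.
import time
import numpy as np

def time_matvec(weights, x, repeats=100):
    start = time.perf_counter()
    for _ in range(repeats):
        weights @ x
    return (time.perf_counter() - start) / repeats

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)

for zero_fraction in (0.0, 0.5, 0.95):
    w = rng.standard_normal((4096, 4096))
    w[rng.random(w.shape) < zero_fraction] = 0.0   # zero out weights in place
    print(f"{zero_fraction:.0%} zeros: {time_matvec(w, x) * 1e3:.3f} ms per call")
```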

2 Replies
Munesh_Intel
Moderator

Hi Andrii,

Thanks for reaching out to us.

OpenVINO's support for sparse matrix operations is provided through the CPU plugin. The CPU plugin was developed using Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN). Highly optimized functions, including sparse solvers, are available through Intel® Math Kernel Library (Intel® MKL).
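For completeness, here is a minimal sketch of targeting the CPU plugin explicitly. It assumes the openvino.runtime Python API (2022.x releases and later), and "model.xml" is a placeholder IR path rather than a file from this thread.

```python
# Sketch only (assumptions: openvino.runtime Python API from 2022.x releases;
# "model.xml" is a placeholder IR file). The "CPU" device name selects the
# CPU plugin, i.e. the MKL-DNN / oneDNN backed implementation.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")           # placeholder IR from Model Optimizer
compiled = core.compile_model(model, "CPU")    # target the CPU plugin explicitly

# Run a single inference with a zero-filled input of the model's input shape.
dummy = np.zeros(list(compiled.input(0).shape), dtype=np.float32)
results = compiled([dummy])                    # dict of output port -> numpy array
print([out.shape for out in results.values()])
```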

 

For discussion purposes, let's acknowledge that support for unstructured sparsity patterns has not achieved the desired efficiency on the modern CPUs and GPUs typically used for deep learning inference.
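As a rough illustration of that point (this uses SciPy's generic CSR kernel as a stand-in for an unstructured-sparse implementation and is unrelated to OpenVINO internals): even at 95% zeros, the irregular memory access and per-element indexing of an unstructured format mean that, depending on the hardware and library build, the sparse version may come nowhere near the 20x speedup the operation count suggests.

```python
# Rough illustration (assumption: SciPy's CSR matmul stands in for a generic
# unstructured-sparse kernel; this is not OpenVINO code). Compare a dense
# matmul against the same matrix stored in CSR form at 95% sparsity.
import time
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n = 4096
x = rng.standard_normal((n, 64))

dense = rng.standard_normal((n, n))
dense[rng.random(dense.shape) < 0.95] = 0.0    # 95% unstructured zeros
csr = sparse.csr_matrix(dense)

def bench(fn, repeats=20):
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats

print(f"dense @ x: {bench(lambda: dense @ x) * 1e3:.2f} ms")
print(f"csr   @ x: {bench(lambda: csr @ x) * 1e3:.2f} ms")
```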

 

To further improve your inference performance, I would suggest you try LIBXSMM, a library for specialized dense and sparse matrix operations; it can accelerate sparse functionality.

 

On a separate note, I would suggest you read the paper SparseDNN: Fast Sparse Deep Learning Inference on CPUs. SparseDNN is inspired by other state-of-the-art inference systems for dense neural networks, such as OpenVINO™, LIBXSMM, and SkimCaffe.

 

Regards,

Munesh


Munesh_Intel
Moderator

Hi Andrii,

This thread will no longer be monitored since we have provided suggestions. If you need any additional information from Intel, please submit a new question.


Regards,

Munesh

