I want to check whether there is any support for sparse matrix operations in OpenVINO.
I've seen a similar question for the Myriad X, but I'm interested in the general case. I tried testing this with sparse matrices on a c4 AWS instance, but failed to notice any speed improvement.
Tests that I've performed:
- run inference on dense neural networks
- run inference on neural networks with weights close to 0 (but not actually 0)
- run inference on neural networks with 95% of the weights being 0
In all of them the speed remains the same. Is this intended?
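For context, a minimal version of this kind of benchmark could look like the sketch below. It uses the openvino.runtime Python API; dense.xml and pruned_95.xml are placeholder IR files for the same network, the second one with 95% of the weights set to exactly 0, and a static input shape is assumed.

```python
import time
import numpy as np
from openvino.runtime import Core  # OpenVINO Python API (2022+)

def average_latency(model_xml, n_iters=100):
    """Compile an IR model on CPU and return the mean latency per inference."""
    core = Core()
    compiled = core.compile_model(core.read_model(model_xml), "CPU")
    # Random input matching the model's first input (static shape assumed).
    shape = list(compiled.input(0).shape)
    data = np.random.rand(*shape).astype(np.float32)
    request = compiled.create_infer_request()
    request.infer({0: data})  # warm-up run
    start = time.perf_counter()
    for _ in range(n_iters):
        request.infer({0: data})
    return (time.perf_counter() - start) / n_iters

# Placeholder IR files: same topology, second one with 95% zero weights.
for path in ("dense.xml", "pruned_95.xml"):
    print(path, f"{average_latency(path) * 1000:.2f} ms")
```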
Hi Andrii,
Thanks for reaching out to us.
OpenVINO’s support for sparse matrix operations is provided through the CPU plugin. The CPU plugin was developed using Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN). Highly optimized functions, including sparse solvers, are available through Intel® Math Kernel Library (Intel® MKL).
For discussion purposes, let’s acknowledge that support for unstructured sparsity patterns has not achieved the desired efficiency on the modern CPUs and GPUs typically used for deep learning inference: the dense kernels used for inference process zero-valued weights at the same cost as non-zero ones, so simply zeroing weights does not reduce latency.
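To illustrate the point, here is a small NumPy/SciPy sketch (independent of OpenVINO): a dense GEMM costs the same regardless of how many weights are zero, and a benefit only appears once the weights are stored in an explicitly sparse format (CSR here) and multiplied with a sparse kernel. The matrix sizes and the 95% sparsity level are arbitrary examples.

```python
import time
import numpy as np
from scipy import sparse

m, k, n = 2048, 2048, 256
rng = np.random.default_rng(0)

# Weight matrix with ~95% of entries exactly zero (unstructured sparsity).
weights = rng.standard_normal((m, k)).astype(np.float32)
weights[rng.random((m, k)) < 0.95] = 0.0
activations = rng.standard_normal((k, n)).astype(np.float32)

def timeit(fn, iters=20):
    fn()  # warm-up
    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - t0) / iters

# Dense GEMM: every entry is multiplied, zero or not.
dense_t = timeit(lambda: weights @ activations)

# Same product through an explicit sparse (CSR) representation:
# only this path can actually skip the zero entries.
w_csr = sparse.csr_matrix(weights)
sparse_t = timeit(lambda: w_csr @ activations)

print(f"dense GEMM: {dense_t * 1e3:.2f} ms")
print(f"CSR matmul: {sparse_t * 1e3:.2f} ms")
```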
To further improve your inference performance, I would suggest looking into LIBXSMM, a library for specialized dense and sparse matrix operations, which can accelerate sparse functionality.
On a separate note, I would suggest reading the paper SparseDNN: Fast Sparse Deep Learning Inference on CPUs. SparseDNN is inspired by state-of-the-art inference systems for dense neural networks such as OpenVINO™, LIBXSMM, and SkimCaffe, among others.
Regards,
Munesh
Hi Andrii,
This thread will no longer be monitored since we have provided suggestions. If you need any additional information from Intel, please submit a new question.
Regards,
Munesh