Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

OpenVINO™ now has an option to be used with GStreamer

jchaves
Beginner

 

GstInference is a GStreamer plugin that enables out-of-the-box integration of deep learning models with GStreamer pipelines for inference tasks. The project is open source and multi-platform, and it now supports OpenVINO™ through the ONNX Runtime inference engine. Support for Intel® CPUs, Intel® Integrated Graphics, and Intel® Movidius™ USB sticks is now available.
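For anyone who wants to try it, a GstInference pipeline launched from the command line looks roughly like the sketch below. The element, pad, and property names (tinyyolov2, sink_model/sink_bypass, backend, model-location) follow the GstInference examples, but the backend name used here for ONNX Runtime and the model file are placeholders, so please check the GstInference documentation for the exact values supported by your build.

  # Illustrative sketch only: TinyYOLOv2 detection through GstInference.
  # backend=onnxrt and the model file are assumptions; verify against the GstInference docs.
  gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! tee name=t \
    t. ! queue ! videoscale ! net.sink_model \
    t. ! queue ! net.sink_bypass \
    tinyyolov2 name=net backend=onnxrt model-location=tinyyolov2.onnx \
    net.src_bypass ! inferenceoverlay ! videoconvert ! autovideosink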

Check out code samples, documentation and benchmarks for GstInference here.

 

Detection crop example

2 Replies
Max_L_Intel
Moderator

Hi @jchaves 

Thanks for sharing your project with the OpenVINO community.
We have also informed the developer team about it.

Max_L_Intel
Moderator

Hi @jchaves 

The OpenVINO™ Toolkit includes DL Streamer, which provides the GStreamer Video Analytics (GVA) plugin with elements for Deep Learning inference using the OpenVINO™ inference engine on Intel CPUs, GPUs, and VPUs. For more details on DL Streamer, please refer to the open-source repository here.
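As a quick illustration, a minimal DL Streamer detection pipeline looks something like the sketch below. gvadetect and gvawatermark are GVA plugin elements; the input file and the IR model path are placeholders (any Open Model Zoo detection model in IR format should work).

  # Sketch of a basic DL Streamer pipeline: decode, detect on CPU, overlay results.
  # input.mp4 and the model path are placeholders.
  gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! \
    gvadetect model=face-detection-adas-0001.xml device=CPU ! \
    gvawatermark ! videoconvert ! autovideosink sync=false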

Currently, DL Streamer inference elements require models converted to the IR format. We plan to support ONNX models directly, without IR conversion, in a future version.
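Until then, an ONNX model can be converted to IR with the Model Optimizer, roughly as follows (the file name and output directory are placeholders):

  # Convert an ONNX model to OpenVINO IR (.xml + .bin) using the Model Optimizer.
  python3 mo.py --input_model model.onnx --output_dir ./ir_model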

DL Streamer is highly optimized for Intel platforms. We have listed some of the optimizations below: 

  • Optimized interop between media decode, preprocessing and inference
    • Optimal color format conversions
    • Zero-copy buffer sharing between decode, pre-processing and inference on CPU or GPU
  • Asynchronous pipeline execution
  • Optimized multi-stream processing
  • Sharing of IE instances
  • Offloading decode and preprocessing to the GPU
  • Ability to reduce inference frequency by leveraging object tracking in between inference operations (see the pipeline sketch after this list)
  • Ability to skip classification on the same object by leveraging object tracking
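
As an example of the last two items, the sketch below combines the inference-interval property of gvadetect with gvatrack, so that detections are propagated by the tracker on frames where inference is skipped. The model path and the interval value are placeholders for illustration.

  # Run detection on every 10th frame and let gvatrack fill in the gaps.
  gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! \
    gvadetect model=person-detection.xml device=CPU inference-interval=10 ! \
    gvatrack ! gvawatermark ! videoconvert ! autovideosink sync=false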

We will continue to optimize it further and support all Intel hardware. Thank you for your contribution to supporting OpenVINO inference via ONNX Runtime in GstInference.
