
Media Analytics Apps using OpenVINO™ toolkit Deep Learning Streamer

MaryT_Intel
Community Manager

Key Takeaways

  • Learn how to build media analytics-powered applications using the Intel® Distribution of OpenVINO™ toolkit Deep Learning Streamer (DL Streamer) based on the GStreamer* multimedia framework.
  • Leverage the power of Azure-ready, Intel-powered inferencing to build video pipelines and solve real-world use cases.
  • Get started quickly and execute streaming analytics pipeline workloads on Azure IoT Edge with containers now available on the Microsoft Azure Marketplace.

Overview

The Intel® Distribution of OpenVINO™ toolkit enables deep learning inference and easy heterogeneous execution across Intel® Xeon® processors, Intel® Core™ processors, Intel® Atom® processors, and Intel® Movidius™ Vision Processing Units (VPUs). DL Streamer, a component of the Intel® Distribution of OpenVINO™ toolkit, is a streaming analytics framework based on the GStreamer* multimedia framework for creating complex media analytics pipelines using the toolkit’s Inference Engine. It provides optimal pipeline interoperability and optimized inferencing across Intel® architecture (CPU, iGPU, and Intel® Movidius™ VPU), enabling implementations from cloud architectures to edge devices.

DL Streamer is part of the default installation package for the Intel® Distribution of OpenVINO™ toolkit. For more information on DL Streamer, please refer to the DL Streamer webinar, documentation, and YouTube channel.

This blog introduces the Intel® Distribution of OpenVINO™ toolkit DL Streamer container on the Microsoft Azure Marketplace. Azure developers can now download the DL Streamer container and start executing their streaming analytics pipeline workloads on Azure IoT Edge with very few configuration changes.

Azure Marketplace

The container for the Azure IoT Edge module consists of the OpenVINO™ toolkit Inference Engine, DL Streamer, and a sample Python application. The sample app can be configured to execute your inference and analytics pipeline with your own deep learning models and input stream(s), and to send the inference results to the Azure cloud via Azure IoT Hub or view them via an RTSP server URI. It thus enables Azure developers to readily leverage the power of Azure-qualified, Intel-powered edge inferencing and build video pipelines that solve their business use cases. Developers can also leverage the Intel® Distribution of OpenVINO™ toolkit pre-trained models (included in the open-sourced Open Model Zoo) and sample applications to build their pipelines.
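To give a feel for how a detection result might be packaged before it is sent to Azure IoT Hub, here is a minimal sketch in plain Python. The field names in this payload are illustrative assumptions, not the module's actual message schema, and the real app would hand the JSON string to the Azure IoT Hub SDK rather than print it.

```python
import json

def make_detection_message(label, confidence, box, stream_id=0):
    """Package one detection result as a JSON payload, similar in spirit to
    what the sample app forwards to Azure IoT Hub. Field names here are
    hypothetical, chosen for illustration only."""
    payload = {
        "stream": stream_id,
        "label": label,
        "confidence": round(confidence, 3),
        # Normalized bounding box: (x_min, y_min, x_max, y_max)
        "bounding_box": {
            "x_min": box[0], "y_min": box[1],
            "x_max": box[2], "y_max": box[3],
        },
    }
    return json.dumps(payload)

msg = make_detection_message("Vehicle", 0.87421, (0.10, 0.25, 0.42, 0.80))
print(msg)
```

In the deployed module, a string like this would be wrapped in an IoT Hub message object and routed upstream through the Edge runtime.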

Deploying Intel® Distribution of OpenVINO™ Toolkit DL Streamer Container

The steps below demonstrate how to deploy the Intel® Distribution of OpenVINO™ Toolkit DL Streamer container on the IoT Edge device and trigger the sample Python application for executing Pedestrian and Vehicle detection and classification pipelines using the pre-trained models from Open Model Zoo.

Step 1: Access Intel® Distribution of OpenVINO™ Toolkit DL Streamer container

·       Open the Azure Marketplace listing URL

·       Click “GET IT NOW” and then “CONTINUE”. This takes you to “portal.azure.com”.

[Screenshot: Create app]

Step 2: Deploy on IoT Edge device and check deployment status

·       Select the IoT Edge device connected to your Azure IoT Hub:

[Screenshot: IoT Edge device]

·       Check the deployment status:

[Screenshot: Deployment status]

Step 3: Acquire the deep learning model

·       After deployment, the app is launched automatically.

·       You can view the app’s log messages with the following command:

        sudo iotedge logs -f OpenVINODLStreamer

·       The app starts by downloading or locating the IR model files, reading the details from the .yaml configuration file under “config_examples”.

[Screenshot: Config examples]
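The on-disk layout the app resolves models into can be sketched in a few lines of Python. The layout below (/app/ov_ir/&lt;name&gt;/&lt;precision&gt;/&lt;name&gt;.xml) is inferred from the default pipeline string shown in Step 4, not from documented behavior, and the FP32 default is an assumption.

```python
from pathlib import Path

def resolve_ir_paths(model_name, base_dir="/app/ov_ir", precision="FP32"):
    """Build the .xml/.bin/.json paths the pipeline expects for a model,
    following the layout seen in the default gst pipeline. The directory
    layout and FP32 default are assumptions based on that pipeline string."""
    model_dir = Path(base_dir) / model_name / precision
    return {
        "xml": model_dir / f"{model_name}.xml",          # network topology
        "bin": model_dir / f"{model_name}.bin",          # weights
        "model_proc": model_dir / f"{model_name}.json",  # DL Streamer model-proc
    }

paths = resolve_ir_paths("pedestrian-and-vehicle-detector-adas-0001")
print(paths["xml"])
```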

Step 4: Start inference with pedestrian-and-vehicle-detector-adas-0001

·       The app starts executing the video pipeline using the downloaded model and input stream.

[Screenshot: Video pipeline output]

Note: The device used in the above example is an Intel® Core™ i7-7567U CPU @ 3.50GHz × 4. The FPS numbers shown above may vary based on your device.

·       Here’s the GStreamer pipeline that the app executes by default:

gst-pipeline:

filesrc location=/app/video_samples/person-bicycle-car-detection.mp4 ! decodebin ! gvadetect model=/app/ov_ir/pedestrian-and-vehicle-detector-adas-0001/FP32/pedestrian-and-vehicle-detector-adas-0001.xml device=CPU model-proc=/app/ov_ir/pedestrian-and-vehicle-detector-adas-0001/FP32/pedestrian-and-vehicle-detector-adas-0001.json ! queue2 ! gvametaconvert ! gvafpscounter interval=1 ! videoconvert ! video/x-raw,format=BGRx ! gvawatermark ! fakesink sync=false name=sink0
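The same pipeline string can also be assembled programmatically, which is handy when you swap models or devices. The sketch below uses plain Python string assembly (no GStreamer bindings required) and mirrors the element order and parameters shown above; nothing beyond that pipeline is assumed.

```python
def build_pipeline(video, model_xml, model_proc, device="CPU"):
    """Assemble the default gst-launch pipeline string from its parts.
    The element sequence mirrors the pipeline above: gvadetect runs the
    detection model, gvametaconvert/gvafpscounter handle metadata and FPS,
    and gvawatermark draws the bounding boxes."""
    elements = [
        f"filesrc location={video}",
        "decodebin",
        f"gvadetect model={model_xml} device={device} model-proc={model_proc}",
        "queue2",
        "gvametaconvert",
        "gvafpscounter interval=1",
        "videoconvert",
        "video/x-raw,format=BGRx",
        "gvawatermark",
        "fakesink sync=false name=sink0",
    ]
    return " ! ".join(elements)

ir = "/app/ov_ir/pedestrian-and-vehicle-detector-adas-0001/FP32/pedestrian-and-vehicle-detector-adas-0001"
pipeline = build_pipeline("/app/video_samples/person-bicycle-car-detection.mp4",
                          ir + ".xml", ir + ".json")
print(pipeline)
```

Changing `device="CPU"` to another inference target (for example a device string supported by the toolkit) is then a one-argument change.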

·       The inference results, in the form of bounding boxes overlaid on the input stream, can be viewed in a media player that supports RTSP using the URI: rtsp://192.168.0.17:8554/stream_0

Sample Inference output via RTSP URI: 

[Screenshot: Sample inference output]

Step 5: [Optional] Pass Open Model Zoo pre-trained IR detection and classification models through config file changes

Note: When the DL Streamer IoT Edge module is deployed on an edge machine, it automatically launches the sample app, which executes the pedestrian-and-vehicle detection pipeline on the sample video. Once the input sample video reaches the end of file, the module enters the STOP state. You can check the state with the ‘iotedge list’ command on the edge machine (as shown in the screenshot in Step 2). The prerequisite for passing custom IR files to the sample app is that the edge module (OpenVINODLStreamer) must be in the STOP state before you change the container options and pass new config_sample.yaml parameters, as explained in this step.

1)    Go to the Edge device in the Azure portal, replace the container create options with the following, and deploy:

{
    "HostConfig": {
        "Binds": [
            "/tmp/.X11-unix:/tmp/.X11-unix",
            "/dev:/dev"
        ],
        "NetworkMode": "host",
        "IpcMode": "host",
        "Privileged": true
    },
    "NetworkingConfig": {
        "EndpointsConfig": {
            "host": {}
        }
    },
    "Env": [
        "DISPLAY=:0"
    ],
    "Entrypoint": [
        "tail",
        "-f",
        "/dev/null"
    ]
}
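A malformed create-options blob is easy to produce when hand-editing in the portal, so it can help to sanity-check the JSON locally first. The sketch below is a small, hypothetical checker using only the standard library; the fields it asserts on are exactly the ones this step relies on (host networking so the RTSP port is reachable, and a keep-alive entrypoint so the container stays up for `docker exec`).

```python
import json

# The container create options from above, kept here as a Python dict
# so the sketch is self-contained. (Same content as the JSON in this step.)
create_options = json.dumps({
    "HostConfig": {
        "Binds": ["/tmp/.X11-unix:/tmp/.X11-unix", "/dev:/dev"],
        "NetworkMode": "host",
        "IpcMode": "host",
        "Privileged": True,
    },
    "NetworkingConfig": {"EndpointsConfig": {"host": {}}},
    "Env": ["DISPLAY=:0"],
    "Entrypoint": ["tail", "-f", "/dev/null"],
})

def check_create_options(raw):
    """Parse and sanity-check create options before pasting them into the
    Azure portal. json.loads raises ValueError on malformed JSON."""
    opts = json.loads(raw)
    assert opts["HostConfig"]["NetworkMode"] == "host"
    assert opts["Entrypoint"] == ["tail", "-f", "/dev/null"]
    return opts

opts = check_create_options(create_options)
print("create options OK:", sorted(opts.keys()))
```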

2)    On the edge machine, run the following command to open the OpenVINODLStreamer container in a bash shell:

sudo docker exec -it OpenVINODLStreamer bash

3)    Open pedestrain_and_vehicle_detector.yaml and change the model parameter to the path to the model. Below is an example:

model: /app/ov_ir/pedestrian-and-vehicle-detector-adas-0001

4)    Run the application with the following command:

python3 reference_app/reference_app.py

Step 6: [Optional] Modify the config file for single- and multi-model/device use cases

The configuration file is located at /app/config_examples/pedestrain_and_vehicle_detector.yaml

Here are some configurations that you can try:

Single-model-single-device:

- detect:
   model: pedestrian-and-vehicle-detector-adas-0001
   device: CPU
   output-postproc:
     - attribute_name: detection_result
       layer_name: detection_out
       labels: [ None, Vehicle, Pedestrian ]

Single-model-multi-device:

- detect:
   model: pedestrian-and-vehicle-detector-adas-0001
   device: MULTI:CPU,MYRIAD
   output-postproc:
     - attribute_name: detection_result
       layer_name: detection_out
       labels: [ None, Vehicle, Pedestrian ]

Multi-model-single-device:

  - detect:
      model: vehicle-detection-adas-0002
      device: CPU
      output-postproc:
        - attribute_name: detection_result
          layer_name: detection_out
          labels: [ None, Vehicle ]
  - detect:
      model: pedestrian-detection-adas-0002
      device: CPU
      output-postproc:
        - attribute_name: detection_result
          layer_name: detection_out
          labels: [ None, Pedestrian ]
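One way to see what these config variants amount to is to map a single `detect` entry onto the `gvadetect` element it would drive in the pipeline. The sketch below is an assumption about that mapping, inferred from the default pipeline in Step 4; the real app's translation logic may differ, and the FP32 path layout is likewise assumed.

```python
def detect_to_gvadetect(entry, base_dir="/app/ov_ir", precision="FP32"):
    """Translate one 'detect' entry from the YAML config into a gvadetect
    element string. The path layout (/app/ov_ir/<name>/FP32/<name>.xml) is
    inferred from the default pipeline, not a documented contract."""
    name = entry["model"]
    model_dir = f"{base_dir}/{name}/{precision}"
    return (f"gvadetect model={model_dir}/{name}.xml "
            f"device={entry['device']} "
            f"model-proc={model_dir}/{name}.json")

entry = {"model": "vehicle-detection-adas-0002", "device": "CPU"}
element = detect_to_gvadetect(entry)
print(element)
```

For the multi-model case, the element strings produced from each `detect` entry would be chained with " ! " into a single pipeline; for the multi-device case, only the `device` value changes (for example MULTI:CPU,MYRIAD).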

For more information about DL Streamer elements, please visit: https://github.com/opencv/gst-video-analytics/wiki/Elements

We are looking at qualifying more models and would love to know which models you would like to accelerate using the Intel® Distribution of OpenVINO™ toolkit with DL Streamer. Join the conversation in our community forum.

 

Notices and Disclaimers:

Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel’s Global Human Rights Principles. Intel’s products and software are intended only to be used in applications that do not cause or contribute to a violation of an internationally recognized human right.

Intel technologies may require enabled hardware, software or service activation.

No product or component can be absolutely secure.

Your costs and results may vary.

© Intel Corporation.  Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.  Other names and brands may be claimed as the property of others. 


About the Author
Mary is the Community Manager for this site. She likes to bike, and do college and career coaching for high school students in her spare time.