Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

What is the use of MultipleOutputPostprocessor in object_detection_demo_ssd_async?

andreas_sheron

Recently, the code for object_detection_demo_ssd_async got updated. I see that it now includes two classes, `SingleOutputPostprocessor` and `MultipleOutputPostprocessor`, which are used to read the output buffer.

I know that some SSD MobileNet models output three tensors, namely boxes, scores, and labels, for which `MultipleOutputPostprocessor` would be used. However, I just converted the available ssd_mobilenet_v2_coco model (downloaded via model_downloader), loaded it, and printed `net.outputs`, which gives me only one tensor: `{'DetectionOutput': <openvino.inference_engine.ie_api.DataPtr at 0x7fe836957dd0>}`. This model will simply use `SingleOutputPostprocessor`.

Since this demo is specifically for SSD object detection, I would like to know which model implementations would use `MultipleOutputPostprocessor`. Is this class needed here at all, or am I missing something I should know?
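For context, here is a minimal sketch of how a demo like this could choose between the two postprocessors based on the number of output tensors the model exposes. This is an illustration, not the exact demo code; the output names `bboxes`, `labels`, and `scores` are assumptions:

```python
def choose_postprocessor(outputs):
    """Pick a postprocessor name based on the model's output tensors.

    `outputs` is a dict mapping output-layer names to tensors, mimicking
    net.outputs from the Inference Engine Python API. The layer names
    checked below are assumptions for illustration.
    """
    if len(outputs) == 1:
        # One DetectionOutput-style tensor holding all detections.
        return "SingleOutputPostprocessor"
    if {"bboxes", "labels", "scores"} <= set(outputs):
        # Detections split across three separate tensors.
        return "MultipleOutputPostprocessor"
    raise RuntimeError("Unsupported model outputs: {}".format(list(outputs)))

# A converted ssd_mobilenet_v2_coco exposes a single DetectionOutput layer:
print(choose_postprocessor({"DetectionOutput": object()}))
# -> SingleOutputPostprocessor
```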

I am currently using toolkit version 2020.3; the demo targets toolkit version 2020.4, but I modified the code to make it work on the previous version by not using `.buffer` when getting the tensor(s).

Here is what I did:
I returned `outputs[self.output_layer][0][0]` instead of `outputs[self.output_layer].buffer[0][0]` inside the `__call__()` method of `SingleOutputPostprocessor` class.
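To make the change concrete, a 2020.3-compatible version of the class might look like the sketch below. The tensor layout follows the standard SSD DetectionOutput shape `[1, 1, N, 7]`; the toy input uses nested lists in place of a numpy array:

```python
class SingleOutputPostprocessor:
    """Extracts detections from a model with one DetectionOutput-style output.

    The output tensor has shape [1, 1, N, 7]; each of the N rows is
    [image_id, label, confidence, xmin, ymin, xmax, ymax].
    """
    def __init__(self, output_layer):
        self.output_layer = output_layer

    def __call__(self, outputs):
        # On 2020.3 the outputs are plain arrays, so no `.buffer` is needed.
        return outputs[self.output_layer][0][0]

# Toy example standing in for real inference results:
fake_outputs = {"DetectionOutput": [[[[0, 1, 0.9, 0.1, 0.1, 0.5, 0.5]]]]}
pp = SingleOutputPostprocessor("DetectionOutput")
print(pp(fake_outputs))  # [[0, 1, 0.9, 0.1, 0.1, 0.5, 0.5]]
```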

I believe relevant explanations/comments would help here.

2 Replies
Iffa_Intel
Moderator

Greetings,


`MultipleOutputPostprocessor` is a class that reads the bboxes, scores, and labels values from a model's separate output buffers. In object_detection_demo_ssd_async, it is used to return these values for models that split their detections across several output layers; the presence (or absence) of these outputs is also how the demo determines whether a model is supported.
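As a rough illustration, such a postprocessor can zip the three tensors back into the single `[image_id, label, confidence, xmin, ymin, xmax, ymax]` row format that the rest of the demo expects. This is a hedged sketch, not the demo's exact code; the default layer names are assumptions, and plain lists stand in for numpy arrays:

```python
class MultipleOutputPostprocessor:
    """Merges separate bboxes/scores/labels outputs into DetectionOutput-style rows."""
    def __init__(self, bboxes_layer="bboxes", scores_layer="scores",
                 labels_layer="labels"):
        self.bboxes_layer = bboxes_layer
        self.scores_layer = scores_layer
        self.labels_layer = labels_layer

    def __call__(self, outputs):
        # Drop the batch dimension from each output tensor.
        bboxes = outputs[self.bboxes_layer][0]
        scores = outputs[self.scores_layer][0]
        labels = outputs[self.labels_layer][0]
        # Rebuild [image_id, label, confidence, xmin, ymin, xmax, ymax] rows.
        return [[0, label, score] + list(bbox)
                for label, score, bbox in zip(labels, scores, bboxes)]

fake_outputs = {
    "bboxes": [[[0.1, 0.1, 0.5, 0.5]]],
    "scores": [[0.9]],
    "labels": [[1]],
}
print(MultipleOutputPostprocessor()(fake_outputs))
# [[0, 1, 0.9, 0.1, 0.1, 0.5, 0.5]]
```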



Sincerely,

Iffa


Iffa_Intel
Moderator

Greetings,


Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question. 


Sincerely,

Iffa

