Unable to get detections from OpenVINO model

I am using the person-detection-action-recognition-0005 pre-trained model from OpenVINO to detect people and their actions.

From this documentation, I wrote a Python script to get detections. Here is the script:

import cv2

def main():
    print(cv2.__file__)

    frame = cv2.imread('/home/naveen/Downloads/person.jpg')

    # Load the OpenVINO IR model (readNet infers the format from the
    # .bin/.xml extensions, so the argument order does not matter)
    actionNet = cv2.dnn.readNet('person-detection-action-recognition-0005.bin',
                                'person-detection-action-recognition-0005.xml')

    # The model expects a 680x400 input
    actionBlob = cv2.dnn.blobFromImage(frame, size=(680, 400))
    actionNet.setInput(actionBlob)

    # Detection outputs: box regressions, person confidences,
    # and per-anchor action logits
    actionOut = actionNet.forward(['mbox_loc1/out/conv/flat',
                                   'mbox_main_conf/out/conv/flat/softmax/flat',
                                   'out/anchor1', 'out/anchor2',
                                   'out/anchor3', 'out/anchor4'])

    # This is the part where I don't know how to get the person bboxes
    # and the action label for each person from actionOut
    for detection in actionOut[2].reshape(-1, 3):
        print('sitting ' + str(detection[0]))
        print('standing ' + str(detection[1]))
        print('raising hand ' + str(detection[2]))

if __name__ == '__main__':
    main()
        
Now, I don't know how to extract the bounding boxes and action labels from the output variable (actionOut). I am unable to find any documentation or blog post explaining this.

Does anyone have an idea or suggestion for how this can be done?

1 Reply

Employee

Hi Naveen, 

You can refer to this demo code to see how the output of this model is handled:

https://github.com/opencv/open_model_zoo/tree/master/demos/smart_classroom_demo
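In the meantime, here is a minimal sketch of the general post-processing idea used in SSD-style detectors like this one: keep only the priors whose person confidence passes a threshold, then take the argmax over that prior's action logits (after a softmax) to get the action label. Note this is a simplified illustration with toy data, not the demo's exact code — the function names, shapes, and the 0.5 threshold are my assumptions, and decoding the actual bounding boxes additionally requires the model's prior boxes, which the demo handles.

```python
import numpy as np

# The three actions this model distinguishes
ACTIONS = ['sitting', 'standing', 'raising hand']

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def top_actions(anchor_logits, person_conf, conf_threshold=0.5):
    """For each prior whose person confidence passes the threshold,
    return (prior_index, action_label, action_probability).
    anchor_logits: (num_priors, 3) raw action logits for one anchor
    person_conf:   (num_priors,) person confidence per prior"""
    results = []
    probs = softmax(anchor_logits, axis=1)
    for i, conf in enumerate(person_conf):
        if conf < conf_threshold:
            continue  # not confident enough that this prior is a person
        j = int(np.argmax(probs[i]))
        results.append((i, ACTIONS[j], float(probs[i, j])))
    return results

# Toy data standing in for the network outputs:
logits = np.array([[0.1, 2.0, 0.3],    # prior 0: "standing" dominates
                   [3.0, 0.2, 0.1]])   # prior 1: "sitting" dominates
conf = np.array([0.9, 0.2])            # only prior 0 passes the threshold

print(top_actions(logits, conf))       # one detection: (0, 'standing', ...)
```

In the real pipeline you would run this per anchor output (out/anchor1 through out/anchor4) and pair each surviving prior with its decoded bounding box from mbox_loc1/out/conv/flat, as the smart_classroom_demo does.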

Thank you.
