Beginner

OpenVINO inference engine python

Hi,

I'm using this OpenVINO example file for an object detection case:

https://github.com/opencv/open_model_zoo/blob/master/demos/python_demos/object_detection_demo_ssd_as...

with an NCS and a USB camera.

I'm also using a custom .xml and .bin model.

I have a res object that should contain the result of inference:

res = exec_net.requests[cur_request_id].outputs[out_blob]

The output of res object is:

[ INFO ] [[0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
  0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]

which doesn't work with the for loop (and I really don't understand the content of the object):

for obj in res[0][0]:
    if obj[2] > args.prob_threshold:

If I try to print res.shape, I get:

(1, 38)

I would like to understand where the inference result is located in the res object.
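For reference, the demo's res[0][0] indexing assumes an SSD-style detection output of shape (1, 1, N, 7), where each row is [image_id, class_id, confidence, xmin, ymin, xmax, ymax]. A minimal numpy sketch of the difference (values are hypothetical):

```python
import numpy as np

# Hypothetical SSD-style detection output: shape (1, 1, N, 7), one row per
# candidate box: [image_id, class_id, confidence, xmin, ymin, xmax, ymax],
# with coordinates normalized to [0, 1].
det = np.zeros((1, 1, 2, 7), dtype=np.float32)
det[0, 0, 0] = [0, 1, 0.92, 0.1, 0.2, 0.4, 0.5]
det[0, 0, 1] = [0, 3, 0.15, 0.5, 0.5, 0.9, 0.9]

# This is why the demo iterates over res[0][0]: it strips the two leading
# singleton dimensions and leaves the (N, 7) list of boxes.
kept = [obj for obj in det[0][0] if obj[2] > 0.5]
print(len(kept))  # 1

# A classification output, by contrast, is just (1, num_classes) --
# there are no boxes to iterate over.
cls = np.zeros((1, 38), dtype=np.float32)
cls[0, 1] = 1.0
print(cls.shape)  # (1, 38)
```

A (1, 38) shape therefore looks like a classifier's output, not a detector's.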

Thanks

  

10 Replies

Employee

Dear Curci, Matteo

In the code you referenced, the inference results are found in exactly this part:

# Parse detection results of the current request
res = exec_net.requests[cur_request_id].outputs[out_blob]
for obj in res[0][0]:
    # Draw only objects when probability more than specified threshold
    if obj[2] > args.prob_threshold:
        xmin = int(obj[3] * initial_w)
        ymin = int(obj[4] * initial_h)
        xmax = int(obj[5] * initial_w)
        ymax = int(obj[6] * initial_h)
        class_id = int(obj[1])
        # Draw box and label\class_id
        color = (min(class_id * 12.5, 255), min(class_id * 7, 255), min(class_id * 5, 255))
        cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), color, 2)
        det_label = labels_map[class_id] if labels_map else str(class_id)
        cv2.putText(frame, det_label + ' ' + str(round(obj[2] * 100, 1)) + ' %', (xmin, ymin - 7),
                    cv2.FONT_HERSHEY_COMPLEX, 0.6, color, 1)

 

Hope it helps. 

Thanks for using OpenVINO!

Shubha

Beginner

No, because if I use:

for obj in res[0][0]:
    # Draw only objects when probability more than specified threshold
    if obj[2] > args.prob_threshold:

I get an error, because the res[0][0] element doesn't exist.

If I do:

for obj in res:
    # Draw only objects when probability more than specified threshold
    if obj[2] > args.prob_threshold:

the code works, but I get:

[ INFO ] [[0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
  0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
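If res really is a (1, 38) score vector like the one printed above, one way to read it (a sketch assuming a plain numpy array, not box parsing) is argmax:

```python
import numpy as np

# Hypothetical (1, 38) classifier output, matching the printed res above:
# all zeros except a 1.0 at the predicted class index.
res = np.zeros((1, 38), dtype=np.float32)
res[0, 1] = 1.0

class_id = int(np.argmax(res[0]))       # index of the highest score
confidence = float(res[0, class_id])    # the score itself
print(class_id, confidence)  # 1 1.0
```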

 

Employee

Dear Curci, Matteo

If you are using the OpenVINO sample without changes ("as is") with a USB camera and an NCS, then your errors seem to indicate that data is not reaching the OpenVINO application. Perhaps it's the USB camera, but honestly, I don't know. To troubleshoot, I'd eliminate the USB camera first and just try the webcam on your laptop. I would also use "CPU" first instead of "MYRIAD". Once you've eliminated hardware issues, you can slowly build back up and re-try the USB camera and NCS.

Hope it helps,

Thanks,

Shubha

Beginner

Sorry but now I tried with:

- Ubuntu

- OpenVINO 2019

- NCS

- laptop camera

I ran both object_detection_ssd_sample and classification_sample, but I get this error:

https://imgur.com/8A1brSh

numpy float32 object is not iterable

It's simply your example with default hardware...

Employee

Dear Curci, Matteo,

This error:

numpy float32 object is not iterable

has nothing to do with OpenVINO. It is a side effect of the real problem, which is that the sample is not finding your NCS2 device. Which version of Ubuntu are you using? Are you using OpenVINO 2019 R2? Please install Ubuntu 16.04 (not 18) to get things working. It can work with 18, but there is an issue with NCS/MYRIAD drivers on Ubuntu 18. It can be fixed by extracting the Ubuntu 16.04 RPM, locating the drivers, and putting them in the right place in Ubuntu 18, but it would be easier to just go with Ubuntu 16.04 until you get things working.

Thanks for your patience !

Shubha

 

Beginner

Ehm... I'm working with the NCS, the first version, on Ubuntu 16.04.

Also, if I run the Python sample scripts, the NCS device works correctly.

 

The model I'm using does classification over 38 classes and returns a prediction array, where 1 marks the predicted class. Could this be the problem? Maybe my model is different from the sample models?

 

 

Employee

Dear Curci, Matteo

If we are still talking about the error in your screenshot above, I see "Cannot Init MYRIAD device :NC_ERROR", but maybe you got past that problem. If you look at the Python Object Detection SSD Async documentation, as long as the model is of SSD type, it should work fine. Since you are using one of our standard Python samples, my guess is that you are not using the correct model.

Hope it helps,

Thanks,

Shubha

Beginner

I never mentioned "Cannot Init MYRIAD device :NC_ERROR". You saw that error in my photo, but it was an old error.

Anyway, some suggestions about the model?

I have a custom model that works and runs inference, but its output doesn't seem to fit the code.

From this:

res = exec_net.requests[cur_request_id].outputs[out_blob]
for obj in res[0][0]:
    # Draw only objects when probability more than specified threshold
    if obj[2] > args.prob_threshold:
        xmin = int(obj[3] * initial_w)
        ymin = int(obj[4] * initial_h)
        xmax = int(obj[5] * initial_w)
        ymax = int(obj[6] * initial_h)
        class_id = int(obj[1])
        # Draw box and label\class_id
        color = (min(class_id * 12.5, 255), min(class_id * 7, 255), min(class_id * 5, 255))
        cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), color, 2)
        det_label = labels_map[class_id] if labels_map else str(class_id)
        cv2.putText(frame, det_label + ' ' + str(round(obj[2] * 100, 1)) + ' %', (xmin, ymin - 7),
                    cv2.FONT_HERSHEY_COMPLEX, 0.6, color, 1)

I can't extract the position of the object, because the format of res is different from what the code expects.

Employee

Dear Curci, Matteo,

The forum post below also pertains to an SSD inference sample. That person was able to fix their problem by studying the SSD sample within the OpenVINO package. Hopefully it can help you too.

https://software.intel.com/en-us/forums/computer-vision/topic/815742

Also, I don't understand your question:

I can't extract the position of the object, because the format of res is different than how the system expects it.

Thanks,

Shubha

Beginner

I'll take a look at the example, but I don't know if it can help me.

About the second question:

The res object, the output of:

exec_net.requests[cur_request_id].outputs[out_blob]

command, should contain the prediction result, correct? It should also contain the position, in terms of coordinates, of the detected object, correct? If yes, how can I extract the position information if my res object has a different shape ((1, 38))?
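One way to make the mismatch concrete is a sketch (helper name hypothetical) that dispatches on the output shape: a 2-D classifier output simply carries no coordinates to extract, only a class index.

```python
import numpy as np

def parse_output(res, prob_threshold=0.5):
    """Hypothetical helper: SSD detection outputs are 4-D (1, 1, N, 7);
    a classifier's output is 2-D (1, num_classes) and has no boxes."""
    if res.ndim == 4 and res.shape[-1] == 7:
        # rows: [image_id, class_id, confidence, xmin, ymin, xmax, ymax]
        return [obj for obj in res[0][0] if obj[2] > prob_threshold]
    if res.ndim == 2:
        return int(np.argmax(res[0]))  # class index only -- no position exists
    raise ValueError("unrecognized output shape %r" % (res.shape,))

# A (1, 38) output can only yield a class id:
print(parse_output(np.eye(1, 38, k=1, dtype=np.float32)))  # 1
```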

 

Thanks
