Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Python async example and async loss of accuracy.

IpslWon
Novice
1,518 Views

I've found the following how-to, leswright1977/RPI4_NCS2: Raspberrry pi 4 Openvino Python (github.com), which is several years old and throws a deprecation warning, specifically:

DeprecationWarning: 'outputs' property of InferRequest is deprecated. Please instead use 'output_blobs' property.
detections = exec_net.requests[cur_request_id].outputs[out_blob]

 

This is also causing a major drop in accuracy, so I assume there's something wrong with the code itself. What is the correct way to iterate through the output, and what is the correct way to call for the results?


14 Replies
IntelSupport
Community Manager
1,486 Views

Hello lpslWon,

Thanks for reaching out.

The deprecation warning you are getting might be due to the old version of OpenVINO that the code was written for, as the property has since been removed. It indicates that the outputs property is no longer valid and must be replaced by output_blobs. You can check the detailed list of changes made to the Inference Engine API on the following page.

https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_API_Changes.html

 

Meanwhile, here is the OpenVINO documentation on how to Integrate the Inference Engine with Your Application, which explains the details. The Asynchronous Inference Request documentation shows how to implement async inference in OpenVINO.

 

Hence, I would recommend you upgrade your OpenVINO to our latest version (2021.3) for better feature support.

 

Regards,

Aznie


IpslWon
Novice
1,478 Views

Nothing in your response comes close to answering my question.
1) I'm using the newest version of the API, which is why I'm getting that deprecation warning in the first place. If I were using an older API it wouldn't have that warning.
2) Nothing in the documentation shows how to change the code to make it work.
3) You also didn't address my comment about the loss of accuracy.
4) The documentation doesn't go into detail: it doesn't explain what to do with the resulting blob, nor is there anywhere that explains the blob's contents or how to iterate over it.


IpslWon
Novice
1,441 Views

For those who might also be running into these issues without any help: it looks like this line is causing the accuracy issue.

in_frame = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843, (300, 300), 127.5)

I'm sure this was the right way at some point, but you can replace it with

in_frame = frame.transpose((2, 0, 1))

which is copied from one of their other how-tos.
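To make the replacement concrete, here is a minimal sketch of the layout-only fix. It assumes the frame has already been resized to the model's 300x300 input, and that mean/scale normalization is baked into the IR, which would also explain the accuracy drop: blobFromImage was applying a second 0.007843 scaling and 127.5 mean subtraction on top of whatever the IR already does.

```python
import numpy as np

def to_nchw(frame):
    """Layout-only preprocessing: HxWxC frame -> 1xCxHxW blob.

    Assumes the frame is already resized to the network input (300x300
    here) and that any mean/scale normalization is baked into the IR,
    so no extra scaling like blobFromImage's 0.007843 / 127.5 is applied.
    """
    chw = frame.transpose((2, 0, 1))          # HWC -> CHW
    return chw.reshape((1,) + chw.shape)      # add batch dim -> NCHW

frame = np.zeros((300, 300, 3), dtype=np.uint8)   # stand-in for a resized camera frame
blob = to_nchw(frame)                             # shape (1, 3, 300, 300)
```

Whether the extra batch-dim reshape is needed depends on what the rest of the script feeds to infer(); the transpose is the part that fixed the accuracy here.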

I would still really like some support on iterating through the blob, as nothing in the how-to works or translates.

IntelSupport
Community Manager
1,415 Views

Hi lpslWon,

I'm sorry for the misunderstanding regarding the OpenVINO version; I see the instructions on the GitHub page you pointed to show that they are using the 2019 version instead of the latest. Thanks for sharing the information regarding the accuracy with this community. For the blob iteration, we are still investigating and will get back with further information soon. Since the project is not officially from our developers, we need some time to test it out on our end.


Regards,

Aznie


IntelSupport
Community Manager
1,393 Views

Hi lpslwon,

Greetings to you.

From our side, we have tested the project, but we are facing different errors. We cannot guarantee or validate compatibility since the project is not from our developers. We would like to apologize, but we cannot support a custom application from somebody else, and it seems the project was tested with a really outdated OpenVINO version. I would suggest you submit an issue on the project's GitHub page for the unexpected behavior.

 

Regards,

Aznie


IpslWon
Novice
1,388 Views

No, I am asking a very straightforward question about your product and the error I'm receiving. I'm reaching out on this board because I purchased a product that requires me to use this tool; this is where the answer should come from. I have already gotten all of it to work. I just need to see an example of how to use an output blob and iterate through it. Iterating through the results is a basic working aspect of the product, and I need you to support me with this.

I have already fixed half of the problems. I need help with basic operations of your product, and your team should support me.

There is no working example I can find in your documentation using async in Python.

Vladimir_Dudnik
Employee
1,374 Views

It seems the MobileNetSSD IR you use from the GitHub repo was generated by OpenVINO 2019.1; this version of the IR is not supported in the latest versions of the OpenVINO runtime (and it seems you are using a newer version of OpenVINO). You need to either regenerate this model's IR with the new version of OpenVINO (you will need the original model for that) or use OpenVINO 2019.1 to run inference for this model.

And yes, the code to iterate over the model output blob is present in the repository you use:

    # each 7-element row is [image_id, label, confidence, xmin, ymin, xmax, ymax]
    for detection in out.reshape(-1, 7):
        inference = []
        obj_type = int(detection[1] - 1)
        confidence = float(detection[2])
        xmin = int(detection[3] * frame.shape[1])   # scale normalised coords to pixels
        ymin = int(detection[4] * frame.shape[0])
        xmax = int(detection[5] * frame.shape[1])
        ymax = int(detection[6] * frame.shape[0])

        if confidence > 0:  # ignore garbage
            inference.extend((obj_type, confidence, xmin, ymin, xmax, ymax))
            data_out.append(inference)
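For reference, the same parsing logic can be sketched in a self-contained form that runs without a model, by feeding it a fabricated detections array. The row values below are made up purely to illustrate the [image_id, label, confidence, xmin, ymin, xmax, ymax] format.

```python
import numpy as np

def parse_ssd_output(out, frame_w, frame_h, conf_threshold=0.0):
    """Turn a flat SSD DetectionOutput blob into per-detection records.

    Each 7-element row is [image_id, label, confidence, xmin, ymin,
    xmax, ymax], with box coordinates normalised to [0, 1]. The -1
    label offset mirrors the snippet above.
    """
    data_out = []
    for detection in out.reshape(-1, 7):
        confidence = float(detection[2])
        if confidence > conf_threshold:        # ignore garbage rows
            data_out.append([
                int(detection[1] - 1),         # object type
                confidence,
                int(detection[3] * frame_w),   # pixel coordinates
                int(detection[4] * frame_h),
                int(detection[5] * frame_w),
                int(detection[6] * frame_h),
            ])
    return data_out

# one fabricated detection on a 640x480 frame
fake = np.array([0, 8, 0.9, 0.1, 0.2, 0.5, 0.6], dtype=np.float32)
detections = parse_ssd_output(fake, 640, 480)
```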
IpslWon
Novice
1,352 Views

@Vladimir_Dudnik Putting aside the model issue, as I've already taken care of that (however, thank you just the same for the information; it shows you're working on this and I appreciate it), my issue is solely what the OP was about:

  • the deprecation warning,
  • how to change the code to use the proper method, and
  • why there was a loss of accuracy.
    • I figured out the loss of accuracy was the np.array not being transposed to the right order.

Searching through your GitHub repo and docs, no, I don't see what you are sharing with regard to the for loop. You just shared the loop from the example I'm asking about, but the wrong code! Can you share a link that shows a for loop from an async call, from a doc or a GitHub repo from your company? Every example I see in your company's GitHub repo does an enumerate, but not from a requests[id] call (like object_detection_demo.py, which I have already gone through). On the docs pages, the examples stop at the call being put into an out_blob, or just print out a result but don't actually show a print or log function. I digress.

If I change this line: 

detections = exec_net.requests[cur_request_id].outputs[out_blob]

which gives the warning to 

 detections = exec_net.requests[cur_request_id].output_blobs[out_blob]

 

It looks like you're looking at the wrong code. I can't share a direct link to the right page; something on your site redirects to the top of that GitHub repo. I'm looking at "refined_picam_async.py", not "refined_picam_test_NCS2_mobilenet.py", which is what you shared in your response.
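(For anyone landing here with the same warning: a later reply in this thread mentions "using the buffer from the output", which matches the 2021.x Inference Engine Python API, where output_blobs maps layer names to Blob objects rather than raw arrays, so the array sits behind a .buffer property. A runnable sketch of the access-pattern change, using a minimal stand-in class since no model is needed to show it — the blob name and shape below are illustrative only:)

```python
import numpy as np

class FakeBlob:
    """Minimal stand-in for an InferenceEngine Blob; in the real 2021.x
    Python API, a Blob exposes its numpy array via the .buffer property."""
    def __init__(self, array):
        self.buffer = array

# output_blobs maps output-layer names to Blob objects; "DetectionOutput"
# is the usual output name for SSD-like detection models
output_blobs = {"DetectionOutput": FakeBlob(np.zeros((1, 1, 100, 7), dtype=np.float32))}

out_blob = "DetectionOutput"

# old, deprecated: detections = request.outputs[out_blob]              (raw ndarray)
# new:             detections = request.output_blobs[out_blob].buffer  (ndarray via Blob)
detections = output_blobs[out_blob].buffer
```

With a real request object, that would make the deprecated line read detections = exec_net.requests[cur_request_id].output_blobs[out_blob].buffer, the trailing .buffer being the piece the warning's message doesn't spell out.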

 

Vladimir_Dudnik
Employee
1,319 Views

Hi @IpslWon 

Let's put this old third-party GitHub repo aside; the code in it is based on a pretty old version of OpenVINO and probably has some mistakes. It is better to take a look at the OpenVINO samples or the OpenVINO Open Model Zoo demos; these at least are regularly validated for each release and proven to work. Moreover, Open Model Zoo allows you to download a set of Intel and public pre-trained models (and even convert public models to IR, so you do not need to guess or study which Model Optimizer parameters should be applied to make the conversion correct).

Note that among the many public models available through Open Model Zoo there are several implementations of mobilenet-ssd topologies and their variants, so you may experiment and choose the one which fits your needs best (or train and fine-tune a model for your particular task with OpenVINO Training Extensions).

Regarding how to process the output of SSD-like models, it should be quite straightforward: the SSD topology defines the output format as a blob of a specific shape, and many SSD-like models developed independently follow this format. Examples of how to parse such output can be found in the OpenVINO samples and OMZ demos, like object_detection_demo.

Note that in OpenVINO 2021.3 the OMZ demos started to follow a modular design. We introduced a model class hierarchy, which allows the application code to be unified while hiding model-specific pre- and post-processing in a particular model class. For SSD-like models, you may examine this source file to see the SSD-specific pre- and post-processing.

It is worth noting that in the OpenVINO 2021.3 release time frame we also introduced an OpenVINO ARM inference plugin. Although it is not distributed as part of the Intel OpenVINO install package, it is available in open source, in the OpenVINO contrib GitHub repo. Anyone is welcome to review and contribute there.

This open source plugin can be built together with dldt and was tested with a subset of the Open Model Zoo demos on the Raspberry Pi 4 platform, so you now actually have a choice to run inference on a Raspberry Pi board on the MyriadX, on the ARM CPU, or on both.

Regards,
  Vladimir

IpslWon
Novice
1,308 Views

While everything you're showing me is interesting, it's totally avoiding the initial question. 

I already have my own custom model that I've built and that works, so everything about models is irrelevant and a waste of my time and yours, since you had to type it.

Nowhere in the example you give is there a use of an async command. It's also, quite frankly, useless from a training standpoint. There are so many custom functions and abstractions being called that it's pointless to try to follow it as a simple "hello world" example. It makes sense to all of you because you live in it. If it had comment blocks to explain what was happening, that would be one thing. Instead, it calls other files and functions.

 # Submit for inference
detector_pipeline.submit_data(frame, next_frame_id, {'frame': frame, 'start_time': start_time}) 

 

This may call the async function, but then it's not a how-to script now, is it? (It appears to be in some common/python directory.)

So I will ask again: show me the exact file where async is called, as a hello-world example.

I've got something working using the buffer from the output, but that's being deprecated. If I have time, I will go look into this undocumented "common/python" directory, but this is beyond frustrating at this point and not where I should be looking anyway.

Vladimir_Dudnik
Employee
1,295 Views

Hello,

Usually a learning curve does not come without spending some time. There were references to samples and demos where you may find examples of how to make an asynchronous inference call, like classification_sample_async.

Note that the link is to the OpenVINO online documentation, which only contains the sample's readme. To view the actual code, please find the sample in your OpenVINO install folder, <openvino_install_dir>\deployment_tools\inference_engine\samples\python\classification_sample_async, and look through the code.

Also, it makes sense to review the OpenVINO Python API documentation, which gives a pseudo-code sample for an async_infer call:

exec_net = ie_core.load_network(network=net, device_name="CPU", num_requests=2)
exec_net.requests[0].async_infer({input_blob: image})
request_status = exec_net.requests[0].wait()
res = exec_net.requests[0].output_blobs['prob']

Hope this helps.

Regards,
  Vladimir

IpslWon
Novice
1,280 Views

Right, and having bad documentation and no workable examples makes the learning curve worse. 

Let's take the code you just shared. That doesn't work at all when making inferences for object detection: object detection models use "DetectionOutput" in the output blob. It also doesn't show how to iterate through the resulting data. A couple of calls down, there's an example of what I mean.

exec_net = ie_core.load_network(network=net, device_name="CPU", num_requests=2)
exec_net.requests[0].infer({input_blob: image})
res = exec_net.requests[0].output_blobs['prob']
np.flip(np.sort(np.squeeze(res)),0)
array([4.85416055e-01, 1.70385033e-01, 1.21873841e-01, 1.18894853e-01,
5.45198545e-02, 2.44456064e-02, 5.41366823e-03, 3.42589128e-03,
2.26027006e-03, 2.12283316e-03 ...])

I would assume a print or log function was called, yet it's not shown. Depending on how you call it, from my personal experience, it can just show something like api.openvino.blob.

Let us also remember that this isn't a product that actually creates anything; it's a platform that increases the performance of an already-created thing. Creating the model is a whole process unto itself. I can't be expected to put in the same amount of work for this, as I don't need OpenVINO to do inference, but OpenVINO needs other products' output to do inference.

A how-to should never call other custom-made functions, or it's not a how-to; it should be in one file. Your documentation and "examples" don't do that. That is why I went to the example I found on the web. From that one example, several years old and outdated, I was able to create a working example in a matter of days of free time. I was unable to do the same with over a week of free time using your documentation.

I have been able to fix the accuracy issue as well as remove the deprecation warning. For the amount of back and forth on this ticket, it was much less helpful than it should have been. Bad documentation isn't new to Intel, so it is what it is. It is telling that I, with almost no experience, was able to get the old code to work when the employees couldn't.

I am seeing a very real improvement in inference speed, but the lack of workable documentation leaves me feeling like this product isn't ready for primetime, and I definitely won't be updating the software, given how many breaking changes seem to happen without any documentation on how to fix them.

Vladimir_Dudnik
Employee
1,266 Views

I would partially agree: there is room for improvement in the documentation and samples/demos, especially in creating quite simple "how to" samples, and we are working on this. For example, we are working on a Jupyter notebook demo collection (currently there is only one notebook, but this will be extended), so if it is more convenient for you to learn the OpenVINO API through notebooks, I'd refer you to the openvino_notebooks repository.

And we still have workable demos and samples; if there are issues launching a provided sample or demo according to the instructions in its documentation, then it is clearly a bug and we have to fix it, if you could point out which sample or demo does not work as intended. Although classification_sample_async, which I pointed to earlier, might still not be as simple as one would expect, it is implemented in a single file, so it should be easy to follow.

The pseudo-code from the Python API documentation is intended to show the basic idea of how to call a function, like the asynchronous async_infer() or the synchronous infer(), which you have reviewed. It is not about how to parse output blob data. Anyway, I completely agree it could be improved, so thank you for pointing this out.

 

IntelSupport
Community Manager
1,238 Views

Hi,

This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question. 


Regards,

Aznie

