Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Customized vgg16 model is converted to openvino, need help to write code for inferencing

Snehal_Yesugade
Beginner

Hi

 

We have trained a customized VGG16 model to classify two classes.

To integrate it into the Intel EIS framework, we have converted it to an OpenVINO model.

However, we are running into issues while writing the inference code. We have been referring to the existing documentation, but it is not enough.

We need help with this.

Parag_Jain
Employee

Hi Snehal,

 

I would like you to share more information about the issue you are facing. For example:

Which language are you using? Is the issue while writing the code or while deploying it? Is it an issue with a particular API that you are using? Which version of OpenVINO are you using?

Please add any other relevant details about the problem you are facing so that we can help resolve it.

ravi31
Novice

Hi Parag!

We are using OpenVINO 2021.3.394.

The problem statement is as follows:

1) We trained a model in TensorFlow and exported it as a model.pb file.

2) Then, using the mo.py script in the Model Optimizer, we converted the .pb file into three separate files: model.bin, model.mapping, and model.xml.

3) Our objective now is to run videos through the neural network and get the outputs for each frame.

4) However, we couldn't find much documentation on that.

5) We had previously used and modified a different model (pose estimation) that comes with the OpenVINO installation, but that one came with pre-written inference code.

6) We tried to reuse that file for inference, but it contains many architecture-specific code blocks that are quite abstract, so we scrapped that idea.

Please guide us on this. If there is any script in the OpenVINO documentation or GitHub repository that simply takes the .bin and .xml files of an OpenVINO-converted model and runs an image or video through it to produce the network's output, please point us to it.

Parag_Jain
Employee

Hi Ravi,

 

From your reply, I understand your requirement has two parts:

 

1) Is there any script that can directly take an input image/video frame along with the IR model (bin + xml files) and produce the output?

Yes. You can use the benchmark app that comes with the OpenVINO installation, so you already have it on your machine. It is also available here: https://github.com/openvinotoolkit/openvino/tree/master/tools/benchmark

Note: this will perform the inference and therefore produce the output, but it will not plot the output back onto the input image/video frame.
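For illustration, here is a minimal sketch of what such a script can look like with the OpenVINO 2021.x Inference Engine Python API. The file names model.xml, model.bin, and test.jpg are placeholders for your own files, and the preprocessing assumes a standard NCHW input like VGG16:

import cv2
import numpy as np
from openvino.inference_engine import IECore

# Load the IR produced by the Model Optimizer (paths are placeholders)
ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

# Look up the input/output blob names and the expected input shape (N, C, H, W)
input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))
n, c, h, w = net.input_info[input_blob].input_data.shape

# Preprocess one image: resize to the network input size and reorder HWC -> NCHW
image = cv2.imread("test.jpg")
blob = cv2.resize(image, (w, h)).transpose((2, 0, 1))[np.newaxis, ...]

# Run synchronous inference; the result is a dict keyed by output blob name
result = exec_net.infer(inputs={input_blob: blob})
print(result[output_blob].shape)  # e.g. (1, 2) for a two-class VGG16 head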

 

2) How can I plot the produced output back onto the input image/video frame?

The output itself is produced by the OpenVINO API, with the dimensions you have defined (since it is a custom VGG16). If it is similar to the original VGG16, the output dimensions are as given here: https://docs.openvinotoolkit.org/latest/omz_models_model_vgg16.html

However, plotting the output generally requires postprocessing, which you will have to do yourself; it is not done by the OpenVINO APIs. You can take a look at the sample applications that come with the OpenVINO installation, or here: https://github.com/openvinotoolkit/openvino/tree/master/inference-engine/ie_bridges/python/sample , to see how the output is obtained from the OpenVINO Python APIs and then post-processed.
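As one illustration of that postprocessing for a two-class classifier (the label names, video path, and drawing style below are assumptions, not part of any Intel sample), you can argmax the per-frame output and draw the winning label back onto the frame with OpenCV:

import cv2
import numpy as np
from openvino.inference_engine import IECore

LABELS = ["class_0", "class_1"]  # placeholder names for your two classes

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))
n, c, h, w = net.input_info[input_blob].input_data.shape

cap = cv2.VideoCapture("input.mp4")  # placeholder video path
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess the frame and run inference on it
    blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1))[np.newaxis, ...]
    scores = exec_net.infer(inputs={input_blob: blob})[output_blob][0]
    # Postprocessing: softmax + argmax, then plot the label on the frame
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    label = LABELS[int(np.argmax(probs))]
    cv2.putText(frame, "%s: %.2f" % (label, probs.max()), (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("result", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()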

 

Do let me know if there is anything else that Intel can help you with.

ravi31
Novice

The references were quite helpful. I tried it with a simple batch of images and, with some modifications, I was able to run inference on them.

Now I will be integrating it into the main project. I will let you know if there are any issues.

Thanks a lot!

Syamimi_Intel
Moderator

Hi Snehal Yesugade,

This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.



Regards,

Syamimi


Snehal_Yesugade
Beginner

Thank you, Intel team, for the solution. We can close this thread.

Parag_Jain
Employee

Thank you for the confirmation @Snehal_Yesugade. We are closing the thread now.
