Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Preproc for OpenVINO model

AjithKJ
Beginner

Hi,

 

  • I have an object detection OpenVINO model (.bin and .xml format) along with pre-processing and post-processing scripts written around a def process_frame(frame: VideoFrame) function.
  • The part that is not working is the preproc code - using the frame variable we run some pre-processing steps, pass the result to the model, and then run post-processing.
  • Command used:

gst-launch-1.0 filesrc location=<input_video> ! decodebin ! videoconvert ! gvapython module="preproc.py" ! gvainference model="saved_model.xml" model-proc="saved_model.json" device=CPU ! queue ! gvapython module="postpro.py" ! gvawatermark ! videoconvert ! vaapisink sync=false

 

  • How can we save the preprocessed frame produced in the gvapython preproc module and pass it to the gvainference model so that the pipeline runs successfully?
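For context, the core of such a pre-processing step is usually a scale-and-shift on the pixel data. A minimal, dependency-free sketch (the function name and ranges here are our illustration, not taken from the attached scripts):

```python
# Hypothetical sketch: the scale-and-shift a gvapython preproc module
# typically applies to 8-bit pixel data before inference.
def normalize_pixels(pixels, lo=0.0, hi=1.0):
    """Map 8-bit pixel values (0..255) into the range [lo, hi]."""
    scale = (hi - lo) / 255.0
    return [p * scale + lo for p in pixels]

# Endpoints of the input range map to the endpoints of the target range.
print(normalize_pixels([0, 255]))              # [0, 1] scaling
print(normalize_pixels([0, 255], -1.0, 1.0))   # [-1, 1] scaling
```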
Megat_Intel
Moderator

Hi AjithKJ,

Thank you for reaching out to us.

 

You can configure model pre- and post-processing operations, performed before/after inference, using the model-proc file; pre-processing is configured via the input_preproc section. You can check out samples/gstreamer/model_proc for example .json files covering both pre- and post-processing.
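As a rough illustration, a model-proc file with an input_preproc section looks something like this (a minimal sketch only; the schema version and parameter values below are placeholders — check the model_proc samples for the exact fields your DL Streamer version supports):

```json
{
    "json_schema_version": "2.2.0",
    "input_preproc": [
        {
            "format": "image",
            "params": {
                "resize": "no-aspect-ratio",
                "color_space": "RGB",
                "range": [0.0, 1.0]
            }
        }
    ],
    "output_postproc": []
}
```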

 

On the other hand, what errors did you get when the pre-processing did not work? To investigate further, could you provide us with the following details:

 

  • Model Used:
  • Model-proc pre-processing and post-processing JSON file:
  • Pre-processing and post-processing Python code:

 

 

Regards,

Megat


Harsha_KN
Beginner

Hi Megat,

Thanks a lot @Megat_Intel for the reply. I'm providing the details you requested below on behalf of @AjithKJ.

  • Model used - The original model was a Keras model; we converted it to OpenVINO format (attached a zip file with the .xml file, but couldn't attach the .bin due to the file-size limit).

 

  • Model-proc pre-processing and post-processing JSON file - Attached the "saved_model.json" model-proc file we used, which has an "input_preproc" section. With this model-proc file we couldn't get correct predictions from our converted OpenVINO model in the GStreamer pipeline. For post-processing we just add metadata to the video frame; the "postproc.py" Python post-processing script is attached.

 

  • Pre-processing and post-processing Python code - For the same model, when both pre-processing and inference are done inside a standalone Python script, we get correct predictions; that script is attached (preprocess_inference.py). However, with this approach we observed performance issues, so we would like to run inference with the "gvainference" GStreamer element and do the pre-processing in the model-proc file. We think the pre-processing in our model-proc file is not set correctly; we may have missed something.

 

Please help us figure out the correct "input_preproc" settings for our model-proc file, and do let me know if you need any more information.

 

Regards,

Harsha

 

Wan_Intel
Moderator

Hi Harsha_KN,

Thanks for sharing the information with us.

 

Referring to the previous post, the original model was a Keras model, and it has been converted to an Intermediate Representation. Could you please share the name of the original model? Examples of .json files using various models from Open Model Zoo and some public models are available at the following link:

https://github.com/dlstreamer/dlstreamer/tree/master/samples/gstreamer/model_proc

 

On another note, referring to the .json file that you have shared, we noticed that you have used the key "range" with the value [-1.0, 1.0]. For your information, the possible values for the key "range" are [0.0, 1.0]. For more information on the pre-processing and post-processing configuration, please refer to the following link:
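If the Keras model was trained on inputs in [-1.0, 1.0], the same mapping can often be expressed as a [0.0, 1.0] range followed by a mean/std normalization (assuming your model-proc schema version exposes mean/std parameters, which is worth verifying). The arithmetic below just checks that the two forms agree:

```python
# Two hypothetical ways to map an 8-bit pixel value into [-1, 1]:
def to_minus1_1(x):
    # direct: scale into [-1, 1] in one step
    return x / 255.0 * 2.0 - 1.0

def via_range_then_norm(x, mean=0.5, std=0.5):
    # first scale into [0, 1] (what "range": [0.0, 1.0] does),
    # then normalize with (value - mean) / std
    return (x / 255.0 - mean) / std

# The two mappings agree (up to floating-point rounding) for all values.
for v in (0, 64, 128, 255):
    assert abs(to_minus1_1(v) - via_range_then_norm(v)) < 1e-12
print("both mappings agree")
```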

 

On the other hand, if you are still facing the issue, could you please share the steps to reproduce it so that we can replicate it on our end? You may upload the required files to your Google Drive so that we can request them from you.

 

 

Regards,

Wan

 

AjithKJ
Beginner

Hi,

 

Thank you for the detailed response.

Here are the steps we followed to convert from a TensorFlow .h5 model to OpenVINO format:

  • We have our own customized Keras classification model. (attached model.py)
  • Used a script to convert the .h5 model to .pb. (attached h5_to_pb.py)
  • Ran the following command:
    • mo --saved_model_dir <saved_model.pb> --output_dir openvino_tensorflow_model --input_shape [1,64,64,3]
  • The final OpenVINO model is ready. (attached inside the classification_model folder)

Issue:

  • If we run the Python code alone, without GStreamer, it works fine because the preprocessing step is handled there.
  • The problem is that when we run the inference script via the GStreamer pipeline, we still need that preprocessing step; without it we do not get good predictions.

 

Doubt:

  • Where can we add this preprocessing step in the GStreamer pipeline? We cannot add it to the .json file under input_preproc, since that handles only range, mean, etc. (attached screenshot)
  • We think we need to add a preprocessing gvapython element before running gvainference.
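A custom gvapython step like the one described above would, for example, resize frames to the model's 64x64 input before gvainference. A dependency-free nearest-neighbour sketch of that resize logic (names are ours; a real module would typically apply OpenCV to frame.data() instead):

```python
# Hypothetical sketch of a resize step a custom gvapython pre-processing
# module could perform before gvainference (shown on plain lists here;
# a real module would operate on the frame's pixel buffer).
def nearest_resize(img, out_h, out_w):
    """Nearest-neighbour resize of a 2-D list of pixel values."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

src = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
# Downscaling 4x4 -> 2x2 picks every other row and column.
print(nearest_resize(src, 2, 2))  # [[1, 3], [9, 11]]
```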

AjithKJ_0-1720528137200.png

 

 

Here is the Google Drive link for the code mentioned in this comment.

https://drive.google.com/drive/folders/1QYmJC8AMn91-uLCzFeHf3WvHxijlaQ0p?usp=sharing

 

Wan_Intel
Moderator

Hi AjithKJ,

Thanks for sharing the information with us.

 

Let me check with the relevant team and we will get back to you as soon as possible.

 

 

Regards,

Wan

 
