Vairaprakash
Beginner

Problem with OpenVINO inference with Faster-RCNN


I used a Faster-RCNN model for Optical Character Recognition (OCR) on invoices. It was trained on more than 300 invoices of varying shapes (heights and widths). The model performed well and predicted proper boxes for four labels: invoice number, invoice date, vendor, and invoice total. I then deployed the same model to OpenVINO and obtained an IR file. The problem is that the IR does not produce the proper output, and sometimes the .bmp file is not generated during inference. So my question is: must we train only on images (invoices) of a single fixed size, or is there another option in OpenVINO to produce proper output from a model trained on images (invoices) of various sizes?


6 Replies
Rizal_Intel
Moderator

Hi Vairaprakash,


If the IR of your model gives such different inference results from your original model, there is a probability that your color input channels are reversed (e.g., TensorFlow usually uses RGB while OpenCV usually uses BGR).


Could you try reversing the input channels when creating the IR,

or switching the channel order when your application reads the image (e.g., for OpenCV, im_rgb = cv2.cvtColor(im_cv, cv2.COLOR_BGR2RGB))?
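For illustration, the same channel swap can be done with a plain NumPy slice, which is equivalent to the cv2.cvtColor call above (a minimal sketch; the helper name is mine, not from OpenCV):

```python
import numpy as np

def bgr_to_rgb(image: np.ndarray) -> np.ndarray:
    """Reverse the channel order of an H x W x 3 image.

    Equivalent to cv2.cvtColor(image, cv2.COLOR_BGR2RGB), but
    without the OpenCV dependency.
    """
    return image[:, :, ::-1]

# A 1x1 "image" whose single pixel is (B=10, G=20, R=30).
pixel = np.array([[[10, 20, 30]]], dtype=np.uint8)
print(bgr_to_rgb(pixel)[0, 0].tolist())  # [30, 20, 10]
```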


Regards,

Rizal


Vairaprakash
Beginner

Hi Rizal,

I already use the following parameters during IR creation:

1. --reverse_input_channels

2. --input_shape [1,600,1024,3]

So my question is: with the TensorFlow Object Detection model, can I reshape the images (invoices) during the Inference Engine part or not?

Rizal_Intel
Moderator

Hi Vairaprakash,


You cannot reshape the image during the Inference Engine part.

Resizing images should be a preprocessing step before the data is pushed into the neural network.

The neural network has a fixed input shape (in your IR it is [1,600,1024,3]).


You should resize the image using OpenCV and then feed it to the Inference Engine.
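As a rough sketch of that preprocessing step: in practice you would call cv2.resize(img, (1024, 600)); the nearest-neighbour helper below just keeps the example dependency-free, and the invoice dimensions are the ones mentioned in this thread.

```python
import numpy as np

def resize_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resize of an H x W x C image.

    In a real pipeline you would use cv2.resize(img, (out_w, out_h));
    this NumPy version only illustrates the preprocessing step.
    """
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h  # source row for each output row
    cols = np.arange(out_w) * in_w // out_w  # source column for each output column
    return img[rows[:, None], cols]

# Resize an arbitrary 1750x1240 invoice scan to the IR's fixed 600x1024 input,
# then add the batch dimension expected by the [1,600,1024,3] input shape.
scan = np.random.randint(0, 255, size=(1750, 1240, 3), dtype=np.uint8)
batch = resize_nearest(scan, 600, 1024)[np.newaxis, ...]
print(batch.shape)  # (1, 600, 1024, 3)
```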


Regards,

Rizal


Vairaprakash
Beginner

Hi Rizal,

Thank you for your quick reply. During IR creation I added the --keep_shape_ops parameter, thinking that it would allow me to change the size of the images (invoices) during the Inference Engine part. But box detection did not happen with 1240x1750 invoices, and the .bmp file was also not created. So my question is: why would we use the --keep_shape_ops parameter? What does it let us do?

Rizal_Intel
Moderator

Hi Vairaprakash,

 

No, --keep_shape_ops does not allow you to change the size of the image during Inference Engine execution.


The --keep_shape_ops flag should be used when you want to reshape the neural network, i.e., change its input size.

It allows you to change the input shape of the IR using the reshape method (which does not change the image size).

This would require you to reload the IR to your inference device after reshaping, as shown in this code example.
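A minimal sketch of that reshape-then-reload flow against the legacy Inference Engine Python API (openvino.inference_engine); the file paths, shape, and helper name are placeholders of mine, not from this thread:

```python
def load_reshaped_network(xml_path, bin_path, new_input_shape, device="CPU"):
    """Reshape an IR's input and reload it onto the inference device.

    Sketch against the legacy OpenVINO Inference Engine Python API.
    Note: reshape() changes the shape the network expects; it does
    NOT resize your images for you.
    """
    from openvino.inference_engine import IECore  # legacy (pre-2022) API

    ie = IECore()
    net = ie.read_network(model=xml_path, weights=bin_path)
    input_name = next(iter(net.input_info))
    net.reshape({input_name: new_input_shape})
    # The reshaped network must be reloaded before running inference.
    return ie.load_network(network=net, device_name=device)

# Usage (placeholder paths and shape):
# exec_net = load_reshaped_network("frcnn.xml", "frcnn.bin", [1, 3, 600, 1024])
```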

 

Regards,

Rizal



Rizal_Intel
Moderator

Hi Vairaprakash,


Intel will no longer monitor this thread since this issue has been resolved. If you need any additional information from Intel, please submit a new question.


Regards,

Rizal

