This is my code:
from openvino.inference_engine import IENetwork, IEPlugin

plugin = IEPlugin(device="CPU")
# model_xml and model_bin are the paths to the IR .xml and .bin files
net = IENetwork(model=model_xml, weights=model_bin)
net.batch_size = 1
#Out is 1,3
But when I convert to IR, the shape is [1,600,600,3], and in the .xml file:
<?xml version="1.0" ?>
<net name="hand" version="10">
<layer id="0" name="Preprocessor/mul/x/Output_0/Data_/copy_const" type="Const" version="opset1">
<data element_type="f32" offset="0" shape="1,1,1,1" size="4"/>
<port id="1" precision="FP32">
<layer id="1" name="image_tensor" type="Parameter" version="opset1">
<data element_type="f32" shape="1,3,600,600"/>
<port id="0" precision="FP32">
Or am I missing something?
Let's take this Faster R-CNN model's .prototxt file as an example, since I don't have your specific .prototxt file (the concept is the same): https://raw.githubusercontent.com/rbgirshick/py-faster-rcnn/master/models/pascal_voc/VGG16/faster_rc...
The first layer is the input layer, and its expected data shape is [1,3,255,255]. The required input image size in this case is 255x255. The image is an RGB image, so the channel dimension is 3, and the leading 1 is the batch size (one image per inference). This is why the shape begins with [1,3].
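To see how a single RGB image maps onto that [1,3,H,W] layout, here is a minimal sketch (using NumPy, with a dummy 255x255 image standing in for a real one). Images are usually loaded in HWC order (height, width, channels), while the Inference Engine expects NCHW, so the axes are transposed and a batch dimension is added:

```python
import numpy as np

# Dummy 255x255 RGB image in HWC layout (height, width, channels),
# standing in for an image loaded with e.g. OpenCV
image = np.zeros((255, 255, 3), dtype=np.float32)

# Reorder HWC -> CHW, then prepend the batch dimension -> NCHW
blob = image.transpose((2, 0, 1))[np.newaxis, ...]

print(blob.shape)  # (1, 3, 255, 255)
```

The leading 1 is the batch size, the 3 is the RGB channel count, and the last two axes are the spatial size.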
You can resize the model's input, which is covered in this step-by-step guide: https://www.youtube.com/watch?v=Ga8j0lgi-OQ
For simplicity, you can define input_model as a variable pointing to your Faster R-CNN model.
If you want to use an input image of a different size without training a new model, the Model Optimizer can adjust the sizes for you, for example with: --input_shape [1,3,100,100]
With this parameter, your input image size becomes 100x100, and the sizes of all other layers down the pipeline are adjusted accordingly.
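A full Model Optimizer invocation might look like the sketch below. The model path is a placeholder; substitute your own frozen model file:

```shell
# Hypothetical Model Optimizer run; frozen_inference_graph.pb is a placeholder path
python mo.py \
    --input_model frozen_inference_graph.pb \
    --input_shape [1,3,100,100]
```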
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.