Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision related to Intel® platforms.

OpenVINO inference model outputting (17,17,32) while TensorFlow model outputs (32,17,17)

jain__Yasha
Beginner

I trained a TF model and converted it to IR format.

However, the output of OpenVINO has the same dimensions as the TF output, just in reversed (mirrored) order.

How and why is this happening?


I am converting the poseEstimation TFLite model from TF.js to TF, and then to IR format.

JAIVIN_J_Intel
Employee

Hi Yasha,

What command are you using to convert the model to IR with the Model Optimizer?

TensorFlow uses the NHWC tensor layout, while the Inference Engine uses NCHW. An internal conversion therefore happens to bring the layers into NCHW format.

You could use the --disable_nhwc_to_nchw parameter with mo_tf.py to disable the default translation from NHWC to NCHW.
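For example (a hypothetical invocation; substitute your own model path and input shape):

python mo_tf.py --input_model model.pb --input_shape [1,257,257,3] --disable_nhwc_to_nchw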

Regards,

Jaivin

jain__Yasha
Beginner

Hi Jaivin,

Here's my script. I am using the NCHW layout, so I did not convert the model using --disable_nhwc_to_nchw.

    model_xml = args.model
    model_bin = os.path.splitext(model_xml)[0] + '.bin'

    # Plugin initialization for specified device and load extensions
    # library if specified.
    log.info("Creating Inference Engine...")
    ie = IECore()
    if args.cpu_extension and 'CPU' in args.device:
        ie.add_extension(args.cpu_extension, "CPU")
    # Read IR
    log.info("Loading network files:\n\t{}\n\t{}".format(model_xml, model_bin))
    net = IENetwork(model=model_xml, weights=model_bin)

    # The IR input is NCHW, so unpack the shape in that order
    n, c, h, w = net.inputs['image'].shape
    net.batch_size = n

    log.info('Loading IR to the plugin...')
    exec_net = ie.load_network(network=net, device_name=args.device,
                               num_requests=2)
    del net

    frame = cv2.imread(args.input)
    frame = cv2.resize(frame, (w, h))
    frame = frame.transpose((2, 0, 1))  # HWC -> CHW to match the NCHW input
    frame = frame.reshape((n, c, h, w))

    log.info("Start inference")
    start_time = datetime.now()
    print(start_time)

    for i in range(60):
        exec_net.infer({'image': frame})

    end_time = datetime.now()
    infer_time = end_time - start_time
    print(end_time)

    log.info("Finish inference")
    log.info("Inference time is {}".format(infer_time))

JAIVIN_J_Intel
Employee

Hi Yasha,

Can you please share your TF model and other required files for us to reproduce the issue?

Also, please mention the Model Optimizer command you used. If possible, send screenshots of the observed output.

Regards,

Jaivin

jain__Yasha
Beginner

Hi Jaivin,

The files are uploaded on my GitHub: https://github.com/Yaffa16/-rpi_posenet

It contains:

the TF file: https://github.com/Yaffa16/-rpi_posenet/blob/master/model/model-mobilenet_v1_100.pb

the OpenVINO model: https://github.com/Yaffa16/-rpi_posenet/blob/master/posenet_v1_1_Posent2_model/model-mobilenet_v1_101.xml

Here's the command I used to convert:

python .\deployment_tools\model_optimizer\mo.py --input_model C:\Users\jainy\OneDrive\Desktop\posenset2\_models\model-mobilenet_v1_101.pb --framework tf -o ~\posenet_v1_1_Posent2_model\ --input image --input_shape [1,257,257,3] --output "offset_2,displacement_fwd_2,displacement_bwd_2,heatmap" --data_type FP16 --generate_deprecated_IR_V7
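As a sanity check, the converted output shapes can also be read straight from the IR before running inference (a minimal sketch, using the same deprecated API as my script above):

from openvino.inference_engine import IENetwork

net = IENetwork(model='model-mobilenet_v1_101.xml',
                weights='model-mobilenet_v1_101.bin')
for name, data in net.outputs.items():
    print(name, data.shape)  # prints the NCHW-ordered output shapes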


The output of the TF inference is:

INFO:root:HEATMAPS
INFO:root:(21, 21, 17)
INFO:root:offsets_result
INFO:root:(21, 21, 34)
INFO:root:displacement_fwd_result
INFO:root:(21, 21, 32)
INFO:root:displacement_bwd_result
INFO:root:(21, 21, 32)

Output of the OpenVINO model is:

heatmap.shape
(17, 17, 17)
<class 'numpy.ndarray'>
offset_2.shape
(34, 17, 17)
displacement_bwd_2.shape
(32, 17, 17)
displacement_fwd_2.shape
(32, 17, 17)
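These look like exactly the channels-first permutation of the TF shapes (the 21-vs-17 spatial size is presumably just a different input resolution between the two runs). A quick numpy check of the mapping:

import numpy as np

a = np.zeros((32, 17, 17))          # channels-first, as the IR returns it
print(np.moveaxis(a, 0, -1).shape)  # (17, 17, 32) -- channels-last, as TF returns it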
