Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Using the RetinaNet model in OpenVINO

Koriukin__Maksim
Beginner

Hello! I ran into difficulties using the RetinaNet implementation from https://github.com/fizyr/keras-retinanet in OpenVINO.
The fizyr implementation separates the training model from the inference model. The training model outputs the raw classification and regression layers; the inference model adds custom layers on top of them. These layers apply the regression values to the anchors and perform NMS.
This can be seen here: https://github.com/fizyr/keras-retinanet/blob/master/keras_retinanet/models/retinanet.py#L287
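For context, here is a minimal NumPy sketch of what the first of those custom layers does, applying the predicted deltas to the anchor boxes along the lines of fizyr's bbox_transform_inv (the mean/std values are assumptions taken from that repo's defaults):

import numpy as np

def regress_boxes(anchors, deltas, mean=0.0, std=0.2):
    """Apply predicted regression deltas to (x1, y1, x2, y2) anchors."""
    widths  = anchors[..., 2] - anchors[..., 0]
    heights = anchors[..., 3] - anchors[..., 1]
    x1 = anchors[..., 0] + (deltas[..., 0] * std + mean) * widths
    y1 = anchors[..., 1] + (deltas[..., 1] * std + mean) * heights
    x2 = anchors[..., 2] + (deltas[..., 2] * std + mean) * widths
    y2 = anchors[..., 3] + (deltas[..., 3] * std + mean) * heights
    return np.stack([x1, y1, x2, y2], axis=-1)

Per-class NMS is then applied to the regressed boxes to produce the final detections.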

I tried converting the trained model without adding the extra inference layers in Keras. That worked, but the model does not have the same output dimensions as the Keras implementation:

OpenVINO: (1, 57600, 1)
Keras: (1, 24633, 4), (1, 24633, 1)
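(As a side note, 57600 is exactly what the standard RetinaNet anchor grid yields for a 480x640 input, with 9 anchors per location over FPN levels P3-P7 at strides 8-128; the Keras shapes presumably reflect the different input size produced by fizyr's own resizing. A quick check, assuming those strides:)

import math

h, w, anchors_per_cell = 480, 640, 9
total = anchors_per_cell * sum(math.ceil(h / s) * math.ceil(w / s) for s in (8, 16, 32, 64, 128))
print(total)  # 57600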

In the generated XML, I can see the two output layers that I need:
 

<layer id="198" name="classification/concat" precision="FP32" type="Concat">
    <data axis="1"/>
    <input>
        <port id="0">
            <dim>1</dim>
            <dim>35721</dim>
            <dim>1</dim>
        </port>
        <port id="1">
            <dim>1</dim>
            <dim>9216</dim>
            <dim>1</dim>
        </port>
        <port id="2">
            <dim>1</dim>
            <dim>2304</dim>
            <dim>1</dim>
        </port>
        <port id="3">
            <dim>1</dim>
            <dim>576</dim>
            <dim>1</dim>
        </port>
        <port id="4">
            <dim>1</dim>
            <dim>144</dim>
            <dim>1</dim>
        </port>
    </input>
    <output>
        <port id="5">
            <dim>1</dim>
            <dim>47961</dim>
            <dim>1</dim>
        </port>
    </output>
</layer>
....

<layer id="259" name="regression/concat" precision="FP32" type="Concat">
    <data axis="1"/>
    <input>
        <port id="0">
            <dim>1</dim>
            <dim>35721</dim>
            <dim>4</dim>
        </port>
        <port id="1">
            <dim>1</dim>
            <dim>9216</dim>
            <dim>4</dim>
        </port>
        <port id="2">
            <dim>1</dim>
            <dim>2304</dim>
            <dim>4</dim>
        </port>
        <port id="3">
            <dim>1</dim>
            <dim>576</dim>
            <dim>4</dim>
        </port>
        <port id="4">
            <dim>1</dim>
            <dim>144</dim>
            <dim>4</dim>
        </port>
    </input>
    <output>
        <port id="5">
            <dim>1</dim>
            <dim>47961</dim>
            <dim>4</dim>
        </port>
    </output>
</layer>



However, if I convert a Keras model to which the custom inference layers have been added, I do not get what I expected at the output of the network.

I load the Keras model and convert it to a frozen graph:

from keras import backend as K
from keras_retinanet import models
import tensorflow as tf

def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
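        # Adding every variable to output_names keeps convert_variables_to_constants from pruning them.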
        output_names += [v.op.name for v in tf.global_variables()]
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = tf.graph_util.convert_variables_to_constants(
            session, input_graph_def, output_names, freeze_var_names)
        return frozen_graph

model = models.load_model('resnet50_pascal_07.h5', backbone_name='resnet50')
model = models.convert_model(model)

frozen_graph = freeze_session(K.get_session(), output_names=[out.op.name for out in model.outputs])
tf.train.write_graph(frozen_graph, '/path/', "resnet50_pascal_07.pb", as_text=False)
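To double-check the frozen graph before running the Model Optimizer, something like this can list the final node names (a sketch, assuming TF 1.x as above; the path is a placeholder):

import tensorflow as tf

graph_def = tf.GraphDef()
with open('/path/resnet50_pascal_07.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
# The last nodes of a frozen graph are typically the model outputs.
print([n.name for n in graph_def.node][-5:])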

After that I run the Model Optimizer:

python mo_tf.py  --input_model 'resnet50_pascal_07.pb' --input_shape '[1,480,640,3]' --data_type FP32 --output_dir '/path/' --tensorflow_use_custom_operations_config ./extensions/front/tf/retinanet.json

After that, I load the model and run inference on one image. The result I get in OpenVINO is different from what I get in Keras:

import glob

import cv2
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

# ONLY one image!

fns = glob.glob('imgs/*jpg')

model_xml = 'resnet50_pascal_07.xml'
model_bin = 'resnet50_pascal_07.bin'

plugin = IEPlugin(device="CPU", plugin_dirs='/intel/computer_vision_sdk_2018.5.455/deployment_tools/inference_engine/lib/ubuntu_18.04/intel64/')
plugin.add_cpu_extension('/intel/computer_vision_sdk_2018.5.455/deployment_tools/inference_engine/lib/ubuntu_18.04/intel64/libcpu_extension_avx2.so')
net = IENetwork(model=model_xml, weights=model_bin)

supported_layers = plugin.get_supported_layers(net)
not_supported_layers = [l for l in net.layers.keys() if l not in supported_layers]
print('not supported layers:', not_supported_layers)

input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))

print('out:', len(net.outputs))

net.batch_size = len(fns)
n, c, h, w = net.inputs[input_blob].shape
print(n, c, h, w)

print("Loading model to the plugin")
exec_net = plugin.load(network=net, num_requests=2)
del net

# load images
for idx,fn in enumerate(fns):
    image = cv2.imread(fn)
    if image.shape[:-1] != (h, w):
        print("Image {} is resized from {} to {}".format(fn, image.shape[:-1], (h, w)))
        image = cv2.resize(image, (w, h))
    image = image.transpose((2, 0, 1))  # Change data layout from HWC to CHW
    image = np.expand_dims(image, axis=0)
    image = image.astype(np.float64)
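# NOTE: infer() below runs after the loop, so only the last image read is used (a single image here).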

res = exec_net.infer(inputs={input_blob: image})
res = res[out_blob]

print(res.shape)
print("*"*50)
print(res)
print("*"*50)


I get the following result:

not supported layers: []
out: 1
1 3 480 640
Loading model to the plugin
Image imgs/599.jpg is resized from (1080, 1920) to (480, 640)
(1, 1, 300, 7)
**************************************************
[[[[-1.  0.  0. ...  0.  0.  0.]
   [ 0.  0.  0. ...  0.  0.  0.]
   [ 0.  0.  0. ...  0.  0.  0.]
   ...
   [ 0.  0.  0. ...  0.  0.  0.]
   [ 0.  0.  0. ...  0.  0.  0.]
   [ 0.  0.  0. ...  0.  0.  0.]]]]
**************************************************

Keras result:

boxes, scores, labels (1, 300, 4) (1, 300) (1, 300)
[[[301.5591    60.040943 337.5506   132.44531 ]
  [351.31454   51.48535  373.8011   100.82195 ]
  [369.77106   44.90435  394.83795   96.91844 ]
  ...
  [ -1.        -1.        -1.        -1.      ]
  [ -1.        -1.        -1.        -1.      ]
  [ -1.        -1.        -1.        -1.      ]]]
[[ 0.61092675  0.39506134  0.21634595  0.1704979   0.14281695  0.13980182
   0.1340574   0.10564897  0.08412086  0.08110106  0.07932754  0.07105038
   0.07094233  0.06902117  0.06568456  0.06281518  0.06170008  0.05614603
   0.05522677  0.05040509 -1.         -1.         -1.         -1.
  -1.         -1.         -1.         -1.         -1.         -1.
  -1.         -1.         -1.         -1.         -1.         -1.
  -1.         -1.         -1.         -1.         -1.         -1.
  -1.         -1.         -1.         -1.         -1.         -1.
  -1.         -1.         -1.         -1.         -1.         -1.
  -1.         -1.         -1.         -1.         -1.         -1.
  -1.         -1.         -1.         -1.         -1.         -1.
  -1.         -1.         -1.         -1.         -1.         -1.        ]]
[[ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 -1 -1 -1 -1
  -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
  -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
  -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
  -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
  -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
  -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
  -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
  -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
  -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
  -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
  -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
  -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1]]


What immediately catches the eye is the different output shape of the network: (1, 300, 4), (1, 300), (1, 300) in Keras vs. (1, 1, 300, 7) in OpenVINO.
The release notes officially state that OpenVINO supports RetinaNet (https://software.intel.com/en-us/articles/OpenVINO-RelNotes):
 

TensorFlow*

Added support of the following TensorFlow* operations: Gather, GatherV2, ResourceGather, Sqrt, Square, full support of ResizeBilinear, ReverseSequence near the LSTM loop, Pad/PadV2/MirrorPad which are not fuse-able to convolution.

Added support of the following TensorFlow* topologies: VDCNN, Unet, A3C, DeepSpeech, lm_1b, lpr-net, CRNN, NCF, RetinaNet, DenseNet, ResNext.

Added support for Reverse and Bi-directional forms of LSTM loops in the TensorFlow* models.

Added ability to load TensorFlow* model from sharded checkpoints.

Fixed bug with conversion of the TensorFlow* model with Split/Unstack operations where not all output tensors are used.


However, I did not find any mention of which implementation this support covers.
I suspect that the code that performs the conversion is not working properly, but I lack the expertise to fix it: /intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/extensions/front/tf/RetinaNetFilteredDetectionsReplacement.py
I will be grateful for any help.

Koriukin__Maksim
Beginner

I suspect the code that performs the conversion is not working properly, but I lack the expertise to fix it:

intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/extensions/front/tf/RetinaNetFilteredDetectionsReplacement.py
 

Shubha_R_Intel
Employee

Dear Maksim, I sent you a message asking for your full Keras code, so that I can reproduce this issue.

Thank you for using OpenVINO!

Shubha

Koriukin__Maksim
Beginner

Hi, Shubha.
To reproduce the error, follow these steps:
1) Clone the repository https://github.com/Maxfashko/keras-retinanet-inference
Follow the instructions to run my pipeline. Since the weights contained in the official RetinaNet repository are already converted, you need to download the unconverted weights: https://yadi.sk/d/9y7F2iOf5_yL-w
1.1) You need to change the script code slightly to see the output shapes of the network: https://github.com/Maxfashko/keras-retinanet-inference/blob/master/bin/inference_retinanet.py#L63
Just add this code:

import sys  # needed for the early exit below

reg, cl = self.model.predict_on_batch(np.array(img_batch))
print("cl", np.array(cl).shape)
print("reg", np.array(reg).shape)
print(cl)
print(reg)
sys.exit(-1)

Well, the command to run the script will look like this:

./bin/inference_retinanet.py \
--weights snapshot/resnet50_pascal_07.h5 \
--convert false \
--data data/image.jpg \
--labels data/coco_labels.json

You should see the following:

cl (1, 329706, 1)
reg (1, 329706, 4)
[[[0.00734594]
  [0.00750174]
  [0.00769961]
  ...
  [0.00909076]
  [0.01096065]
  [0.01178531]]]
[[[ 0.20512821  0.07351568  0.01868737  0.06051755]
  [ 0.19080408  0.06185164 -0.01386412  0.12379301]
  [ 0.14048176  0.02960548 -0.00405663  0.07327364]
  ...
  [-0.10057355 -0.24512951 -0.22110356 -0.18410827]
  [-0.18897493 -0.17596444 -0.11185227 -0.05784295]
  [-0.0770526  -0.06576442 -0.01173029  0.01229405]]]

2) Now convert the weights from *.h5 to a frozen graph (*.pb):

#!/usr/bin/env python

import os
import argparse

import tensorflow as tf
from keras import backend as K
from keras_retinanet import models


def parse_args(args):
    parser = argparse.ArgumentParser(description='convert model')
    parser.add_argument('--input', help='Path to *.h5', type=str, required=True)
    return parser.parse_args(args)


def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = tf.graph_util.convert_variables_to_constants(
            session, input_graph_def, output_names, freeze_var_names)
        return frozen_graph


def main(args=None):
    args = parse_args(args)
    weights_name = args.input

    dirname = os.path.dirname(weights_name)
    basename = os.path.basename(weights_name)
    fn, ext = os.path.splitext(basename)

    model = models.load_model(weights_name, backbone_name='resnet50')
    #model = models.convert_model(model)
    frozen_graph = freeze_session(K.get_session(), output_names=[out.op.name for out in model.outputs])
    tf.train.write_graph(frozen_graph, dirname, f'{fn}.pb', as_text=False)
    print(f'weights saved: {dirname}')


if __name__ == '__main__':
    main()

 

3) Use the Model Optimizer (note that convert_model() is commented out in the script above, so the frozen graph keeps the raw regression/classification outputs and no retinanet.json config is used here):

python mo_tf.py  --input_model '/resnet50_pascal_07.pb' --input_shape '[1,1080,1920,3]' --output_dir '/path/'

4) Run OpenVINO inference:
 

#!/usr/bin/env python

import os
import time
import argparse

import cv2
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin


def parse_args(args):
    parser = argparse.ArgumentParser(description='convert model')
    parser.add_argument(
        '--img',
        help='path to image',
        type=str,
        required=True
    )
    parser.add_argument(
        '--xml',
        help='path to xml',
        type=str,
        required=True
    )
    parser.add_argument(
        '--bin',
        help='path to bin',
        type=str,
        required=True
    )
    parser.add_argument(
        '--plugin',
        help='path to plugin dir',
        type=str,
        required=False,
        default='/home/maksim/libs/intel/computer_vision_sdk_2018.5.455/deployment_tools/inference_engine/lib/ubuntu_18.04/intel64/'
    )
    parser.add_argument(
        '--cpu-ext',
        help='path to plugin dir',
        type=str,
        required=False,
        default='/home/maksim/libs/intel/computer_vision_sdk_2018.5.455/deployment_tools/inference_engine/lib/ubuntu_18.04/intel64/libcpu_extension_sse4.so'
    )
    return parser.parse_args(args)


def main(args=None):
    args=parse_args(args)

    model_xml = args.xml
    model_bin = args.bin
    img_fn = args.img

    plugin = IEPlugin(device="CPU", plugin_dirs=args.plugin)
    plugin.add_cpu_extension(args.cpu_ext)
    net = IENetwork(model=model_xml, weights=model_bin)

    supported_layers = plugin.get_supported_layers(net)
    not_supported_layers = [l for l in net.layers.keys() if l not in supported_layers]
    print('not supported layers:', not_supported_layers)

    input_blob = 'input_1'
    cl_out_blob = 'classification/concat'
    rg_out_blob = 'regression/concat'

    print('out:', len(net.outputs))
    print('outputs', net.outputs)

    net.batch_size = 1
    n, c, h, w = net.inputs[input_blob].shape
    print(n, c, h, w)

    print("Loading model to the plugin")
    exec_net = plugin.load(network=net)
    del net

    # load images
    image = cv2.imread(img_fn)
    if image.shape[:-1] != (h, w):
        print("Image {} is resized from {} to {}".format(img_fn, image.shape[:-1], (h, w)))
        image = cv2.resize(image, (w, h))

    image = image.transpose((2, 0, 1))  # Change data layout from HWC to CHW
    image = np.expand_dims(image, axis=0)

    res = exec_net.infer(inputs={input_blob: image})
    classifications = res[cl_out_blob]
    regressions = res[rg_out_blob]

    print('cl', classifications.shape)
    print('rg', regressions.shape)

    print('cl', classifications)
    print('rg', regressions)


if __name__ == '__main__':
    main()

 

You should see the following:

cl (1, 389205, 1)
rg (1, 389205, 4)
cl [[[0.00731658]
  [0.00593163]
  [0.00490844]
  ...
  [0.013796  ]
  [0.01271339]
  [0.01176905]]]
rg [[[ 0.19778913  0.42981434  0.4873137   0.4337933 ]
  [ 0.35420343  0.34238777  0.3551264   0.35943335]
  [ 0.34856698  0.3398867   0.34825876  0.355788  ]
  ...
  [-0.02663497 -0.05773252  0.0025167   0.12971793]
  [ 0.10235062  0.02202121 -0.05354824 -0.08908419]
  [-0.2890372  -0.37682706 -0.32895935  0.02852303]]]

panxizhou
Beginner

Hi, Shubha. When I use OpenVINO to deploy the RetinaNet model, I don't understand the output.

I can get results similar to yours, as follows:

cl (1, 67995, 1)
reg (1, 67995, 4)
[[[0.0056458]
  [0.00638934]
  [0.00500558]
  ...
  [0.00484054]
  [0.01096065]
  [0.00460911]]]
[[[ 0.332659  -0.0957022 -0.341365  -0.289769]
  [ 0.299811  -0.412403 -0.01386412  0.12379301]
  [ 0.14048176  0.02960548 -0.00405663  0.07327364]
  ...
  [-0.10057355 -0.24512951 -0.22110356 -0.18410827]
  [-0.18897493 -0.17596444 -0.11185227 -0.05784295]
  [-0.0770526  -0.06576442 -0.01173029  0.01229405]]]

But now I don't know how to convert this data to get the coordinates of the boxes. I need the coordinates of the targets to perform localization.

Thank you very much!

Shubha_R_Intel
Employee

Dear Maksim, thank you for such detailed reproduction steps. I will look into it and post my findings here.

Shubha

Perevozchikov__Georg

Hello, Maksim. Are you from Russia? :) I've bumped into the same problem.

I am trying to use OpenVINO with RetinaNet to optimize the network.

My project is: https://github.com/gosha20777/rescuer-la

Can you send me your contact?

My E-mail is: gosha20777@live.ru

My Telegram is: @gosha20777

 

Shubha_R_Intel
Employee

Dear Maksim, I don't want you to think I'm ignoring you. I am working on reproducing this right now. Thanks for your detailed steps. I will report back here.

Thanks for your patience -

Shubha

Shubha_R_Intel
Employee

Dear Maksim, my attempts to run inference_retinanet.py bombed. Honestly, I don't have the bandwidth to fully debug what went wrong (since inference_retinanet.py is not Intel code), but suffice it to say that I use Windows and this line of code threw an exception:

handler = Handler(data_provider=data_provider, **params)

The exact error I get is this one:

https://github.com/sbraz/pymediainfo/issues/39

https://stackoverflow.com/questions/36453900/python-pymediainfo-module-error-126-the-specified-module-could-not-be-found

I also had to install a bunch of things that were missing (pandas, pymediainfo, etc.).

However, all is not lost. For issues like this one, it's enough if I get the following items from you:

1) The frozen .pb (I don't have this from you). Please zip it up and send it to me via private message.

2) The full IR XML and the mo command you used to produce that XML (the mo command was given but the full XML was not - please zip it up and PM it to me).

3) The OpenVINO inference code (which you have posted above). Incidentally, have you tried one of our samples directly rather than writing your own code? Your code seems pretty straightforward, so why not just use classification_sample.py under inference_engine\samples\python_samples? I ask because the fewer variables we have, the more straightforward the debugging will be.
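For reference, a typical invocation of that sample would look like this (model and image paths are placeholders):

python classification_sample.py -m resnet50_pascal_07.xml -i imgs/599.jpg -d CPU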

Looking forward to hearing from you.

Thanks for using OpenVino !

Shubha

pkhan10
New Contributor I

Hey Shubha, Maksim,
I am facing the same issue with the OpenVINO model; the detections are not coming out well.
I used resnet50_coco_best_v2.1.0.h5 (from here) and converted it to a frozen model before using the Model Optimizer.

I used the following script to convert the model:

python mo_tf.py --input_model "/home/prateek/Downloads/Notebooks/personal/openvino/model_files/Retinanet/tf_model/resnet50_coco_best_v2.1.0.pb" --input_shape '[1,480,640,3]' --data_type FP32 --tensorflow_use_custom_operations_config extensions/front/tf/retinanet.json --reverse_input_channels --output_dir ~/Downloads/Notebooks/personal/openvino/model_files/

I ran both the raw RetinaNet model and the converted OpenVINO model and am sharing two small clips here; please download and view them.

OpenVINO gave far fewer detections at the same threshold of 0.3; I believe it is an issue related to model conversion.


 

 

Tech__Xplorazzi
Beginner

Hi Maksim - were you able to resolve the issue at your end? We are running into a similar issue and don't know how to proceed. Is it a RetinaNet bug, and nothing to do with OpenVINO?

 

Prateek - it looks like you were able to get detections, which is beyond what Maksim had reported. Can you please share the versions of TensorFlow and OpenVINO, and the Intel platform, you are using?

 

thanks

Dash

Shubha_R_Intel
Employee

Dear RetinaNet guys, 

If you can come up with a simple, self-contained program which demonstrates the problem, I'd be happy to investigate.

Also, please download OpenVINO 2019 R3 if you haven't already.

Thanks,

Shubha

Soni__Shre_Yash
Beginner

Hi,

Was the issue of output shape mismatch solved? I'm stuck with the same problem.

I'm using openvino_2019.3.334

Has anyone successfully optimized and run inference using RetinaNet?

A_G__Ashwin
Beginner

Hi guys,

 

For anyone facing this issue, I was able to get inference working through OpenVINO. I am using 2020.1.023. I have been using the fizyr RetinaNet implementation found here - fizyr/retinanet

Certain takeaways:

1. Make sure that the preprocessing used for the images during training is followed in exactly the same way during inference (see the preprocessing sketch after the decode function below).

2. OpenVINO returns detections differently from the frozen graph. The frozen graph returns a list of bounding boxes (300, 4), scores (300, 1) and labels (300, 1), but OpenVINO returns a concatenated version of the same, (300, 7), for every image passed.

OpenVINO format decode for the 7 columns: 

  • Labels are contained in the second column (index 1)
  • Scores are contained in the third column (index 2)
  • Boxes are in the next four columns as (xmin, ymin, xmax, ymax) (indices 3, 4, 5, 6)

A simple function to decode the detections:

def decode_openvino_detections(detections, input_shape = (800, 1333)):
    """
    Converts openvino detections to understandable format

    Parameters:
    detections: Detections obtained from net.infer() method.
    input_shape: This is required to scale the bounding boxes coordinates passed.

    Returns:
    boxes: The bounding box coordinates representing (xmin, ymin, xmax, ymax)
    scores: The confidence of the detections
    labels: The class of the object detected

    """
    detections = detections[:,:,detections[:,:,:,2].argsort()[0][0][::-1],:] # sort detections on score
    labels = detections[:,:,:,1].astype(int)
    scores = detections[:,:,:,2]
    boxes = detections[:,:,:,(3,4,5,6)] # in decimal
    # rescale to pixel
    boxes[:,:,:,(0,2)] = boxes[:,:,:,(0,2)]*input_shape[1]
    boxes[:,:,:,(1,3)] = boxes[:,:,:,(1,3)]*input_shape[0]

    return boxes, scores, labels
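Regarding takeaway 1, here is a minimal sketch of fizyr-style preprocessing (that repo's default 'caffe' mode: float32 BGR with the ImageNet channel means subtracted; verify against your own training pipeline):

import numpy as np

def preprocess_image_caffe(image_bgr):
    # fizyr's default 'caffe' mode: float32 BGR minus the ImageNet channel means
    x = image_bgr.astype(np.float32)
    x -= np.array([103.939, 116.779, 123.68], dtype=np.float32)  # B, G, R means
    return x

And a hypothetical usage of the decode function above, assuming the detection output blob is named 'DetectionOutput':

boxes, scores, labels = decode_openvino_detections(res['DetectionOutput'], input_shape=(480, 640))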

 

Bussola__Riccardo

Hi A G,

I'm trying to convert and use the same RetinaNet implementation from fizyr/retinanet.

In particular, this pretrained resnet50_coco_best_v2.1.0.h5.
I use this command to convert the model:

python mo_tf.py --input_model "path_model/model.pb" --input_shape '[1,480,640,3]' --data_type FP32 --tensorflow_use_custom_operations_config extensions/front/tf/retinanet.json --reverse_input_channels --output_dir "path_output_model"

I'm using opencvat to load the model, but even when I try to load the model with a Python script, the error is:

"Error reading network: in Layer conv1_relu/Relu: trying to connect an edge to non existing output port: 7.5"
"Model was not properly created/updated. Test failed: Error reading network: in Layer conv1_relu/Relu: trying to connect an edge to non existing output port: 7.5"

Can you please guide me through the process?
Thanks, it's very important.

 
