Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Inference Engine crash on SiamFC model load

Tai-Min
Beginner

I have created a simple fully convolutional Siamese network using TensorFlow 2.0:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class SiameseConvEmbedding(layers.Layer):
    def __init__(self):
        super(SiameseConvEmbedding, self).__init__()
        self.conv1 = layers.Conv2D(96, 11, strides=2, padding='valid')
        self.relu1 = layers.ReLU()
        self.pool1 = layers.MaxPool2D(pool_size=3, strides=2, padding='valid')
        self.conv2 = layers.Conv2D(256, 5, strides=1, padding='valid')
        self.relu2 = layers.ReLU()
        self.pool2 = layers.MaxPool2D(pool_size=3, strides=2, padding='valid')
        self.conv3 = layers.Conv2D(384, 3, strides=1, padding='valid')
        self.relu3 = layers.ReLU()
        self.conv4 = layers.Conv2D(384, 3, strides=1, padding='valid')
        self.relu4 = layers.ReLU()
        self.conv5 = layers.Conv2D(256, 3, strides=1, padding='valid')

    def call(self, inputs, training=None):
        res = self.conv1(inputs)
        res = self.relu1(res)
        res = self.pool1(res)
        res = self.conv2(res)
        res = self.relu2(res)
        res = self.pool2(res)
        res = self.conv3(res)
        res = self.relu3(res)
        res = self.conv4(res)
        res = self.relu4(res)
        res = self.conv5(res)

        return res

class SiamFC(keras.Model):
    def __init__(self):
        super(SiamFC, self).__init__()
        self.embedding = SiameseConvEmbedding()

    def convSingle(self, x):
        # Convolve a single (source embedding, exemplar embedding) pair from the batch.
        return tf.squeeze(tf.nn.convolution(tf.expand_dims(x[0], 0), tf.expand_dims(x[1], -1)), 0)
        
    def call(self, inputs, training=None):

        exemplar = inputs[0]
        source_frame = inputs[1]

        filters = self.embedding(exemplar, training=training)
        inputs = self.embedding(source_frame, training=training)

        # cross-correlation layer
        result = tf.map_fn(
            fn=self.convSingle,
            elems=(inputs, filters),
            fn_output_signature=tf.TensorSpec([None, None, 1])
        )

        return tf.squeeze(result, 0)
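For intuition, the map_fn/convolution pair above computes, per batch element, a sliding dot product of the exemplar embedding over the source-frame embedding. Here is a minimal NumPy sketch of that cross-correlation step (channels-last layout, 'valid' padding, one batch element; the function name cross_correlate is just illustrative):

```python
import numpy as np

def cross_correlate(search, exemplar):
    """Slide the exemplar embedding over the search embedding and take
    a dot product at every position ('valid' padding, no flipping)."""
    H, W, C = search.shape
    h, w, _ = exemplar.shape
    out = np.empty((H - h + 1, W - w + 1), dtype=search.dtype)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            out[i, j] = np.sum(search[i:i + h, j:j + w] * exemplar)
    return out
```

The key point is that the exemplar embedding plays the role of the convolution filter, so the filter weights depend on the input rather than being fixed constants of the graph.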

Model Optimizer parses the network into intermediate representation flawlessly, but when I try to load the network using

InferenceEngine::Core::ReadNetwork

it crashes (loading a standard sequential convolutional network works, so it's not a bug in my code). Below is a dump from the debugger:

1 RaiseException KERNELBASE 0x7ffabdcd3e49
2 CxxThrowException VCRUNTIME140D 0x7ffaaa97b230
3 ngraph::op::v0::ConvolutionBias::ConvolutionBias ngraphd 0x7ffa67d7efe6
4 ngraph::op::v0::ConvolutionBias::ConvolutionBias ngraphd 0x7ffa6694c2a0
5 ngraph::op::v0::ConvolutionBias::ConvolutionBias ngraphd 0x7ffa6679a70c
6 ngraph::op::v0::ConvolutionBias::ConvolutionBias ngraphd 0x7ffa6694b0f9
7 InferenceEngine::RangeLayer::RangeLayer inference_engine_ir_readerd 0x7ffa7469ace1
8 InferenceEngine::RangeLayer::RangeLayer inference_engine_ir_readerd 0x7ffa746f78c4
9 InferenceEngine::RangeLayer::RangeLayer inference_engine_ir_readerd 0x7ffa74767d6e
10 InferenceEngine::RangeLayer::RangeLayer inference_engine_ir_readerd 0x7ffa7474588d
11 InferenceEngine::RangeLayer::RangeLayer inference_engine_ir_readerd 0x7ffa7473e2c9
12 InferenceEngine::RangeLayer::RangeLayer inference_engine_ir_readerd 0x7ffa7473a2a2
13 InferenceEngine::RangeLayer::RangeLayer inference_engine_ir_readerd 0x7ffa74791978
14 InferenceEngine::TensorDesc::getDims inference_engined 0x7ffa843e3e48
15 InferenceEngine::TensorDesc::getDims inference_engined 0x7ffa843e97f3
16 InferenceEngine::TensorDesc::getDims inference_engined 0x7ffa84380522
17 InferenceEngine::TensorDesc::getDims inference_engined 0x7ffa843c015a
18 loadNetwork inference_helpers.cpp 77 0x7ff755d34ea3
19 PointerTrackerThread::loadNetworkModel pointertrackerthread.cpp 15 0x7ff755d878c2
20 PointerTrackerThread::start pointertrackerthread.cpp 45 0x7ff755d87ce4

I have found out that

tf.nn.convolution

causes the Inference Engine to crash. I have tried

tf.nn.conv2d

instead, without success.

I have also found out that

tf.nn.convolution

is successfully parsed by Model Optimizer:

		<layer id="2" name="StatefulPartitionedCall/functional_3/siamese_conv_embedding/conv2d/Conv2D_1" type="Convolution" version="opset1">
			<data auto_pad="valid" dilations="1,1" output_padding="0,0" pads_begin="0,0" pads_end="0,0" strides="2,2"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>3</dim>
					<dim>255</dim>
					<dim>255</dim>
				</port>
				<port id="1">
					<dim>96</dim>
					<dim>3</dim>
					<dim>11</dim>
					<dim>11</dim>
				</port>
			</input>
			<output>
				<port id="2" precision="FP32">
					<dim>1</dim>
					<dim>96</dim>
					<dim>123</dim>
					<dim>123</dim>
				</port>
			</output>
		</layer>

Is there anything I missed, or is there any workaround to make this network work?

1 Solution
Max_L_Intel
Moderator

Hi @Tai-Min 

We have started providing support for your case within the GitHub thread that you opened: https://github.com/openvinotoolkit/openvino/issues/2092

If there is any news from our side about dynamic weights model support, we'll share it there.
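(Not an official recommendation, but a possible interim workaround while dynamic-weight convolutions are unsupported: if the embedding sub-network alone converts and loads cleanly, run it on both inputs on the device and perform the cross-correlation step on the host, e.g. with NumPy. The helper name host_xcorr is illustrative:)

```python
import numpy as np

def host_xcorr(search_emb, exemplar_emb):
    # search_emb: (H, W, C) embedding of the source frame
    # exemplar_emb: (h, w, C) embedding of the exemplar, used as the filter
    windows = np.lib.stride_tricks.sliding_window_view(
        search_emb, exemplar_emb.shape[:2], axis=(0, 1))
    # windows: (H-h+1, W-w+1, C, h, w); contract over channel and window dims
    return np.einsum('ijchw,hwc->ij', windows, exemplar_emb)
```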

Thank you.


3 Replies
Sahira_Intel
Moderator

Hi Tai-Min,

Can you please post the errors you are getting so I can understand the problem a little better?

Sincerely,

Sahira 

Max_L_Intel
Moderator

This thread will no longer be monitored. If you need any additional information from Intel, please submit a new question.
