Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

IR Model from TensorFlow

Sho_I_
Beginner

Hi,

When converting a TensorFlow model to an IR model, a Reshape layer is required between the Convolution layer and the Fully Connected layer.

However, when converting a Caffe model to an IR model, no Reshape layer is necessary between the Convolution layer and the Fully Connected layer.

Please tell me how to remove the Reshape layer between the Convolution layer and the Fully Connected layer when converting from TensorFlow to an IR model.

 

TensorFlow sample※

    conv = tf.nn.conv2d(pool,
                        conv2_weights,
                        strides=[1, 1, 1, 1],
                        padding='SAME')
    relu = tf.nn.relu(tf.nn.bias_add(conv, conv2_biases))
    pool = tf.nn.max_pool(relu,
                          ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1],
                          padding='SAME')
    # Reshape the feature map cuboid into a 2D matrix to feed it to the
    # fully connected layers.
    pool_shape = pool.get_shape().as_list()
    reshape = tf.reshape(
        pool,
        [pool_shape[0], pool_shape[1] * pool_shape[2] * pool_shape[3]])
    # Fully connected layer. Note that the '+' operation automatically
    # broadcasts the biases.
    hidden = tf.nn.relu(tf.matmul(reshape, fc1_weights) + fc1_biases)

※https://github.com/tensorflow/models/blob/master/tutorials/image/mnist/convolutional.py
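(One common way to avoid the explicit tf.reshape here - offered as a sketch, not something from the original post - is to express the fully connected layer as a convolution whose kernel covers the entire pooled feature map. fc1_weights_conv below is a hypothetical [7, 7, 64, 512] rearrangement of fc1_weights.)

    # Sketch only: a VALID convolution whose 7x7 kernel spans the pooled map
    # replaces reshape + matmul, so no Reshape op enters the graph.
    hidden = tf.nn.relu(
        tf.nn.bias_add(
            tf.nn.conv2d(pool, fc1_weights_conv,
                         strides=[1, 1, 1, 1], padding='VALID'),
            fc1_biases))  # output shape: [batch, 1, 1, 512]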

IR

		<layer id="4" name="Conv2D_3" precision="FP32" type="Convolution">
			<data auto_pad="same_upper" dilations="1,1" group="1" kernel="5,5" output="64" pads_begin="2,2" pads_end="2,2" strides="1,1"/>
			<input>
				<port id="0">
					<dim>64</dim>
					<dim>32</dim>
					<dim>14</dim>
					<dim>14</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>64</dim>
					<dim>64</dim>
					<dim>14</dim>
					<dim>14</dim>
				</port>
			</output>
			<blobs>
				<weights offset="3328" size="204800"/>
				<biases offset="208128" size="256"/>
			</blobs>
		</layer>
		<layer id="5" name="Relu_4" precision="FP32" type="ReLU">
			<input>
				<port id="0">
					<dim>64</dim>
					<dim>64</dim>
					<dim>14</dim>
					<dim>14</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>64</dim>
					<dim>64</dim>
					<dim>14</dim>
					<dim>14</dim>
				</port>
			</output>
		</layer>
		<layer id="6" name="MaxPool_3" precision="FP32" type="Pooling">
			<data auto_pad="same_upper" exclude-pad="true" kernel="2,2" pads_begin="0,0" pads_end="0,0" pool-method="max" strides="2,2"/>
			<input>
				<port id="0">
					<dim>64</dim>
					<dim>64</dim>
					<dim>14</dim>
					<dim>14</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>64</dim>
					<dim>64</dim>
					<dim>7</dim>
					<dim>7</dim>
				</port>
			</output>
		</layer>
		<layer id="7" name="Reshape_1/shape/Output_0/Data__const" precision="FP32" type="Const">
			<output>
				<port id="1">
					<dim>2</dim>
				</port>
			</output>
			<blobs>
				<custom offset="208384" size="8"/>
			</blobs>
		</layer>
		<layer id="8" name="Reshape_1" precision="FP32" type="Reshape">
			<input>
				<port id="0">
					<dim>64</dim>
					<dim>64</dim>
					<dim>7</dim>
					<dim>7</dim>
				</port>
				<port id="1">
					<dim>2</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>64</dim>
					<dim>3136</dim>
				</port>
			</output>
		</layer>
		<layer id="9" name="MatMul_2" precision="FP32" type="FullyConnected">
			<data out-size="512"/>
			<input>
				<port id="0">
					<dim>64</dim>
					<dim>3136</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>64</dim>
					<dim>512</dim>
				</port>
			</output>
			<blobs>
				<weights offset="208392" size="6422528"/>
				<biases offset="6630920" size="2048"/>
			</blobs>
		</layer>

 

Caffe sample※※

layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}

※※https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet.prototxt

IR

		<layer id="3" name="conv2" precision="FP32" type="Convolution">
			<data dilations="1,1" group="1" kernel="5,5" output="50" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>64</dim>
					<dim>20</dim>
					<dim>12</dim>
					<dim>12</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>64</dim>
					<dim>50</dim>
					<dim>8</dim>
					<dim>8</dim>
				</port>
			</output>
			<blobs>
				<weights offset="2080" size="100000"/>
				<biases offset="102080" size="200"/>
			</blobs>
		</layer>
		<layer id="4" name="pool2" precision="FP32" type="Pooling">
			<data exclude-pad="true" kernel="2,2" pads_begin="0,0" pads_end="0,0" pool-method="max" rounding_type="ceil" strides="2,2"/>
			<input>
				<port id="0">
					<dim>64</dim>
					<dim>50</dim>
					<dim>8</dim>
					<dim>8</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>64</dim>
					<dim>50</dim>
					<dim>4</dim>
					<dim>4</dim>
				</port>
			</output>
		</layer>
		<layer id="5" name="ip1" precision="FP32" type="FullyConnected">
			<data out-size="500"/>
			<input>
				<port id="0">
					<dim>64</dim>
					<dim>50</dim>
					<dim>4</dim>
					<dim>4</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>64</dim>
					<dim>500</dim>
				</port>
			</output>
			<blobs>
				<weights offset="102280" size="1600000"/>
				<biases offset="1702280" size="2000"/>
			</blobs>
		</layer>
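Note how the FullyConnected layer above consumes the 4-D pooling output directly: Caffe's InnerProduct flattens its bottom blob implicitly, so the Model Optimizer never needs an explicit Reshape. A quick arithmetic check (mine, not part of the original post) against the blob sizes in this IR:

# Implicit flattening done by Caffe's InnerProduct:
# [64, 50, 4, 4] -> [64, 50 * 4 * 4] = [64, 800]
assert 50 * 4 * 4 == 800
assert 800 * 500 * 4 == 1600000  # FP32 weights blob size in the IR
assert 500 * 4 == 2000           # FP32 biases blob size in the IR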

 

 

8 Replies
HemanthKum_G_Intel

Hi,

There are three ways to achieve this:

1. Cut the part of the model containing such layers in the Model Optimizer (this works for the start or the end of the model; see the sketch after this list).

2. Use Sub-Graph Replacement in the Model Optimizer.

3. Edit the network in the original framework to exclude such layers (refer to the APIs of the respective frameworks, such as TensorFlow and Caffe).
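For option 1, a hedged sketch of cutting the graph just before the Reshape with the Model Optimizer's --output flag (the node name is taken from the IR posted above; treat the exact command as an illustration, not a verified recipe). The cut IR would end at the pooling layer, and the fully connected part would have to be run outside the Inference Engine:

python mo_tf.py --input_model model.pb --output MaxPool_3 -b 64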

Sho_I_
Beginner

I defined a simple network without using the Reshape function.

However, a layer with type="Reshape" is still added to the IR.

I want to use dynamic batch processing, but I cannot because of the Reshape layer.

Specifically, I want to use dynamic batching with the TensorFlow FaceNet model.

 

 

Simple network

import tensorflow.contrib.slim as slim

conv = slim.conv2d(data, 32, 5)
# fully_connected applied to a 4-D tensor makes TF insert a tf.tensordot
# (Reshape + MatMul + Reshape), which is what appears in the IR below.
dense = slim.fully_connected(conv, NUM_LABELS)

Converted IR:
	<layers>
		<layer id="0" name="Placeholder_2" precision="FP32" type="Input">
			<output>
				<port id="0">
					<dim>64</dim>
					<dim>1</dim>
					<dim>28</dim>
					<dim>28</dim>
				</port>
			</output>
		</layer>
		<layer id="1" name="Conv/Conv2D" precision="FP32" type="Convolution">
			<data auto_pad="same_upper" dilations="1,1" group="1" kernel="5,5" output="32" pads_begin="2,2" pads_end="2,2" strides="1,1"/>
			<input>
				<port id="0">
					<dim>64</dim>
					<dim>1</dim>
					<dim>28</dim>
					<dim>28</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>64</dim>
					<dim>32</dim>
					<dim>28</dim>
					<dim>28</dim>
				</port>
			</output>
			<blobs>
				<weights offset="0" size="3200"/>
				<biases offset="3200" size="128"/>
			</blobs>
		</layer>
		<layer id="2" name="Conv/Relu" precision="FP32" type="ReLU">
			<input>
				<port id="0">
					<dim>64</dim>
					<dim>32</dim>
					<dim>28</dim>
					<dim>28</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>64</dim>
					<dim>32</dim>
					<dim>28</dim>
					<dim>28</dim>
				</port>
			</output>
		</layer>
		<layer id="3" name="Conv/Relu/Permute_" precision="FP32" type="Permute">
			<data order="0,2,3,1"/>
			<input>
				<port id="0">
					<dim>64</dim>
					<dim>32</dim>
					<dim>28</dim>
					<dim>28</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>64</dim>
					<dim>28</dim>
					<dim>28</dim>
					<dim>32</dim>
				</port>
			</output>
		</layer>
		<layer id="4" name="fully_connected/Tensordot/Reshape/shape/Output_0/Data__const" precision="FP32" type="Const">
			<output>
				<port id="1">
					<dim>2</dim>
				</port>
			</output>
			<blobs>
				<custom offset="3328" size="8"/>
			</blobs>
		</layer>
		<layer id="5" name="fully_connected/Tensordot/Reshape" precision="FP32" type="Reshape">
			<input>
				<port id="0">
					<dim>64</dim>
					<dim>28</dim>
					<dim>28</dim>
					<dim>32</dim>
				</port>
				<port id="1">
					<dim>2</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>50176</dim>
					<dim>32</dim>
				</port>
			</output>
		</layer>
		<layer id="6" name="fully_connected/Tensordot/MatMul" precision="FP32" type="FullyConnected">
			<data out-size="10"/>
			<input>
				<port id="0">
					<dim>50176</dim>
					<dim>32</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>50176</dim>
					<dim>10</dim>
				</port>
			</output>
			<blobs>
				<weights offset="3336" size="1280"/>
			</blobs>
		</layer>
		<layer id="7" name="fully_connected/Tensordot/shape/Output_0/Data__const" precision="FP32" type="Const">
			<output>
				<port id="1">
					<dim>4</dim>
				</port>
			</output>
			<blobs>
				<custom offset="4616" size="16"/>
			</blobs>
		</layer>
		<layer id="8" name="fully_connected/Tensordot" precision="FP32" type="Reshape">
			<input>
				<port id="0">
					<dim>50176</dim>
					<dim>10</dim>
				</port>
				<port id="1">
					<dim>4</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>64</dim>
					<dim>10</dim>
					<dim>28</dim>
					<dim>28</dim>
				</port>
			</output>
		</layer>
		<layer id="9" name="ScaleShift/fully_connected/BiasAdd" precision="FP32" type="ScaleShift">
			<input>
				<port id="0">
					<dim>64</dim>
					<dim>10</dim>
					<dim>28</dim>
					<dim>28</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>64</dim>
					<dim>10</dim>
					<dim>28</dim>
					<dim>28</dim>
				</port>
			</output>
			<blobs>
				<weights offset="4632" size="40"/>
				<biases offset="4672" size="40"/>
			</blobs>
		</layer>
		<layer id="10" name="fully_connected/Relu" precision="FP32" type="ReLU">
			<input>
				<port id="0">
					<dim>64</dim>
					<dim>10</dim>
					<dim>28</dim>
					<dim>28</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>64</dim>
					<dim>10</dim>
					<dim>28</dim>
					<dim>28</dim>
				</port>
			</output>
		</layer>
	</layers>
	<edges>
		<edge from-layer="0" from-port="0" to-layer="1" to-port="0"/>
		<edge from-layer="1" from-port="3" to-layer="2" to-port="0"/>
		<edge from-layer="2" from-port="1" to-layer="3" to-port="0"/>
		<edge from-layer="3" from-port="1" to-layer="5" to-port="0"/>
		<edge from-layer="4" from-port="1" to-layer="5" to-port="1"/>
		<edge from-layer="5" from-port="2" to-layer="6" to-port="0"/>
		<edge from-layer="6" from-port="2" to-layer="8" to-port="0"/>
		<edge from-layer="7" from-port="1" to-layer="8" to-port="1"/>
		<edge from-layer="8" from-port="2" to-layer="9" to-port="0"/>
		<edge from-layer="9" from-port="3" to-layer="10" to-port="0"/>
	</edges>
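The Permute/Reshape layers above (ids 3, 5, 7, 8) come from the tf.tensordot that slim.fully_connected applies to a 4-D input. A minimal sketch (assuming TF 1.x and a hypothetical frozen model.pb) for listing the graph nodes that the Model Optimizer will translate into such layers:

import tensorflow as tf

# Load the frozen graph and list ops that become Permute/Reshape in the IR.
graph_def = tf.GraphDef()
with open('model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    if node.op in ('Reshape', 'Transpose'):
        print(node.name, node.op)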

 

Shubha_R_Intel
Employee

Dear Sho I.,

In your Model Optimizer command, can you add the switch --keep_shape_ops? Please see the Model Optimizer Developer Guide for more information. Note that --keep_shape_ops allows the Reshape to happen in the Inference Engine rather than in the Model Optimizer.

Can you try it? You may get rid of the Reshape in your IR if you do.

Let us know what happens on this forum.

Thanks,

Shubha

 

Sho_I_
Beginner

Dear Shubha R.,

I tried --keep_shape_ops, but it didn't work.

FaceNet pre-trained model: 20180402-114759

The conversion command I tried:

python mo_tf.py --input_model 20180402-114759.pb --freeze_placeholder_with_value "phase_train->False" -b 1 --keep_shape_ops

20180402-114759.xml

...
		<layer id="309" name="InceptionResnetV1/Block8/Conv2d_1x1/Conv2D" precision="FP32" type="Convolution">
			<data auto_pad="same_upper" dilations="1,1" group="1" kernel="1,1" output="1792" pads_begin="0,0" pads_end="0,0" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>384</dim>
					<dim>3</dim>
					<dim>3</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>1792</dim>
					<dim>3</dim>
					<dim>3</dim>
				</port>
			</output>
			<blobs>
				<weights offset="87439552" size="2752512"/>
				<biases offset="90192064" size="7168"/>
			</blobs>
		</layer>
		<layer id="310" name="InceptionResnetV1/Block8/add" precision="FP32" type="Eltwise">
			<data operation="sum"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1792</dim>
					<dim>3</dim>
					<dim>3</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>1792</dim>
					<dim>3</dim>
					<dim>3</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>1792</dim>
					<dim>3</dim>
					<dim>3</dim>
				</port>
			</output>
		</layer>
		<layer id="311" name="InceptionResnetV1/Logits/AvgPool_1a_8x8/AvgPool" precision="FP32" type="Pooling">
			<data auto_pad="valid" exclude-pad="true" kernel="3,3" pads_begin="0,0" pads_end="0,0" pool-method="avg" strides="1,1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1792</dim>
					<dim>3</dim>
					<dim>3</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>1792</dim>
					<dim>1</dim>
					<dim>1</dim>
				</port>
			</output>
		</layer>
		<layer id="312" name="InceptionResnetV1/Logits/Flatten/flatten/Reshape/DimData_const" precision="FP32" type="Const">
			<output>
				<port id="1">
					<dim>2</dim>
				</port>
			</output>
			<blobs>
				<custom offset="90199232" size="8"/>
			</blobs>
		</layer>
		<layer id="313" name="InceptionResnetV1/Logits/Flatten/flatten/Reshape" precision="FP32" type="Reshape">
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1792</dim>
					<dim>1</dim>
					<dim>1</dim>
				</port>
				<port id="1">
					<dim>2</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>1792</dim>
				</port>
			</output>
		</layer>
		<layer id="314" name="InceptionResnetV1/Bottleneck/MatMul" precision="FP32" type="FullyConnected">
			<data out-size="512"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>1792</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>512</dim>
				</port>
			</output>
			<blobs>
				<weights offset="90199240" size="3670016"/>
			</blobs>
		</layer>
		<layer id="315" name="InceptionResnetV1/Bottleneck/BatchNorm/Reshape/shape/Output_0/Data__const" precision="FP32" type="Const">
			<output>
				<port id="1">
					<dim>4</dim>
				</port>
			</output>
			<blobs>
				<custom offset="93869256" size="16"/>
			</blobs>
		</layer>
		<layer id="316" name="InceptionResnetV1/Bottleneck/BatchNorm/Reshape" precision="FP32" type="Reshape">
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
				</port>
				<port id="1">
					<dim>4</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>512</dim>
					<dim>1</dim>
					<dim>1</dim>
				</port>
			</output>
		</layer>
		<layer id="317" name="Mul1_26917/Fused_Mul_/FusedScaleShift_" precision="FP32" type="ScaleShift">
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>1</dim>
					<dim>1</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>512</dim>
					<dim>1</dim>
					<dim>1</dim>
				</port>
			</output>
			<blobs>
				<weights offset="93869272" size="2048"/>
				<biases offset="93871320" size="2048"/>
			</blobs>
		</layer>
		<layer id="318" name="InceptionResnetV1/Bottleneck/BatchNorm/Shape" precision="FP32" type="Shape">
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>2</dim>
				</port>
			</output>
		</layer>
		<layer id="319" name="InceptionResnetV1/Bottleneck/BatchNorm/Reshape_1" precision="FP32" type="Reshape">
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
					<dim>1</dim>
					<dim>1</dim>
				</port>
				<port id="1">
					<dim>2</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>512</dim>
				</port>
			</output>
		</layer>
		<layer id="320" name="normalize" precision="FP32" type="Normalize">
			<data across_spatial="0" channel_shared="0" eps="1e-10"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>512</dim>
				</port>
			</input>
			<output>
				<port id="2">
					<dim>1</dim>
					<dim>512</dim>
				</port>
			</output>
			<blobs>
				<weights offset="93873368" size="2048"/>
			</blobs>
		</layer>

...

When I run inference using the dynamic batch function, I get an error while loading the model to the plugin:

Such topology cannot be compiled for dynamic batch!
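For context, a minimal sketch (assuming the OpenVINO 2019-era Python API; file names as above) of the loading step where this error is raised:

from openvino.inference_engine import IENetwork, IEPlugin

plugin = IEPlugin(device='CPU')
plugin.set_config({'DYN_BATCH_ENABLED': 'YES'})  # enable dynamic batching
net = IENetwork(model='20180402-114759.xml',
                weights='20180402-114759.bin')
# Fails here for an IR that contains Reshape:
# "Such topology cannot be compiled for dynamic batch!"
exec_net = plugin.load(network=net)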

Shubha_R_Intel
Employee

Dear Sho I.,

Yes, I agree that --keep_shape_ops didn't seem to work as expected. I see that you have attached some files on Google Drive, which I don't have access to. As you've discovered, Reshape is not supported by Dynamic Batching - this is well documented in the OpenVINO documentation. Let me investigate --keep_shape_ops and get back to you.

Thanks for your patience,

Shubha

Sho_I_
Beginner

I want to convert the network without the Reshape layer so that I can use the dynamic batch function with the FaceNet model.
TensorFlow needs a Reshape to connect Convolution layers to Fully Connected layers; Caffe does not.
Please explain how to build the TensorFlow model without a Reshape, or tell me the IR conversion options.

I confirmed that a Reshape layer is added during conversion for both of the following TensorFlow ops (a small check follows below):
slim.fully_connected()
tf.layers.dense()
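A small check (my sketch, assuming TF 1.x; shapes are hypothetical) confirming that a reshape-type node is inserted when either op is applied to a 4-D tensor:

import tensorflow as tf

x = tf.placeholder(tf.float32, [1, 7, 7, 64])
y = tf.layers.dense(x, 10)  # rank > 2 input goes through tf.tensordot
ops = {n.op for n in tf.get_default_graph().as_graph_def().node}
print('Reshape' in ops)  # True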

Shubha_R_Intel
Employee

Dear Sho I.,

I did some research on your issue.

If the command-line parameter --keep_shape_ops is specified, the Model Optimizer keeps operations of type "Shape", and all operations that use the results of these Shape operations automatically appear in the IR as well. This command-line parameter does not affect "Reshape" layers.

In the IR you generated above with --keep_shape_ops, I do see a "Shape" layer (for example, the one named "InceptionResnetV1/Bottleneck/BatchNorm/Shape"), so the command-line parameter worked as expected.

Unfortunately, your only option is to change the model so as to get rid of the "Reshape" (and re-train it!).

Thanks, and I hope it helps,


Shubha

Sho_I_
Beginner

Dear Shubha, 

Thank you very much for your reply.

With the Reshape layer, the dynamic batch feature cannot be used, and inference performance also decreases.

I converted the same classification model (e.g., AlexNet) with both Caffe and TensorFlow.

The model converted from TensorFlow has a Reshape layer added.

Inference is 20% slower just because that Reshape layer is there.

I would like a TensorFlow implementation method, or an IR conversion method, that removes the Reshape layer when converting a TensorFlow model to IR.

Thanks,
