Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer-vision-related on Intel® platforms.

Concat layer not supported yet?

idata
Employee

Caffe → NCS stick.

I am trying to check MTCNN's o_12net.

Because o_12net has two output layers, conv4_2 and prob, and both of their outputs are important, we need two output layers. But it seems that NCSDK 2.05 only supports one output layer, so I decided to use a Concat layer to combine the conv4_2 (output shape 1,4,1,1) and prob (output shape 1,2,1,1) layers.

However, the output dimensions behave incorrectly. With

mvNCCheck 12net.prototxt -w 12net.caffemodel -s 12 -on out

the output shape size is 4, while with

mvNCCheck 12net.prototxt -w 12net.caffemodel -s 12 -on out -ec

the output shape size is 2.

Is there something wrong with the Concat layer for Caffe models?
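The Concat layer I appended looks roughly like this (a sketch, not the exact file; the top name matches the -on out argument above, and the expected result is a (1,6,1,1) blob):

layer {
  name: "out"
  type: "Concat"
  bottom: "conv4_2"   # (1,4,1,1)
  bottom: "prob"      # (1,2,1,1)
  top: "out"
  concat_param {
    axis: 1           # concatenate along channels -> expected (1,6,1,1)
  }
}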

idata
Employee

Or is there any way to get both layers' outputs?

idata
Employee

OK, now I can confirm that the Concat layer has bugs in NCSDK 2.

idata
Employee

@zufeifei Yes, there are some bugs with the NCSDK's Concat layer at the moment. Right now it doesn't support Concat being the last layer. You can bypass this by adding a dummy Reshape layer, as shown in https://ncsforum.movidius.com/discussion/comment/2831/#Comment_2831.
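For your 12net case that would mean appending something like this after the Concat (a sketch; the bottom name and the 6-channel shape are assumptions based on your description):

layer {
  name: "out_flat"
  type: "Reshape"
  bottom: "out"    # the Concat top (assumed name)
  top: "out_flat"
  reshape_param {
    shape {
      dim: 1
      dim: 6       # 4 channels from conv4_2 + 2 from prob
    }
  }
}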

idata
Employee

@zufeifei Also, the NCSDK only supports one output at the moment.

idata
Employee

@Tome_at_Intel

Oh! That is a great help.

I will try it immediately!
idata
Employee

@Tome_at_Intel

In Caffe, a Reshape layer following the Concat layer bypasses the problem, but in TensorFlow a Reshape layer does not seem to help bypass the Concat issue.
idata
Employee

@Tome_at_Intel

Hi,

I just finished testing onet.prototxt. I find that if the Concat layer has three bottoms, an "IndexError: list index out of range" occurs even though a Reshape layer follows it. Are there any tricks to deal with this problem?

Here is the prototxt; I hope it helps you track down the problem:

 

name: "48Net"

 

input: "data"

 

input_shape{

 

dim: 1

 

dim: 3

 

dim: 48

 

dim: 48

 

}

 

layer {

 

name: "conv1"

 

type: "Convolution"

 

bottom: "data"

 

top: "conv1"

 

param {

 

lr_mult: 1

 

decay_mult: 1

 

}

 

param {

 

lr_mult: 2

 

decay_mult: 1

 

}

 

convolution_param {

 

num_output: 32

 

kernel_size: 3

 

stride: 1

 

weight_filler {

 

type: "xavier"

 

}

 

bias_filler {

 

type: "constant"

 

value: 0

 

}

 

}

 

}

 

layer {

 

name: "prelu1"

 

type: "PReLU"

 

bottom: "conv1"

 

top: "prelu1"

 

}

 

layer {

 

name: "pool1"

 

type: "Pooling"

 

bottom: "prelu1"

 

top: "pool1"

 

pooling_param {

 

pool: MAX

 

kernel_size: 3

 

stride: 2

 

}

 

}

 

layer {

 

name: "conv2"

 

type: "Convolution"

 

bottom: "pool1"

 

top: "conv2"

 

param {

 

lr_mult: 1

 

decay_mult: 1

 

}

 

param {

 

lr_mult: 2

 

decay_mult: 1

 

}

 

convolution_param {

 

num_output: 64

 

kernel_size: 3

 

stride: 1

 

weight_filler {

 

type: "xavier"

 

}

 

bias_filler {

 

type: "constant"

 

value: 0

 

}

 

}

 

}

 

layer {

 

name: "prelu2"

 

type: "PReLU"

 

bottom: "conv2"

 

top: "prelu2"

 

}

 

layer {

 

name: "pool2"

 

type: "Pooling"

 

bottom: "prelu2"

 

top: "pool2"

 

pooling_param {

 

pool: MAX

 

kernel_size: 3

 

stride: 2

 

}

 

}

 

layer {

 

name: "conv3"

 

type: "Convolution"

 

bottom: "pool2"

 

top: "conv3"

 

param {

 

lr_mult: 1

 

decay_mult: 1

 

}

 

param {

 

lr_mult: 2

 

decay_mult: 1

 

}

 

convolution_param {

 

num_output: 64

 

kernel_size: 3

 

weight_filler {

 

type: "xavier"

 

}

 

bias_filler {

 

type: "constant"

 

value: 0

 

}

 

}

 

}

 

layer {

 

name: "prelu3"

 

type: "PReLU"

 

bottom: "conv3"

 

top: "prelu3"

 

}

 

layer {

 

name: "pool3"

 

type: "Pooling"

 

bottom: "prelu3"

 

top: "pool3"

 

pooling_param {

 

pool: MAX

 

kernel_size: 2

 

stride: 2

 

}

 

}

 

layer {

 

name: "conv4"

 

type: "Convolution"

 

bottom: "pool3"

 

top: "conv4"

 

param {

 

lr_mult: 1

 

decay_mult: 1

 

}

 

param {

 

lr_mult: 2

 

decay_mult: 1

 

}

 

convolution_param {

 

num_output: 128

 

kernel_size: 2

 

weight_filler {

 

type: "xavier"

 

}

 

bias_filler {

 

type: "constant"

 

value: 0

 

}

 

}

 

}

 

layer {

 

name: "prelu4"

 

type: "PReLU"

 

bottom: "conv4"

 

top: "prelu4"

 

}

 

layer {

 

name: "conv5"

 

type: "InnerProduct"

 

bottom: "prelu4"

 

top: "conv5"

 

param {

 

lr_mult: 1

 

decay_mult: 1

 

}

 

param {

 

lr_mult: 2

 

decay_mult: 1

 

}

 

inner_product_param {

 

#kernel_size: 3

 

num_output: 256

 

weight_filler {

 

type: "xavier"

 

}

 

bias_filler {

 

type: "constant"

 

value: 0

 

}

 

}

 

}

 

layer {

 

name: "prelu5"

 

type: "PReLU"

 

bottom: "conv5"

 

top: "prelu5"

 

}

 

layer {

 

name: "conv6_1"

 

type: "InnerProduct"

 

bottom: "prelu5"

 

top: "conv6_1"

 

param {

 

lr_mult: 1

 

decay_mult: 1

 

}

 

param {

 

lr_mult: 2

 

decay_mult: 1

 

}

 

inner_product_param {

 

#kernel_size: 1

 

num_output: 2

 

weight_filler {

 

type: "xavier"

 

}

 

bias_filler {

 

type: "constant"

 

value: 0

 

}

 

}

 

}

 

layer {

 

name: "conv6_2"

 

type: "InnerProduct"

 

bottom: "prelu5"

 

top: "conv6_2"

 

param {

 

lr_mult: 1

 

decay_mult: 1

 

}

 

param {

 

lr_mult: 2

 

decay_mult: 1

 

}

 

inner_product_param {

 

#kernel_size: 1

 

num_output: 4

 

weight_filler {

 

type: "xavier"

 

}

 

bias_filler {

 

type: "constant"

 

value: 0

 

}

 

}

 

}

 

layer {

 

name: "conv6_3"

 

type: "InnerProduct"

 

bottom: "prelu5"

 

top: "conv6_3"

 

param {

 

lr_mult: 1

 

decay_mult: 1

 

}

 

param {

 

lr_mult: 2

 

decay_mult: 1

 

}

 

inner_product_param {

 

#kernel_size: 1

 

num_output: 10

 

weight_filler {

 

type: "xavier"

 

}

 

bias_filler {

 

type: "constant"

 

value: 0

 

}

 

}

 

}

 

layer {

 

name: "prob1"

 

type: "Softmax"

 

bottom: "conv6_1"

 

top: "prob1"

 

}

 

layer{

 

name:"concat1_out"

 

type:"Concat"

 

bottom:"conv6_3"

 

bottom:"conv6_2"

 

bottom:"prob1"

 

top:"concat1_out"

 

concat_param{

 

axis:1

 

}

 

}

 

layer {

 

name: "output1"

 

type: "Reshape"

 

bottom: "concat1_out"

 

top: "output1"

 

reshape_param {

 

shape {

 

dim: 1

 

dim: 16

 

}

 

}

 

}
idata
Employee

@zufeifei Looks like you need to adjust the final reshape layer to:

 

layer { name: "output1" type: "Reshape" bottom: "concat1_out" top: "output1" reshape_param { shape { dim: 1 dim: 1 dim: 16 } } }
idata
Employee

@Tome_at_Intel

I apologize for troubling you again. I wonder if there is any trick to bypass the Concat layer when it concatenates (1,2,7,3) and (1,4,7,3). Here is the prototxt:

 

name: "PNet"

 

input: "data"

 

input_shape{

 

dim: 1

 

dim: 3

 

dim: 23

 

dim: 15

 

}

 

layer {

 

name: "conv1"

 

type: "Convolution"

 

bottom: "data"

 

top: "conv1"

 

param {

 

lr_mult: 1

 

decay_mult: 1

 

}

 

param {

 

lr_mult: 2

 

decay_mult: 0

 

}

 

convolution_param {

 

num_output: 10

 

kernel_size: 3

 

stride: 1

 

weight_filler {

 

type: "xavier"

 

}

 

bias_filler {

 

type: "constant"

 

value: 0

 

}

 

}

 

}

 

layer {

 

name: "PReLU1"

 

type: "PReLU"

 

bottom: "conv1"

 

top: "PReLU1"

 

}

 

layer {

 

name: "pool1"

 

type: "Pooling"

 

bottom: "PReLU1"

 

top: "pool1"

 

pooling_param {

 

pool: MAX

 

kernel_size: 2

 

stride: 2

 

}

 

}

 

layer {

 

name: "conv2"

 

type: "Convolution"

 

bottom: "pool1"

 

top: "conv2"

 

param {

 

lr_mult: 1

 

decay_mult: 1

 

}

 

param {

 

lr_mult: 2

 

decay_mult: 0

 

}

 

convolution_param {

 

num_output: 16

 

kernel_size: 3

 

stride: 1

 

weight_filler {

 

type: "xavier"

 

}

 

bias_filler {

 

type: "constant"

 

value: 0

 

}

 

}

 

}

 

layer {

 

name: "PReLU2"

 

type: "PReLU"

 

bottom: "conv2"

 

top: "PReLU2"

 

}

 

layer {

 

name: "conv3"

 

type: "Convolution"

 

bottom: "PReLU2"

 

top: "conv3"

 

param {

 

lr_mult: 1

 

decay_mult: 1

 

}

 

param {

 

lr_mult: 2

 

decay_mult: 0

 

}

 

convolution_param {

 

num_output: 32

 

kernel_size: 3

 

stride: 1

 

weight_filler {

 

type: "xavier"

 

}

 

bias_filler {

 

type: "constant"

 

value: 0

 

}

 

}

 

}

 

layer {

 

name: "PReLU3"

 

type: "PReLU"

 

bottom: "conv3"

 

top: "PReLU3"

 

}

 

layer {

 

name: "conv4-1"

 

type: "Convolution"

 

bottom: "PReLU3"

 

top: "conv4-1"

 

param {

 

lr_mult: 1

 

decay_mult: 1

 

}

 

param {

 

lr_mult: 2

 

decay_mult: 0

 

}

 

convolution_param {

 

num_output: 2

 

kernel_size: 1

 

stride: 1

 

weight_filler {

 

type: "xavier"

 

}

 

bias_filler {

 

type: "constant"

 

value: 0

 

}

 

}

 

}

 

layer {

 

name: "conv4_2"

 

type: "Convolution"

 

bottom: "PReLU3"

 

top: "conv4_2"

 

param {

 

lr_mult: 1

 

decay_mult: 1

 

}

 

param {

 

lr_mult: 2

 

decay_mult: 0

 

}

 

convolution_param {

 

num_output: 4

 

kernel_size: 1

 

stride: 1

 

weight_filler {

 

type: "xavier"

 

}

 

bias_filler {

 

type: "constant"

 

value: 0

 

}

 

}

 

}

 

layer {

 

name: "prob1"

 

type: "Softmax"

 

bottom: "conv4-1"

 

top: "prob1"

 

}

 

layer {

 

name: "concat1"

 

type: "Concat"

 

bottom: "prob1"

 

bottom: "conv4_2"

 

top: "concat1"

 

}

 

layer {

 

name: "output"

 

type: "Reshape"

 

bottom: "concat1"

 

top: "output"

 

reshape_param{

 

shape{

 

dim:1

 

dim:6

 

dim:7

 

dim:3

 

}

 

}

 

}
idata
Employee

@zufeifei There isn't a way I can think of to bypass it if you need both the output from the conv4_2 layer and the prob1 layer. Since the NCSDK does not support multiple outputs, using Concat with a Reshape is the only way I know of to do this. Is there a specific reason why you want to bypass the Concat layer?
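If you do keep the Concat + Reshape route, the (1, 6, 7, 3) result can be split back apart on the host; a rough numpy sketch (the channel order follows your Concat bottoms, prob1 then conv4_2):

import numpy as np

# Placeholder for the tensor read back from the NCS, reshaped per the prototxt.
output = np.zeros((1, 6, 7, 3), dtype=np.float32)

prob = output[:, 0:2, :, :]  # prob1: face / non-face score map, (1, 2, 7, 3)
roi = output[:, 2:6, :, :]   # conv4_2: bounding-box regression map, (1, 4, 7, 3)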

idata
Employee

@Tome_at_Intel

It is one part of a face-detection task: PNet returns ROIs and the probability of each ROI, so I need both the prob1 layer's output (scores) and the conv4_2 layer's output (ROIs).

I will try to use two graphs to get the prob1 and conv4_2 layers' outputs separately.

Thank you for your patience.
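In case it helps someone else, here is a rough sketch of the two-graph approach with the NCSDK 2 Python API (the graph file names are placeholders for graphs compiled with mvNCCompile, one per output layer; error handling omitted):

from mvnc import mvncapi
import numpy as np

# Open the first available NCS device.
device = mvncapi.Device(mvncapi.enumerate_devices()[0])
device.open()

def load_graph(path, name):
    # Allocate a compiled graph and its input/output FIFOs on the device.
    with open(path, 'rb') as f:
        graph_buffer = f.read()
    graph = mvncapi.Graph(name)
    fifo_in, fifo_out = graph.allocate_with_fifos(device, graph_buffer)
    return graph, fifo_in, fifo_out

# One single-output graph per output layer, compiled separately
# with -on prob1 and -on conv4_2 (file names are placeholders).
prob_graph, prob_in, prob_out = load_graph('pnet_prob1.graph', 'prob1')
roi_graph, roi_in, roi_out = load_graph('pnet_conv4_2.graph', 'conv4_2')

# Placeholder input tensor; real code would use a preprocessed image.
img = np.zeros((1, 3, 23, 15), dtype=np.float32)

# Queue the same input on both graphs and read both results.
prob_graph.queue_inference_with_fifo_elem(prob_in, prob_out, img, None)
scores, _ = prob_out.read_elem()
roi_graph.queue_inference_with_fifo_elem(roi_in, roi_out, img, None)
rois, _ = roi_out.read_elem()

# Clean up FIFOs, graphs, and the device.
for fifo in (prob_in, prob_out, roi_in, roi_out):
    fifo.destroy()
prob_graph.destroy()
roi_graph.destroy()
device.close()
device.destroy()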
idata
Employee

@zufeifei What progress have you made with MTCNN? I would be grateful for an update.
