Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Compiler errors with DW Convolution?

idata
Employee

Hi guys, I have a Caffe model that uses separable convolutions like the ones in MobileNet, but the compiler gives this strange error:

 

```
mvNCCompile v02.00, Copyright @ Movidius Ltd 2016

Fusing depthconv and conv in conv2/dw and conv2/sep

Traceback (most recent call last):
  File "/usr/local/bin/mvNCCompile", line 118, in <module>
    create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
  File "/usr/local/bin/mvNCCompile", line 101, in create_graph
    net = parse_caffe(args, myriad_config)
  File "/usr/local/bin/ncsdk/Controllers/CaffeParser.py", line 1394, in parse_caffe
    network.attach(node)
  File "/usr/local/bin/ncsdk/Models/Network.py", line 81, in attach
    stage.attach_several(appropriate_nodes)
  File "/usr/local/bin/ncsdk/Models/NetworkStage.py", line 689, in attach_several
    parents.attach(self)
  File "/usr/local/bin/ncsdk/Models/NetworkStage.py", line 412, in attach
    taps[c,c*multiplier+i,y,x] = self.taps[y,x,c,i]
IndexError: index 10 is out of bounds for axis 2 with size 10
```

 

I say strange because my network has this architecture: data, then (conv/dw - conv/sep - BN - scale&bias) repeated three times, then an FC layer and a Softmax activation, so I don't understand why the compiler chokes on the second separable convolution. Any suggestions?
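
For reference, the failing line in the traceback seems to be remapping the depthwise taps into a grouped-convolution weight layout. Below is a minimal numpy sketch of that remap using the dimensions of my conv2/dw layer (16 channels, 10x3 kernel, channel multiplier 1); this is only my reading of the traceback, not the actual NCSDK code.

```python
import numpy as np

# conv2/dw: 16 input channels, kernel_h = 10, kernel_w = 3,
# group == num_output, so the channel multiplier is 1.
kernel_h, kernel_w, channels, multiplier = 10, 3, 16, 1

# self.taps appears to be stored as [kernel_h, kernel_w, channels, multiplier] ...
src = np.arange(kernel_h * kernel_w * channels * multiplier, dtype=np.float32)
src = src.reshape(kernel_h, kernel_w, channels, multiplier)

# ... and copied into a grouped-convolution tensor
# [channels, channels * multiplier, kernel_h, kernel_w].
dst = np.zeros((channels, channels * multiplier, kernel_h, kernel_w), dtype=np.float32)

for c in range(channels):
    for i in range(multiplier):
        for y in range(kernel_h):
            for x in range(kernel_w):
                dst[c, c * multiplier + i, y, x] = src[y, x, c, i]

# With consistent shapes this copy succeeds; the IndexError above suggests that
# the shapes the parser derives for conv2/dw (after fusing conv2/dw with
# conv2/sep) do not line up the way this sketch assumes.
```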
7 Replies
idata
Employee

@Ryose If you could provide your model files, I'd like to help you debug your network. Thanks.

idata
Employee

@Tome_at_Intel Sure, here is my .prototxt. I hope this helps!

 

name: "UNIPINET" layer { name: "data" type: "Input" top: "data" input_param { shape: { dim: 1 dim: 1 dim: 63 dim: 13 } } } layer { name: "conv1/dw" type: "Convolution" bottom: "data" top: "conv1/dw" param { lr_mult: 1 decay_mult: 1 } convolution_param { num_output: 1 bias_term: false pad: 0 kernel_h: 15 kernel_w: 3 group: 1 #engine: CAFFE stride: 1 weight_filler { type: "msra" } } } layer { name: "conv1/sep" type: "Convolution" bottom: "conv1/dw" top: "conv1/sep" param { lr_mult: 1 decay_mult: 1 } convolution_param { num_output: 16 bias_term: true pad: 0 kernel_size: 1 stride: 1 weight_filler { type: "msra" } } } layer { name: "conv1/sep/bn" type: "BatchNorm" bottom: "conv1/sep" top: "conv1/sep" param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } } layer { name: "conv1/sep/bn/scale" type: "Scale" bottom: "conv1/sep" top: "conv1/sep" param { lr_mult: 1 decay_mult: 0 } param { lr_mult: 1 decay_mult: 0 } scale_param { filler { value: 1 } bias_term: true bias_filler { value: 0 } } } layer { name: "relu1/sep" type: "ReLU" bottom: "conv1/sep" top: "conv1/sep" } layer { name: "conv2/dw" type: "Convolution" bottom: "conv1/sep" top: "conv2/dw" param { lr_mult: 1 decay_mult: 1 } convolution_param { num_output: 16 bias_term: false pad: 0 kernel_h: 10 kernel_w: 3 group: 16 #engine: CAFFE stride: 1 weight_filler { type: "msra" } } } layer { name: "conv2/sep" type: "Convolution" bottom: "conv2/dw" top: "conv2/sep" param { lr_mult: 1 decay_mult: 1 } convolution_param { num_output: 64 bias_term: true pad: 0 kernel_size: 1 stride: 1 weight_filler { type: "msra" } } } layer { name: "conv2/sep/bn" type: "BatchNorm" bottom: "conv2/sep" top: "conv2/sep" param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } } layer { name: "conv2/sep/bn/scale" type: "Scale" bottom: "conv2/sep" top: "conv2/sep" param { lr_mult: 1 decay_mult: 0 } param { lr_mult: 1 decay_mult: 0 } scale_param { filler { value: 1 } bias_term: true bias_filler { value: 0 } } } layer { name: "relu2/sep" type: "ReLU" bottom: "conv2/sep" top: "conv2/sep" } layer { name: "conv3/dw" type: "Convolution" bottom: "conv2/sep" top: "conv3/dw" param { lr_mult: 1 decay_mult: 1 } convolution_param { num_output: 64 bias_term: false pad: 0 kernel_h: 5 kernel_w: 3 group: 64 #engine: CAFFE stride: 1 weight_filler { type: "msra" } } } layer { name: "conv3/sep" type: "Convolution" bottom: "conv3/dw" top: "conv3/sep" param { lr_mult: 1 decay_mult: 1 } convolution_param { num_output: 128 bias_term: true pad: 0 kernel_size: 1 stride: 1 weight_filler { type: "msra" } } } layer { name: "conv3/sep/bn" type: "BatchNorm" bottom: "conv3/sep" top: "conv3/sep" param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } } layer { name: "conv3/sep/bn/scale" type: "Scale" bottom: "conv3/sep" top: "conv3/sep" param { lr_mult: 1 decay_mult: 0 } param { lr_mult: 1 decay_mult: 0 } scale_param { filler { value: 1 } bias_term: true bias_filler { value: 0 } } } layer { name: "relu3/sep" type: "ReLU" bottom: "conv3/sep" top: "conv3/sep" } layer { name: "avg_pool" type: "Pooling" bottom: "conv3/sep" top: "avg_pool" pooling_param { pool: AVE global_pooling: true } } layer { name: "fc" type: "InnerProduct" bottom: "avg_pool" top: "fc" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 12 weight_filler { type: "msra" } bias_filler { type: "constant" value: 0 } } } layer { name: "fc/bn" type: 
"BatchNorm" bottom: "fc" top: "fc" param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } } layer { name: "fc/bn/scale" type: "Scale" bottom: "fc" top: "fc" param { lr_mult: 1 decay_mult: 0 } param { lr_mult: 1 decay_mult: 0 } scale_param { filler { value: 1 } bias_term: true bias_filler { value: 0 } } } layer { name: "output" type: "Softmax" bottom: "fc" top: "output" }
idata
Employee

@Ryose I see that your network is using non-square convolutions (e.g., 15x3 in layer conv1/dw). At the moment, we only support square convolutions on the NCS. Please visit https://movidius.github.io/ncsdk/Caffe.html for more details about limitations and known issues.

idata
Employee

Thanks for the reference, @Tome_at_Intel.

idata
Employee

@Ryose I apologize; the limitations listed on the Caffe site are not up to date. We do support non-square convolutions, but for depthwise convolutions we only support 3x3 kernels at the moment.
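
If it helps, you can scan a prototxt for depthwise layers (group == num_output) whose kernel is not 3x3 before compiling. This is just a rough sketch, assuming pycaffe is installed (so caffe.proto is available as caffe_pb2) and a recent caffe.proto where kernel_size is a repeated field; deploy.prototxt is a placeholder file name.

```python
from google.protobuf import text_format
from caffe.proto import caffe_pb2

def check_depthwise_kernels(prototxt_path):
    """Print depthwise Convolution layers whose kernel is not 3x3."""
    net = caffe_pb2.NetParameter()
    with open(prototxt_path) as f:
        text_format.Merge(f.read(), net)

    for layer in net.layer:
        if layer.type != "Convolution":
            continue
        p = layer.convolution_param
        # A depthwise convolution in Caffe is a grouped convolution with
        # group == num_output (one filter per input channel).
        if p.group <= 1 or p.group != p.num_output:
            continue
        # kernel_h/kernel_w override kernel_size when they are set.
        if p.kernel_h or p.kernel_w:
            kh, kw = p.kernel_h, p.kernel_w
        else:
            ks = list(p.kernel_size)
            kh = kw = ks[0] if ks else 0
        if (kh, kw) != (3, 3):
            print("%s: depthwise kernel %dx%d (only 3x3 is supported)"
                  % (layer.name, kh, kw))

check_depthwise_kernels("deploy.prototxt")  # e.g. the .prototxt posted above
```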

idata
Employee

What about a simple 1x1 depthwise convolution?

I think that's not too hard to implement.
idata
Employee

@jokilokis A 1x1 depthwise convolution seems strange. You are taking a single input pixel from a single input channel and multiplying it by a number, effectively scaling the image channel by channel. It does not seem to provide any filtering the way a 3x3 depthwise convolution does. I'm curious to know what kind of network you are using.
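
To illustrate the point, here is a small numpy sketch (my own illustration, not NCSDK code) showing that a 1x1 depthwise convolution, which has a single weight per channel, is the same operation as scaling each channel by that weight:

```python
import numpy as np

channels, height, width = 4, 8, 8
x = np.random.rand(channels, height, width).astype(np.float32)
w = np.random.rand(channels).astype(np.float32)  # one 1x1 filter per channel

# 1x1 depthwise convolution: output channel c is x[c] * w[c].
depthwise_1x1 = np.stack([x[c] * w[c] for c in range(channels)])

# Equivalent per-channel scaling, written as a single broadcast multiply.
scaled = x * w[:, None, None]

print(np.allclose(depthwise_1x1, scaled))  # True
```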
