Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer-vision-related on Intel® platforms.

No space for convolution

idata
Employee
703 Views
I compiled a network and tried to check whether it can be loaded onto the USB stick for execution using mvNCCheck. The error I get is _[Error 25] Myriad Error: "Cannot Allocate space for convolution."_. I reduced the size of the network by half and more, and I still cannot load the network onto the stick.

1) Is there a way to determine the maximum allowable network size during the design phase?

2) What are the device's limits on the number of operations?

3) Can a large network be partitioned across several USB sticks?

Start ========= deploy.prototxt ========

name: "aaa"
input: "data"
input_shape { dim: 1 dim: 3 dim: 256 dim: 512 }
layer { name: "Convolution1" type: "Convolution" bottom: "data" top: "Convolution1" convolution_param { num_output: 16 pad: 1 kernel_size: 3 stride: 1 weight_filler { type: "gaussian" std: 0.00999999977648 } } }
layer { name: "BatchNorm1" type: "BatchNorm" bottom: "Convolution1" top: "Convolution1" }
layer { name: "Scale1" type: "Scale" bottom: "Convolution1" top: "Convolution1" scale_param { bias_term: true } }
layer { name: "block11" type: "ReLU" bottom: "Convolution1" top: "Convolution1" }
layer { name: "Convolution2" type: "Convolution" bottom: "Convolution1" top: "Convolution2" convolution_param { num_output: 16 pad: 1 kernel_size: 3 stride: 1 weight_filler { type: "gaussian" std: 0.00999999977648 } } }
layer { name: "BatchNorm2" type: "BatchNorm" bottom: "Convolution2" top: "Convolution2" }
layer { name: "Scale2" type: "Scale" bottom: "Convolution2" top: "Convolution2" scale_param { bias_term: true } }
layer { name: "block12" type: "ReLU" bottom: "Convolution2" top: "Convolution2" }
layer { name: "pool13" type: "Pooling" bottom: "Convolution2" top: "pool13" pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
layer { name: "Convolution3" type: "Convolution" bottom: "pool13" top: "Convolution3" convolution_param { num_output: 32 pad: 1 kernel_size: 3 stride: 1 weight_filler { type: "gaussian" std: 0.00999999977648 } } }
layer { name: "BatchNorm3" type: "BatchNorm" bottom: "Convolution3" top: "Convolution3" }
layer { name: "Scale3" type: "Scale" bottom: "Convolution3" top: "Convolution3" scale_param { bias_term: true } }
layer { name: "block21" type: "ReLU" bottom: "Convolution3" top: "Convolution3" }
layer { name: "Convolution4" type: "Convolution" bottom: "Convolution3" top: "Convolution4" convolution_param { num_output: 32 pad: 1 kernel_size: 3 stride: 1 weight_filler { type: "gaussian" std: 0.00999999977648 } } }
layer { name: "BatchNorm4" type: "BatchNorm" bottom: "Convolution4" top: "Convolution4" }
layer { name: "Scale4" type: "Scale" bottom: "Convolution4" top: "Convolution4" scale_param { bias_term: true } }
layer { name: "block22" type: "ReLU" bottom: "Convolution4" top: "Convolution4" }
layer { name: "pool23" type: "Pooling" bottom: "Convolution4" top: "pool23" pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
layer { name: "Convolution5" type: "Convolution" bottom: "pool23" top: "Convolution5" convolution_param { num_output: 64 pad: 1 kernel_size: 3 stride: 1 weight_filler { type: "gaussian" std: 0.00999999977648 } } }
layer { name: "BatchNorm5" type: "BatchNorm" bottom: "Convolution5" top: "Convolution5" }
layer { name: "Scale5" type: "Scale" bottom: "Convolution5" top: "Convolution5" scale_param { bias_term: true } }
layer { name: "block31" type: "ReLU" bottom: "Convolution5" top: "Convolution5" }
layer { name: "Convolution6" type: "Convolution" bottom: "Convolution5" top: "Convolution6" convolution_param { num_output: 64 pad: 1 kernel_size: 3 stride: 1 weight_filler { type: "gaussian" std: 0.00999999977648 } } }
layer { name: "BatchNorm6" type: "BatchNorm" bottom: "Convolution6" top: "Convolution6" }
layer { name: "Scale6" type: "Scale" bottom: "Convolution6" top: "Convolution6" scale_param { bias_term: true } }
layer { name: "block32" type: "ReLU" bottom: "Convolution6" top: "Convolution6" }
layer { name: "pool34" type: "Pooling" bottom: "Convolution6" top: "pool34" pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
layer { name: "Convolution7" type: "Convolution" bottom: "pool34" top: "Convolution7" convolution_param { num_output: 128 pad: 1 kernel_size: 3 stride: 1 weight_filler { type: "gaussian" std: 0.00999999977648 } } }
layer { name: "BatchNorm7" type: "BatchNorm" bottom: "Convolution7" top: "Convolution7" }
layer { name: "Scale7" type: "Scale" bottom: "Convolution7" top: "Convolution7" scale_param { bias_term: true } }
layer { name: "block41" type: "ReLU" bottom: "Convolution7" top: "Convolution7" }
layer { name: "Convolution8" type: "Convolution" bottom: "Convolution7" top: "Convolution8" convolution_param { num_output: 128 pad: 1 kernel_size: 3 stride: 1 weight_filler { type: "gaussian" std: 0.00999999977648 } } }
layer { name: "BatchNorm8" type: "BatchNorm" bottom: "Convolution8" top: "Convolution8" }
layer { name: "Scale8" type: "Scale" bottom: "Convolution8" top: "Convolution8" scale_param { bias_term: true } }
layer { name: "block42" type: "ReLU" bottom: "Convolution8" top: "Convolution8" }
layer { name: "pool44" type: "Pooling" bottom: "Convolution8" top: "pool44" pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
layer { name: "Convolution9" type: "Convolution" bottom: "pool44" top: "Convolution9" convolution_param { num_output: 128 pad: 1 kernel_size: 3 stride: 1 weight_filler { type: "gaussian" std: 0.00999999977648 } } }
layer { name: "BatchNorm9" type: "BatchNorm" bottom: "Convolution9" top: "Convolution9" }
layer { name: "Scale9" type: "Scale" bottom: "Convolution9" top: "Convolution9" scale_param { bias_term: true } }
layer { name: "block51" type: "ReLU" bottom: "Convolution9" top: "Convolution9" }
layer { name: "block52" type: "Convolution" bottom: "Convolution9" top: "block52" convolution_param { num_output: 2 pad: 0 kernel_size: 1 stride: 1 weight_filler { type: "gaussian" std: 0.00999999977648 } } }
layer { name: "block53" type: "Deconvolution" bottom: "block52" top: "block53" convolution_param { num_output: 2 bias_term: false pad: 8 kernel_size: 32 stride: 16 weight_filler { type: "bilinear" } } }
layer { name: "prob" type: "Softmax" bottom: "block53" top: "prob" }

End ========= deploy.prototxt =========

Thanks,
Yair
6 Replies
idata
Employee
434 Views

@yairh We looked into this issue and are currently testing a release candidate. I'll ping you and let you know as soon as we have something for you. Thanks.

idata
Employee
434 Views

@Tome,

Any update on this issue?

Thanks,
Yair
idata
Employee
434 Views

As far as I can tell, it isn't the overall size of the network that is the problem (that seems to be determined at compile time); it's the size of the individual convolutions. I fixed this in my case by cutting down the number of channels in my convolutions.
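To see why cutting channels helps, here is a back-of-the-envelope sketch (my own assumption, not the NCSDK's actual allocator): if the device has to hold a convolution's input and output feature maps in FP16 at the same time, the buffer scales linearly with channel count, so halving the channels halves it.

```python
def conv_buffer_bytes(in_ch, out_ch, h, w, bytes_per_elem=2):
    """FP16 input + output feature maps for one convolution (illustrative only)."""
    return (in_ch + out_ch) * h * w * bytes_per_elem

# Convolution8 from the prototxt above: 128 -> 128 channels at 32x64
# (the 256x512 input has been halved by three 2x2/2 max pools)
full = conv_buffer_bytes(128, 128, 32, 64)   # 1048576 bytes = 1.0 MiB
half = conv_buffer_bytes(64, 64, 32, 64)     # half the channels -> half the buffer
print(full, half)
```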

idata
Employee
434 Views

The number of channels affects the number of convolution units, which translates to network size. Can you please share the number of channels you used in each layer?

Thanks,
Yair
idata
Employee
434 Views

I found that if I reduce the image resolution to 128x256, the network is runnable.

Is there a limit on the input image size/resolution? If so, what is it?

Thanks,
Yair
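For what it's worth, every feature map in this network scales with height x width, so going from 256x512 to 128x256 cuts each activation buffer by a factor of 4. This is a rough model assuming FP16 activations, not an official device limit:

```python
def fmap_bytes(channels, h, w, bytes_per_elem=2):
    """Size of one FP16 feature map, in bytes (illustrative only)."""
    return channels * h * w * bytes_per_elem

big   = fmap_bytes(16, 256, 512)   # Convolution1 output at the original input size
small = fmap_bytes(16, 128, 256)   # same layer at the reduced 128x256 input
print(big // small)                # halving both spatial dims -> 4x less memory
```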
idata
Employee
434 Views

@yairh The convolutions are just too big after the Convolution5 layer. As a workaround, you can add a batch_norm_param eps value to each BatchNorm layer, like so:

layer {
  name: "BatchNorm1"
  type: "BatchNorm"
  bottom: "Convolution1"
  top: "Convolution1"
  batch_norm_param {
    eps: 1e-2
  }
}

Using this workaround, you can compile up to the block53 layer, but the Softmax layer will have to be done on the CPU.
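If it helps, the Softmax that has to run on the CPU is only a few lines with NumPy once the block53 output is back from the device. The CHW layout and zero logits below are just assumptions for illustration; adapt the axis to whatever layout your host code uses:

```python
import numpy as np

def softmax_channels(fmap):
    """Numerically stable softmax across the channel axis of a CHW tensor."""
    shifted = fmap - fmap.max(axis=0, keepdims=True)   # subtract max for stability
    e = np.exp(shifted)
    return e / e.sum(axis=0, keepdims=True)

# e.g. the 2-channel block53 output at the 256x512 input resolution
logits = np.zeros((2, 256, 512), dtype=np.float32)     # placeholder device output
prob = softmax_channels(logits)
print(prob[:, 0, 0])   # equal logits -> probability 0.5 per channel
```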
