Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Convolution Error

idata
Employee
718 Views
[Error 25] Myriad Error: "Cannot Allocate space for convolution.".

 

Why is this occurring? My graph size is 290 MB.

 

Please see the relevant prototxt attached below:

 

name: "VGG_FACE_16_layer" input: "data" input_shape { dim: 1 dim: 3 dim: 227 dim: 227 } layer { bottom: "data" top: "conv1_1" name: "conv1_1" type: "Convolution" convolution_param { num_output: 64 pad: 1 kernel_size: 3 } } layer { bottom: "conv1_1" top: "conv1_1" name: "relu1_1" type: "ReLU" } layer { bottom: "conv1_1" top: "conv1_2" name: "conv1_2" type: "Convolution" convolution_param { num_output: 64 pad: 1 kernel_size: 3 } } layer { bottom: "conv1_2" top: "conv1_2" name: "relu1_2" type: "ReLU" } layer { bottom: "conv1_2" top: "pool1" name: "pool1" type: "Pooling" pooling_param { pool: MAX kernel_size: 2 stride: 2 } } layer { bottom: "pool1" top: "conv2_1" name: "conv2_1" type: "Convolution" convolution_param { num_output: 128 pad: 1 kernel_size: 3 } } layer { bottom: "conv2_1" top: "conv2_1" name: "relu2_1" type: "ReLU" } layer { bottom: "conv2_1" top: "conv2_2" name: "conv2_2" type: "Convolution" convolution_param { num_output: 128 pad: 1 kernel_size: 3 } } layer { bottom: "conv2_2" top: "conv2_2" name: "relu2_2" type: "ReLU" } layer { bottom: "conv2_2" top: "pool2" name: "pool2" type: "Pooling" pooling_param { pool: MAX kernel_size: 2 stride: 2 } } layer { bottom: "pool2" top: "conv3_1" name: "conv3_1" type: "Convolution" convolution_param { num_output: 256 pad: 1 kernel_size: 3 } } layer { bottom: "conv3_1" top: "conv3_1" name: "relu3_1" type: "ReLU" } layer { bottom: "conv3_1" top: "conv3_2" name: "conv3_2" type: "Convolution" convolution_param { num_output: 256 pad: 1 kernel_size: 3 } } layer { bottom: "conv3_2" top: "conv3_2" name: "relu3_2" type: "ReLU" } layer { bottom: "conv3_2" top: "conv3_3" name: "conv3_3" type: "Convolution" convolution_param { num_output: 256 pad: 1 kernel_size: 3 } } layer { bottom: "conv3_3" top: "conv3_3" name: "relu3_3" type: "ReLU" } layer { bottom: "conv3_3" top: "pool3" name: "pool3" type: "Pooling" pooling_param { pool: MAX kernel_size: 2 stride: 2 } } layer { bottom: "pool3" top: "conv4_1" name: "conv4_1" type: "Convolution" convolution_param { num_output: 512 pad: 1 kernel_size: 3 } } layer { bottom: "conv4_1" top: "conv4_1" name: "relu4_1" type: "ReLU" } layer { bottom: "conv4_1" top: "conv4_2" name: "conv4_2" type: "Convolution" convolution_param { num_output: 512 pad: 1 kernel_size: 3 } } layer { bottom: "conv4_2" top: "conv4_2" name: "relu4_2" type: "ReLU" } layer { bottom: "conv4_2" top: "conv4_3" name: "conv4_3" type: "Convolution" convolution_param { num_output: 512 pad: 1 kernel_size: 3 } } layer { bottom: "conv4_3" top: "conv4_3" name: "relu4_3" type: "ReLU" } layer { bottom: "conv4_3" top: "pool4" name: "pool4" type: "Pooling" pooling_param { pool: MAX kernel_size: 2 stride: 2 } } layer { bottom: "pool4" top: "conv5_1" name: "conv5_1" type: "Convolution" convolution_param { num_output: 512 pad: 1 kernel_size: 3 } } layer { bottom: "conv5_1" top: "conv5_1" name: "relu5_1" type: "ReLU" } layer { bottom: "conv5_1" top: "conv5_2" name: "conv5_2" type: "Convolution" convolution_param { num_output: 512 pad: 1 kernel_size: 3 } } layer { bottom: "conv5_2" top: "conv5_2" name: "relu5_2" type: "ReLU" } layer { bottom: "conv5_2" top: "conv5_3" name: "conv5_3" type: "Convolution" convolution_param { num_output: 512 pad: 1 kernel_size: 3 } } layer { bottom: "conv5_3" top: "conv5_3" name: "relu5_3" type: "ReLU" } layer { bottom: "conv5_3" top: "pool5" name: "pool5" type: "Pooling" pooling_param { pool: MAX kernel_size: 2 stride: 2 } } layer { bottom: "pool5" top: "fc6" name: "fc6" type: "InnerProduct" inner_product_param { num_output: 4096 } } layer 
{ bottom: "fc6" top: "fc6" name: "relu6" type: "ReLU" } layer { bottom: "fc6" top: "fc6" name: "drop6" type: "Dropout" dropout_param { dropout_ratio: 0.5 } } layer { bottom: "fc6" top: "fc7" name: "fc7" type: "InnerProduct" inner_product_param { num_output: 4096 } } layer { bottom: "fc7" top: "fc7" name: "relu7" type: "ReLU" } layer { bottom: "fc7" top: "fc7" name: "drop7" type: "Dropout" dropout_param { dropout_ratio: 0.5 } } layer { bottom: "fc7" top: "fc8" name: "fc8" type: "InnerProduct" inner_product_param { num_output: 2622 } } layer { bottom: "fc8" top: "prob" name: "prob" type: "Softmax" }
4 Replies
idata
Employee
436 Views

@Tome_at_Intel Can you assist here?

idata
Employee
436 Views

@gauravmoon VGG16 isn't supported by the current version of the SDK (1.10.00), although we are currently testing a release candidate that should add support for VGG16 soon.

idata
Employee
436 Views

What is so specific about VGG16 that it is not supported? Can you please help me understand? If I make a custom net, what would prevent it from running?

idata
Employee
436 Views

@gauravmoon VGG uses larger convolutions than the current SDK supports.
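
For what it's worth, here is a rough illustration (not from the original thread) of how large the intermediate feature maps are that the compiler has to allocate for each convolution stage, assuming FP16 activations and the 227x227 input declared in the prototxt. The SDK's actual allocation strategy isn't documented here, so treat this only as a ballpark sketch of why the VGG convolution layers are "large":

import math

# Approximate FP16 output feature-map size for each 3x3 convolution stage of
# the VGG_FACE_16 prototxt, starting from a 227x227 input. Pooling spatial
# sizes follow Caffe's ceil-mode MAX pooling (kernel 2, stride 2).
h = w = 227
layers = [  # (stage channels, number of 3x3 convs in the stage)
    (64, 2), (128, 2), (256, 3), (512, 3), (512, 3),
]
for channels, n_convs in layers:
    for i in range(n_convs):
        out_bytes = h * w * channels * 2            # FP16 output feature map
        print(f"{channels}-ch conv {i + 1}: {h}x{w}x{channels} output "
              f"= {out_bytes / 1e6:.1f} MB")
    h = math.ceil((h - 2) / 2) + 1                  # MAX pool, kernel 2, stride 2
    w = h

The early VGG convolutions each produce feature maps in the multi-megabyte range (about 6.6 MB for the conv1 outputs at FP16), several times larger than the corresponding layers of the smaller networks of that era, which is consistent with the "Cannot allocate space for convolution" failure on this SDK version.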
