Community Manager

[Error 35] Setup Error: Not enough resources on Myriad to process this network.

I ran into a "not enough resources" error on the NCS when I converted my model (using mvNCProfile and mvNCCompile).

 

What can I do about it?

 

Can I use multiple NCS devices to provide more resources for one model? If so, how do I do that?

 

Or is there another way to run a big model?

 

In addition, how do I calculate a model's size, as mentioned in https://ncsforum.movidius.com/discussion/comment/2277/#Comment_2277, so that I know the gap between my model's size and the size limit?
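For reference, one rough way to estimate a Caffe model's weight size on the host is to sum its parameter counts with pycaffe. This is only a sketch: the file names deploy.prototxt and model.caffemodel are placeholders, and it assumes the weights end up stored as 16-bit floats on the NCS.

# Rough parameter-count and FP16 weight-size estimate for a Caffe model.
# Assumptions: pycaffe is installed; the file names below are placeholders.
import caffe

net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)
total_params = sum(blob.count for blobs in net.params.values() for blob in blobs)
print('parameters: %d' % total_params)
print('approx. FP16 weight size: %.1f MB' % (total_params * 2 / 1024.0 / 1024.0))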

Community Manager

Re: [Error 35] Setup Error: Not enough resources on Myriad to process this network.

_Urgent question:_

 

Can I use multiple Movidius Neural Compute Sticks to compile and run inference on a single model, to get around the "Not enough resources" error above? Has anyone tried that? Thanks.
Community Manager

Re: [Error 35] Setup Error: Not enough resources on Myriad to process this network.

@miily I don't know of anyone who has tried this before, and I don't know if it's possible. If you can split the network into two networks, you could load the first half, save the intermediate result, use that result as the input to the second half, and then load and run the second half. Let me know if you are successful.
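A minimal sketch of that idea on one stick, assuming the NCSDK 1.x Python API (mvnc); the graph file names, the input shape, and the (6, 130, 130) intermediate shape are placeholders for whatever mvNCCompile produced for the two halves.

# Run two compiled graph halves back to back on a single NCS device.
# Assumptions: NCSDK 1.x Python API; first_half.graph / second_half.graph
# are hypothetical outputs of mvNCCompile for the two split prototxts.
import numpy
from mvnc import mvncapi as mvnc

devices = mvnc.EnumerateDevices()
device = mvnc.Device(devices[0])
device.OpenDevice()

# Placeholder input; replace with the real preprocessed image for the network.
input_image = numpy.zeros((260, 260, 3), dtype=numpy.float16)

# First half: load the graph, run it, and keep its flat output.
with open('first_half.graph', 'rb') as f:
    graph1 = device.AllocateGraph(f.read())
graph1.LoadTensor(input_image, 'first half')
intermediate, _ = graph1.GetResult()
graph1.DeallocateGraph()

# Reshape the flat output back to the blob shape the second half expects,
# cast it back to FP16, and feed it to the second half.
intermediate = intermediate.reshape(6, 130, 130).astype(numpy.float16)
with open('second_half.graph', 'rb') as f:
    graph2 = device.AllocateGraph(f.read())
graph2.LoadTensor(intermediate, 'second half')
final_output, _ = graph2.GetResult()
graph2.DeallocateGraph()

device.CloseDevice()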

Community Manager

Re: [Error 35] Setup Error: Not enough resources on Myriad to process this network.

@Tome_at_Intel Thanks.

 

I tried splitting my deploy.prototxt into two parts.

 

The blob of the last layer in the first half of the network has shape (6, 130, 130).

 

So in the deploy.prototxt of the second-half network, I added an input data layer:

input: "data"
input_shape {
  dim: 1
  dim: 6
  dim: 130
  dim: 130
}

 

I converted both parts of deploy.prototxt to Movidius graph files, and both can run inference.

 

However, the Movidius output of the first half of the network is one-dimensional, with length 101400 (= 6 * 130 * 130).

 

I am wondering how to map the output of the first half of the network onto the input data of the second half, given that the input shape of the second half is (6, 130, 130).

 

I used reshape_output = output.reshape(6, 130, 130) to restore the dimensions of the first half's output and fed reshape_output into the second-half network, but the final inference result of the second half seems wrong.
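One thing worth checking (just a sketch with a stand-in array, not the real data) is whether the flat 101400-element output really unpacks in plain C order as (channels, height, width); a mismatched layout would silently scramble the channels. Another common culprit is forgetting to cast the reshaped array back to float16 before passing it to the second graph.

# Sanity-check the index math of a C-order reshape using a stand-in array.
import numpy

flat = numpy.arange(6 * 130 * 130, dtype=numpy.float32)

# A C-order reshape assumes element (c, y, x) sits at offset c*130*130 + y*130 + x.
chw = flat.reshape(6, 130, 130)
assert chw[2, 5, 7] == flat[2 * 130 * 130 + 5 * 130 + 7]

# If the device actually emitted the data in HWC order instead, the same
# reshape would produce a different (scrambled) tensor:
hwc = flat.reshape(130, 130, 6).transpose(2, 0, 1)
print(numpy.array_equal(chw, hwc))  # False: the two layouts differ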

Community Manager

Re: [Error 35] Setup Error: Not enough resources on Myriad to process this network.

@miily It sounds like you are doing everything right. Can you provide the model files and the Python application you're working with, so I can help with debugging?

Community Manager

Re: [Error 35] Setup Error: Not enough resources on Myriad to process this network.

@Tome_at_Intel

 

I used the method above and successfully divided AlexNet, and the result is correct. Here is my code, shared for other Movidius users: https://github.com/PieBoo/Movidius_divided-AlexNet

 

Sadly, though, the result is not correct for my own network architecture, which uses conv -> PReLU -> BatchNorm -> Scale blocks, like the one below.

 

Could this layer design be causing the wrong result?

layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 10
    kernel_size: 5
    stride: 2
    pad: 2
    weight_filler { type: "xavier" }
  }
}
layer {
  name: "relu_conv1"
  type: "PReLU"
  prelu_param { filler { type: "constant" value: 0.3 } channel_shared: false }
  bottom: "conv1"
  top: "relu_conv1"
}
layer {
  name: "bn_conv1"
  type: "BatchNorm"
  bottom: "relu_conv1"
  top: "bn_conv1"
  param { lr_mult: 0 decay_mult: 0 }
  param { lr_mult: 0 decay_mult: 0 }
  param { lr_mult: 0 decay_mult: 0 }
  batch_norm_param { use_global_stats: true }
  include { phase: TEST }
}
layer {
  name: "scale_conv1"
  type: "Scale"
  bottom: "bn_conv1"
  top: "scale_conv1"
  scale_param { bias_term: true }
}
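One way to narrow this down (a sketch only, with hypothetical file names deploy_full.prototxt, deploy_part1.prototxt, deploy_part2.prototxt and model.caffemodel, and assuming pycaffe is available) is to run the full network and the two halves entirely in Caffe on the host and compare the outputs. If they agree in Caffe but diverge on the stick, the problem is more likely in how mvNCCompile handles the PReLU/BatchNorm/Scale layers than in the split itself.

# Compare the full network against the chained halves, all on the host in Caffe.
# The split prototxts keep the original layer names, so every net can load its
# weights from the same caffemodel.
import numpy
import caffe

full = caffe.Net('deploy_full.prototxt', 'model.caffemodel', caffe.TEST)
part1 = caffe.Net('deploy_part1.prototxt', 'model.caffemodel', caffe.TEST)
part2 = caffe.Net('deploy_part2.prototxt', 'model.caffemodel', caffe.TEST)

# Random input with the full network's expected shape.
image = numpy.random.rand(*full.blobs['data'].data.shape[1:]).astype(numpy.float32)

full.blobs['data'].data[0] = image
full_out = list(full.forward().values())[0].copy()

part1.blobs['data'].data[0] = image
mid = list(part1.forward().values())[0].copy()   # e.g. shape (1, 6, 130, 130)

part2.blobs['data'].data[...] = mid
split_out = list(part2.forward().values())[0].copy()

print(numpy.allclose(full_out, split_out, atol=1e-4))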

 

 
