NCS2 supported operations

Hello,

I have a trained PyTorch model and I would like to use the Intel NCS2 to run inference with it. I know that I have to export the model to ONNX, which I have done successfully, but I am not sure whether all the operations of the model are supported by the NCS2. Is there a detailed reference on the operations supported by the device?

I've found the operations supported by OpenVINO (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_IR_V10_opset1.html), but for the NCS2 I only found this: https://software.intel.com/en-us/articles/neural-compute-stick-2-optimizing-networks

Are all the attributes of OpenVINO convolutions supported by the NCS2? I am especially interested in dilated convolutions and 3D convolutions.

Best regards,

David


Hi David,

Thank you for reaching out.

The MYRIAD plugin supports the networks listed in the documentation here:

https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_MYRIAD.html

However, there is still a chance your ONNX model will work; are you seeing any errors?

For the layers supported by the MYRIAD plugin (VPU), please see the following link:

https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html#support...

Regards,

Javier A.


Hi Javier,

Thank you very much for your response; those links were exactly what I was looking for.

Finally, I was able to generate the Intermediate Representation of my ONNX model with the OpenVINO Model Optimizer and then executed it on the NCS2 as done in this example: https://github.com/movidius/ncappzoo/blob/master/networks/age_gender_net/age_gender_net.py I have not yet tested whether the outputs are correct, but at least it runs without errors.

However, according to https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html#support... the 3D convolutional layers of my model are not supported by the NCS2, so I guess that some parts of my model run on the CPU and others on the NCS2; I think I read somewhere that OpenVINO can do that. Is it possible to know which operations run on the NCS2 and which run on the CPU?

Best regards,

David


Hi David,

I am glad to hear you were able to generate the IR format of your model and run it on the Intel® Neural Compute Stick 2.

When you run a command with the "-d" flag, inference runs only on that specific device. In your case, if you run the command with "-d MYRIAD", the OpenVINO™ toolkit will run inference on the Intel® Neural Compute Stick 2 only.

However, the OpenVINO™ toolkit also has the option to run inference on multiple devices at the same time; you can check that on this link.

Regards,

Javier A.


Thanks again for your reply, Javier.

I was running my model through a custom Python script (based on this example), so I can't use the "-d" flag. Since the inputs of my model are not images, I can't use benchmark_app or most of the OpenVINO scripts, which are computer-vision oriented. Anyway, I loaded it with the following functions, so I guess this is equivalent to using the "-d MYRIAD" flag.

ie = IECore()
# Read the IR files and load the network onto the NCS2 (MYRIAD device)
my_model_net = IENetwork(model='my_model.xml', weights='my_model.bin')
my_model_exec_net = ie.load_network(network=my_model_net, device_name='MYRIAD')

In addition, I've used the following OpenVINO functions to check which layers are supported by the NCS2:

# Ask the plugin which layers it can execute on the MYRIAD device
supported_layers = ie.query_network(my_model_net, "MYRIAD")
not_supported_layers = [l for l in my_model_net.layers.keys() if l not in supported_layers]

And I got the following results:

not_supported_layers                                                  
Out[7]:
['input',
 'Constant_2742',
 'Constant_2737',
 'Constant_2735',
 'Constant_2730',
 '99/Cast_16480_const',
 '74/Cast_16432_const',
 '54/Cast_16414_const',
 '459/Cast_16412_const',
 '448/Cast_16408_const',
 '434/Cast_16424_const',
 '414/Cast_16470_const',
 '389/Cast_16430_const',
 '369/Cast_16444_const',
 '344/Cast_16484_const',
 '324/Cast_16446_const',
 '299/Cast_16404_const',
 '279/Cast_16422_const',
 '254/Cast_16454_const',
 '234/Cast_16478_const',
 '209/Cast_16452_const',
 '189/Cast_16426_const',
 '164/Cast_16390_const',
 '144/Cast_16482_const',
 '119/Cast_16428_const']

In [8]: supported_layers                                                      
Out[8]:
{'100': 'MYRIAD',
 '103': 'MYRIAD',
 '119': 'MYRIAD',
 '120': 'MYRIAD',
 '121': 'MYRIAD',
 '122': 'MYRIAD',
 '144': 'MYRIAD',
 '145': 'MYRIAD',
 '148': 'MYRIAD',
 '164': 'MYRIAD',
 '165': 'MYRIAD',
 '166': 'MYRIAD',
 '167': 'MYRIAD',
 '189': 'MYRIAD',
 '190': 'MYRIAD',
 '193': 'MYRIAD',
 '209': 'MYRIAD',
 '210': 'MYRIAD',
 '211': 'MYRIAD',
 '212': 'MYRIAD',
 '234': 'MYRIAD',
 '235': 'MYRIAD',
 '238': 'MYRIAD',
 '2425/Split': 'MYRIAD',
 '2430/Split': 'MYRIAD',
 '2435/Split': 'MYRIAD',
 '2440/Split': 'MYRIAD',
 '2445/Split': 'MYRIAD',
 '2460/Split': 'MYRIAD',
 '2465/Split': 'MYRIAD',
 '2470/Split': 'MYRIAD',
 '2485/Split': 'MYRIAD',
 '254': 'MYRIAD',
 '255': 'MYRIAD',
 '256': 'MYRIAD',
 '257': 'MYRIAD',
 '279': 'MYRIAD',
 '280': 'MYRIAD',
 '283': 'MYRIAD',
 '299': 'MYRIAD',
 '300': 'MYRIAD',
 '301': 'MYRIAD',
 '302': 'MYRIAD',
 '324': 'MYRIAD',
 '325': 'MYRIAD',
 '328': 'MYRIAD',
 '344': 'MYRIAD',
 '345': 'MYRIAD',
 '346': 'MYRIAD',
 '347': 'MYRIAD',
 '369': 'MYRIAD',
 '370': 'MYRIAD',
 '373': 'MYRIAD',
 '389': 'MYRIAD',
 '390': 'MYRIAD',
 '391': 'MYRIAD',
 '392': 'MYRIAD',
 '414': 'MYRIAD',
 '415': 'MYRIAD',
 '418': 'MYRIAD',
 '434': 'MYRIAD',
 '435': 'MYRIAD',
 '436': 'MYRIAD',
 '437': 'MYRIAD',
 '438': 'MYRIAD',
 '448': 'MYRIAD',
 '449': 'MYRIAD',
 '459': 'MYRIAD',
 '460': 'MYRIAD',
 '461': 'MYRIAD',
 '462': 'MYRIAD',
 '462/new': 'MYRIAD',
 '462/reshape_begin': 'MYRIAD',
 '463': 'MYRIAD',
 '464': 'MYRIAD',
 '464/new': 'MYRIAD',
 '464/reshape_begin': 'MYRIAD',
 '465': 'MYRIAD',
 '54': 'MYRIAD',
 '55': 'MYRIAD',
 '58': 'MYRIAD',
 '74': 'MYRIAD',
 '75': 'MYRIAD',
 '76': 'MYRIAD',
 '77': 'MYRIAD',
 '99': 'MYRIAD',
 'output': 'MYRIAD'}

Taking a look at the IR XML file, the unsupported Cast layers are casts to int64 whose outputs are used as parameters of Reshape layers, but my main concern is the 3D convolutional layers. I think the following snippet of the XML file represents one of these layers:

<layer id="13" name="75/WithoutBiases" type="Convolution" version="opset1">
	<data dilations="1,1,1" output_padding="0,0,0" pads_begin="0,0,0" pads_end="0,0,0" strides="1,1,1"/>
	<input>
		<port id="0">
			<dim>1</dim>
			<dim>3</dim>
			<dim>37</dim>
			<dim>20</dim>
			<dim>36</dim>
		</port>
		<port id="1">
			<dim>32</dim>
			<dim>3</dim>
			<dim>5</dim>
			<dim>5</dim>
			<dim>5</dim>
		</port>
	</input>
	<output>
		<port id="2" precision="FP32">
			<dim>1</dim>
			<dim>32</dim>
			<dim>33</dim>
			<dim>16</dim>
			<dim>32</dim>
		</port>
	</output>
</layer>
<layer id="14" name="75/Dims2310/copy_const" type="Const" version="opset1">
	<data element_type="f32" offset="48136" shape="1,32,1,1,1" size="128"/>
	<output>
		<port id="1" precision="FP32">
			<dim>1</dim>
			<dim>32</dim>
			<dim>1</dim>
			<dim>1</dim>
			<dim>1</dim>
		</port>
	</output>
</layer>
<layer id="15" name="75" type="Add" version="opset1">
	<input>
		<port id="0">
			<dim>1</dim>
			<dim>32</dim>
			<dim>33</dim>
			<dim>16</dim>
			<dim>32</dim>
		</port>
		<port id="1">
			<dim>1</dim>
			<dim>32</dim>
			<dim>1</dim>
			<dim>1</dim>
			<dim>1</dim>
		</port>
	</input>
	<output>
		<port id="2" precision="FP32">
			<dim>1</dim>
			<dim>32</dim>
			<dim>33</dim>
			<dim>16</dim>
			<dim>32</dim>
		</port>
	</output>
</layer>
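As a quick cross-check of the snippet, the output dims of the Convolution layer follow from the standard convolution shape formula; `conv_out` below is just an illustrative helper, not part of OpenVINO:

```python
# Output size of a convolution: out = (in + 2*pad - kernel) // stride + 1.
# The layer above uses a 5x5x5 kernel, stride 1 and no padding on a
# 37x20x36 input volume.
def conv_out(size, kernel=5, stride=1, pad=0):
    return (size + 2 * pad - kernel) // stride + 1

print([conv_out(d) for d in (37, 20, 36)])  # [33, 16, 32], matching the <output> port
```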

'75' is in the supported_layers list, but I don't know whether this means that '75/WithoutBiases' and '75/Dims2310/copy_const' are also supported, since they appear in neither supported_layers nor not_supported_layers. As '75/WithoutBiases' is a 3D convolution, it should not be supported by the NCS2 according to https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html#support..., but it ran fine when the network was loaded with device_name = "MYRIAD", and it even ran on a Raspberry Pi, whose ARM CPU is not supported by the OpenVINO Inference Engine.

Do you think the 3D convolutions are running on the NCS2 or on the CPU?

Best regards,

David


Hi David,

Did you try to run your model on the Intel® Neural Compute Stick 2? If so, did you get the expected results?

Also, the model will run on the specified device, in this case MYRIAD.

Regards,

Javier A.


Hi Javier,

I finally ran the model on the Intel NCS2 and got the same results as I had gotten in PyTorch; however, it is terribly slow. Performing 103 inferences took more than 40 minutes, whereas running them on the CPU with OpenVINO took only about 18 seconds. I have been using a virtual machine, so I do not know whether the issue is due to the VM or to the 3D convolutional layers, which may not be optimized on the NCS2.

Best regards,

David


Hi David,

Could you please try turning off the VPU optimization and tell us if that makes a difference?

To turn off the VPU optimization, please do the following:

  • Turn off VPU_HW_STAGES_OPTIMIZATION and re-test.
  • Add the following after declaring IECore():

if args.device == "MYRIAD":
    ie.set_config({'VPU_HW_STAGES_OPTIMIZATION': 'NO'}, "MYRIAD")

Regards,

Javier A.


Thanks for your reply, Javier.

With the VPU optimization turned off, the 103 inferences took 38 minutes instead of 44, but that is still orders of magnitude slower than running on the CPU (18 seconds). I've run a model from ncappzoo in the same virtual machine and the difference between the MYRIAD and CPU devices is not nearly as big, so I guess the issue is the 3D convolutions, which, indeed, are not supported by MYRIAD according to https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html#support...

I'll try to train a new model using 2D+1D convolutions to see whether I can get good performance on the NCS2 without increasing its error too much.
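For a rough sense of the potential saving, here is a parameter-count comparison using the shapes from the IR snippet above (3 input channels, 32 filters, 5×5×5 kernel); the 2D+1D split shown is a hypothetical factorization, not necessarily the architecture that will actually be trained:

```python
# Parameters of the 3D convolution from the IR (3 -> 32 channels, 5x5x5 kernel)
# versus a hypothetical factorization into a 5x5 spatial conv followed by a
# 5-tap temporal conv over the 32 intermediate channels (biases ignored).
in_ch, out_ch, k = 3, 32, 5

conv3d = out_ch * in_ch * k ** 3                          # 32*3*125 = 12000
factored = out_ch * in_ch * k ** 2 + out_ch * out_ch * k  # 2400 + 5120 = 7520

print(conv3d, factored)  # 12000 7520
```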

Best regards,

David


Hi David,

Thanks for your reply.

This performance issue may indeed be caused by the 3D convolution layers, as you mentioned. Let us know if you get better results with the new model trained with 2D+1D convolution layers.

Regards,

Javier A.


Hi Javier,

I managed to build a model using 2D+1D convolutions instead of 3D convolutions, and it seems to run fast enough. For 103 inferences, it needed 9 s on the NCS2 with the VPU optimization, 18 s without it, and 6.25 s on the CPU. The 103 inferences correspond to processing 20 s of data, so I think I'll be able to build my real-time application on a Raspberry Pi using the NCS2.
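A quick sanity check on the real-time claim, using the figures above:

```python
# 103 inferences cover 20 s of data; the NCS2 (with VPU optimization) needs 9 s.
inferences = 103
data_seconds = 20.0
ncs2_seconds = 9.0

per_inference = ncs2_seconds / inferences  # ~0.087 s of compute per inference
budget = data_seconds / inferences         # ~0.194 s of data per inference
print(per_inference < budget)  # True -> the NCS2 keeps up with the stream
```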

Thank you very much for your support during this time,
David


Hi David and Javier,

I have read your discussion carefully, but I still have some questions and was wondering whether you could help me.

As I need to use the Intel NCS2, I downloaded ncappzoo and tried to run the examples in it. However, every example fails when I run it, with "Makefile:21: recipe for target 'compile_default_model' failed / make: *** [compile_default_model] Error 255" or something similar:

 

F:\ncappzoo\apps\benchmark_ncs>C:\MinGW\bin\make run
'\033[1;33m''\n'benchmark_ncs": No data needed."'\033[0m'
'\033[1;33m''\n'benchmark_ncs": Compiling the model"'\033[0m'
googlenet-v1.xml was unexpected at this time.
Makefile:21: recipe for target 'compile_default_model' failed
make: *** [compile_default_model] Error 255

F:\ncappzoo\apps\benchmark_ncs>C:\MinGW\bin\make clean
'\033[1;33m''\n'benchmark_ncs": Cleaning up files...""'\033[0m'"
rm -f googlenet-v1.*
process_begin: CreateProcess(NULL, rm -f googlenet-v1.*, ...) failed.
make (e=2):
Makefile:96: recipe for target 'clean' failed
make: *** [clean] Error 2

 

However, when I run make help or make install-reqs, they work:

F:\ncappzoo\apps\benchmark_ncs>C:\MinGW\bin\make install-reqs
'\033[1;33m'"\n"benchmark_ncs": Checking installation requirements..."'\033[0m'
"No requirements needed."
 

F:\ncappzoo\apps\benchmark_ncs>C:\MinGW\bin\make help
"\nPossible make targets: ";
'\033[1;33m'"  make run or run_py"'\033[0m'"- runs the application";
'\033[1;33m'"  make help "'\033[0m'"- shows this message";
'\033[1;33m'"  make all "'\033[0m'"- makes everything needed to run but doesn't run";
'\033[1;33m'"  make data "'\033[0m'"- downloads data as needed";
'\033[1;33m'"  make deps "'\033[0m'"- makes/prepares dependencies";
'\033[1;33m'"  make install-reqs "'\033[0m'"- Installs requirements needed to run this sample on your system.";
'\033[1;33m'"  make uninstall-reqs "'\033[0m'"- Uninstalls requirements that were installed by the sample program.";
'\033[1;33m'"  make default_model "'\033[0m'"- compiles a default model to use when running";
'\033[1;33m'"  make clean "'\033[0m'"- removes all created content";
""

 

Thank you for your help!!

 

 


Hi 李朝辉,

Thanks for reaching out.

As your issue is not related to the original discussion, please start a new thread and we will assist you there.

Regards,

Javier A.
