Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Why does the Myriad processor have an accuracy decrease relative to the CPU on the same device?

Srush1
Beginner

Hey there, I'm doing my graduation project with the Intel Myriad processor.

I've found that the Myriad processor on the dedicated device always shows a slight accuracy decrease relative to the CPU/GPU on my Windows computer.

 

The model used is an Xception model from the Keras Applications library, which I trained myself to accuracies of around 25%, 50% and 65% (I didn't have the time to train any further than 65%). As the dataset I used the Stanford Dogs Dataset (https://www.kaggle.com/jessicali9530/stanford-dogs-dataset) with cropped dogs (the dataset can be found via my Google Drive link in the footnote, along with all other files, including the networks used).

 

The exact specifications of my system are:

 

This is the code I use to load the model:

from openvino.inference_engine import IENetwork, IEPlugin

model_path = "./model/frozen_model"  # This isn't in my code, I included this just to make testing easier
model_xml = model_path + ".xml"
model_bin = model_path + ".bin"
plugin_dir = None
plugin = IEPlugin("MYRIAD", plugin_dirs=plugin_dir)
net = IENetwork(model=model_xml, weights=model_bin)
input_name = next(iter(net.inputs))
exec_net = plugin.load(network=net)
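After loading, a single forward pass looks roughly like this (a simplified sketch: the image path is a placeholder, and the real preprocessing lives in my scripts on the Drive link in the note below):

import cv2
import numpy as np

# Simplified single forward pass; after Model Optimizer the network
# expects NCHW input, so the HWC image is transposed.
n, c, h, w = net.inputs[input_name].shape      # e.g. [1, 3, 331, 331]
image = cv2.imread("dog.jpg")                  # placeholder test image
blob = cv2.resize(image, (w, h)).transpose((2, 0, 1))[np.newaxis, ...]
result = exec_net.infer(inputs={input_name: blob})
output_name = next(iter(net.outputs))
scores = result[output_name]                   # class scores for this image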

 

 

The problem occurs when I use the Myriad processor. If I change the "MYRIAD" in the previous code to "CPU", the accuracy goes back up to what it is on my GPU (GTX 1080). This device's CPU gives exactly the same results as the GPU/CPU on my Windows computer, so I assume there is nothing wrong with my compilation process.

 

The model is compiled with the following code:

python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model /root/projects/model/frozen_model.pb --output_dir /root/projects/model --input_shape [1,331,331,3] --data_type FP32

 

I've tried different networks, different training accuracies within networks, the OpenCV DNN extension, different image sizes and different data types. Nothing brought the accuracy back up to what the CPU achieves.

 

I am wondering if I'm doing something wrong, or if I'm not, what the cause of this decrease in accuracy is. Is it always the same decrease (for instance, 10% relative to the original) or do specific layers affect it? The only logical conclusion I could reach was that this has something to do with the architecture of the Myriad processor: that it trades a small decrease in accuracy for much greater efficiency.

 

 

Thanks in advance for reading (and answering). I hope this is enough information to resolve the issue.

 

 

PS: The NASNetLarge model takes around 35 minutes to load into the AI Core X's memory, which doesn't seem like a normal amount of time.

NOTE: I'm currently unable to upload my post on the OpenVINO section of the forum because "The form has become outdated. Copy any unsaved work in the form below and then reload this page." All the files I've talked about can be found on my google drive https://drive.google.com/drive/folders/1p_gaoi2fbrBfq54XNnF6rMO_df7yGzL2?usp=sharing

 

Edit: Formatting

David_C_Intel
Employee

Hi Srush1,

 

Thanks for reaching out.

It is odd that you managed to get it to work with the CPU plugin; as you can check in the OpenVINO system requirements, AMD processors are not supported.

 

Regarding running inference on Myriad: as you are using a TensorFlow model, try using mo_tf.py. Also, remember that the Myriad VPU does not support the FP32 data type, so change the flag to --data_type FP16.
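For example, your earlier Model Optimizer command would then become something like this (same paths and input shape as yours; only the script and the data type flag change):

python /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model /root/projects/model/frozen_model.pb --output_dir /root/projects/model --input_shape [1,331,331,3] --data_type FP16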

 

Please try those changes and let us know if the issue persists.

 

Regards,

David C.

 

Srush1
Beginner

Thanks for replying,

 

I think it still works on my AMD processor because it's based on the x86 architecture, and Intel processors are recommended because all those listed have iGPUs. The link you provided specifically states that the 5th generation Xeon processors are excluded because of their lack of iGPUs.

 

I've tried mo.py vs mo_tf.py, and FP16 vs FP32. Weirdly enough, the results don't change at all when switching from mo.py to mo_tf.py. The switch from FP16 to FP32 does change the results slightly (as can be seen in the picture) with both mo.py and mo_tf.py.

 

What I find weird is that the results from the Myriad apparently do get affected (not by much, but still) by changing the floating point precision.
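To sanity-check that precision alone can move the outputs, I ran a tiny NumPy experiment (purely illustrative, not part of my pipeline):

import numpy as np

# FP16 keeps a 10-bit mantissa (roughly 3 significant decimal digits),
# so every weight and activation is rounded during conversion.
x = np.float32(0.1234567)
y = np.float16(x)                  # rounded FP16 representation
print(x, y, float(x) - float(y))   # small but non-zero difference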

David_C_Intel
Employee

Hi Srush1,

 

Thanks for your reply.

Could you please answer the following:

  • Could you give us a full script to test on our end? The code snippet provided does not show results.
  • Could you provide an expected sample output?
  • Also, could you try using the latest OpenVINO™ toolkit version, 2020.2?

 

Best regards,

David C.

Srush1
Beginner

Hi David C,

 

I have 2 scripts.

The first (model.py) basically handles the model. I run it on the CPU or Myriad by changing "MYRIAD" to "CPU" in the plugin initialization on line 139.

The second (system_test_forward.py) creates a generator from the test data and feeds it into the network.

 

The expected output is attached. Simply put, the Myriad and the CPU (both the AMD CPU and my own i7-6700 give the same result) should produce identical results, since the network and the inputs are exactly the same.

I can only attach 1 file to this response, so I've attached the expected output. The code can be found on my Google Drive: https://drive.google.com/drive/folders/1p_gaoi2fbrBfq54XNnF6rMO_df7yGzL2?usp=sharing
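In essence, the comparison boils down to something like this (a heavily simplified sketch: the real scripts feed images from the test generator rather than the random blob used here, and the paths are placeholders):

from openvino.inference_engine import IENetwork, IEPlugin
import numpy as np

# Run the identical blob through the identical network on both devices
# and diff the raw outputs; in theory the difference should be ~0.
net = IENetwork(model="frozen_model.xml", weights="frozen_model.bin")
input_name = next(iter(net.inputs))
output_name = next(iter(net.outputs))
blob = np.random.rand(*net.inputs[input_name].shape).astype(np.float32)

outputs = {}
for device in ("CPU", "MYRIAD"):
    exec_net = IEPlugin(device).load(network=net)
    outputs[device] = exec_net.infer(inputs={input_name: blob})[output_name]

print(np.max(np.abs(outputs["CPU"] - outputs["MYRIAD"])))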

 

I've tried OpenVINO 2020.2, but there is no significant difference relative to 2020.1.

 

Thanks in advance,

Srush1

David_C_Intel
Employee

Hi Srush1,

 

Thanks for the information given.

We are currently looking into your issue and will come back to you as soon as possible.

 

Best regards,

David C.

David_C_Intel
Employee

Hi Srush1,

 

We tested the code provided with the same dataset and noticed an average accuracy difference of 6.3% with both FP16 and FP32 data types. For some images the accuracy was the same; for others MYRIAD was more accurate than the CPU, and for others the CPU was more accurate.

This may be due to certain layers in your model being processed differently on the Myriad than on the CPU.
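As a rough illustration of the precision side of this (a NumPy sketch, not how the VPU actually computes anything): once the operands are rounded to FP16, the long reductions inside convolution layers drift slightly from the FP32 result, and that drift can compound across a deep network.

import numpy as np

# Round the operands to FP16 (as the VPU stores them), then compare a
# long dot product (the core of a convolution) against an FP32 reference.
rng = np.random.default_rng(0)
a = rng.standard_normal(10000).astype(np.float32)
b = rng.standard_normal(10000).astype(np.float32)

ref = np.dot(a, b)                                # FP32 reference
approx = np.dot(a.astype(np.float16).astype(np.float32),
                b.astype(np.float16).astype(np.float32))
print(abs(ref - approx))                          # small per-layer drift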

 

Best regards,

David C.
