Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Myriad processor accuracy decrease over CPU

Srush1
Beginner

Hey there, I'm doing my graduation project with the Intel Myriad processor.

I've found that the Myriad processor on the dedicated device consistently shows a slight accuracy decrease compared to the CPU/GPU on my Windows computer.

The model used is an Xception model from the Keras applications library, which I trained myself to roughly 25%, 50%, and 65% accuracy (I didn't have time to train beyond 65%). As the dataset I used the Stanford Dogs Dataset (https://www.kaggle.com/jessicali9530/stanford-dogs-dataset) with cropped dogs; the dataset, along with all other files including the networks used, can be found in the Google Drive link in the footnote.
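
For reference, this is roughly how such a model is set up in Keras (a minimal sketch with hypothetical parameters; the 120 classes correspond to the Stanford Dogs breeds, and the input size is assumed from the conversion command further down):

from tensorflow.keras.applications import Xception

# Sketch only: the actual trained networks are in the Google Drive link.
# The input shape is assumed from the --input_shape [1,331,331,3] used
# for conversion below; 120 classes = Stanford Dogs breeds.
model = Xception(weights=None, include_top=True,
                 input_shape=(331, 331, 3), classes=120)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])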

The exact specifications of my system are:

  • CPU: AMD Embedded G series GX-412TC
  • RAM: 2GB
  • VPU: AI Core X (https://www.aaeon.com/en/p/ai-edge-computing-board-ai-core-x)
  • OS: Linux Ubuntu 18.04 LTS
  • OpenVINO version: 2020.1
  • Programming language: Python

This is the code I use to load the model:

from openvino.inference_engine import IENetwork, IEPlugin

# Paths to the IR files produced by the Model Optimizer
model_path = "./model/frozen_model"  # This isn't in my code, I included this just to make testing easier
model_xml = model_path + ".xml"
model_bin = model_path + ".bin"

# Create the plugin for the target device ("MYRIAD" here; "CPU" for the comparison runs)
plugin_dir = None
plugin = IEPlugin("MYRIAD", plugin_dirs=plugin_dir)

# Read the network and load it onto the device
net = IENetwork(model=model_xml, weights=model_bin)
input_name = next(iter(net.inputs))
exec_net = plugin.load(network=net)
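
After loading, inference is run roughly like this (a simplified sketch; the real preprocessing and accuracy loop are in the Drive files linked below):

import numpy as np

# Dummy input just to illustrate the call; in the real script this is a
# preprocessed dog image in NCHW layout matching the network's input shape.
image = np.zeros((1, 3, 331, 331), dtype=np.float32)

output_name = next(iter(net.outputs))
res = exec_net.infer(inputs={input_name: image})
predicted_class = np.argmax(res[output_name])
# Swapping "MYRIAD" for "CPU" in the plugin above is the only difference
# between the two accuracy runs.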

The problem occurs when I use the Myriad processor. If I change "MYRIAD" in the code above to "CPU", the accuracy goes back up to what it is on my GPU (GTX 1080). This device's CPU gives exactly the same results as my GPU/CPU on my Windows computer, so I assume there is nothing wrong with my conversion process.

The model is converted with the following command:

python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model /root/projects/model/frozen_model.pb --output_dir /root/projects/model --input_shape [1,331,331,3] --data_type FP32
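
For reference, the FP16 variant of the same conversion looks like this; the Myriad VPU computes in FP16 natively, and this is one of the different data types I mention below:

python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model /root/projects/model/frozen_model.pb --output_dir /root/projects/model --input_shape [1,331,331,3] --data_type FP16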

I've tried different networks, different accuracy levels within networks, the OpenCV DNN extension, different image sizes, and different data types. Nothing brought the accuracy back up to what the CPU achieves.

 

I am wondering whether I'm doing something wrong and, if not, what the cause of this decrease in accuracy is. Is it always the same decrease (for instance, 10% relative to the original accuracy), or do specific layers affect it? The only logical conclusion I could reach is that this has something to do with the architecture of the Myriad processor: it gains a lot of efficiency in exchange for a small decrease in accuracy.

Thanks in advance for reading (and answering). I hope this is enough information to resolve the issue.

 

 

PS: The NASNetLarge model takes around 35 minutes to load into the AI Core X's memory; I don't think that is a normal amount of time.

Edit/NOTE: I'm currently unable to upload my plots because of an AJAX HTTP error (see https://software.intel.com/en-us/forums/watercooler-catchall/topic/404053). All the files I've talked about can be found on my Google Drive: https://drive.google.com/drive/folders/1p_gaoi2fbrBfq54XNnF6rMO_df7yGzL2?usp=sharing

David_C_Intel
Employee

Hi Sander, 

This issue is currently being discussed in this thread.

Best regards,

David
