Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Cross Compile for ARM and run on ASUS TinkerBoard

Is it possible to run the inference engine on an ARM Chipset like this and potentially leverage an attached NCS Myriad?
processor : 0
model name : ARMv7 Processor rev 1 (v7l)
BogoMIPS : 48.00
Features : half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xc0d
CPU revision : 1
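For reference, the architecture shown in that cpuinfo dump can also be checked programmatically. A minimal sketch (the `is_arm` helper is hypothetical, not part of any SDK): `platform.machine()` returns strings like `armv7l` on this board and `x86_64` on a typical desktop.

```python
import platform

def is_arm() -> bool:
    """Return True when the interpreter is running on an ARM CPU.

    Covers 32-bit (armv7l) and 64-bit (aarch64) machine strings.
    """
    machine = platform.machine().lower()
    return machine.startswith(("arm", "aarch64"))

if __name__ == "__main__":
    print(f"machine: {platform.machine()}, ARM: {is_arm()}")
```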
When I read the docs it seems that the is needed and then optionally the
Both of these appear to ship as intel64-only, and the binaries of the sample app compiled in Ubuntu 16.04 in my VM won't run on the ARM machine, as expected.
I can't install the OpenVINO SDK on the ARM machine because it says:

The IA-32 architecture host installation is no longer supported.
The product cannot be installed on this system.
Please refer to product documentation for more information.


I also assume the Python API is just a wrapper which interfaces with these existing Linux .so files, in which case it won't help either.
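To illustrate that point: a Python wrapper over native code ultimately dlopen()s a prebuilt shared object, and that .so must be compiled for the host's architecture. A minimal ctypes sketch, using the system math library as a stand-in for the OpenVINO libraries (this is not OpenVINO's actual binding mechanism, just an analogy):

```python
import ctypes
import ctypes.util

# Locate and load the host's native math library. dlopen() would fail
# here if the .so had been built for a different CPU architecture,
# which is exactly why an intel64-only .so cannot back a wrapper on ARM.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature of sqrt() so ctypes marshals values correctly.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(9.0))  # 3.0
```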

I would like to know whether it is possible to recompile the libs / example code to run on an ARM chip, or whether this is actually not possible for some technical reason, such as a reliance on something like AVX in the core code.
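On Linux, such instruction-set dependencies can be checked against the kernel's reported CPU feature flags; the armv7l cpuinfo above lists `neon` and `vfpv4` but nothing like `avx`. A hedged, Linux-only sketch (the `cpu_flags` helper is hypothetical):

```python
def cpu_flags(path: str = "/proc/cpuinfo") -> set:
    """Collect CPU feature flags reported by the Linux kernel.

    x86 kernels list them under 'flags'; ARM kernels under 'Features'.
    """
    flags = set()
    with open(path) as f:
        for line in f:
            key, _, value = line.partition(":")
            if key.strip().lower() in ("flags", "features"):
                flags.update(value.split())
    return flags

if __name__ == "__main__":
    print("AVX available" if "avx" in cpu_flags() else "no AVX")
```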

Does Intel plan to support devices like the ASUS TinkerBoard or Raspberry Pi for Edge Inference with their Myriad hardware?

Or should I be looking elsewhere for non-Intel-based Edge Inference, such as MACE?


Hi Madhava,

In theory, this is possible but I think you can't do it in this environment.

OpenVINO is built for Intel platforms and optimized for Intel hardware, so the inference engine can only run on an Intel platform. The NCS Myriad works as a plugin to OpenVINO, so if the OpenVINO inference engine can't run, you can't run inference on the NCS Myriad either.

Does this answer your question?



Hi Mark,

I ask because @Tome_at_Intel in the NCS forums says that OpenVINO isn't exclusively for Intel CPUs.
I guess he is wrong, as you have pointed out: OpenVINO IS exclusively for Intel CPUs, which I guess means it's not really "OPEN".

Or is he correct and there is a way we can compile and run OpenVINO on non Intel hardware?

@madhavajay Thank you for the feedback. We wholeheartedly understand your point of view regarding TensorFlow SSD MobileNet. With regards to training SSD MobileNet on Caffe, have you tried using for training? This branch supports CUDNNv9, which has support for training acceleration for depthwise (grouped) convolutions. This could help speed up your training on Caffe.

Additionally, I want to add that OpenVINO isn't exclusive to Intel CPUs and currently can be used with other devices/hardware (including NCS devices). Please see for more information.



Hi Madhava,

I read his comment, and I think he should have said "...currently can be used with other Intel devices/hardware".

You can see this clearly on the page he pointed to:

"Supports heterogeneous execution across computer vision accelerators—CPU, GPU, Intel® Movidius™ Neural Compute Stick, and FPGA—using a common API"

Let me know if you have a different opinion.