The IA-32 architecture host installation is no longer supported.
The product cannot be installed on this system.
Please refer to product documentation for more information.
Quitting!
I also assume the Python API is just a wrapper that interfaces with these existing Linux .so files, in which case it won't help either.
I would like to know if it is possible to recompile the libs / example code to run on an ARM chip, or if this is actually not possible for some technical reason, such as a reliance on something like AVX in the core code? A quick way to check the shipped libraries is sketched below.
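As a quick check of that assumption, something like the sketch below (my own addition, not Intel tooling) reads the ELF header of a shipped library and reports which architecture it was compiled for; the filename is a placeholder:

```python
# A minimal sketch: read the ELF header of a shipped OpenVINO library
# to see which CPU architecture it targets.
# The filename below is a placeholder -- point it at your actual install.
import struct

EM_NAMES = {3: "x86", 40: "ARM", 62: "x86-64", 183: "AArch64"}

def elf_machine(path):
    """Return the e_machine field from an ELF file header."""
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] != b"\x7fELF":
        raise ValueError(f"{path} is not an ELF file")
    # e_machine is a uint16 at byte offset 18 (little-endian ELF assumed)
    return struct.unpack_from("<H", header, 18)[0]

arch = elf_machine("libinference_engine.so")  # placeholder path
print(EM_NAMES.get(arch, f"unknown ({arch})"))
```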
Does Intel plan to support devices like the ASUS Tinker Board or Raspberry Pi for Edge Inference with their Myriad hardware?
Or should I be looking elsewhere for non-Intel Edge Inference, such as with MACE:
https://github.com/XiaoMi/mace
Hi Madhava,
In theory this is possible, but I don't think you can do it in this environment.
OpenVINO is built for Intel platforms and optimized for Intel hardware, so the Inference Engine can only run on Intel platforms. The NCS Myriad works as a plugin to OpenVINO, so if the OpenVINO Inference Engine can't run, you can't run inference on the NCS Myriad.
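To illustrate the relationship, here is a minimal sketch assuming the 2018-era OpenVINO Python API; the IR filenames are placeholders:

```python
# A minimal sketch, assuming the 2018-era OpenVINO Python API
# (openvino.inference_engine); model.xml/model.bin are placeholder
# IR files produced by the Model Optimizer.
from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model="model.xml", weights="model.bin")

# "MYRIAD" selects the NCS plugin, but IEPlugin itself is part of the
# Inference Engine binaries, which only ship for Intel platforms.
plugin = IEPlugin(device="MYRIAD")
exec_net = plugin.load(network=net)
```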
Does this answer your question?
Mark
Hi Mark,
I ask because @Tome_at_Intel in the NCS forums says that OpenVINO isn't exclusively for Intel CPUs.
I guess he is wrong: as you have pointed out, OpenVINO IS exclusively for Intel CPUs, which I guess means it's not really "OPEN".
Or is he correct, and there is a way we can compile and run OpenVINO on non-Intel hardware?
@madhavajay Thank you for the feedback. We wholeheartedly understand your point of view regarding TensorFlow SSD MobileNet. With regards to training SSD MobileNet on Caffe, have you tried using https://github.com/listenlink/caffe/tree/ssd for training? This branch supports cuDNN v9, which has support for training acceleration for depthwise (grouped) convolutions. This could help speed up your training on Caffe.
Additionally, I want to add that OpenVINO isn't exclusive to Intel CPUs and currently can be used with other devices/hardware (including NCS devices). Please see https://software.intel.com/en-us/openvino-toolkit for more information.
https://ncsforum.movidius.com/discussion/comment/3219/#Comment_3219
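For context on the depthwise point in that quote, here is an illustrative sketch using pycaffe's NetSpec; the layer names and shapes are my own assumptions:

```python
# An illustrative sketch of the layer type that quote is about: in
# Caffe, a depthwise convolution is a Convolution layer whose group
# count equals its channel count. Names and shapes are assumptions.
import caffe
from caffe import layers as L

n = caffe.NetSpec()
n.data = L.Input(shape=dict(dim=[1, 32, 112, 112]))
# group == num_output == input channels -> depthwise (grouped)
# convolution, the case the cuDNN-enabled SSD branch is said to accelerate
n.dw_conv = L.Convolution(n.data, num_output=32, group=32,
                          kernel_size=3, pad=1, stride=1, bias_term=False)
print(n.to_proto())
```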
Hi Madhava,
I read his comment, and I think he should have said "...currently can be used with other Intel devices/hardware".
You can see this clearly on the page he pointed to:
"Supports heterogeneous execution across computer vision accelerators—CPU, GPU, Intel® Movidius™ Neural Compute Stick, and FPGA—using a common API"
Let me know if you have a different opinion.
Mark