For Nvidia it is pip install tensorflow-gpu, then run the Python code.
For AMD it is pip install tensorflow-rocm, then run the Python code.
https://pypi.org/project/tensorflow-rocm/
We need Intel to support pip install tensorflow-vpu.
Right now the VPU requires using OpenVINO to convert the model and then load the IR model.
Please make it easier for us to use TensorFlow.
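For reference, the current VPU path looks roughly like this (a minimal sketch; the file names are placeholders and the Python calls are from the OpenVINO Inference Engine API of recent releases):

# Step 1: convert the trained TensorFlow model to OpenVINO IR with the Model Optimizer
#   mo_tf.py --input_model frozen_model.pb --data_type FP16
# Step 2: load the generated IR on the VPU (MYRIAD device) and run inference
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")

input_name = next(iter(net.input_info))
dummy_input = np.zeros((1, 3, 224, 224), dtype=np.float32)  # example NCHW shape; adjust to the model
result = exec_net.infer({input_name: dummy_input})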
Please support the following (see the sketch below the list):
pip install tensorflow-vpu
pip install caffe-vpu
pip install pytorch-vpu
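Something like this is what we hope for (a purely hypothetical sketch; the tensorflow-vpu package and the VPU device string below do not exist today and are made up for illustration):

#   pip install tensorflow-vpu   # hypothetical package
import tensorflow as tf

# Existing TensorFlow code would run unchanged; the VPU would simply show up
# as another device, the same way a GPU does today.
with tf.device("/device:VPU:0"):  # made-up device name for illustration
    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    y = tf.matmul(x, x)
print(y)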
Dear stone, jesse,
This inquiry doesn't seem to have anything to do with OpenVINO. It appears to me that in the cases you mention, those vendors ship an optimized build of TensorFlow that runs on AMD ROCm or Nvidia GPUs. In fact, Intel offers the same: a build of TensorFlow optimized to execute much faster on Intel CPUs, and yes, you can pip install Intel's build as well.
It's right here:
https://pypi.org/project/intel-tensorflow/
Do we have one for the VPU? No, but that's because TensorFlow doesn't run directly on the VPU; OpenVINO does.
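As a quick sketch (the package name comes from the link above; existing scripts should not need any changes):

#   pip install intel-tensorflow   # drop-in replacement for the stock tensorflow package
import tensorflow as tf

# The API is the same as stock TensorFlow; the CPU kernels are simply built
# with Intel's MKL-DNN optimizations.
print(tf.__version__)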
Hope it helps,
Thanks,
Shubha
Hi Jesse,
I think what you ask for is completely valid. OpenVINO should target all the IPs that Intel owns, and if an IP is well suited to machine learning, it should support the frameworks indirectly by converting to OpenVINO.
In an ideal world, though, there would be no conversion into an intermediate OpenVINO representation at all. That could be achieved by adding every Intel IP as a backend to the frameworks themselves, which will not happen given the growing number of frameworks and Intel's limited development resources. It might become possible, however, if Intel exposed public libraries for each IP, so that any keen open-source developer could build those backends. An alternative approach that is feasible within Intel, and has value for Intel, is to use OpenVINO transparently as the intermediate representation for the frameworks, hidden away from the user.
So what you should expect is TensorFlow, Caffe, PyTorch and other frameworks running on Intel GPUs, VPUs and CPUs. I am not sure when this will happen, but I am looking forward to that day.
Kind regards,
Sriram