Can you use device placement with TensorFlow on the NCS2?
I mean, I would like to run my neural network and choose the NCS2 as the default device.
Thanks,
benny
Hi Benny,
Are you trying to run your custom TensorFlow network on the NCS2? If so, you will need to convert your .pb file to IR format using the Model Optimizer included in the OpenVINO toolkit. Take a look at the getting-started guide for the NCS2:
https://software.intel.com/en-us/articles/get-started-with-neural-compute-stick
I recommend looking at the sample applications to understand how to load a model onto the Neural Compute Stick.
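For reference, converting a frozen TensorFlow graph is typically a single offline command along the lines of `python3 mo_tf.py --input_model model.pb --data_type FP16`, where mo_tf.py ships in the Model Optimizer directory of the OpenVINO install, FP16 is the precision the Myriad plugin expects, and model.pb is a placeholder for your own file.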
Sincerely,
Sahira
Sahira,
I saw the documentation about the Model Optimizer, but my question is different.
I would like to use the device placement feature of TensorFlow.
I'm using the C++ API, but it is similar in Python:
with tf.device('/GPU:0'):
...
See https://www.tensorflow.org/guide/using_gpu
benny
Dearest Friedman, Benny,
TensorFlow's device placement feature is not supported by OpenVINO, and there is a reason for this: OpenVINO deliberately avoids the paradigm of hardcoding devices into the model. It's actually very elegantly designed. The Model Optimizer is completely device- and framework-agnostic, and even the Inference Engine is abstracted away from the hardware: all hardware-specific functionality is encapsulated in plugins, while the Inference Engine itself is not tied to any device. So the very nature of this device placement feature goes against the grain of how OpenVINO was designed.
That said, within the Inference Engine you can of course perform device placement directly in code, much like your TensorFlow example above. It's not TensorFlow's device placement, but it is OpenVINO's method of device placement. If you study the OpenVINO samples, it will become clear how this is done: the samples expect you to pass the device via the "-d" gflags switch, but if you want to hardcode CPU, GPU, MYRIAD, or FPGA instead, that's fine too - OpenVINO will support it (see the sketch below).
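To make that concrete, here is a minimal, untested sketch of OpenVINO-style device placement with the Inference Engine C++ API. The Core::ReadNetwork call is from newer OpenVINO releases (older ones use CNNNetReader), and model.xml / model.bin are placeholder names for the IR generated by the Model Optimizer:

#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    // Read the IR produced by the Model Optimizer.
    auto network = core.ReadNetwork("model.xml", "model.bin");
    // OpenVINO's "device placement": the target is just a string,
    // so "MYRIAD" (the NCS2) can be swapped for "CPU", "GPU", or
    // "FPGA" without touching the model itself.
    auto executable = core.LoadNetwork(network, "MYRIAD");
    auto request = executable.CreateInferRequest();
    // ... fill the input blobs here, then run inference ...
    request.Infer();
    return 0;
}

Because the device is only named at LoadNetwork time, the same IR file runs unchanged on any plugin.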
Hope it helps,
Thanks,
Shubha
Shubha,
Thank you for your answer.
I think I now understand.
As I wrote, I'm using the TensorFlow C++ API, so my plan is to first freeze and save the model to a .pb file, and then I'll try to use OpenVINO to run it.
I'm planning to do everything in C++, without any Python or command line.
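Roughly what I have in mind for the export step (an untested sketch with a toy graph; a real trained model would also need its variables frozen into constants before the Model Optimizer can consume it):

#include "tensorflow/cc/framework/scope.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/platform/env.h"

int main() {
    // Build a toy graph; the real model is constructed elsewhere.
    tensorflow::Scope root = tensorflow::Scope::NewRootScope();
    auto x = tensorflow::ops::Placeholder(root.WithOpName("input"),
                                          tensorflow::DT_FLOAT);
    auto y = tensorflow::ops::Relu(root.WithOpName("output"), x);

    // Serialize the GraphDef into the .pb file the Model Optimizer expects.
    tensorflow::GraphDef graph_def;
    TF_CHECK_OK(root.ToGraphDef(&graph_def));
    TF_CHECK_OK(tensorflow::WriteBinaryProto(
        tensorflow::Env::Default(), "model.pb", graph_def));
    return 0;
}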
benny
Dearest Friedman, Benny,
Seems like a sound plan. By the way, my guess is that if you have TensorFlow device placement directives in your model, the Model Optimizer will simply ignore them. The Model Optimizer does not care about training-related artifacts within a model, and TensorFlow device placement is in fact a training-related thing; you won't see anything related to training in the IR the Model Optimizer generates. And remember, if you use a device-placement kind of thing within the Inference Engine (as I described above, the "-d" switch in the samples), it is for inference, not training.
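For example, pointing one of the bundled samples at the NCS2 looks like `./classification_sample -m model.xml -d MYRIAD`, where the sample and model names are just placeholders; only the -d value changes when you move between devices.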
Good luck and thanks for using OpenVino !
Shubha
Shubha,
Device placement is not my goal. My goal is to run my model on the NCS2.
I hope I can also train the model using OpenVINO, and not only run inference on it.
benny
