Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

NCS2 and TensorFlow

Friedman__Benny
Beginner

Can you use device placement with TensorFlow on the NCS2?

I mean that I would like to run my neural network with the NCS2 chosen as the default device.

Thanks,

benny

Sahira_Intel
Moderator

Hi Benny,

Are you trying to run your custom TensorFlow network on the NCS2? If so, you will need to convert your .pb file to IR format using the Model Optimizer included in the OpenVINO toolkit. Take a look at the Getting Started guide for the NCS 2:

https://software.intel.com/en-us/articles/get-started-with-neural-compute-stick
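For a frozen TensorFlow graph, the conversion is typically a single Model Optimizer command; as a rough example (the file names here are placeholders, and FP16 is the precision the NCS2 runs):

python3 mo_tf.py --input_model frozen_model.pb --data_type FP16 --output_dir ir/

This produces the .xml/.bin IR pair that the Inference Engine loads.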

I recommend looking at the sample applications to understand how to load the model to the neural compute stick.

Sincerely,

Sahira

Friedman__Benny
Beginner

Sahira,

I saw the documentation about the Model Optimizer. My question is different.

I would like to use the device placement feature of TensorFlow.

I'm using the C++ API, but the idea is the same in Python:

with tf.device('/GPU:0'):
    ...

See https://www.tensorflow.org/guide/using_gpu

benny
Shubha_R_Intel
Employee

Dearest Benny,

The specific device placement feature of TensorFlow is not supported by OpenVINO, and there is a reason for this: OpenVINO avoids the programming paradigm of "hardcoding" devices. It's actually very elegantly designed. The Model Optimizer is completely device- and framework-agnostic, and even the Inference Engine is abstracted away from the hardware. All hardware-specific functionality is encapsulated in "plugins"; the Inference Engine itself is not tied to any device. So the very nature of this "device placement" feature goes against the grain of how OpenVINO was designed.

That said, within the Inference Engine you can of course perform "device placement" directly in code, similarly to your TensorFlow example above. It's not TensorFlow's device placement, but it is OpenVINO's equivalent. If you study the OpenVINO samples it will become clear how this is done: the samples expect you to pass the device in via the "-d" gflags switch, but if you want to hardcode GPU, CPU, MYRIAD, or FPGA, that's fine too; OpenVINO will support it.
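For example, here is a minimal sketch of selecting the NCS2 at load time with the Inference Engine C++ API (the file names are placeholders, and the exact class names depend on your OpenVINO version; this follows a recent release):

#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core ie;
    // Read the IR produced by the Model Optimizer (paths are placeholders)
    InferenceEngine::CNNNetwork network = ie.ReadNetwork("model.xml", "model.bin");
    // "MYRIAD" is the plugin name for the NCS/NCS2; "CPU", "GPU", "FPGA" work the same way
    InferenceEngine::ExecutableNetwork executable = ie.LoadNetwork(network, "MYRIAD");
    InferenceEngine::InferRequest request = executable.CreateInferRequest();
    // ... fill the input blobs, then run inference:
    request.Infer();
    return 0;
}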

Hope it helps,

Thanks,

Shubha

Friedman__Benny
Beginner

Shubha,

Thank you for your answer.

I think I now understand.

As I wrote, I'm using the TensorFlow C++ API, so my plan is to first freeze and save the model to a .pb file, and then try to use OpenVINO to run it.

I'm planning to do everything in C++, without any Python or command-line tools.
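Roughly something like this for the freezing step, if I use the FreezeSavedModel helper from the TensorFlow C++ tools (an untested sketch; the directory and file names are placeholders):

#include <string>
#include <unordered_set>
#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/tools/freeze_saved_model.h"
#include "tensorflow/core/platform/env.h"

int main() {
    // Load the trained SavedModel (directory name is a placeholder)
    tensorflow::SavedModelBundle bundle;
    TF_CHECK_OK(tensorflow::LoadSavedModel(tensorflow::SessionOptions(),
                                           tensorflow::RunOptions(),
                                           "saved_model_dir", {"serve"}, &bundle));
    // Fold the variables into constants and collect the graph I/O names
    tensorflow::GraphDef frozen_graph;
    std::unordered_set<std::string> inputs, outputs;
    TF_CHECK_OK(tensorflow::FreezeSavedModel(bundle, &frozen_graph, &inputs, &outputs));
    // Write the frozen graph to a .pb file for the Model Optimizer
    TF_CHECK_OK(tensorflow::WriteBinaryProto(tensorflow::Env::Default(),
                                             "frozen_model.pb", frozen_graph));
    return 0;
}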

benny

Shubha_R_Intel
Employee

Dearest Benny,

Seems like a sound plan. By the way, my guess is that if you have TensorFlow device placement directives in your model, the Model Optimizer will simply ignore them. The Model Optimizer doesn't care about training-related artifacts within a model, and TensorFlow device placement is in fact a training-related thing; you won't see anything related to training in the Model Optimizer generated IR. And remember, if you use a "device placement kind of thing" within the Inference Engine (as I described above, the "-d" switch in the samples), it's not for training but for inference.
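For instance, running one of the prebuilt samples against the NCS2 looks roughly like this (the sample and file names here are placeholders):

./classification_sample -m frozen_model.xml -i image.bmp -d MYRIAD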

Good luck, and thanks for using OpenVINO!

Shubha
Friedman__Benny
Beginner

Shubha,

Device placement is not my goal. My goal is to run my model on an accelerator, the NCS2.

I hope I can also train the model using OpenVINO, not only run inference with it.

benny
