Are you trying to run your custom TensorFlow network on the NCS2? If so, you will need to convert your .pb file to IR format using the Model Optimizer included in the OpenVINO toolkit. Take a look at the getting started guide for the NCS 2.
I recommend looking at the sample applications to understand how to load the model to the neural compute stick.
I saw the documentation about the Model Optimizer. My question is different.
I would like to use the device placement feature of TensorFlow.
I'm using the C++ API, but it is similar in Python:
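The snippet that was meant to follow appears to be missing from the thread. A minimal sketch of what TensorFlow device placement looks like, shown in Python since the post notes the C++ API is similar (the device string and tensor values are illustrative, not from the original post):

```python
import tensorflow as tf

# Pin these ops to an explicit device (TF2 eager mode).
# The device string ("/CPU:0", "/GPU:0", ...) is illustrative.
with tf.device("/CPU:0"):
    a = tf.constant([[1.0, 2.0]])
    b = tf.constant([[3.0], [4.0]])
    c = tf.matmul(a, b)  # this op is placed on the requested device

print(c.device)  # shows the full device spec the op landed on
```

In the C++ API the equivalent idea is a `Scope` built with an explicit device, which is what the question is about.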
Dearest Friedman, Benny,
The specific device placement feature of TensorFlow is not supported by OpenVINO, and there is a reason for this: OpenVINO avoids the programming paradigm of hard-coding devices. It's actually very elegantly designed. The Model Optimizer is completely device- and framework-agnostic, and even the Inference Engine is abstracted away from hardware. All hardware functionality is encapsulated in "plugins", so the Inference Engine itself is not tied to any particular device. The very nature of this "device placement" feature goes against the grain of how OpenVINO was designed.
That said, within the Inference Engine you can of course perform "device placement" directly in code, similar to your TensorFlow example above. It's not TensorFlow's device placement, but it is OpenVINO's method of device placement. If you study the OpenVINO samples it will become clear how this can be done. The samples expect you to pass in the device via the "-d" gflags switch, but if you want to hard-code GPU, CPU, MYRIAD, or FPGA, that's fine - OpenVINO will support it.
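For concreteness, a sketch of that style of device selection using the Inference Engine Python API (the C++ Core API is analogous); the helper name and file paths are illustrative, not from the original thread:

```python
def load_on_device(model_xml, model_bin, device="MYRIAD"):
    """Load an IR model onto an explicitly chosen device.

    This is OpenVINO's analogue of device placement: the device
    string ("CPU", "GPU", "MYRIAD" for the NCS2, "FPGA", or a
    "HETERO:..." combination) is passed at load time instead of
    being baked into the model itself.
    """
    # Imported inside the function so the sketch stays self-contained
    # even where the OpenVINO runtime is not installed.
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model=model_xml, weights=model_bin)
    return ie.load_network(network=net, device_name=device)
```

The samples do exactly this, except the device string comes from the "-d" command-line switch rather than being hard-coded.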
Hope it helps,
Thank you for your answer.
I think I now understand.
As I wrote, I'm using the C++ TensorFlow API, so my plan is to first freeze and save the model to a .pb file, and then try to use OpenVINO to run it.
I'm planning to do everything in C++, w/o any Python or command-line steps.
Dearest Friedman, Benny,
Seems like a sound plan. By the way, my guess is that if you have TensorFlow device placement directives in your model, the Model Optimizer will simply ignore them. The Model Optimizer doesn't care about training-related artifacts within a model, and "TensorFlow device placement" is in fact a training-related thing. You won't see anything related to training in the Model Optimizer-generated IR. And remember, if you use a device-placement kind of thing within the Inference Engine (as I described above, the "-d" switch in the samples), it's not for training but for inference.
Good luck, and thanks for using OpenVINO!