Just to clarify, you are trying to convert your model to perform inference, correct? To deploy a network trained in any DL framework (Caffe, TensorFlow, etc.), the Inference Engine needs the IR-converted version of the original model. So you'd have to convert from .pb to IR to deploy your network for inference. You'll need to use the Model Optimizer to do this. Here is a link to how to prepare and optimize your trained model.
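For reference, the .pb-to-IR conversion is a single Model Optimizer call. A minimal sketch, assuming a frozen TensorFlow graph and an input of shape [1,224,224,3] (the file names and shape here are placeholders for your own model):

```shell
# Run the TensorFlow-specific Model Optimizer entry point on a frozen .pb.
# frozen_model.pb, ./ir, and the input shape are examples, not real files.
python mo_tf.py \
    --input_model frozen_model.pb \
    --input_shape [1,224,224,3] \
    --output_dir ./ir
```

This produces a pair of files in the output directory: an .xml describing the topology and a .bin holding the weights, which together make up the IR.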
Please let me know if this information was helpful!
Thank you for the quick reply.
I understand that the Inference Engine will need to use the IR-converted version of the original model, so I have to convert the .pb to IR to deploy the network for inference.
My question is: can I convert the IR back to the original model (the .pb version)?
I had the same problem, and I don't know if you have figured it out by now or not. But there seems to be a script just for IR -> TensorFlow conversion. I haven't fully tested it yet, but thought it might be of use to you.
I hope it can still be helpful to you after such a long time.
Dear madarapu, srikar,
Rizvi, Sahira is correct. Model Optimizer was not designed to "reverse engineer". But there is no reason you can't figure it out yourself. Model Optimizer hides nothing: it's 100% Python code and it's 100% open source. So you can study the code and see how it goes from pb -> IR, and then work out how to do IR -> pb from the same code. But understand that the pb you get will not be the true original frozen pb, with weights and biases in the right place. Because Model Optimizer merges nodes, discards stuff, and reduces layers (all in the name of optimization), the pb you produce will never look like a bona fide original TensorFlow frozen pb. Moreover, the pb you get may not work within TensorFlow unless you re-add some of the stuff which Model Optimizer threw out.
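One practical starting point: the IR's .xml topology file is plain XML, so you can inspect what Model Optimizer actually kept before attempting any reverse conversion. A minimal sketch (the inline XML below is a hypothetical, simplified IR snippet; real IR files carry more attributes plus an edges section):

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified IR topology for illustration only;
# a real Model Optimizer .xml has per-layer params and an <edges> block.
IR_XML = """<net name="demo" version="5">
  <layers>
    <layer id="0" name="input" type="Input" precision="FP32"/>
    <layer id="1" name="conv1" type="Convolution" precision="FP32"/>
    <layer id="2" name="relu1" type="ReLU" precision="FP32"/>
  </layers>
</net>"""

def list_layers(xml_text):
    """Return (name, type) pairs for every layer in an IR topology file."""
    root = ET.fromstring(xml_text)
    return [(layer.get("name"), layer.get("type"))
            for layer in root.iter("layer")]

for name, kind in list_layers(IR_XML):
    print(f"{name}: {kind}")
```

Comparing this layer list against the node names in your original frozen graph will show you exactly which nodes Model Optimizer merged or dropped, which is the information you'd need to re-add anything for an IR -> pb attempt.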
Hope it helps,