I want to convert an IR model (.xml and .bin) to TensorFlow (.pb).
Please give your suggestions.
Thanks in advance.
Hi Srikar,
Just to clarify, you are trying to convert your model to perform inference, correct? To deploy a network trained in any DL framework (Caffe, TensorFlow, etc.), the Inference Engine needs to use the IR-converted version of the original model. So you'd have to convert from .pb to IR to deploy your network for inference, and you'll need to use the Model Optimizer to do this. Here is a link to how to prepare and optimize your trained model.
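For reference, here is a minimal sketch of that workflow in Python (the file names, output directory, and mo.py location are placeholders that depend on your setup):

```python
import subprocess
from openvino.inference_engine import IECore

# 1) Convert a frozen TensorFlow graph to IR with the Model Optimizer.
#    "frozen_model.pb" and "ir_output" are placeholder names, and mo.py
#    is assumed to be on the current path (it ships with OpenVINO).
subprocess.run(
    ["python", "mo.py",
     "--input_model", "frozen_model.pb",
     "--output_dir", "ir_output"],
    check=True,
)

# 2) Load the resulting IR with the Inference Engine for inference.
ie = IECore()
net = ie.read_network(model="ir_output/frozen_model.xml",
                      weights="ir_output/frozen_model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
# exec_net.infer({...}) then runs inference on a prepared input blob.
```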
Please let me know if this information was helpful!
Best Regards,
Sahira
Hi Rizvi,
Thank you for the quick reply.
I understand that the Inference Engine needs to use the IR-converted version of the original model, so I have to convert .pb to IR to deploy the network and perform inference.
My question is: can I convert the IR back to the original model (.pb version)?
Regards,
Srikar
Hi Srikar,
I had the same problem, and I don't know whether you eventually figured it out or not, but there seems to be a script just for IR -> TensorFlow conversion. I haven't fully tested it yet, but I thought it might be of use to you.
https://github.com/PINTO0309/openvino2tensorflow
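A rough usage sketch (the flag names are taken from that repository's README and may differ between versions; "model.xml" is a placeholder for your IR file):

```python
import subprocess

# Invoke the openvino2tensorflow CLI (installed via pip).
# Flags below follow the repository README and may vary by version.
subprocess.run(
    ["openvino2tensorflow",
     "--model_path", "model.xml",   # your IR .xml (with .bin alongside)
     "--output_pb",                 # emit a frozen .pb
     "--output_saved_model"],       # also emit a TensorFlow SavedModel
    check=True,
)
```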
I hope it can still be helpful to you after such a long time.
Regards.
Hi Srikar,
The Model Optimizer was not designed to convert your models from IR to .pb. Is there a particular reason why you want to accomplish this?
Best Regards,
Sahira
Dear madarapu, srikar,
Rizvi, Sahira is correct. Model Optimizer was not designed to "reverse engineer". But there is no reason you can't figure it out yourself. Model Optimizer hides nothing: it's 100% Python code and 100% open source, so you can study the code and see how it goes from pb -> IR, and then determine how to do IR -> pb from that. But understand that the pb you get will not be the true original frozen pb with weights and biases in the right places. Because Model Optimizer merges nodes, discards things, and reduces layers (all in the name of optimization), the pb you get will never look like a bona fide original TensorFlow frozen pb. Moreover, the pb you get may not work within TensorFlow unless you re-add some of the things which Model Optimizer threw out.
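As a rough illustration, the merging and renaming is easy to see if you parse the IR .xml yourself (a minimal sketch; "model.xml" is a placeholder for your own IR file):

```python
import xml.etree.ElementTree as ET

# List the layer names and types recorded in an IR .xml.
# Many entries will be fused or renamed ops that no longer correspond
# one-to-one to the nodes of the original TensorFlow graph.
root = ET.parse("model.xml").getroot()
for layer in root.find("layers"):
    print(layer.get("name"), "->", layer.get("type"))
```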
Hope it helps,
Shubha
