Hi Carmine,
The error mentions a node name; how did you get that name? I am trying to reproduce your issue, but I have no clue which module you are using or what your process is.
It looks like an issue at the input of the model: the node name was wrong inside the function partial_infer(graph). You can try adding the statement print("abc") at line 121 of the file ~/deployment_tools/model_optimizer/mo/middle/passes/infer.py. If "abc" is never printed, it means line 120 raised an exception because the node name was wrong.
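To illustrate this style of debugging, here is a minimal, hypothetical sketch; the node dictionaries and "infer" callables below are placeholders, not the actual Model Optimizer internals. It shows how wrapping the per-node shape-inference call makes the failing node's name visible instead of a bare traceback:

```python
# Hypothetical sketch of per-node shape inference with error reporting.
# The node structure and "infer" callables are illustrative placeholders,
# not the real Model Optimizer graph objects.
def partial_infer_debug(nodes):
    for node in nodes:
        try:
            node["infer"](node)  # the call that raises for a bad node
        except Exception as exc:
            # Report which node failed before re-raising, so the bad
            # node name shows up in the log.
            print("Shape inference failed at node '%s': %s" % (node["name"], exc))
            raise

def ok(node):
    print("inferred %s" % node["name"])

def bad(node):
    raise ValueError("unknown input shape")

nodes = [{"name": "input", "infer": ok},
         {"name": "conv1", "infer": bad}]
try:
    partial_infer_debug(nodes)
except ValueError:
    pass
```

The same idea applies to the print("abc") trick above: you are bisecting the code path to find which line, and therefore which node, raised the exception.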
There are several ways to freeze a model, so embedding the code in the Python module may not be a reliable method. If you are using the TF models from GitHub, you might try the following after "git clone":
- Under the directory ~/models/research/slim, use export_inference_graph.py to export the model
- Get the output node names from the graph summary with ~/deployment_tools/model_optimizer/mo/utils/summarize_graph.py
- Freeze the output of step 1 with /opt/intel/computer_vision_sdk_2018.2.319/deployment_tools/model_optimizer/venv/lib/python3.5/site-packages/tensorflow/python/tools/freeze_graph.py
- Convert the frozen model with mo_tf.py
git clone https://github.com/tensorflow/models.git
cd models/research/slim
python3 export_inference_graph.py --alsologtostderr --model_name=inception_v3 --output_file=<output directory>/inception_v3_inf_graph.pb
cd <output directory>
python3 ~/deployment_tools/model_optimizer/mo/utils/summarize_graph.py --input_model=inception_v3_inf_graph.pb
python3 /opt/intel/computer_vision_sdk_2018.2.319/deployment_tools/model_optimizer/venv/lib/python3.5/site-packages/tensorflow/python/tools/freeze_graph.py --input_graph inception_v3_inf_graph.pb --input_binary --input_checkpoint inception_v3.ckpt --output_node_names InceptionV3/Predictions/Reshape_1 --output_graph inception_v3_frozen.pb
python3 /opt/intel/computer_vision_sdk_2018.2.319/deployment_tools/model_optimizer/mo_tf.py --input_model=inception_v3_frozen.pb --input_shape [1,299,299,3]
Note: freeze_graph.py can only be found in OpenVINO R2.
Mark
Hi Carmine,
Unfortunately, this documentation is incorrect: this is the error you get when your model isn't frozen, and we will update the documentation accordingly. For now, the easiest way to proceed is to freeze your graph using your meta graph file (ckpt.meta) and your checkpoint file. You can do this by downloading the freeze_graph.py file from TensorFlow's GitHub and running the following command with it:
# INSTALL_DIR = /opt/intel/computer_vision_sdk/
python3 freeze_graph.py --input_meta_graph <model.ckpt.meta> \
    --input_binary \
    --input_checkpoint model.ckpt \
    --output_node_names outputnode \
    --output_graph model_frozen.pb
This produces model_frozen.pb, the frozen model, which you can then convert to IR using Model Optimizer.
Kind Regards,
Monique Jones
Hi Carmine,
I see that Severine helped you identify your issue with the power layer in Model Optimizer extensibility. Did you by chance also write the extensibility code for the power layer for Inference Engine? And which hardware device are you planning to deploy this layer on: CPU, GPU, FPGA, or Movidius? I ask because the issue may reside in the implementation of the power layer for Inference Engine.
Kind Regards,
Monique Jones
I am thinking that the way the power layer is implemented in your model may differ from the power layer implementation in OpenVINO; this could be as simple as different default parameter values. If that is the case, you could create a custom layer with the correct implementation, use it with Inference Engine at run time, and get the correct results.
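To see how differing defaults alone can change results, here is a small sketch. A power layer is commonly computed as y = (scale * x + shift) ^ power; the default values below are hypothetical and shown only for illustration, so check the actual defaults in your framework and in OpenVINO:

```python
# Hypothetical sketch: a power layer commonly computes
#   y = (scale * x + shift) ** power
# The defaults here are illustrative, not taken from any specific framework.
def power_layer(x, power=1.0, scale=1.0, shift=0.0):
    return (scale * x + shift) ** power

x = 3.0
# Same input and same "power", but a different assumed default for shift:
a = power_layer(x, power=2.0)             # (1*3 + 0)^2 = 9
b = power_layer(x, power=2.0, shift=1.0)  # (1*3 + 1)^2 = 16
print(a, b)
```

If your model's layer and OpenVINO's layer disagree like this, the custom-layer route lets you pin down the exact formula and defaults you need.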
Kind Regards,
Monique Jones