Hi,
I am unable to generate IR for the MobileNet v1 and v2 networks (TensorFlow) using the Model Optimizer.
I picked up the topologies listed at the following link:
https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow
On the same page I followed the instructions in the section "Loading Non-Frozen Models to the Model Optimizer".
Since the extracted model folder had .meta files (and no checkpoint file), I used the command:
mo_tf.py --input_meta_graph <INPUT_META_GRAPH>.meta
On running the command, I got the following errors:
Hi Anmol,
Regarding mobilenet_v1_1.0_224,
I reproduced your issue, and this is a bug that we will fix. In the meantime, you can convert the model by running:
mo_tf.py --input_model mobilenet_v1_1.0_224_frozen.pb --input_shape [1,224,224,3]
Regarding mobilenet_v2_1.0_224,
I reproduced your issue, and this is a bug as well: there is a problem interpreting the 'InfeedEnqueueTuple' node, which prevents the model from being frozen successfully. I also tried to convert the frozen .pb file and ran into an error where the input shapes cannot be inferred from the node "MobilenetV2/Conv/Conv2D". The workaround for this model is to download freeze_graph.py from here. Run the following command to freeze the .pbtxt file:
python3 freeze_graph.py --input_graph mobilenet_v2_1.0_224_eval.pbtxt --input_checkpoint mobilenet_v2_1.0_224.ckpt --binary=false --output_node_names=MobilenetV2/Predictions/Reshape_1 --output_graph mobilenet_v2_frozen.pb
Now you have your frozen MobileNet v2 model and can convert it by running:
python3 mo.py --input_model mobilenet_v2_frozen.pb --input_shape [1,224,224,3]
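Which Model Optimizer entry point to use depends on what the downloaded archive actually contains. As a rough sketch (the helper name and messages below are my own, not part of OpenVINO):

```shell
# Hypothetical helper (not part of OpenVINO): pick the Model Optimizer
# flow based on which files a downloaded model folder contains.
pick_mo_flow() {
  dir="$1"
  if ls "$dir"/*_frozen.pb >/dev/null 2>&1; then
    # a frozen .pb ships with the archive: convert it directly
    echo "frozen graph: mo.py --input_model"
  elif ls "$dir"/*.pbtxt >/dev/null 2>&1 && ls "$dir"/*.ckpt* >/dev/null 2>&1; then
    # text graph plus checkpoint: freeze first, then convert
    echo "freeze first: freeze_graph.py, then mo.py"
  elif ls "$dir"/*.meta >/dev/null 2>&1; then
    # only a saved meta graph: feed it to mo_tf.py directly
    echo "meta graph: mo_tf.py --input_meta_graph"
  else
    echo "no usable graph files"
  fi
}
```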
Kind Regards,
Monique Jones
Hey, I have a similar problem. I have an unfrozen model and I'm running the following command:
python3 mo_tf.py --input_meta_graph /Users/tarunkolla/Desktop/model/model.ckpt.meta
I am following the documentation for loading an unfrozen model: https://docs.openvinotoolkit.org/2019_R1/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#freeze-the-tensorflow-model
I get the following error:
[ ERROR ] The directory "/opt/intel/openvino_2019.1.090/deployment_tools/model_optimizer/." is not writable
Could you help? Thank you.
Dear Kolla, Tarun,
You either need to open the shell from which you run the Model Optimizer Python script as Administrator (or with sudo), or use the --output_dir switch to write the generated IR into a directory where you have write permissions.
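For example, a sketch of the --output_dir approach (the output path below is hypothetical; adjust to your setup):

```shell
# Create a directory you own for the generated IR and check it is
# writable before invoking the Model Optimizer.
IR_DIR="$HOME/openvino_ir"
mkdir -p "$IR_DIR"
[ -w "$IR_DIR" ] && echo "ok: $IR_DIR is writable"

# With --output_dir, no write access to the OpenVINO install tree is needed:
# python3 mo_tf.py --input_meta_graph model.ckpt.meta --output_dir "$IR_DIR"
```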
Thanks,
Shubha