Hi, Sir:
I tried to convert Inception_V3 to IR, but I got the error log shown below: it says it can't find the output node name. Could you help check this? I also tested Inception_V3 with the Movidius NCCompile command, and it converts successfully. Can you also explain how the IR workflow handles the TensorFlow stage? Are all TensorFlow operations supported? Thanks.
SDK version: computer_vision_sdk_2018.0.234
command list: sudo python3 ../model_optimizer/mo.py --framework tf --input_model "/opt/intel/computer_vision_sdk_2018.0.234/deployment_tools/demo/Inception_V3/inception_v3.ckpt" --input=input --output=InceptionV3/Predictions/Reshape_1 --output_dir ir_inceptionV3 --data_type FP32 --model_name InceptionV3
model path: http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz
BR,
Joseph
Hi Joseph,
Please check this link, which explains how to convert TensorFlow models.
https://software.intel.com/en-us/articles/CVSDK-Using-TensorFlow
It looks like you are trying to convert an "unfrozen" TensorFlow model.
MO supports only "frozen" TensorFlow models.
A frozen model file has the extension ".pb".
Here, in short, are the steps for freezing an "unfrozen" model for MO input.
Please check the link I pasted for detailed information.
- 1. Download the repository, including the models.
- 2. Export the inference graph for the model.
  - the exported file's extension will be ".pb"
- 3. Download the archive with the checkpoint file - this is what you downloaded.
- 4. To find the model's output node name, use the "summarize_graph" utility.
- 5. To freeze the graph, use the "freeze_graph.py" script.
  - you will use the exported graph file (".pb") and the checkpoint file (".ckpt") to freeze the graph
  - the output file's extension will be ".pb" as well
Now you will be able to convert the frozen TensorFlow model with MO.
ex) python3 mo_tf.py --input_model <INPUT_MODEL>.pb
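Put together, the whole flow (steps 1-5 plus the MO conversion) might look like the shell sketch below. The script locations, file names, and the output node name are assumptions based on the TF-Slim Inception V3 model; adjust them to your setup.

```shell
# 2. Export the inference graph (export_inference_graph.py lives in the
#    tensorflow/models "slim" directory -- assumed location)
python3 export_inference_graph.py \
    --model_name inception_v3 \
    --output_file inception_v3_inf_graph.pb

# 4. Find the output node name (summarize_graph built with bazel
#    from the TensorFlow repository)
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
    --in_graph=inception_v3_inf_graph.pb

# 5. Freeze: merge the exported graph with the downloaded checkpoint
python3 freeze_graph.py \
    --input_graph inception_v3_inf_graph.pb \
    --input_checkpoint inception_v3.ckpt \
    --input_binary true \
    --output_node_names InceptionV3/Predictions/Reshape_1 \
    --output_graph inception_v3_frozen.pb

# Convert the frozen graph with MO
python3 mo_tf.py --input_model inception_v3_frozen.pb
```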
Regards,
Peter.
Hi, SENUGHYUK:
Thanks for your help. I followed your instructions and everything seems to work fine, but we have two questions that need your help.
Q1: The official model we downloaded already contains a frozen graph, named classify_image_graph_def.pb. Can you explain the difference between the graph output by the bazel-bin freeze_graph tool and classify_image_graph_def.pb?
model link: http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz
command list: sudo python3 ../model_optimizer/mo_tf.py --input_model /home/joseph/workspace/intel_cv/20180507_object/model/classify_image_graph_def.pb --input input --output classification --output_dir ir_inceptionV3_pb --data_type FP32 --model_name InceptionV3Pb
Q2: If we don't have a ckpt file and a graph_def (.pb), but only have a .pb file, how can we convert this .pb file into a frozen graph suitable for the mo.py script?
BR,
Joseph
Hi Joseph,
I may not have fully understood your question, but here is my answer.
Q1:
- classify_image_graph_def.pb is a frozen graph that has all the weight and bias values in it, so you have no option to update these values to adjust performance.
- freeze_graph.py: as you know, it is a Python script, and with it you can apply any weight and bias data (checkpoint file) from various trainings of the same topology. So if you have checkpoint files, you can create multiple frozen graphs with freeze_graph.py.
Q2:
- The checkpoint file contains all the weight, bias, and name information for an unfrozen graph. You should freeze the graph for MO in any case.
- If you don't have a checkpoint file, there is no way to freeze the graph.
- please check this (from https://www.tensorflow.org/extend/tool_developers/):
"Freezing: One confusing part about this is that the weights usually aren't stored inside the file format during training. Instead, they're held in separate checkpoint files, and there are Variable ops in the graph that load the latest values when they're initialized. It's often not very convenient to have separate files when you're deploying to production, so there's the freeze_graph.py script that takes a graph definition and a set of checkpoints and freezes them together into a single file. What this does is load the GraphDef, pull in the values for all the variables from the latest checkpoint file, and then replace each Variable op with a Const that has the numerical data for the weights stored in its attributes. It then strips away all the extraneous nodes that aren't used for forward inference, and saves out the resulting GraphDef into an output file."
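A quick practical check related to Q2: if you only have a .pb file, you can inspect it with the same summarize_graph utility; a graph that is already frozen should report no Variable ops, and can then be fed to MO directly. This is only a sketch, assuming summarize_graph has been built from the TensorFlow repository:

```shell
# A frozen graph reports no variables (look for "No variables spotted."
# in the output -- exact wording may vary by TensorFlow version)
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
    --in_graph=classify_image_graph_def.pb

# If it is already frozen, convert it with MO directly
python3 mo_tf.py --input_model classify_image_graph_def.pb
```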
Regards,
Peter.
Hi, SENUGHYUK:
Thanks for your reply. I fixed the freeze-graph problem (it was caused by an unsupported operation), so I successfully generated a frozen graph named Resnet_test_104407_pbtxt_and_ckpt.pb (file attached). This model's output comes from my own FaceNet training. I tried using mo.py again to generate the IR, but I got the error message shown below. Do you have any advice on how to fix it?
running mo command: sudo python3 ../model_optimizer/mo_tf.py --input_model "/home/joseph/tensorflowGPU/tensorflow/Resnet_test_104407.pb" --input input --input_shape [1,160,160,3] --output embeddings --output_dir ir_facenet --data_type FP32 --model_name Facenet
model arch link: https://github.com/davidsandberg/facenet/blob/master/src/models/inception_resnet_v1.py
The source training process batches inputs across multiple preprocessing threads; the relevant call is:
image_batch, label_batch = tf.train.batch_join(
images_and_labels, batch_size=batch_size,
capacity=4 * nrof_preprocess_threads * batch_size,
allow_smaller_final_batch=True)
[ ERROR ] Cannot convert type of placeholder "batch_size" because not all of its outputs are "Cast" to float operations: ['batch_join']. For more information please refer to Model Optimizer FAQ, question #49.
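For context, MO FAQ #49 covers placeholders (here "batch_size") whose consumers are not simple float Cast ops. One possible direction, sketched below and not verified on this model: later Model Optimizer releases provide a --freeze_placeholder_with_value option to pin such a placeholder to a constant at conversion time; whether the 2018.0.234 SDK already supports it is an assumption.

```shell
# Hypothetical workaround: freeze the "batch_size" placeholder to 1 so
# MO no longer treats it as a network input (flag availability in this
# SDK version is an assumption)
sudo python3 ../model_optimizer/mo_tf.py \
    --input_model "/home/joseph/tensorflowGPU/tensorflow/Resnet_test_104407.pb" \
    --input input --input_shape [1,160,160,3] \
    --output embeddings \
    --freeze_placeholder_with_value "batch_size->1"
```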
BR,
Joseph
Hi Joseph,
Sorry for the late answer.
It may take some time to look into this, and I may need help from the dev team.
Please expect some delay, thanks.
Regards,
Peter.