Hi,
I would like to convert a TensorFlow model based on Inception V3 to an IR model, but I need to add an image input placeholder because the model's input pipeline reads TFRecords.
I use the code below and it runs (inception_v3.py is in the same directory):
import tensorflow as tf
import tensorflow.contrib.slim as slim
import tensorflow.contrib.slim.nets as nets
import inception_v3 as network

with tf.Graph().as_default():
    images_input = tf.placeholder(tf.float32, [None, 299, 299, 3], 'input')
    init_op_global = tf.global_variables_initializer()
    with slim.arg_scope(network.inception_v3_arg_scope()):
        net, endpoint = network.inception_v3(images_input, num_classes=26, is_training=False)
    saver = tf.train.Saver(tf.global_variables())
    with tf.Session() as sess:
        sess.run(init_op_global)
        saver = tf.train.import_meta_graph('model.ckpt-1399202.meta')
        saver.restore(sess, 'model.ckpt-1399202')
        saver.save(sess, '/my_path/my_model.ckpt')
        tf.train.write_graph(sess.graph, '.', 'graph.pbtxt')
However, the output of the model with the image input placeholder is significantly worse than what I expect.
How do I correctly add the placeholder to my model?
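Would it also make sense to first check that the variable names stored in the checkpoint match the InceptionV3 scopes of the graph I build from inception_v3.py? A quick, rough sketch of what I mean (using the same checkpoint prefix as in the code above):

import tensorflow as tf

# List the variables saved in the checkpoint so their names and shapes can be
# compared against the InceptionV3/... scopes of the freshly built graph.
for name, shape in tf.train.list_variables('model.ckpt-1399202'):
    print(name, shape)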
Thanks
Hi Steven,
When you say that the output of the model is below your expectations, what specifically are you referring to? The output of the model when running inference?
Kind Regards,
Monique Jones
Hi Monique,
Yes, the results are different when running inference.
I think something goes wrong when I add the placeholder to my model.
If I do not add the image placeholder and convert my model to an IR model directly, the Model Optimizer cannot find the input.
Below is the summarize_graph information for the two models.
Original frozen model:
No inputs spotted.
No variables spotted.
Found 1 possible outputs: (name=InceptionV3/Predictions/Reshape_1, op=Reshape)
Found 21838868 (21.84M) const parameters, 0 (0) variable parameters, and 0 control_edges
Op types used: 493 Const, 190 Identity, 95 Conv2D, 94 Relu, 94 FusedBatchNorm, 15 ConcatV2, 10 AvgPool, 4 MaxPool, 2 Add, 2 Mul, 2 Reshape, 1 Floor, 1 FIFOQueueV2, 1 QueueDequeueV2, 1 RandomUniform, 1 RealDiv, 1 BiasAdd, 1 Softmax, 1 Squeeze, 1 Sub
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/volume/inception_v3_frozen.pb --show_flops --input_layer= --input_layer_type= --input_layer_shape= --output_layer=InceptionV3/Predictions/Reshape_1
Frozen model with image placeholder:
Found 1 possible inputs: (name=input, type=float(1), shape=[?,299,299,3])
No variables spotted.
Found 1 possible outputs: (name=InceptionV3/Predictions/Reshape_1, op=Reshape)
Found 21873291 (21.87M) const parameters, 0 (0) variable parameters, and 0 control_edges
Op types used: 488 Const, 379 Identity, 95 Conv2D, 94 FusedBatchNorm, 94 Relu, 15 ConcatV2, 10 AvgPool, 4 MaxPool, 2 Reshape, 1 BiasAdd, 1 Placeholder, 1 Shape, 1 Softmax, 1 Squeeze
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/volume/theme_detection/theme_detection_frozen_2.pb --show_flops --input_layer=input --input_layer_type=float --input_layer_shape=-1,299,299,3 --output_layer=InceptionV3/Predictions/Reshape_1
How do I correctly add an image placeholder to my model, or convert my model to an IR model directly?
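Is something like the sketch below the right way to do it? It is only a rough, untested idea: build the slim graph with the image placeholder and restore the checkpoint variables by name into that graph, instead of importing the training meta-graph a second time (I am assuming the variable scopes in the checkpoint match the InceptionV3 scopes created by inception_v3.py):

import tensorflow as tf
import tensorflow.contrib.slim as slim
import inception_v3 as network

with tf.Graph().as_default():
    # Image placeholder that replaces the TFRecord/FIFOQueue input pipeline.
    images_input = tf.placeholder(tf.float32, [None, 299, 299, 3], name='input')
    with slim.arg_scope(network.inception_v3_arg_scope()):
        net, endpoints = network.inception_v3(images_input, num_classes=26, is_training=False)

    # Saver over the variables of this graph only; restore matches them to the
    # checkpoint entries by name, so no meta-graph import and no initializer run.
    saver = tf.train.Saver(slim.get_variables_to_restore())

    with tf.Session() as sess:
        saver.restore(sess, 'model.ckpt-1399202')
        saver.save(sess, '/my_path/my_model.ckpt')
        tf.train.write_graph(sess.graph, '.', 'graph.pbtxt')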
Thanks.
Best Regards,
Steven Hung
Hi Steven,
The Model Optimizer doesn't support -1 as a dimension in your placeholder's shape, so you have to supply the shape explicitly, for example:
python3 mo_tf.py --input_model <frozenmodel.pb> --input_shape [1,299,299,3] --input <input placeholder layer name>
For more information on the parameters that can be used to convert the model, you can refer to this page.
Kind Regards,
Monique Jones
