Traceback (most recent call last):
  File "/usr/local/bin/mvNCCompile", line 118, in <module>
    create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
  File "/usr/local/bin/mvNCCompile", line 104, in create_graph
    net = parse_tensor(args, myriad_config, debug=True)
  File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 1007, in parse_tensor
    output_size = [input_shape[0], 1, 1, outputs[3]]
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/tensor_shape.py", line 521, in __getitem__
    return self._dims[key]
IndexError: list index out of range
This occurs when I pass either my TF meta file or my pb file, like so:
mvNCCompile model.pb -w incept3_ft_weights.h5 -s 12 -in inception_v3_input -on dense_25/Softmax -o graph
What am I doing wrong?
Tags: Keras, TensorFlow
I also see the following error on some other models:
[Error 5] Toolkit Error: Stage Details Not Supported: FusedBatchNorm inputs mean and variance are not defined. The graph is not created for inference.
This was, of course, from converting a Keras model to a TF protobuf and then trying to compile it. I'm not sure that's a valid workflow, but I thought it should theoretically just work.
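In case it helps with debugging on your end: a quick way I know of to check whether the FusedBatchNorm nodes in a frozen pb actually have their mean/variance inputs wired up is to walk the GraphDef and print each node's inputs. Just a sketch (assumes TF 1.x and a frozen graph called model.pb):

import tensorflow as tf

graph_def = tf.GraphDef()
with open("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op.startswith("FusedBatchNorm"):
        # A graph frozen for inference should list five inputs here:
        # x, scale, offset, mean, variance (the last two backed by Const nodes).
        print(node.name, "->", list(node.input))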
@pisymbol Can you provide your model files for testing (pb, meta)?
Uploading now… I really need to get this to work!
Those three links are two PBs and a meta. All of them are translated Keras -> TF models.
Note that to get to the 'FusedBatchNorm' issue I had to patch the NCSDK like so:
--- TensorFlowParser.py	2018-01-29 11:37:22.912714875 +0000
+++ TensorFlowParser.py.patch	2018-01-29 11:36:51.172559957 +0000
@@ -287,7 +287,7 @@
             for a in node.outputs:
                 print(" OUT:", a.name)
         if not inputfound:
-            if have_first_input(strip_tensor_id(node.outputs[0].name)):
+            if node.outputs and have_first_input(strip_tensor_id(node.outputs[0].name)):
                 inputfound = True
                 if debug:
                     print('Starting to process')
Note: here is some code I've used to translate Keras to TF (taken from a StackOverflow post, I believe):
import os
import tensorflow as tf
from tensorflow.python.framework.graph_util import convert_variables_to_constants
from keras import backend as K
from keras.models import load_model

def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = convert_variables_to_constants(session, input_graph_def,
                                                      output_names, freeze_var_names)
        return frozen_graph

model = load_model(os.getcwd() + os.path.sep + 'my_model.h5')
frozen_graph = freeze_session(K.get_session(), output_names=[model.output.op.name])
tf.train.write_graph(frozen_graph, os.getcwd(), "my_model.pb", as_text=False)
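Once that write_graph call succeeds, the weights are baked into my_model.pb as constants, so as far as I understand the NCSDK's TensorFlow path the compile step shouldn't need the Keras .h5 at all. Something along these lines (the input/output node names are just the ones from my command above; yours may differ):

mvNCCompile my_model.pb -s 12 -in inception_v3_input -on dense_25/Softmax -o graph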
@Tome_at_Intel Were you able to download these models? I'd like to understand what is going on here.
@pisymbol FusedBatchNorm requires the input to have the mean and variance defined. You can double-check the inputs to your FusedBatchNorm layer. You can see the exception at line 1539 of TensorFlowParser.py in /opt/movidius/NCSDK/ncsdk-x86_64/tk/Controllers:
if len(mean) == 0 or len(var) == 0:
    throw_error(ErrorTable.StageDetailsNotSupported, "FusedbatchNorm inputs mean and variance are not defined. The graph is not created for inference.")
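One thing you can also try (just a sketch, I haven't verified it against your particular model) is forcing Keras into inference mode before the model is loaded and frozen, so the batch norm layers are exported with their moving mean/variance as constants instead of training-time inputs:

from keras import backend as K
from keras.models import load_model

# Set inference mode *before* the model is built/loaded so that batch norm
# layers export their moving mean/variance rather than training placeholders.
K.set_learning_phase(0)

model = load_model('my_model.h5')
# ...then run the freeze_session() step from the earlier post as before.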
@Tome_at_Intel: I understand the exception; what I don't understand is what exactly I'm doing wrong in converting my Keras model to TensorFlow. I don't set FusedBatchNorm explicitly. Also, you didn't address the patch I posted above to the TensorFlowParser, which assumes node.outputs never has zero length.
@Tome_at_Intel: Actually, they can be empty, and should be during training. See here: https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/fused-batch-norm
@pisymbol I understand. Per the Guidance for Compiling TensorFlow Doc @ https://movidius.github.io/ncsdk/tf_compile_guidance.html, you have to remove all training features and prepare a version of the network that is going to be used for inference only. Can you try removing any training features from the network and see if you can compile the network?
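If it helps, TF also ships a small helper that strips common training-only nodes from a frozen GraphDef. A rough sketch of how it could be added to the freeze script you posted (untested against your model):

from tensorflow.python.framework import graph_util

# frozen_graph is the GraphDef returned by freeze_session() in your script
inference_graph = graph_util.remove_training_nodes(frozen_graph)
tf.train.write_graph(inference_graph, os.getcwd(), "my_model_inference.pb", as_text=False)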
@Tome_at_Intel I had to upgrade to TF 1.5 to get rid of the training features.
But now I see:
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py:858: DeprecationWarning: builtin type EagerTensor has no __module__ attribute
EagerTensor = c_api.TFE_Py_InitEagerTensor(_EagerTensorBase)
/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
if d.decorator_argspec is not None), _inspect.getargspec(target))
/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
if d.decorator_argspec is not None), _inspect.getargspec(target))
/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
if d.decorator_argspec is not None), _inspect.getargspec(target))
Traceback (most recent call last):
  File "/usr/local/bin/mvNCCompile", line 118, in <module>
    create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
  File "/usr/local/bin/mvNCCompile", line 104, in create_graph
    net = parse_tensor(args, myriad_config)
  File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 1007, in parse_tensor
    output_size = [input_shape[0], 1, 1, outputs[3]]
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/tensor_shape.py", line 521, in __getitem__
    return self._dims[key]
IndexError: list index out of range
Updated model file: https://drive.google.com/file/d/1A5hLi6Pa1rd6TtQ6PnikqkUmr0VwJvgj/view?usp=sharing
I'm using the TF Graph Transform Tool to fold batch norms etc.:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/graph_transforms/README.md
I actually think that, in your TF guidelines, a better idea would be to point people at this tool, which is supposed to remove training features and strip the graph of non-essential nodes for inference.
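For reference, the kind of invocation that README describes looks roughly like this; the input/output names below are just the ones from my model, and the transform list is the inference-cleanup set suggested there:

bazel run tensorflow/tools/graph_transforms:transform_graph -- \
  --in_graph=my_model.pb \
  --out_graph=my_model_inference.pb \
  --inputs='inception_v3_input' \
  --outputs='dense_25/Softmax' \
  --transforms='
    strip_unused_nodes
    remove_nodes(op=Identity, op=CheckNumerics)
    fold_constants(ignore_errors=true)
    fold_batch_norms
    fold_old_batch_norms'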
@Tome_at_Intel: Is there anything you can suggest to make this work?
@pisymbol You can try to flatten or reshape your output to a single 1D vector.
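On the Keras side that might look something like the following before converting (sketch only; base_model and num_classes are placeholders for your own network):

from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

# Collapse the 4D convolutional output to a 1D vector per image before the
# classifier, so the graph's final output is already a flat N x classes tensor.
x = base_model.output                      # e.g. InceptionV3 with include_top=False
x = GlobalAveragePooling2D()(x)            # -> shape (None, channels)
predictions = Dense(num_classes, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)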
@Tome_at_Intel Is this a bug in your parser? (Also, you didn't comment on the patch above?)
@pisymbol Since it is outside of the normal workflow, it's hard to say what it could be. We don't have support for Keras at the moment, but we've made a note that some of our Intel NCS users are using Keras and are interested in using these networks on the NCS.
Regarding your issue, I haven't used this exact method to convert a network for the NCS myself (Keras model to TF via script, then stripping out the training layers via the TF Graph Transform Tool, then trying to compile for the NCS), so I can't recommend a well-informed next plan of action.
@Tome_at_Intel : :'(
@pisymbol did you get this to work? I am trying the exact same and see the same error as you.
model.fit({'input': X}, {'targets': Y}, n_epoch=5,
          validation_set=({'input': test_x}, {'targets': test_y}),
          snapshot_step=500, show_metric=True, run_id=MODEL_NAME)
model.save(MODEL_NAME)
Run id: LeafDetection-0.001-6conv-basic.model
Log directory: log/
INFO:tensorflow:Summary name Accuracy/ (raw) is illegal; using Accuracy/__raw_ instead.
Exception in thread Thread-9:
Traceback (most recent call last):
  File "C:\Users\NIAZI\Anaconda3\lib\threading.py", line 916, in _bootstrap_inner
    self.run()
  File "C:\Users\NIAZI\Anaconda3\lib\threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\NIAZI\Anaconda3\lib\site-packages\tflearn\data_flow.py", line 201, in fill_batch_ids_queue
    ids = self.next_batch_ids()
  File "C:\Users\NIAZI\Anaconda3\lib\site-packages\tflearn\data_flow.py", line 215, in next_batch_ids
    batch_start, batch_end = self.batches[self.batch_index]
IndexError: list index out of range
I am facing this error; kindly help me.
@Nomanniazi Can you provide a link to your model for issue reproduction?