idata
Community Manager
647 Views

[Error 5] Toolkit Error: Stage Details Not Supported: VarHandleOp

I am prototyping a model in tf.keras, following the link below:

 

https://www.dlology.com/blog/how-to-run-keras-model-on-movidius-neural-compute-stick/

 

I converted my Keras model to a TensorFlow checkpoint, but when I tried to compile the TensorFlow model to a Movidius graph, it gave me the error below:

 

[Error 5] Toolkit Error: Stage Details Not Supported: VarHandleOp

 

My TensorFlow version is 1.9.0.

 

my model:

 

#

model = models.Sequential()
model.add(layers.Flatten(input_shape=(28, 28)))
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

#
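As a sanity check on the architecture above, the dense-layer parameter counts can be computed by hand (plain Python, no TensorFlow needed; this just mirrors what model.summary() would report):

```python
# Parameter count for the Dense stack above: weights (n_in * n_out) plus biases (n_out).
def dense_params(n_in, n_out):
    return n_in * n_out + n_out

flat = 28 * 28  # Flatten(input_shape=(28, 28)) -> 784 features
total = dense_params(flat, 128) + dense_params(128, 128) + dense_params(128, 10)
print(total)  # -> 118282
```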

 

Using this command to compile:

 

#

mvNCCompile tf_model.meta -in=flatten_1_input -on=dense_5/Softmax

#

 

error:

 

#

mvNCCompile v02.00, Copyright @ Intel Corporation 2017
****** Info: No Weights provided. inferred path: tf_model.data-00000-of-00001 ******
shape: [1, 28, 28]
[Error 5] Toolkit Error: Stage Details Not Supported: VarHandleOp

#

 

I have tried other model structures, but I ended up facing the same error.

 

Thank you.

5 Replies
idata
Community Manager

I am facing the same issue. I took Inception V3 and did transfer learning on it by adding a final dense layer. I used tf.saved_model.simple_save to export the model.

 

Following is the error I get:

 

pi@raspberrypi:~$ mvNCCompile -s 12 _retrain_checkpoint.meta -in=Placeholder -on=final_result -o first_graph.graph
/usr/lib/python3/dist-packages/scipy/_lib/_numpy_compat.py:10: DeprecationWarning: Importing from numpy.testing.nosetester is deprecated since 1.15.0, import from numpy.testing instead.
  from numpy.testing.nosetester import import_nose
/usr/lib/python3/dist-packages/scipy/stats/morestats.py:16: DeprecationWarning: Importing from numpy.testing.decorators is deprecated since numpy 1.15.0, import from numpy.testing instead.
  from numpy.testing.decorators import setastest
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: compiletime version 3.4 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.5
  return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: builtins.type size changed, may indicate binary incompatibility. Expected 432, got 412
  return f(*args, **kwds)
/usr/local/bin/ncsdk/Controllers/Parsers/TensorFlowParser/Convolution.py:46: SyntaxWarning: assertion is always true, perhaps remove parentheses?
  assert(False, "Layer type not supported by Convolution: " + obj.type)
/usr/local/bin/ncsdk/Controllers/Parsers/Phases.py:322: SyntaxWarning: assertion is always true, perhaps remove parentheses?
  assert(len(pred) == 1, "Slice not supported to have >1 predecessors")
mvNCCompile v02.00, Copyright @ Intel Corporation 2017
****** Info: No Weights provided. inferred path: _retrain_checkpoint.data-00000-of-00001 ******
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
shape: [1, 299, 299, 3]
[Error 5] Toolkit Error: Stage Details Not Supported: VarHandleOp

 

I have confirmed the names of the input and output layers via sess.graph.get_operations().

 

Could it be because of the variables defined in the network?
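For what it's worth, VarHandleOp is the op behind TensorFlow's resource variables, so one common workaround is to freeze the checkpoint into a constants-only graph before compiling. A sketch only, assuming TF 1.x's freeze_graph tool and the file/node names used in this thread:

```shell
# Sketch (assumes TF 1.x): fold variables into constants so the compiler
# never sees VarHandleOp; file and node names match the ones in this thread.
python3 -m tensorflow.python.tools.freeze_graph \
    --input_meta_graph=_retrain_checkpoint.meta \
    --input_checkpoint=_retrain_checkpoint \
    --input_binary=true \
    --output_node_names=final_result \
    --output_graph=frozen_model.pb
```

mvNCCompile could then be pointed at frozen_model.pb instead of the .meta file.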

idata
Community Manager

Hi @ddvoviyum

 

The "Stage Details Not Supported: VarHandleOp" error is raised when you're using operations or layers that aren't yet supported. These errors can get tricky sometimes. Can you share your model and exact steps? I will try to reproduce the issue and get back to you with the results.
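To illustrate what the parser is doing when it raises this error, here is a minimal sketch of an unsupported-op check. The SUPPORTED set and the op list below are purely illustrative, not the NCSDK's actual whitelist:

```python
# Sketch: flag op types a toolkit cannot compile.
# SUPPORTED is a hypothetical subset, NOT the real NCSDK whitelist.
SUPPORTED = {"Const", "Identity", "Placeholder", "MatMul", "Relu", "Softmax"}

def unsupported_ops(op_types):
    """Return the sorted op types missing from SUPPORTED."""
    return sorted(set(op_types) - SUPPORTED)

# Op types as a resource-variable graph might report them:
ops = ["Placeholder", "VarHandleOp", "ReadVariableOp", "MatMul", "Relu", "Softmax"]
print(unsupported_ops(ops))  # -> ['ReadVariableOp', 'VarHandleOp']
```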

 

Best Regards,

 

Sahira
idata
Community Manager

Thanks for the response @Sahira_at_Intel

 

Here are the steps to reproduce:

 

Base script used:

 

https://github.com/tensorflow/hub/raw/master/examples/image_retraining/retrain.py

 

Changes made:

 

     

1. Changed module to https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1
2. Used a custom dataset of 3 classes instead of the default flower example
3. Extracted the final result files (checkpoints, .meta, etc.)
4. Ran mvNCCompile -s 12 _retrain_checkpoint.meta -in=Placeholder -on=final_result

idata
Community Manager

Hi @ddvoviyum

 

Try enabling the debug option in the TensorFlow parser - this could provide more information about the error and which layer/operation is throwing it. You will need to set the debug flag to True in the following file (as sudo):

 

/usr/local/bin/ncsdk/Controllers/Parsers/TensorFlow.py

 

Let me know what you find.

 

Sincerely,

 

Sahira
idata
Community Manager

@Sahira_at_Intel

 

Since my last message, I have tried some steps of my own. Here is what I have done:

 

Renamed the input layer to img_input by modifying https://github.com/tensorflow/hub/blob/master/examples/image_retraining/retrain.py#L309 to:

resized_input_tensor = tf.placeholder(tf.float32, [None, height, width, 3], name="img_input")

 

To get information about the final model generated, I used TF's summarize_graph, which gives the following output:

 

Found 1 possible inputs: (name=img_input, type=float(1), shape=[?,299,299,3])
No variables spotted.
Found 1 possible outputs: (name=final_result, op=Softmax)
Found 21824117 (21.82M) const parameters, 0 (0) variable parameters, and 0 control_edges
Op types used: 490 Const, 378 Identity, 94 Conv2D, 94 FusedBatchNorm, 94 Relu, 15 ConcatV2, 9 AvgPool, 4 MaxPool, 1 Add, 1 Mean, 1 Mul, 1 Placeholder, 1 PlaceholderWithDefault, 1 MatMul, 1 Softmax, 1 Squeeze, 1 Sub
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=model.pb --show_flops --input_layer=img_input --input_layer_type=float --input_layer_shape=-1,299,299,3 --output_layer=final_result
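As an aside, the "Op types used" line above is easy to turn into a dict for quick comparisons between graph versions (a throwaway sketch; the helper name is mine, and the input string is a truncated copy of the output above):

```python
# Sketch: turn summarize_graph's "Op types used" line into {op: count}.
def parse_op_counts(line):
    counts = {}
    for entry in line.split(","):
        n, op = entry.split()
        counts[op] = int(n)
    return counts

line = "490 Const, 378 Identity, 94 Conv2D, 94 FusedBatchNorm, 94 Relu, 1 Softmax"
print(parse_op_counts(line))
# -> {'Const': 490, 'Identity': 378, 'Conv2D': 94, 'FusedBatchNorm': 94, 'Relu': 94, 'Softmax': 1}
```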

 

Upon running mvNCCompile model.pb -in=img_input -on=final_result -o firstgraph, following is the output:

 

/usr/lib/python3/dist-packages/scipy/_lib/_numpy_compat.py:10: DeprecationWarning: Importing from numpy.testing.nosetester is deprecated since 1.15.0, import from numpy.testing instead.
  from numpy.testing.nosetester import import_nose
/usr/lib/python3/dist-packages/scipy/stats/morestats.py:16: DeprecationWarning: Importing from numpy.testing.decorators is deprecated since numpy 1.15.0, import from numpy.testing instead.
  from numpy.testing.decorators import setastest
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: compiletime version 3.4 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.5
  return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: builtins.type size changed, may indicate binary incompatibility. Expected 432, got 412
  return f(*args, **kwds)
/usr/local/bin/ncsdk/Controllers/Parsers/TensorFlowParser/Convolution.py:46: SyntaxWarning: assertion is always true, perhaps remove parentheses?
  assert(False, "Layer type not supported by Convolution: " + obj.type)
/usr/local/bin/ncsdk/Controllers/Parsers/Phases.py:322: SyntaxWarning: assertion is always true, perhaps remove parentheses?
  assert(len(pred) == 1, "Slice not supported to have >1 predecessors")
mvNCCompile v02.00, Copyright @ Intel Corporation 2017
shape: [1, 299, 299, 3]
[Error 5] Toolkit Error: Stage Details Not Supported: Top Not Found module_apply_default/hub_input/Sub

 

Next, upon reading some of the threads here in the forum, I thought of giving TF's transform_graph a try:

 

bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
  --in_graph=model.pb \
  --out_graph=optimized_model.pb \
  --inputs='img_input' \
  --outputs='final_result' \
  --transforms='
    strip_unused_nodes(type=float, shape="1,299,299,3")
    remove_nodes(op=Identity, op=CheckNumerics)
    fold_old_batch_norms'

 

Then I ran the compilation on this optimized model, which gave the following output:

 

/usr/lib/python3/dist-packages/scipy/_lib/_numpy_compat.py:10: DeprecationWarning: Importing from numpy.testing.nosetester is deprecated since 1.15.0, import from numpy.testing instead.
  from numpy.testing.nosetester import import_nose
/usr/lib/python3/dist-packages/scipy/stats/morestats.py:16: DeprecationWarning: Importing from numpy.testing.decorators is deprecated since numpy 1.15.0, import from numpy.testing instead.
  from numpy.testing.decorators import setastest
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: compiletime version 3.4 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.5
  return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: builtins.type size changed, may indicate binary incompatibility. Expected 432, got 412
  return f(*args, **kwds)
/usr/local/bin/ncsdk/Controllers/Parsers/TensorFlowParser/Convolution.py:46: SyntaxWarning: assertion is always true, perhaps remove parentheses?
  assert(False, "Layer type not supported by Convolution: " + obj.type)
/usr/local/bin/ncsdk/Controllers/Parsers/Phases.py:322: SyntaxWarning: assertion is always true, perhaps remove parentheses?
  assert(len(pred) == 1, "Slice not supported to have >1 predecessors")
mvNCCompile v02.00, Copyright @ Intel Corporation 2017
shape: [1, 299, 299, 3]
[Error 5] Toolkit Error: Stage Details Not Supported: Top Not Found module_apply_default/hub_input/Sub

 

Another interesting observation: if I run mvNCCompile model.pb -in=input/BottleneckInputPlaceholder -on=final_result -o second.graph, I get the following:

 

/usr/lib/python3/dist-packages/scipy/_lib/_numpy_compat.py:10: DeprecationWarning: Importing from numpy.testing.nosetester is deprecated since 1.15.0, import from numpy.testing instead.
  from numpy.testing.nosetester import import_nose
/usr/lib/python3/dist-packages/scipy/stats/morestats.py:16: DeprecationWarning: Importing from numpy.testing.decorators is deprecated since numpy 1.15.0, import from numpy.testing instead.
  from numpy.testing.decorators import setastest
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: compiletime version 3.4 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.5
  return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: builtins.type size changed, may indicate binary incompatibility. Expected 432, got 412
  return f(*args, **kwds)
/usr/local/bin/ncsdk/Controllers/Parsers/TensorFlowParser/Convolution.py:46: SyntaxWarning: assertion is always true, perhaps remove parentheses?
  assert(False, "Layer type not supported by Convolution: " + obj.type)
/usr/local/bin/ncsdk/Controllers/Parsers/Phases.py:322: SyntaxWarning: assertion is always true, perhaps remove parentheses?
  assert(len(pred) == 1, "Slice not supported to have >1 predecessors")
mvNCCompile v02.00, Copyright @ Intel Corporation 2017
shape: [1, 2048]
res.shape: (1, 2)
TensorFlow output shape: (1, 1, 2)
/usr/local/bin/ncsdk/Controllers/FileIO.py:65: UserWarning: You are using a large type. Consider reducing your data sizes for best performance
Blob generated

 

A success! However, this input layer has a different shape, as you can see. The same compilation also succeeds for optimized_model.pb.

 

I suspect that I really need to strip the training steps out of the graph produced by retrain.py, but I am not sure.

 

All these results are with debug = True in TF parser, as you suggested.

 

Model: https://drive.google.com/file/d/1wHRUEOiBjx4dHBeikbMjI7A2SxEjpv60/view?usp=sharing

 

Thanks!
