Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Error 'No op named NonMaxSuppressionV3' with mvNCCompile frozen_inference_graph.pb -o movidius.pb

idata
Employee

Following: https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10, I was able to both test and rebuild my own graph.

Now I am getting an error trying to compile the graph for the Movidius. Any help would be appreciated.

<pre><code>
root@ubuntuNuc:/opt/checkmail# uname -a
Linux ubuntuNuc 4.15.0-29-generic #31~16.04.1-Ubuntu SMP Wed Jul 18 08:54:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

root@ubuntuNuc:/opt/checkmail# ls -all
total 50920
drwxr-xr-x 2 root root     4096 Jul 30 09:55 .
drwxr-xr-x 4 root root     4096 Jul 30 09:54 ..
-rw-r--r-- 1 root root 52130456 Jul 30 09:55 frozen_inference_graph.pb

root@ubuntuNuc:/opt/checkmail# mvNCCompile frozen_inference_graph.pb -o movidius.pb
mvNCCompile v02.00, Copyright @ Movidius Ltd 2016
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py:766: DeprecationWarning: builtin type EagerTensor has no module attribute
  EagerTensor = c_api.TFE_Py_InitEagerTensor(_EagerTensorBase)
/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
  if d.decorator_argspec is not None), _inspect.getargspec(target))
Traceback (most recent call last):
  File "/usr/local/bin/mvNCCompile", line 118, in <module>
    create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
  File "/usr/local/bin/mvNCCompile", line 104, in create_graph
    net = parse_tensor(args, myriad_config)
  File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 211, in parse_tensor
    tf.import_graph_def(graph_def, name="")
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 285, in import_graph_def
    raise ValueError('No op named %s in defined operations.' % node.op)
ValueError: No op named NonMaxSuppressionV3 in defined operations.
</code></pre>
7 Replies
idata
Employee

I'm also experiencing the same issue. I believe it has something to do with the way we trained our models. The example you're following, like the one I followed (https://pythonprogramming.net/introduction-use-tensorflow-object-detection-api-tutorial/), doesn't take into consideration that we will be using the Movidius NCS. Movidius provides guidance on compiling a network for use on the NCS here: https://movidius.github.io/ncsdk/tf_compile_guidance.html.

I believe one of two things needs to be done. I've posted my thoughts, but I haven't gotten any feedback yet. I'm very new to all of this, so I may be way off base.

1) We have to train our own network; we can't use a pre-trained network that wasn't designed to run on the NCS. The input and output layers must be known (see the mvNCCompile sketch after this list), and there shouldn't be any unknown placeholders in the model description (I'm not sure what placeholders are for). That's the minimum requirement based on the guidelines for running a model on the NCS.

2) The second option is to use a pre-trained network that we know will run on the NCS. We have access to those in the NCAPPZOO; however, I can't find any .config files associated with those models, and based on the tutorials I've seen, we need a .config file to train a new network.
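
For concreteness, the compile guidance linked above mostly comes down to naming the input and output nodes explicitly when compiling. A hedged sketch of what that looks like (the node names <code>input</code> and <code>output</code> are assumptions; they depend on how the graph was exported):
<pre><code>
# Hypothetical node names -- check your own graph for the real ones.
mvNCCompile frozen_inference_graph.pb -in input -on output -s 12 -o movidius.graph
</code></pre>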

Hopefully, someone can give some insight and either tell me I'm right on the money or barking up the wrong tree. In either case, some direction would be much appreciated.

idata
Employee

@ksaye Thanks for reporting this. For your network, the NCSDK doesn't have support for TensorFlow's non-max suppression op, so that's probably the reason you're getting that error. The NCSDK doesn't yet support all TensorFlow operations.
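
A quick way to confirm which op types a frozen graph actually contains (and so spot ops the NCSDK parser doesn't know about) is to walk the GraphDef. A minimal sketch, assuming TensorFlow 1.x and the frozen_inference_graph.pb from this thread:
<pre><code>
import tensorflow as tf

# Read the frozen graph and list its distinct op types; anything the
# NCSDK parser doesn't support (e.g. NonMaxSuppressionV3) shows up here.
graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

for op_type in sorted({node.op for node in graph_def.node}):
    print(op_type)
</code></pre>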

@mascenzi Placeholders are a way of inserting external data into the model at a later time. Since the NCS is an inference-only device, it makes sense to have a defined placeholder for the input data.
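
For example, a minimal placeholder sketch (the 300x300 shape just mirrors the SSD input discussed elsewhere in this thread; it is an assumption, not something your graph necessarily uses):
<pre><code>
import numpy as np
import tensorflow as tf

# A placeholder is a graph input whose value is supplied at run time
# via feed_dict; on the NCS it marks where the input image goes.
x = tf.placeholder(tf.float32, shape=[1, 300, 300, 3], name='input')
y = x * 2.0

with tf.Session() as sess:
    out = sess.run(y, feed_dict={x: np.zeros((1, 300, 300, 3), np.float32)})
    print(out.shape)  # (1, 300, 300, 3)
</code></pre>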

idata
Employee

Thanks. Are there any examples of building our own object detection model using TensorFlow and running inference on the Movidius?

idata
Employee

@Tome_at_Intel That's an excellent idea, @ksaye, as it doesn't seem to be very straightforward to generate a graph that will work with the NCS. Is there any guidance available on the steps needed to train a new model from a pretrained network?

Michael

idata
Employee
@ksaye @Tome_at_Intel There are two main problems with running the TensorFlow Object Detection models on the NCS:

1. They produce multiple outputs, while the NCS supports only a single output. (Solution --> use <code>tf.concat</code> to combine the "bounding boxes" and "class scores" into a single output.)

2. The postprocessing steps (non-max suppression, etc.) are part of the frozen graph. (Solution --> remove the postprocessing steps from the graph and do them later, outside the NCS.)

So, in order to make the TF detection models work with the NCS, we need to edit the code that exports the frozen_inference_graph.pb (see https://github.com/tensorflow/models/tree/master/research/object_detection/exporter.py). I successfully exported a graph in a form the NCSDK should be able to compile, but one problem prevented the compilation: the <code>CONCAT</code> operation is not supported correctly in the NCSDK. I tried both versions of the NCSDK, and tried the '-ec' option in the new SDK v2.005, but it is supported in Caffe only. I also tried to manually use the explicit option for the CONCAT op in <code>TensorFlowParser.py</code> in the old SDK v1.12 (with some other edits); the model compiled successfully, but I got NaN outputs from <code>mvNCCheck</code>.

So, I think the only thing preventing support for the TensorFlow detection models is CONCAT op support in the NCSDK.

Ahmed
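
For reference, since the postprocessing now has to happen on the host after the NCS returns its single output, here is a minimal sketch of a greedy non-max suppression in plain NumPy (nothing NCSDK-specific; the [y1, x1, y2, x2] box layout and the IoU threshold are assumptions):
<pre><code>
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS over decoded [y1, x1, y2, x2] boxes.

    Returns the indices of the boxes to keep."""
    y1, x1, y2, x2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (y2 - y1) * (x2 - x1)
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the kept box with the remaining boxes.
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        inter = np.maximum(0.0, yy2 - yy1) * np.maximum(0.0, xx2 - xx1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop boxes that overlap the kept box too much.
        order = order[np.nonzero(iou <= iou_threshold)[0] + 1]
    return keep
</code></pre>
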
idata
Employee

@ahmed.ezzat great progress!

I'd love any code you can share, as I'm pretty new to object detection and the modifications you've made. I'd love to use the Movidius, but honestly, without object detection in TensorFlow it seems pretty limited.

@Tome_at_Intel any suggestions on the last step that @ahmed.ezzat is struggling with?

idata
Employee
Thanks @ksaye. I made the following modifications to the <code>exporter.py</code>:

1. Add this function:

<pre><code>
def _build_custom_detection_graph(detection_model, input_shape,
                                  output_collection_name):
  """Build the detection graph."""
  # Create the input placeholder.
  input_tensor = tf.placeholder(dtype=tf.float32, shape=input_shape,
                                name='input')
  out_dict = detection_model.predict(input_tensor, None)
  # Combine boxes and class scores into a single output for the NCS.
  out = tf.concat([out_dict['box_encodings'],
                   out_dict['class_predictions_with_background']],
                  axis=2, name='output')
  # Collect the output tensor.
  outputs = {}
  key = 'output'
  outputs[key] = out
  for output_key in outputs:
    tf.add_to_collection(output_collection_name, outputs[output_key])
  return outputs, input_tensor
</code></pre>

2. Replace the function <code>_export_inference_graph</code> with this:

<pre><code>
def _export_inference_graph(input_type,
                            detection_model,
                            use_moving_averages,
                            trained_checkpoint_prefix,
                            output_directory,
                            additional_output_tensor_names=None,
                            input_shape=None,
                            output_collection_name='inference_op',
                            graph_hook_fn=None,
                            write_inference_graph=False):
  """Export helper."""
  tf.gfile.MakeDirs(output_directory)
  if input_type == 'input':
    # Custom export.
    print('Custom Export')
    frozen_graph_path = os.path.join(output_directory, 'model_graph.pb')
    saved_model_path = os.path.join(output_directory, 'model')
    model_path = os.path.join(output_directory, 'chkpt')
    outputs, placeholder_tensor = _build_custom_detection_graph(
        detection_model, input_shape, output_collection_name)
  else:
    frozen_graph_path = os.path.join(output_directory,
                                     'frozen_inference_graph.pb')
    saved_model_path = os.path.join(output_directory, 'saved_model')
    model_path = os.path.join(output_directory, 'model.ckpt')
    outputs, placeholder_tensor = _build_detection_graph(
        input_type=input_type,
        detection_model=detection_model,
        input_shape=input_shape,
        output_collection_name=output_collection_name,
        graph_hook_fn=graph_hook_fn)
  profile_inference_graph(tf.get_default_graph())
  saver_kwargs = {}
  if use_moving_averages:
    # This check is to be compatible with both versions of SaverDef.
    if os.path.isfile(trained_checkpoint_prefix):
      saver_kwargs['write_version'] = saver_pb2.SaverDef.V1
      temp_checkpoint_prefix = tempfile.NamedTemporaryFile().name
    else:
      temp_checkpoint_prefix = tempfile.mkdtemp()
    replace_variable_values_with_moving_averages(
        tf.get_default_graph(), trained_checkpoint_prefix,
        temp_checkpoint_prefix)
    checkpoint_to_use = temp_checkpoint_prefix
  else:
    print('not using mov_avg')
    checkpoint_to_use = trained_checkpoint_prefix
  saver = tf.train.Saver(**saver_kwargs)
  input_saver_def = saver.as_saver_def()
  write_graph_and_checkpoint(
      inference_graph_def=tf.get_default_graph().as_graph_def(),
      model_path=model_path,
      input_saver_def=input_saver_def,
      trained_checkpoint_prefix=checkpoint_to_use)
  if write_inference_graph:
    inference_graph_def = tf.get_default_graph().as_graph_def()
    inference_graph_path = os.path.join(output_directory,
                                        'inference_graph.pbtxt')
    for node in inference_graph_def.node:
      node.device = ''
    with gfile.GFile(inference_graph_path, 'wb') as f:
      f.write(str(inference_graph_def))
  if additional_output_tensor_names is not None:
    # list() so this also works under Python 3.
    output_node_names = ','.join(
        list(outputs.keys()) + additional_output_tensor_names)
  else:
    print('No Additional Tensors')
    output_node_names = ','.join(outputs.keys())
  frozen_graph_def = freeze_graph.freeze_graph_with_def_protos(
      input_graph_def=tf.get_default_graph().as_graph_def(),
      input_saver_def=input_saver_def,
      input_checkpoint=checkpoint_to_use,
      output_node_names=output_node_names,
      restore_op_name='save/restore_all',
      filename_tensor_name='save/Const:0',
      output_graph=frozen_graph_path,
      clear_devices=True,
      initializer_nodes='')
  write_saved_model(saved_model_path, frozen_graph_def, placeholder_tensor,
                    outputs)
</code></pre>

3. I was focusing on <code>ssd_mobilenet_v1</code>, so I modified only the <code>extract_features</code> function in <code>ssd_mobilenet_v1_feature_extractor.py</code> to take an additional argument <code>is_training=False</code>, passing <code>is_training=is_training</code> in the call to <code>mobilenet_v1.mobilenet_v1_arg_scope</code>. Maybe the same is needed in other feature extractors.

4. Download and extract the pretrained ssd_mobilenet_v1 from the detection_model_zoo.

5. Install object_detection as a Python package by following the instructions at https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md

6. Call export_inference_graph.py as follows:

<pre><code>
#!/bin/bash
MODEL_DIR=path/to/ssd_mobilenet_v1_coco_2017_11_17
CONFIG_FILE=path/to/object_detection/samples/configs/ssd_mobilenet_v1_coco.config
INPUT_TENSOR=input
OUT_DIR=ssd_mobile_net

python3 export_inference_graph.py --input_type $INPUT_TENSOR \
    --pipeline_config_path $CONFIG_FILE \
    --trained_checkpoint_prefix $MODEL_DIR/model.ckpt \
    --output_directory $OUT_DIR \
    --input_shape 1,300,300,3
</code></pre>

7. Compile on the NCSDK; you should get an error in the concat layer.

8. You may need to enable debug logging in the NCSDK by setting <code>debug=True</code> in <code>parse_tensor</code> in <code>/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py</code>.
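
To complete the picture, here is a sketch of running the resulting single-output graph with the NCSDK v1 Python API and splitting the concatenated output back into boxes and class scores. The anchor count (1917) and class count (91, background included) match ssd_mobilenet_v1 on COCO, but treat them, and the compiled graph filename, as assumptions:
<pre><code>
from mvnc import mvncapi as mvnc
import numpy as np

# Open the first attached NCS device.
devices = mvnc.EnumerateDevices()
device = mvnc.Device(devices[0])
device.OpenDevice()

# Load the graph compiled by mvNCCompile (filename is an assumption).
with open('movidius.graph', 'rb') as f:
    graph = device.AllocateGraph(f.read())

# Dummy 300x300 RGB input; real code would feed a preprocessed frame.
image = np.zeros((300, 300, 3), dtype=np.float16)
graph.LoadTensor(image, 'user object')
output, _ = graph.GetResult()

# The exporter concatenated [box_encodings | class_predictions] on axis 2,
# so split the flat result back out (1917 anchors, 4 + 91 values each).
# Note: box_encodings still need decoding against the SSD anchors before NMS.
out = output.reshape(1917, 4 + 91)
boxes, scores = out[:, :4], out[:, 4:]

graph.DeallocateGraph()
device.CloseDevice()
</code></pre>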