NCS2 Graph Optimization Issues with Custom Faster RCNN Model

I'm trying to convert a custom frozen TensorFlow graph for use with the NCS2 using the OpenVINO toolkit. Details of the graph are shown here:

Found 1 possible inputs: (name=image_tensor, type=uint8(4), shape=[?,?,?,3]) 
No variables spotted.
Found 4 possible outputs: (name=detection_boxes, op=Identity) (name=detection_scores, op=Identity) (name=detection_classes, op=Identity) (name=num_detections, op=Identity) 
Found 12969734 (12.97M) const parameters, 0 (0) variable parameters, and 1128 control_edges
Op types used: 1881 Const, 406 Identity, 243 StridedSlice, 129 Mul, 127 GatherV2, 126 Sub, 102 Minimum, 102 Reshape, 81 Maximum, 77 Shape, 72 ConcatV2, 72 Conv2D, 69 FusedBatchNorm, 69 Relu, 62 Pack, 62 RealDiv, 60 Cast, 60 Split, 55 Enter, 48 Greater, 45 Switch, 42 Slice, 41 Add, 39 Where, 37 Range, 27 Unpack, 25 ExpandDims, 25 Squeeze, 24 Merge, 22 TensorArrayV3, 22 ZerosLike, 19 NonMaxSuppressionV2, 15 NextIteration, 14 Fill, 12 TensorArrayScatterV3, 12 TensorArrayReadV3, 12 Tile, 10 TensorArraySizeV3, 10 TensorArrayGatherV3, 10 TensorArrayWriteV3, 10 Exit, 7 AvgPool, 6 MaxPool, 6 Transpose, 6 Rank, 6 Assert, 5 Less, 5 LoopCond, 5 Equal, 5 BiasAdd, 4 Round, 4 Exp, 2 TopKV2, 2 GreaterEqual, 2 Size, 2 Softmax, 2 MatMul, 1 Mean, 1 All, 1 CropAndResize, 1 DepthwiseConv2dNative, 1 Sqrt, 1 ResizeBilinear, 1 Relu6, 1 LogicalAnd, 1 Placeholder, 1 Max
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/home/justin/Desktop/Previous_TF_Graph/symbols/frozen_inference_graph8.pb --show_flops --input_layer=image_tensor --input_layer_type=uint8 --input_layer_shape=-1,-1,-1,3 --output_layer=detection_boxes,detection_scores,detection_classes,num_detections
 

Then I run the following Python script from the deployment_tools/model_optimizer folder:

python mo_tf.py --input_model C:\Intel\SymbolsTensorflow\frozen_inference_graph_symbols.pb --input_shape [1,300,300,3] --input image_tensor --output detection_boxes,detection_scores,detection_classes,num_detections --log_level DEBUG
 

and I get the following error at the end of the debug log:

 

[ 2018-12-04 15:50:49,299 ] [ DEBUG ] [ infer:39 ]  input[1]: shape = [], value = 1
[ 2018-12-04 15:50:49,299 ] [ DEBUG ] [ infer:39 ]  input[0]: shape = [], value = 0
[ 2018-12-04 15:50:49,299 ] [ DEBUG ] [ infer:39 ]  input[2]: shape = [], value = 1
[ 2018-12-04 15:50:49,299 ] [ DEBUG ] [ infer:148 ]  Outputs:
[ 2018-12-04 15:50:49,299 ] [ DEBUG ] [ infer:39 ]  output[0]: shape = [1], value = [0]
[ 2018-12-04 15:50:49,299 ] [ DEBUG ] [ infer:133 ]  --------------------
[ 2018-12-04 15:50:49,299 ] [ DEBUG ] [ infer:134 ]  Partial infer for Preprocessor/map/TensorArrayStack_1/TensorArrayGatherV3
[ 2018-12-04 15:50:49,299 ] [ DEBUG ] [ infer:135 ]  Op: TensorArrayGatherV3
[ ERROR ]  Cannot infer shapes or values for node "Preprocessor/map/TensorArrayStack_1/TensorArrayGatherV3".
[ ERROR ]
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function TensorArrayGather.array_infer at 0x0000021FEB111BF8>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ 2018-12-04 15:50:49,299 ] [ DEBUG ] [ infer:198 ]  Node "Preprocessor/map/TensorArrayStack_1/TensorArrayGatherV3" attributes: {'pb': name: "Preprocessor/map/TensorArrayStack_1/TensorArrayGatherV3"
op: "TensorArrayGatherV3"
input: "Preprocessor/map/TensorArray_2"
input: "Preprocessor/map/TensorArrayStack_1/range"
input: "Preprocessor/map/while/Exit_2"
attr {
  key: "_class"
  value {
    list {
    }
  }
}
attr {
  key: "dtype"
  value {
    type: DT_INT32
  }
}
attr {
  key: "element_shape"
  value {
    shape {
      dim {
        size: 3
      }
    }
  }
}
, 'kind': 'op', 'name': 'Preprocessor/map/TensorArrayStack_1/TensorArrayGatherV3', 'op': 'TensorArrayGatherV3', 'precision': 'FP32', 'IE': [('layer', [('id', <function Op.substitute_ie_attrs.<locals>.<lambda> at 0x0000021FFC14D158>), 'name', 'precision', 'type'], [('data', [], []), '@ports', '@consts'])], 'dim_attrs': ['spatial_dims', 'channel_dims', 'axis', 'batch_dims'], 'shape_attrs': ['output_shape', 'shape', 'stride', 'window', 'pad'], 'is_output_reachable': True, 'is_undead': False, 'is_const_producer': False, 'type': 'TensorArrayGatherV3', 'infer': <function TensorArrayGather.array_infer at 0x0000021FEB111BF8>, 'element_shape': array([3], dtype=int64), 'is_partial_inferred': False}
[ ERROR ]  Stopped shape/value propagation at "Preprocessor/map/TensorArrayStack_1/TensorArrayGatherV3" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
[ 2018-12-04 15:50:49,299 ] [ DEBUG ] [ main:331 ]  Traceback (most recent call last):
  File "C:\Intel\computer_vision_sdk_2018.4.420\deployment_tools\model_optimizer\mo\middle\passes\infer.py", line 136, in partial_infer
    node.infer(node)
  File "C:\Intel\computer_vision_sdk_2018.4.420\deployment_tools\model_optimizer\extensions\ops\TensorArrayGather.py", line 47, in array_infer
    assert match_shapes(ta_node['element_shape'], node.element_shape)
AssertionError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Intel\computer_vision_sdk_2018.4.420\deployment_tools\model_optimizer\mo\main.py", line 325, in main
    return driver(argv)
  File "C:\Intel\computer_vision_sdk_2018.4.420\deployment_tools\model_optimizer\mo\main.py", line 267, in driver
    mean_scale_values=mean_scale)
  File "C:\Intel\computer_vision_sdk_2018.4.420\deployment_tools\model_optimizer\mo\pipeline\tf.py", line 230, in tf2nx
    partial_infer(graph)
  File "C:\Intel\computer_vision_sdk_2018.4.420\deployment_tools\model_optimizer\mo\middle\passes\infer.py", line 200, in partial_infer
    refer_to_faq_msg(38)) from err
mo.utils.error.Error: Stopped shape/value propagation at "Preprocessor/map/TensorArrayStack_1/TensorArrayGatherV3" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

 

Any help on this issue would be most appreciated.

2 Replies
Severine_H_Intel
Employee
Dear Justin,

For Faster RCNN, there is a special conversion pipeline to follow. You can find it in our documentation: C:/Intel/computer_vision_sdk_2018.4.420/deployment_tools/documentation/docs/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html
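For reference, the documented Object Detection API pipeline passes a sub-graph replacement config and the model's training pipeline.config to the Model Optimizer instead of forcing an input shape. The following is a sketch only: the JSON file name and the location of pipeline.config are assumptions and may differ for your TensorFlow version, so check the documentation linked above.

```shell
REM Sketch of the Object Detection API conversion pipeline (OpenVINO 2018 R4).
REM faster_rcnn_support.json and the pipeline.config path are assumptions --
REM verify both against the documentation for your model and TF version.
cd C:\Intel\computer_vision_sdk_2018.4.420\deployment_tools\model_optimizer

python mo_tf.py ^
  --input_model C:\Intel\SymbolsTensorflow\frozen_inference_graph_symbols.pb ^
  --tensorflow_use_custom_operations_config extensions\front\tf\faster_rcnn_support.json ^
  --tensorflow_object_detection_api_pipeline_config C:\Intel\SymbolsTensorflow\pipeline.config
```

With this pipeline the preprocessing sub-graph (the Preprocessor/map loop where your conversion failed) is replaced rather than inferred, which is why the plain --input_shape invocation does not work for these models.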

However, we do not yet support Faster RCNN on Movidius, only on CPU and GPU.

Best, 

Severine


Hi Severine,

Thanks for the info, it is much appreciated. A question on timing: do you have any sense of the timeline for supporting Faster RCNN on Movidius? We have to decide between using older tablets with the NCS2 or upgrading tablets fleet-wide sometime early next year.

Best regards,

Justin
