jambhale__ranjit
Beginner
225 Views

Openvino + ICNet + FPGA Arria 10 GX

I am using the Arria® 10 GX FPGA Development Kit as an accelerator. I have completed the setup per the instructions at the link below.
https://software.intel.com/en-us/articles/OpenVINO-Install-Linux-FPGA#program%20aria%2010%20gx
I am able to run the sample applications successfully on the FPGA as well.

I have a pre-trained TensorFlow model, ICNet.pb, and I want to run a segmentation application using the ICNet network.
I was trying to convert the ICNet.pb TensorFlow network to ICNet.xml and ICNet.bin files using the mo_tf.py tool,
but I am getting the error below.


python3 /home/datadrive1/intel/computer_vision_sdk_fpga_2018.4.420/deployment_tools/model_optimizer/mo_tf.py --input_model /home/datadrive1/prj_IntelFPGA/Deewakar_shared_Files/ICNet-model.pb --data_type FP32 --output_dir .
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     /home/datadrive1/prj_IntelFPGA/Deewakar_shared_Files/ICNet-model.pb
    - Path for generated IR:     /home/datadrive1/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/.
    - IR output name:     ICNet-model
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Offload unsupported operations:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     None
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     None
Model Optimizer version:     1.4.292.6ef7232d
[ ERROR ]  Shape is not defined for output 0 of "split".
[ ERROR ]  Shape is not defined for output 1 of "split".
[ ERROR ]  Shape is not defined for output 2 of "split".
[ ERROR ]  Cannot infer shapes or values for node "split".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "split".
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_split_infer at 0x7f82df8c1c80>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Stopped shape/value propagation at "split" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
Thanks & regards
Ranjit Jambhale

5 Replies
jambhale__ranjit
Beginner

For reference, the exact command used:

Command :    python3 /home/datadrive1/intel/computer_vision_sdk_fpga_2018.4.420/deployment_tools/model_optimizer/mo_tf.py --input_model /home/datadrive1/prj_IntelFPGA/Deewakar_shared_Files/ICNet-model.pb --data_type FP32 --output_dir .

CJian1
Beginner

You need to define "--input_shape" if the input shape is undefined in your TF code (the batch size was probably set to None). And if you have multiple inputs, you should set the "--input" option as well. Try something like this:

python3 mo.py --input_model /path-to/your-model.caffemodel --input data,rois --input_shape (1,3,227,227),(1,6,1,1) 

jambhale__ranjit
Beginner

I have tried what you suggested, but I am getting the error below:

python3 /home/datadrive1/intel/computer_vision_sdk_fpga_2018.4.420/deployment_tools/model_optimizer/mo.py --input_model /home/datadrive1/prj_IntelFPGA/Deewakar_shared_Files/ICNet-model.pb --data_type FP32 --output_dir . --input split --input_shape "[1,3,227,227]","[1,6,1,1]"
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     /home/datadrive1/prj_IntelFPGA/Deewakar_shared_Files/ICNet-model.pb
    - Path for generated IR:     /home/datadrive1/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/.
    - IR output name:     ICNet-model
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     split
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     [1,3,227,227],[1,6,1,1]
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Offload unsupported operations:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     None
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     None
Model Optimizer version:     1.4.292.6ef7232d
[ ERROR ]  Please provide each input layers with an input layer shape.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #58.
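The "Please provide each input layers with an input layer shape" error comes down to a simple consistency check: Model Optimizer requires exactly one --input_shape entry per layer named in --input. A minimal illustrative sketch of that pairing rule (a hypothetical helper for clarity, not Model Optimizer's actual code):

```python
def check_input_shapes(inputs: str, input_shapes: str) -> dict:
    """Pair each --input layer name with one --input_shape entry,
    mimicking Model Optimizer's count check (illustrative only)."""
    names = [n for n in inputs.split(",") if n]
    # "[1,227,227,3],[1,6,1,1]" -> ["1,227,227,3", "1,6,1,1"]
    shapes = [s for s in input_shapes.strip("[]").split("],[") if s]
    if len(names) != len(shapes):
        raise ValueError(
            "Please provide each input layer with an input layer shape: "
            f"{len(names)} layer(s) vs {len(shapes)} shape(s)"
        )
    return dict(zip(names, shapes))

# One layer but two shapes -> rejected, as in the run above
try:
    check_input_shapes("split", "[1,3,227,227],[1,6,1,1]")
except ValueError as e:
    print(e)

# Two layers, two shapes -> accepted
print(check_input_shapes("split/split_dim,Placeholder",
                         "[1,227,227,3],[1,6,1,1]"))
```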

CJian1
Beginner

Since you have two input shapes, you also need to specify two inputs after "--input".
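As a sketch only (input_a and input_b are placeholder node names, not your model's actual inputs — substitute the real input node names from your graph):

```shell
python3 mo_tf.py --input_model ICNet-model.pb \
    --input input_a,input_b \
    --input_shape "[1,227,227,3],[1,6,1,1]" \
    --data_type FP32 --output_dir .
```

The count of comma-separated names in --input must match the count of bracketed shapes in --input_shape.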

jambhale__ranjit
Beginner

Before adding the --input parameter, I was getting the error below:
python3 /home/datadrive1/intel/computer_vision_sdk_fpga_2018.4.420/deployment_tools/model_optimizer/mo.py --input_model /home/datadrive1/prj_IntelFPGA/Deewakar_shared_Files/ICNet-model.pb --data_type FP32 --output_dir . --input split/split_dim --input_shape "[1,227,227,3]","[1,6,1,1]" --log_level=DEBUG

[ ERROR ]  Shape is not defined for output 0 of "split".
[ ERROR ]  Shape is not defined for output 1 of "split".
[ ERROR ]  Shape is not defined for output 2 of "split".
[ ERROR ]  Cannot infer shapes or values for node "split".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "split".
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_split_infer at 0x7fb36ff52d08>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ 2019-01-16 17:45:18,363 ] [ DEBUG ] [ infer:198 ]  Node "split" attributes: {'infer': <function tf_split_infer at 0x7fb36ff52d08>, 'name': 'split', 'pb': name: "split"
op: "Split"
input: "split/split_dim"
input: "Placeholder"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "num_split"
  value {
    i: 3
  }
}


After adding the --input parameter, I was getting the error below:

python3 /home/datadrive1/intel/computer_vision_sdk_fpga_2018.4.420/deployment_tools/model_optimizer/mo.py --input_model /home/datadrive1/prj_IntelFPGA/Deewakar_shared_Files/ICNet-model.pb --data_type FP32 --output_dir . --input split/split_dim,Placeholder --input_shape "[1,227,227,3]","[1,6,1,1]" --log_level=DEBUG

 2019-01-17 10:30:05,994 ] [ DEBUG ] [ register_custom_ops:105 ]  Added a new entry TensorIteratorInput to extractors with custom op class <class 'extensions.ops.TensorIterator_ops.TensorIteratorInput'>.
[ 2019-01-17 10:30:06,106 ] [ DEBUG ] [ extractor:829 ]  Sink: Reshape_1/sink_port_0 for node Reshape_1
[ 2019-01-17 10:30:06,107 ] [ DEBUG ] [ extractor:830 ]  {'is_output': True, 'data_type': None, 'infer': None, 'value': None, 'type': 'OpOutput', 'precision': 'FP32', 'dim_attrs': ['batch_dims', 'axis', 'channel_dims', 'spatial_dims'], 'kind': 'op', 'op': 'OpOutput', 'IE': [('layer', [('id', <function Op.substitute_ie_attrs.<locals>.<lambda> at 0x7f10ccad32f0>), 'name', 'precision', 'type'], [('data', [], []), '@ports', '@consts'])], 'shape_attrs': ['shape', 'window', 'pad', 'output_shape', 'stride'], 'name': 'Reshape_1/sink_port_0'}
[ 2019-01-17 10:30:06,107 ] [ DEBUG ] [ extractor:831 ]  Add edge from Reshape_1 to Reshape_1/sink_port_0
[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  
[ ERROR ]  Traceback (most recent call last):
  File "/home/datadrive1/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/mo/main.py", line 325, in main
    return driver(argv)
  File "/home/datadrive1/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/mo/main.py", line 267, in driver
    mean_scale_values=mean_scale)
  File "/home/datadrive1/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 176, in tf2nx
    input_op_nodes = add_input_ops(graph, packed_user_shapes, True)
  File "/home/datadrive1/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/mo/front/extractor.py", line 993, in add_input_ops
    assert n_inputs == 1
AssertionError

[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------

Queries

1. Is the --input parameter I am using correct?
2. I currently want to convert the TensorFlow ICNet.pb model to .xml and .bin using mo.py.
   Is the --input_shape parameter I am using correct?


Thanks & regards

Ranjit
