Beginner

Can I run SegNet on the Neural Compute Stick 2 (two input images: original and labeled)? The Model Optimizer complains with "ERROR just only one input supported...."

python3 mo_tf.py \
    --input_model ./Model_frozen/SegNet_XW.pb \
    --input Placeholder,Placeholder_1,phase_train,Placeholder_2,Placeholder_3,Placeholder_4 \
    --input_shape "[1,224,224,3],[1,224,224,1],[0],[0],[0],[0]" \
    --data_type FP16

 

# Exception: Exception occurred during running replacer "None (<class 'extensions.front.no_op_eraser.NoOpEraser'>)": The node train must have just one input

 

# [ ERROR ] ---------------- END OF BUG REPORT --------------

# [ ERROR ] -------------------------------------------------

 

Moderator

Hi XWang97,

SegNet is not a supported network in OpenVINO. Check out the list of supported networks and architectures here. Please let me know if you have any further questions.

 

Best Regards,

Sahira

Beginner

Hi Sahira,

This is really helpful; I will stop trying to convert my SegNet for now.

 

Concerning the supported networks, I also have a VGG16 net. One thing that feels strange: the .bin and .xml files are not generated, but there is no error message at all.

My vgg16.pb file is more than 2 GB (I am not sure why it is so huge); could this be the issue? I tested my VGG16 for prediction, and it works smoothly without any problem.
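As a side note, serialized Protocol Buffers messages are hard-capped at 2 GiB, and TensorFlow frozen GraphDefs near or above that size can fail to load in downstream tools. Whether that is the cause of the silent failure here is an assumption, but it is cheap to check; this is a minimal stdlib sketch (the file path is a placeholder):

```python
import os

# A single serialized protobuf message cannot exceed 2**31 - 1 bytes,
# so a frozen .pb near this size is suspect even before conversion.
PB_MESSAGE_LIMIT = 2**31 - 1


def exceeds_pb_limit(path):
    """Return True if a .pb file is larger than the protobuf message limit."""
    return os.path.getsize(path) > PB_MESSAGE_LIMIT
```

If the frozen graph really is over the limit, re-exporting with weights stored outside the GraphDef (for example, converting from the checkpoint/meta graph instead of a fully frozen .pb) is the usual way around it.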

 

I have uploaded my checkpoint and .pb files to OneDrive; it would be great if you could help further. https://1drv.ms/u/s!AtjLM4-mbBLkrINmi_WmmlHWrnRBXg?e=0pYKSX

 

python3 ...\mo_tf.py --input_model model_frozen\vgg_xw.pb --output_dir model_ir\ --data_type FP16

Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: C:\Users\wang\Desktop\work_hub\segmentation_ki_vgg\model_frozen\vgg_xw.pb
	- Path for generated IR: C:\Users\wang\Desktop\work_hub\segmentation_ki_vgg\model_ir\
	- IR output name: vgg_xw
	- Log level: ERROR
	- Batch: Not specified, inherited from the model
	- Input layers: Not specified, inherited from the model
	- Output layers: Not specified, inherited from the model
	- Input shapes: Not specified, inherited from the model
	- Mean values: Not specified
	- Scale values: Not specified
	- Scale factor: Not specified
	- Precision of IR: FP16
	- Enable fusing: True
	- Enable grouped convolutions fusing: True
	- Move mean values to preprocess section: False
	- Reverse input channels: False
TensorFlow specific parameters:
	- Input model in text protobuf format: False
	- Path to model dump for TensorBoard: None
	- List of shared libraries with TensorFlow custom layers implementation: None
	- Update the configuration file with input/output node names: None
	- Use configuration file used to generate the model with Object Detection API: None
	- Operations to offload: None
	- Patterns to offload: None
	- Use the config file: None
Model Optimizer version: 2019.1.0-341-gc9b66a2

I cannot figure out why nothing is generated and no error is reported.

 

Best regards,

Song

Moderator

Hi Song,

 

To convert your model into IR format, use the command:

python3 mo_tf.py --input_meta_graph <INPUT_META_GRAPH>.meta

 

This will create .xml and .bin files. For further information, visit this page. Please let me know if this information is helpful!
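Since the original symptom was a run that produced neither files nor errors, it can help to verify explicitly that both halves of the IR landed on disk. This is a small sketch (the output directory and model name are placeholders matching the command above):

```python
import os


def ir_present(output_dir, model_name):
    """Check that Model Optimizer wrote both IR files:
    <model_name>.xml (topology) and <model_name>.bin (weights)."""
    return all(
        os.path.isfile(os.path.join(output_dir, model_name + ext))
        for ext in (".xml", ".bin")
    )
```

For the command in this thread that would be, e.g., `ir_present("model_ir", "vgg_xw")`; if it returns False after a "successful" run, the conversion silently failed.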

 

Best Regards,

Sahira
