```
Exception: Exception occurred during running replacer "None (<class 'extensions.front.no_op_eraser.NoOpEraser'>)": The node train must have just one input
[ ERROR ] ---------------- END OF BUG REPORT --------------
[ ERROR ] -------------------------------------------------
```
This is really helpful, and I am going to try my SegNet now.
Regarding the supported networks: I also have a VGG16 network. One thing that strikes me as strange is that the .bin and .xml files are not generated, yet there is no error message at all.
My vgg16.pb file is more than 2 GB (I am not sure why it is so large); could this be the issue? I tested my VGG16 for prediction, and it works smoothly without any problem.
I have uploaded my checkpoint and .pb files to OneDrive; it would be great if you could help further. https://1drv.ms/u/s!AtjLM4-mbBLkrINmi_WmmlHWrnRBXg?e=0pYKSX
```
python3 ...\mo_tf.py --input_model model_frozen\vgg_xw.pb --output_dir model_ir\ --data_type FP16
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:      C:\Users\wang\Desktop\work_hub\segmentation_ki_vgg\model_frozen\vgg_xw.pb
    - Path for generated IR:        C:\Users\wang\Desktop\work_hub\segmentation_ki_vgg\model_ir\
    - IR output name:               vgg_xw
    - Log level:                    ERROR
    - Batch:                        Not specified, inherited from the model
    - Input layers:                 Not specified, inherited from the model
    - Output layers:                Not specified, inherited from the model
    - Input shapes:                 Not specified, inherited from the model
    - Mean values:                  Not specified
    - Scale values:                 Not specified
    - Scale factor:                 Not specified
    - Precision of IR:              FP16
    - Enable fusing:                True
    - Enable grouped convolutions fusing:   True
    - Move mean values to preprocess section:       False
    - Reverse input channels:       False
TensorFlow specific parameters:
    - Input model in text protobuf format:  False
    - Path to model dump for TensorBoard:   None
    - List of shared libraries with TensorFlow custom layers implementation:    None
    - Update the configuration file with input/output node names:   None
    - Use configuration file used to generate the model with Object Detection API:      None
    - Operations to offload:        None
    - Patterns to offload:          None
    - Use the config file:          None
Model Optimizer version:        2019.1.0-341-gc9b66a2
```
I cannot figure out why nothing is generated and no error is reported.
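One possible explanation for the silent failure is the file size mentioned above: protocol buffers cannot serialize or parse a single message larger than 2 GB, and a frozen TensorFlow graph with all weights embedded as constants can exceed that limit. A minimal sketch to check whether a .pb file is over the limit before attempting conversion (the path and function name here are illustrative, not part of Model Optimizer):

```python
import os

# A single serialized protobuf message is limited to 2 GB (2**31 - 1 bytes).
PROTOBUF_LIMIT_BYTES = 2**31 - 1

def exceeds_protobuf_limit(pb_path):
    """Return True if the frozen graph file is too large to be parsed as one protobuf message."""
    return os.path.getsize(pb_path) > PROTOBUF_LIMIT_BYTES

# Example (hypothetical path):
# if exceeds_protobuf_limit("model_frozen/vgg_xw.pb"):
#     print("Graph exceeds the 2 GB protobuf limit; conversion may fail silently.")
```

If the file is indeed over the limit, converting from the checkpoint/meta graph instead of the frozen .pb (so the weights are not embedded in a single message) may avoid the problem.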
To convert your model into IR format, use the command:
python3 mo_tf.py --input_meta_graph <INPUT_META_GRAPH>.meta
This will create the .xml and .bin files. For further information, visit this page. Please let me know if this information is helpful!