I am trying to convert my TensorFlow model (.pb) to IR using the script mo_tf.py. I don't want the Model Optimizer to transpose the input layout from (N,H,W,C) to (N,C,H,W), so I am passing the flag --disable_nhwc_to_nchw, but then it gives this error:
[ ERROR ] Concat input shapes do not match
[ ERROR ] Shape is not defined for output 0 of "concatenate/concat".
[ ERROR ] Cannot infer shapes or values for node "concatenate/concat".
[ ERROR ] Not all output shapes were inferred or fully defined for node "concatenate/concat".
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function concat_infer at 0x7f1dbeced0d0>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Stopped shape/value propagation at "concatenate/concat" node.
I am running the following command:
python3 mo_tf.py --input_model frozen_model.pb --output_dir model --tensorflow_use_custom_operations_config /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json --input_shape [1,416,416,3] --disable_nhwc_to_nchw
PS. It runs fine if I don't pass the --disable_nhwc_to_nchw flag.
Any help will be appreciated.
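For anyone hitting the same error: a likely explanation is that Model Optimizer normally remaps Concat axes together with the NHWC→NCHW layout change, so disabling only the layout conversion can leave a Concat whose axis no longer points at the channel dimension. A minimal NumPy sketch of the shape mismatch (the feature-map shapes here are hypothetical, loosely YOLOv3-sized, not taken from the actual model):

```python
import numpy as np

# Two hypothetical feature maps in NHWC layout (batch, H, W, C)
a = np.zeros((1, 13, 13, 256))
b = np.zeros((1, 13, 13, 512))

# In NHWC, channel concat is axis 3 and all other dims match
nhwc = np.concatenate([a, b], axis=3)
print(nhwc.shape)  # (1, 13, 13, 768)

# After transposing to NCHW, axis 3 is width, not channels
a_nchw = np.transpose(a, (0, 3, 1, 2))  # (1, 256, 13, 13)
b_nchw = np.transpose(b, (0, 3, 1, 2))  # (1, 512, 13, 13)
try:
    np.concatenate([a_nchw, b_nchw], axis=3)
except ValueError as e:
    print("concat failed:", e)  # shapes disagree outside the concat axis

# The axis must be remapped to 1 along with the layout change
nchw = np.concatenate([a_nchw, b_nchw], axis=1)
print(nchw.shape)  # (1, 768, 13, 13)
```

The failing concatenate mirrors the "[ ERROR ] Concat input shapes do not match" message: the inputs disagree on a non-concat dimension once the axis and the layout fall out of sync.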
Hi Shubha, yes it works if I don't add the --disable_nhwc_to_nchw flag, but then during inference the model detects a lot of objects in a frame.
I converted the darknet YOLOv3 model to the Keras framework and also removed some of the layers for my use case, but the last layers (the three YoloRegion output layers) have the same dimensions as in darknet.
Hi Shubham. Can you try with --log_level DEBUG?
It may help to dump the TensorFlow model to a text version to see if there's something odd about the "concatenate/concat" node.
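A sketch of one way to do that dump, assuming TensorFlow 1.x is installed (on TF 2.x, use tf.compat.v1.GraphDef instead); the filenames match the frozen_model.pb from the command above:

```python
import os

# Guarded so the sketch is a no-op when the model file is absent
if os.path.exists("frozen_model.pb"):
    import tensorflow as tf
    from google.protobuf import text_format

    # Load the frozen GraphDef
    graph_def = tf.GraphDef()
    with open("frozen_model.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    # Write the whole graph as readable .pbtxt for inspection in a text editor
    with open("frozen_model.pbtxt", "w") as out:
        out.write(text_format.MessageToString(graph_def))

    # Print just the suspect node to see its inputs and attributes
    for node in graph_def.node:
        if node.name == "concatenate/concat":
            print(node)
```

Searching the .pbtxt for "concatenate/concat" shows which tensors feed the concat and what its axis attribute is, which is usually enough to see why the shapes disagree.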