Hi, I'm trying to run the Model Optimizer from this tutorial: https://medium.com/@vijendra1125/custom-mask-rcnn-using-tensorflow-object-detection-api-101149ce0765 . It uses the mask_rcnn_inception_v2_coco model.
However, when I run: python3 mo_tf.py --input_model /home/gpuserver/Custom-Mask-RCNN-using-Tensorfow-Object-detection-API/IG/frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config /home/gpuserver/Custom-Mask-RCNN-using-Tensorfow-Object-detection-API/mask_rcnn_inception_v2_coco.config --tensorflow_use_custom_operations_config extensions/front/tf/mask_rcnn_support.json --output_dir /home/gpuserver
I get the error below:
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/gpuserver/Custom-Mask-RCNN-using-Tensorfow-Object-detection-API/IG/frozen_inference_graph.pb
- Path for generated IR: /home/gpuserver
- IR output name: frozen_inference_graph
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: /home/gpuserver/Custom-Mask-RCNN-using-Tensorfow-Object-detection-API/mask_rcnn_inception_v2_coco.config
- Operations to offload: None
- Patterns to offload: None
- Use the config file: /opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/extensions/front/tf/mask_rcnn_support.json
Model Optimizer version: 1.5.12.49d067a0
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation
file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (800, 800).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
The predicted masks are produced by the "masks" layer for each bounding box generated with a "detection_output" layer.
Refer to IR catalogue in the documentation for information about the DetectionOutput layer and Inference Engine documentation about output data interpretation.
The topology can be inferred using dedicated demo "mask_rcnn_demo".
[ ERROR ] Shape is not defined for output 0 of "BatchMultiClassNonMaxSuppression_1/map/TensorArrayUnstack_4/Shape".
[ ERROR ] Cannot infer shapes or values for node "BatchMultiClassNonMaxSuppression_1/map/TensorArrayUnstack_4/Shape".
[ ERROR ] Not all output shapes were inferred or fully defined for node "BatchMultiClassNonMaxSuppression_1/map/TensorArrayUnstack_4/Shape".
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function Shape.infer at 0x7fa06cf7d7b8>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Stopped shape/value propagation at "BatchMultiClassNonMaxSuppression_1/map/TensorArrayUnstack_4/Shape" node.
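For reference, the warning in the log points at one knob: "--input_shape" overrides the default (800, 800). Below is a sketch of the same conversion with a fixed shape, assumed to be run from the model_optimizer directory and with the long absolute paths shortened to relative ones; the command is printed rather than executed so it can be reviewed first, and this is not a confirmed fix for the shape-inference error above.

```shell
# Sketch: the conversion command from above with an explicit fixed input
# shape ([batch, height, width, channels] for a TensorFlow model).
# Paths are shortened placeholders; adjust them to your setup.
CMD='python3 mo_tf.py \
  --input_model frozen_inference_graph.pb \
  --tensorflow_object_detection_api_pipeline_config mask_rcnn_inception_v2_coco.config \
  --tensorflow_use_custom_operations_config extensions/front/tf/mask_rcnn_support.json \
  --input_shape [1,800,800,3] \
  --output_dir .'
# Print the command for review; copy and run it from the model_optimizer dir.
echo "$CMD"
```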
I also ran summarize_graph and found that my model differs from the original Mask R-CNN graph:
My model:
1 input(s) detected:
Name: image_tensor, type: uint8, shape: (-1,-1,-1,3)
5 output(s) detected:
detection_boxes
detection_scores
detection_classes
num_detections
detection_masks
Original mask-rcnn model:
21 output(s) detected:
BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/switch_t
BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/switch_f
BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/cond/switch_t
BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/cond/switch_f
BatchMultiClassNonMaxSuppression_1/map/while/PadOrClipBoxList/cond/switch_t
BatchMultiClassNonMaxSuppression_1/map/while/PadOrClipBoxList/cond/switch_f
BatchMultiClassNonMaxSuppression_1/map/while/PadOrClipBoxList/cond/cond/switch_t
BatchMultiClassNonMaxSuppression_1/map/while/PadOrClipBoxList/cond/cond/switch_f
BatchMultiClassNonMaxSuppression_1/map/while/PadOrClipBoxList/cond_1/switch_t
BatchMultiClassNonMaxSuppression_1/map/while/PadOrClipBoxList/cond_1/switch_f
BatchMultiClassNonMaxSuppression_1/map/while/PadOrClipBoxList/cond_1/cond/switch_t
BatchMultiClassNonMaxSuppression_1/map/while/PadOrClipBoxList/cond_1/cond/switch_f
BatchMultiClassNonMaxSuppression_1/map/while/PadOrClipBoxList/cond_3/switch_t
BatchMultiClassNonMaxSuppression_1/map/while/PadOrClipBoxList/cond_3/switch_f
BatchMultiClassNonMaxSuppression_1/map/while/PadOrClipBoxList/cond_3/cond/switch_t
BatchMultiClassNonMaxSuppression_1/map/while/PadOrClipBoxList/cond_3/cond/switch_f
detection_boxes
detection_scores
detection_classes
num_detections
detection_masks
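One observation about the listing above: the 16 extra "outputs" in the original graph are TensorFlow control-flow Switch tensors (switch_t/switch_f) left dangling by the while/cond loops, not real model outputs; filtering them out leaves the same five detection outputs as my model. A quick sketch over an abbreviated version of the name list:

```shell
# Drop dangling control-flow Switch tensors from summarize_graph's output
# list; what remains are the real model outputs. (Name list abbreviated.)
cat > outputs.txt <<'EOF'
BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/switch_t
BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/switch_f
BatchMultiClassNonMaxSuppression_1/map/while/PadOrClipBoxList/cond_3/cond/switch_t
detection_boxes
detection_scores
detection_classes
num_detections
detection_masks
EOF
grep -v -e '/switch_t$' -e '/switch_f$' outputs.txt
```

So the two graphs appear to expose the same real outputs; the difference is in the leftover control-flow nodes.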
Can someone help me with this problem? Thank you so much in advance.
Best regards,
Hoa
Hello Hoa,
Could we have a link to your frozen_inference_graph.pb?
If it is the same as the pre-trained one in the model zoo, please try a command similar to my post #2 in
https://software.intel.com/en-us/forums/computer-vision/topic/804768
cheers,
nikos
Thanks for your reply, @nikos. I found your post and ran the command you suggested, but it did not succeed.
Here is a link to download my .pb model along with the config file: https://drive.google.com/open?id=1iN02CoXF6XjImw3uugFKsCyu-HDAcl0q
I retrained the model-zoo model with a new dataset and a new configuration file.
As I mentioned in my previous post, the outputs of my model and of the original TensorFlow zoo model differ when I run summarize_graph.
Could you take a look at it to help figure out the problem? Thank you so much in advance.
Hoa
Hi @nikos, do you have time to take a look at my model? I am still stuck on this. I have tried training my model with model_main.py rather than legacy/train.py and got the same error. So now I'm guessing the problem comes from the TF record I'm using; it is based on the tf_record_pet_dataset script, and I saw someone succeed using the tf_record_voc_dataset one. I will keep looking for the problem, but I would very much appreciate it if someone with experience could take a look.
Thank you so much in advance,
Hoa
Hello Hoa,
I did look at this yesterday briefly but could not find the issue. Please make sure you freeze the graph properly, and inspect it. There is some good info in this forum on related issues. You could also go back to the original model-zoo model and make incremental changes toward yours to see which change breaks it. I may not have the time to study this more, sorry; other experts here may also help.
Cheers,
nikos
Thank you, @nikos. I will try to figure out where the problem is. Could I have your opinion on where it might come from?
- The TF record: I modified create_pet_tf_record.py to fit my dataset. However, that only changes the dataset, so it should not affect the model, right?
- The pipeline config: I modified it from mask_rcnn_inception_v2_coco.config.
- The frozen model: I will take a look at this, but I used:
python3 ./models/research/object_detection/model_main.py \
--pipeline_config_path=${PIPELINE_CONFIG_PATH} \
--checkpoint_dir=${MODEL_DIR} \
--model_dir=${EVAL_DIR} \
--run_once \
--alsologtostderr
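As an aside on the command above: model_main.py with --checkpoint_dir and --run_once is the evaluation entry point and, as far as I understand, does not freeze a graph. In the Object Detection API, freezing is normally done with export_inference_graph.py instead. A hedged sketch follows; PIPELINE_CONFIG_PATH, MODEL_DIR, and the checkpoint number NNNN are placeholders, and the command is printed rather than executed so it can be adapted first.

```shell
# Sketch of the usual Object Detection API export (freeze) command.
# $PIPELINE_CONFIG_PATH, $MODEL_DIR, and NNNN are placeholders to fill in.
CMD='python3 models/research/object_detection/export_inference_graph.py \
  --input_type image_tensor \
  --pipeline_config_path $PIPELINE_CONFIG_PATH \
  --trained_checkpoint_prefix $MODEL_DIR/model.ckpt-NNNN \
  --output_directory IG'
# Print for review; substitute the placeholders, then run the printed command.
echo "$CMD"
```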
Sorry for bothering you so much; I'm quite new to both TensorFlow and OpenVINO. Thank you again, and best regards,
Hoa
Hi @nikos. After trying several approaches, I think I found the problem: I was using TensorFlow v1.12, which is not yet compatible with OpenVINO (I guess). I am now retrying with v1.9 and it seems OK.
It's quite hard to track down a bug that comes from the version rather than the code. But in the end, I learned a lot by trying different approaches.
Thanks for your support,
Nice weekend,
Hoa
Hi Hoa,
> I think I found the problem: I was using TensorFlow v1.12, which is not yet compatible with OpenVINO (I guess). I am now retrying with v1.9 and it seems OK.
So happy to hear it; we learn something new every day in this forum. By the way, I tried again yesterday and could not find the issue, so I was getting worried too. I did not update the thread because I wanted to investigate further today if I had some spare time.
In any case, please mark your message #7 as the Best Answer so other users can benefit too.
Thank you!
nikos