Hello,
I am trying to run the Model Optimizer (MO) on a retrained model. I followed this tutorial: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html and retrained the "ssd_inception_v2_coco_2018_01_28" model.
I used the command:
/opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
    --input_model=/home/simon/Downloads/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json \
    --tensorflow_object_detection_api_pipeline_config /home/simon/Downloads/ssd_inception_v2_coco_2018_01_28/pipeline.config \
    --reverse_input_channels
Where "pipeline.config" and "frozen_inference_graph.pb" are the files I got by exporting the retrained model.
This leads me to an error:
E0726 15:40:01.772906 140194489521984 infer.py:178] Cannot infer shapes or values for node "Postprocessor/Cast_1".
E0726 15:40:01.773174 140194489521984 infer.py:179] 0
E0726 15:40:01.773239 140194489521984 infer.py:180]
E0726 15:40:01.773307 140194489521984 infer.py:181] It can happen due to bug in custom shape infer function <function Cast.infer at 0x7f816b884ae8>.
E0726 15:40:01.773358 140194489521984 infer.py:182] Or because the node inputs have incorrect values/shapes.
E0726 15:40:01.773403 140194489521984 infer.py:183] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
E0726 15:40:01.774091 140194489521984 infer.py:192] Run Model Optimizer with --log_level=DEBUG for more information.
E0726 15:40:01.774198 140194489521984 main.py:317] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "Postprocessor/Cast_1" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
I tried both OpenVINO 2019 R1 and OpenVINO 2019 R2; both lead to this error.
What should I do?
Thanks.
Hi Simon,
If the Model Optimizer cannot infer the shape of something, I can usually solve the problem by specifying the input and the input shape explicitly, like this:
!python ~/dldt/model-optimizer/mo_tf.py \
    --input_model /home/paul/Downloads/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb \
    --input_shape [1,300,300,3] \
    --input image_tensor
Dear Carreel, Simon,
Please use ssd_support_api_v1.14.json as an argument to --tensorflow_use_custom_operations_config and it should work. Please let me know the results here on this forum.
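In practice, the only change to your original mo_tf.py command should be the JSON path, which (assuming the default Linux install layout from your first post) would look roughly like:
    --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.14.json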
Thanks !
Shubha
Hello, thanks for your quick replies.
Bauriegel, Paul: I tried your command, but I had the same issue. When I retrained my model, I resized my images to 100x100 to make training faster (it's only a trial, so I'm not aiming for a very accurate model). I've read that too small an input size can cause errors; could that be the reason for my failure? The error logs don't seem to match, though.
Shubha R.: I can't find the ssd_support_api_v1.14.json file; are you talking about the ssd_support.json file? I tried with it, but I got the same error.
Thanks again.
Dear Carreel, Simon,
ssd_support_api_v1.14.json is definitely there (on Windows 10 it is in C:\Program Files (x86)\IntelSWTools\openvino_2019.2.242\deployment_tools\model_optimizer\extensions\front\tf), but you must install OpenVINO 2019 R2 to get it.
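On a Linux install you can check for it with something along these lines (assuming the default /opt/intel/openvino prefix from your earlier command):
    find /opt/intel/openvino/deployment_tools/model_optimizer -name 'ssd_support_api*.json'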
Hope it helps,
Thanks,
Shubha
I have the same issue with a retrained ssd_mobilenet_v2_coco_2018_03_29.
I need some help!
Thanks.
Indeed, I could find the file (I had downgraded to 2019 R1, which is why I couldn't find it before). But using it still gives me the same error, except that it now adds:
[ ERROR ] Failed to match nodes from custom replacement description with id 'ObjectDetectionAPISSDPostprocessorReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
before the other errors. Does that mean I should modify the ssd_support_api_v1.14.json file? If so, what should I change?
Thanks.
Dear Carreel, Simon,
What version of TensorFlow are you running? Please upgrade to at least TF 1.14.0.
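You can check the installed version with a quick one-liner, for example:
    python -c "import tensorflow as tf; print(tf.__version__)"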
Thanks,
Shubha
Dear Carreel, Simon,
In your mo_tf.py command, can you try again, this time leaving out --input_shape [1,300,300,3]?
Please carefully read the documentation about the fixed shape resizer. You do not have to specify --input_shape since, according to that documentation:
The Model Optimizer generates an input layer with the height and width as defined in the pipeline.config.
And in fact, the pipeline.config for this ssd model looks like this:
image_resizer {
  fixed_shape_resizer {
    height: 300
    width: 300
  }
}
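So a sketch of the conversion command that lets pipeline.config drive the input shape (paths reused from the first post, purely illustrative) would be:
    /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
        --input_model /home/simon/Downloads/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb \
        --tensorflow_object_detection_api_pipeline_config /home/simon/Downloads/ssd_inception_v2_coco_2018_01_28/pipeline.config \
        --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.14.json \
        --reverse_input_channels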
Please report your status back here on this forum.
Thanks,
Shubha
Hello,
I could finally solve my issue by changing line 57 of my ssd_support_api_v1.14.json file from "Postprocessor/Cast" to "Postprocessor/Cast_1". Many thanks for your support.
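For anyone hitting the same thing, that edit boils down to a one-liner sketch like this (back up the file first; the path assumes a default Linux install):
    sudo sed -i.bak 's#"Postprocessor/Cast"#"Postprocessor/Cast_1"#' /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.14.json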
Dear Carreel, Simon,
This is great information, and thanks kindly for sharing your resolution with the OpenVINO community! This is a bug: we need a *.json that reflects Cast_1, as you found, so I have filed the bug on your behalf. Thanks for the workaround!
Shubha
Carreel, Simon wrote: Hello,
I could finally solve my issue by changing line 57 of my ssd_support_api_v1.14.json file from "Postprocessor/Cast" to "Postprocessor/Cast_1". Many thanks for your support.
Thank you very much, Carreel. I encountered the same issue, and your effort in finding and reporting the bug really helped me.
Hi, All,
I encountered the same error, followed Shubha R.'s suggestion to use `ssd_support_api_v1.14.json`, and it solved my problem. I did not need to change `Cast` to `Cast_1`, so I don't know whether it's a bug or not.
I am using TF 1.14 and OpenVINO 2019 R3 (openvino_2019.3.376). FYI.
Edit: Sorry, I got it wrong. The modification of `Cast` to `Cast_1` is necessary; I was mistakenly converting a model trained with TF 1.12.