I am currently having trouble converting a custom Faster R-CNN Inception V2 model. The command I ran is
sudo python3 mo_tf.py --input_model ~/Distraction/inference_graph/frozen_inference_graph.pb --reverse_input_channels --output_dir ~/Distraction/inference_graph/ --input image_tensor --output detection_boxes,detection_scores,detection_classes,num_detections --tensorflow_object_detection_api_pipeline_config ~/Distraction/inference_graph/pipeline.config --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support_api_v1.7.json --log_level=DEBUG
The error that I get is
The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/main.py", line 312, in main
    return driver(argv)
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/main.py", line 263, in driver
    is_binary=not argv.input_model_is_text)
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 128, in tf2nx
    class_registration.apply_replacements(graph, class_registration.ClassType.MIDDLE_REPLACER)
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 184, in apply_replacements
    )) from err
mo.utils.error.Error: Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "add" node. For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
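In case it helps, this is the kind of quick check that can locate the "add" node the error points at (a minimal Python sketch, assuming TensorFlow 1.x and the same frozen graph path as in my command above):

import os
import tensorflow as tf  # assumes TensorFlow 1.x, the version used to export the frozen graph

# Load the frozen graph that was passed to --input_model.
path = os.path.expanduser('~/Distraction/inference_graph/frozen_inference_graph.pb')
graph_def = tf.GraphDef()
with tf.gfile.GFile(path, 'rb') as f:
    graph_def.ParseFromString(f.read())

# Print every node named "add" (or ending in "/add"), with its op type and inputs,
# to see which part of the graph shape propagation stopped at.
for node in graph_def.node:
    if node.name == 'add' or node.name.endswith('/add'):
        print(node.name, node.op, list(node.input))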
I've searched the forums for users with a similar problem, and the closest one is this post:
https://software.intel.com/en-us/forums/computer-vision/topic/809407
I'd like assistance in finding the correct .json file to use to convert this model successfully.
This is rather urgent, so any help will really be appreciated.
I've attached the files I used to convert the model.
Hi Tshepo,
With TensorFlow v1.13.1 (not version 1.13), use faster_rcnn_support.json. Also, try providing the input shape argument. Let us know if this fixes the issue.
Hemanth Kumar G. (Intel) wrote: Hi Tshepo,
With TensorFlow v1.13.1 (not version 1.13), use faster_rcnn_support.json. Also, try providing the input shape argument. Let us know if this fixes the issue.
Thank you for the reply, Hemanth Kumar G.
Results with the input shape argument and the faster_rcnn_support.json file:
So the input for my model has the shape (-1,-1,-1,3). According to the documentation, I should either append -b 1 to the command above or give no input shape at all. I have used faster_rcnn_support.json as you suggested, and I still get the same error message.
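For reference, the command with those changes looks roughly like this (a sketch only: the same paths as in my original command, with -b 1 and faster_rcnn_support.json swapped in):

sudo python3 mo_tf.py --input_model ~/Distraction/inference_graph/frozen_inference_graph.pb --reverse_input_channels --output_dir ~/Distraction/inference_graph/ --input image_tensor -b 1 --output detection_boxes,detection_scores,detection_classes,num_detections --tensorflow_object_detection_api_pipeline_config ~/Distraction/inference_graph/pipeline.config --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json --log_level=DEBUG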
Hi Tshepo,
[N,C,H,W] - Image data layout. Refers to the representation of batches of images.
N - Number of images in a batch
C - Number of channels
H - Number of pixels in the vertical dimension
W - Number of pixels in the horizontal dimension
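For example, a single 600x600 RGB image in this layout would be [1, 3, 600, 600].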
Hemanth Kumar G. (Intel) wrote: Hi Tshepo,
[N,C,H,W] - Image data layout. Refers to the representation of batches of images.
N - Number of images in a batch
C - Number of channels
H - Number of pixels in the vertical dimension
W - Number of pixels in the horizontal dimension
Hey there.
So I used the input shape [1, 3, 600, 600]. I still get the same error.