Hello,
When I optimize the ssd_mobilenet_v2_coco model trained with TensorFlow, the following error is always returned:
Is there anything wrong with my trained model? I would appreciate it if someone could assist.
Hi Patrick,
Try to use this command to run the model:
sudo python3 mo.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json --output="detection_boxes,detection_scores,num_detections" --tensorflow_object_detection_api_pipeline_config ../model_downloader/object_detection/common/ssd_mobilenet_v2_coco/tf/ssd_mobilenet_v2_coco_2018_03_29/pipeline.config
This command uses the pipeline config and the JSON config, because R3 (I'm assuming you are using R3; if not, I highly recommend you upgrade) has changed its way of handling the TensorFlow Object Detection API. If this doesn't work, I suggest you supply your own pipeline config file. If you don't know how, I can help with this.
Also, for future posts, we have the internal Blue page for internal employees.
Kind Regards,
Monique Jones
Hi Patrick:
As Monique pointed out, you seem to be using R3, but the directory shows R2. The feature you are using was introduced in R3.
I also noticed you are working on Windows, and I think the problem might be related to your environment, such as the Python version you are using.
I ran the exact same command as yours on Ubuntu 16.04 without a problem, but I ran into an issue even when installing the prerequisites script. So you might want to stay on a Linux system.
Mark
Monique, Mark,
Thanks for your advice.
My situation is this: the OpenVINO installed on my computer can successfully optimize the pre-trained model downloaded from the TensorFlow model zoo. However, after training with my own data set, the trained model cannot be optimized by OpenVINO and fails with the following errors (I updated my OpenVINO to R3 based on Monique's advice):
Hi Patrick,
I recommend doing a diff between the two files to see what the differences are. You can try the standard pipeline config file that's included in the OpenVINO package, but my guess is that since you re-trained the model, some of the parameter values in your pipeline config may differ. In particular, the line restoring your model from the checkpoint seems to be something Model Optimizer can't parse correctly, so I would take that line out. Once you do the diff, you can sync the changed values with the current file in the OpenVINO package.
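The diff-and-sync step can be sketched in Python. The two config snippets below are illustrative stand-ins for the stock and retrained pipeline.config contents; `fine_tune_checkpoint` is the checkpoint-restore field from the TF Object Detection API pipeline schema:

```python
import difflib

# Illustrative stand-ins for the stock and retrained pipeline.config contents.
stock = """\
model { ssd { num_classes: 90 } }
train_config { batch_size: 24 }
"""
retrained = """\
model { ssd { num_classes: 2 } }
train_config { batch_size: 24 }
fine_tune_checkpoint: "C:/tf/model.ckpt"
"""

# Show what changed between the two configs.
for line in difflib.unified_diff(stock.splitlines(), retrained.splitlines(),
                                 fromfile='stock', tofile='retrained',
                                 lineterm=''):
    print(line)

# Drop the checkpoint-restore line that Model Optimizer may fail to parse.
cleaned = '\n'.join(l for l in retrained.splitlines()
                    if not l.strip().startswith('fine_tune_checkpoint'))
print(cleaned)
```

The same filter works on a real file by reading it with `open()` instead of using the inline strings.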
Kind Regards,
Monique Jones
Hi Patrick,
This looks like a bug with the parser in Model Optimizer. Can you please attach your files so that we can reproduce and resolve the issue?
Kind Regards,
Monique Jones
Oh, I just did some tests and found that the ":" character in the file path is the root cause of this error. Is there an escape character I should add?
Hi Patrick,
I was able to convert your model on my side by removing the ":" characters and switching "\" to "/" in all the paths in the pipeline.config file.
There shouldn't be any other escape characters to add. I've attached the updated pipeline config file that converted for me. Please note that I'm working on Linux rather than Windows, but let me know if you run into issues, or if you successfully convert the model with your own pipeline config after taking out the ":" characters.
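The path fix-up described above can be sketched as a small helper. The function name is illustrative and not part of OpenVINO; it just applies the two substitutions (drop the drive-letter colon, flip backslashes) to one path string:

```python
import re

def sanitize_windows_path(path: str) -> str:
    """Make a Windows path digestible for the pipeline.config parser:
    drop the drive-letter colon and use forward slashes throughout.
    (Helper name is illustrative, not part of OpenVINO.)"""
    path = re.sub(r'^([A-Za-z]):', r'\1', path)   # "C:\..." -> "C\..."
    return path.replace('\\', '/')                # backslashes -> slashes

print(sanitize_windows_path('C:\\Users\\patrick\\model\\pipeline.config'))
# C/Users/patrick/model/pipeline.config
```

Applying this to every quoted path in the file reproduces the edit that made the conversion work here.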
Kind Regards,
Monique Jones
I have trained a custom SSD MobileNet v1 using the TensorFlow Object Detection API. I managed to freeze the graph and successfully used it for inference with TensorFlow. I plan to use it with object_detection_sample_ssd in OpenVINO. However, I was unable to convert the model with Model Optimizer using the following command:
python3 ./mo_tf.py --input_model /home/amalina/tf-demo/models/research/object_detection/inference_graph/frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /home/amalina/tf-demo/models/research/object_detection/inference_graph/pipeline.config --reverse_input_channels
These are the errors encountered:
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ] Cannot infer shapes or values for node "MultipleGridAnchorGenerator/ToFloat_11".
[ ERROR ] NodeDef mentions attr 'Truncate' not in Op<name=Cast; signature=x:SrcT -> y:DstT; attr=SrcT:type; attr=DstT:type>; NodeDef: MultipleGridAnchorGenerator/ToFloat_11 = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, Truncate=false](MultipleGridAnchorGenerator/ToFloat_11/x_port_0_ie_placeholder). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7ff7796f6510>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Stopped shape/value propagation at "MultipleGridAnchorGenerator/ToFloat_11" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
Please advise. The frozen graph is attached.
Dear Amalina,
Can you upgrade your TensorFlow to a recent version? I solved a very similar issue by upgrading to TF 1.11.
This is the command to do so:
pip3 install --upgrade tensorflow
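To confirm the upgrade took effect, compare the installed version against the minimum. A minimal, dependency-free sketch of that comparison (the helper is illustrative; in a real session you would pass `tf.__version__` as the first argument):

```python
def at_least(installed: str, required: str) -> bool:
    """Compare dotted version strings numerically, so '1.9' < '1.11'.
    Illustrative helper; feed it tf.__version__ in practice."""
    def parts(v):
        return [int(p) for p in v.split('.') if p.isdigit()]
    return parts(installed) >= parts(required)

print(at_least('1.11.0', '1.11'))  # True
print(at_least('1.9.0', '1.11'))   # False: 9 < 11 numerically
```

Numeric comparison matters here because a plain string comparison would wrongly rank "1.9" above "1.11".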
Best,
Severine
Dear Severine,
Thanks for the response. I upgraded TensorFlow to version 1.11 and retrained the model. However, I'm still getting the same error from Model Optimizer.
Hi,
Update - I managed to generate BIN and XML files by running model optimizer in a virtual environment where Tensorflow is installed.
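For anyone hitting the same thing, the virtual-environment setup can be sketched like this (the directory name is illustrative, and `--without-pip` keeps the sketch light; a real setup would keep pip so TensorFlow can be installed inside the environment):

```python
import os
import subprocess
import sys
import tempfile

# Minimal sketch: create an isolated virtual environment for running
# Model Optimizer. Directory name is illustrative.
venv_dir = os.path.join(tempfile.mkdtemp(), "mo-env")
subprocess.run([sys.executable, "-m", "venv", "--without-pip", venv_dir],
               check=True)

# The environment's interpreter lives under bin/ (Scripts\ on Windows);
# running mo_tf.py with that interpreter picks up the isolated packages.
bindir = "Scripts" if os.name == "nt" else "bin"
print(os.path.isdir(os.path.join(venv_dir, bindir)))
```

Invoking `mo_tf.py` with the environment's own interpreter is what ensures Model Optimizer sees the TensorFlow installed there rather than the system one.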
Hi,
I am trying to optimize ssd_mobilenet_v1, but I encounter an error:
[ ERROR ] Failed to convert tokens to dictionary: Wrong character "Use" in position 62
My config doesn't have any escape characters in its paths.
A dumb question, but how can I tell which character is at position 62?
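A quick way to inspect a given offset, assuming the reported position is a character index into the file (or line) contents. The helper name and sample string are illustrative, and the parser may count from 0 or 1, so check both neighbors:

```python
def char_at(text: str, pos: int, context: int = 10) -> str:
    """Show the character at `pos` with some surrounding context."""
    snippet = text[max(0, pos - context):pos + context]
    return f"{text[pos]!r} in ...{snippet}..."

# Stand-in for the config contents; read the real file with open().read().
cfg = 'model { ssd { num_classes: 1 } } # Use the stock config'
print(char_at(cfg, 33))
```

Running this on the real config (once for `pos` and once for `pos - 1`) pinpoints the character the parser is rejecting.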
Thank you in advance
Hello,
I'm having the same problem as Kamarol above (except instead of ToFloat_11 it is ToFloat_3). I'm using TensorFlow 1.09.
Thanks.