Hello, I am trying to convert a retrained TF OD API Mask R-CNN model, which works on GPU, but I am not able to use 'mo' since it is giving me an error.
The command I use is:
mo \
  --saved_model_dir '<DIR_TF_model>/Tensorflow-Object-Detection-API-train-custom-Mask-R-CNN-model-master/inference_graph/saved_model' \
  --transformations_config '<DIR_OV>/openvino_env/lib/python3.7/site-packages/openvino/tools/mo/front/tf/mask_rcnn_support_api_v2.0.json' \
  --tensorflow_object_detection_api_pipeline_config '<DIR_TF>/Tensorflow-Object-Detection-API-train-custom-Mask-R-CNN-model-master/inference_graph/pipeline.config' \
  --reverse_input_channels
The TF version is 2.4.
The error is:
[ ERROR ] -------------------------------------------------
[ ERROR ] ----------------- INTERNAL ERROR ----------------
[ ERROR ] Unexpected exception happened.
[ ERROR ] Please contact Model Optimizer developers and forward the following information:
[ ERROR ] Exception occurred during running replacer "ObjectDetectionAPIPreprocessor2Replacement (<class 'openvino.tools.mo.front.tf.ObjectDetectionAPI.ObjectDetectionAPIPreprocessor2Replacement'>)
I would very much appreciate any help in solving this issue,
Vicenç
Hi,
Could you share the relevant files for us to validate (model files, etc.)?
Cordially,
Iffa
Hi Iffa, yes, of course.
The relevant files are attached now; please tell me if you need additional files or information.
The versions of OpenVINO and TensorFlow are:
tensorflow==2.4.0
openvino==2022.3.0
Thank you very much,
Vicenç Parisi
The saved_model files that you shared are incomplete.
The SavedModel format should consist of a directory with a saved_model.pb file and two subfolders: variables and assets.
Make sure you are using and sharing the correct TensorFlow model files.
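For reference, a complete SavedModel directory typically looks like the layout below (a generic example; the exact file names under variables depend on how the model was exported):
saved_model/
    saved_model.pb
    assets/
    variables/
        variables.data-00000-of-00001
        variables.index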
Cordially,
Iffa
Hi, thank you for your message.
I know that they are incomplete, but the folder is 222 MB and cannot be attached.
("The file (saved_model.tar.xz) exceeds the maximum file size. The maximum file size is 71 MB.")
I would very much appreciate it if you could give me advice on how to send it.
Vicenç
You could upload it to Google Drive and share the link with me (I'll request access afterwards if you made the file private), or upload it to your GitHub page.
I don't think sharing through my Intel email is an option here, since it also has a file size limit.
Cordially,
Iffa
Hi Iffa,
the link to the compressed saved_model folder is:
https://drive.google.com/file/d/1hyL5bjGp6V5bUpjCplQFoMTkyUN4QuX7/view?usp=share_link
Thank you!
Vicenç
I checked your model and managed to convert it. (Note that your model is dynamically shaped.)
This is the command that I used: mo --saved_model_dir "C:\Users\sjaismex\Downloads\saved_model (1)\saved_model"
It is best to also provide your input shape here, e.g.: mo --saved_model_dir "C:\Users\sjaismex\Downloads\saved_model (1)\saved_model" --input_shape [1,-1,-1,3]
This is the inference result for your model.
Command that I used: benchmark_app -m C:\Users\sjaismex\saved_model.xml -data_shape [1,5,4,3]
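For reference, a minimal Python sketch of running the converted IR with the OpenVINO Runtime is shown below (assuming openvino==2022.3 as in this thread; the file path and the 5x4 dummy image only mirror the benchmark_app command above and are placeholders, not part of the original reply):
import numpy as np
from openvino.runtime import Core

core = Core()
# Path to the IR produced by mo (placeholder)
model = core.read_model("C:/Users/sjaismex/saved_model.xml")
compiled_model = core.compile_model(model, "CPU")

# The model has a dynamic input shape, so any [1, H, W, 3] image is accepted;
# a 5x4 dummy image matches the -data_shape [1,5,4,3] used with benchmark_app.
# Switch the dtype to np.float32 if the converted IR expects floating-point input.
image = np.zeros((1, 5, 4, 3), dtype=np.uint8)
results = compiled_model([image])

# Print the shape of every output tensor
for value in results.values():
    print(value.shape)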
Cordially,
Iffa
Thank you very much, Iffa!
So, we don't need to specify transformations_config and tensorflow_object_detection_api_pipeline_config?
By the way, which CPU did you use for the inference test (is it an i7)? I am asking because the inference time is around 60 s per image.
Is there a way to make it faster?
Cordially,
Vicenç
You might want to pay attention to the warnings that were printed.
They didn't cause errors, but addressing them might help your model function more efficiently.
Yes, I'm using an i7 for the testing. You can consider using OpenVINO Model Optimization to improve the performance.
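As a generic example of what can be tuned (an illustration, not a recommendation specific to this model), the OpenVINO Runtime can be asked to optimize for throughput through a performance hint; benchmark_app exposes the same setting via -hint throughput:
from openvino.runtime import Core

core = Core()
model = core.read_model("saved_model.xml")  # path to the converted IR (placeholder)

# PERFORMANCE_HINT lets the runtime choose stream and thread settings for the stated goal;
# use "LATENCY" instead if single-image response time matters more than total throughput.
compiled_model = core.compile_model(model, "CPU", {"PERFORMANCE_HINT": "THROUGHPUT"})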
Cordially,
Iffa
Hi,
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Cordially,
Iffa
