Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.load.tf.loader.TFLoader'>)

SPaul19
Innovator

Hi.

I am using the latest version of OpenVINO to convert a custom object detection model that was fine-tuned from SSD MobileNetV2. Please note that I used the TF2 branch of the TensorFlow Object Detection API for this.

I was able to successfully convert the original pre-trained model with OpenVINO, but not the custom trained model, which I find a bit weird.

Here are the entire logs:

Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	None
	- Path for generated IR: 	/content/openvino/model-optimizer/openvino_files
	- IR output name: 	saved_model
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	None
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Use the config file: 	None
Model Optimizer version: 	unknown version
2021-02-11 11:30:21.761996: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
2021-02-11 11:30:25.326932: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-11 11:30:25.328776: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2021-02-11 11:30:25.342815: E tensorflow/stream_executor/cuda/cuda_driver.cc:328] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2021-02-11 11:30:25.342891: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (36faea018fe2): /proc/driver/nvidia/version does not exist
2021-02-11 11:30:25.343419: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-11 11:30:38.894173: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2021-02-11 11:30:38.894422: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2021-02-11 11:30:38.894848: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-11 11:30:38.895116: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2250000000 Hz
2021-02-11 11:30:39.166051: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:928] Optimization results for grappler item: graph_to_optimize
  function_optimizer: Graph size after: 2627 nodes (2293), 5775 edges (5434), time = 105.485ms.
  function_optimizer: Graph size after: 2627 nodes (0), 5775 edges (0), time = 53.748ms.
Optimization results for grappler item: __inference_Postprocessor_BatchMultiClassNonMaxSuppression_map_while_body_9758_16164
  function_optimizer: function_optimizer did nothing. time = 0.004ms.
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
Optimization results for grappler item: __inference_Postprocessor_BatchMultiClassNonMaxSuppression_map_while_cond_9757_7162
  function_optimizer: function_optimizer did nothing. time = 0.003ms.
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
Optimization results for grappler item: __inference_map_while_cond_7133_17027
  function_optimizer: function_optimizer did nothing. time = 0.002ms.
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
Optimization results for grappler item: __inference_map_while_body_7134_4578
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.

[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.load.tf.loader.TFLoader'>): Unexpected exception happened during extracting attributes for node StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/map/while.
Original exception message: '^Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/SortByField/Assert/Assert'

 

Here's the command I used:

python mo_tf.py \
	--saved_model_dir /content/saved_model \
	--output_dir openvino_files --data_type FP16

 

Additionally, I am providing the trained model files and configuration so that you folks can take a look. Also, here's the Colab notebook that I used to try the conversion.
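
For reference, here's a minimal snippet to inspect the exported SavedModel's signatures before conversion (a sketch assuming TF 2.x and the same /content/saved_model path as above):

import tensorflow as tf

# Load the SavedModel and list its serving signatures.
loaded = tf.saved_model.load("/content/saved_model")
print(list(loaded.signatures.keys()))

# Inspect the input/output tensors of the default signature.
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)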

Munesh_Intel
Moderator

Hi Sayak,

Thanks for reaching out, and for sharing your model with us.

We need further information from you. Please provide the TensorFlow version your model was trained with, as well as environment details (OS, TensorFlow, Python, and CMake versions, etc.).

 

Regards,

Munesh


SPaul19
Innovator

Hi @Munesh_Intel,

This is EXACTLY why I provided a Colab notebook, so that you can see all of this for yourself.

As for model training, it was trained using TensorFlow 2.2. 

Munesh_Intel
Moderator

Hi Sayak,

TensorFlow 2 support is in preview mode, and we don’t have validated TensorFlow 2 models yet. The TensorFlow 2 version of SSD MobileNetV2 is not officially supported, nor has it been validated.

 

However, we do provide model conversion instructions, and there is a chance that the conversion was done incorrectly in your case:

https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#savedmodel_format

 

You may need to add more arguments to the conversion command, as is usually done for TensorFlow 1 Object Detection API models.

 

For example, if you downloaded the pre-trained SSD InceptionV2 topology and extracted the archive to the directory /tmp/ssd_inception_v2_coco_2018_01_28, the command line to convert the model looks as follows:

 

<INSTALL_DIR>/deployment_tools/model_optimizer/mo_tf.py --input_model=/tmp/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb --transformations_config <INSTALL_DIR>/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /tmp/ssd_inception_v2_coco_2018_01_28/pipeline.config --reverse_input_channels

 

More information is available here:

https://docs.openvinotoolkit.org/2021.2/openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html#how_to_convert_a_model

 

We will try to convert your model on our side; please also try from your end.


Regards,

Munesh


SPaul19
Innovator

I did try with those as well, but the same error persists.

It should be noted that the pre-trained TF2 version of SSD MobileNetV2 converts successfully.

SPaul19
Innovator

Hi. 

Any updates on this?

Munesh_Intel
Moderator

Hi Sayak,


We’ve run the conversion on our side and found that the model cannot be converted because it is incompatible with Model Optimizer. As we’ve explained previously, TensorFlow 2 is in preview support mode, so not all topologies are supported by the OpenVINO Model Optimizer.

 

Apart from that, we found that the pretrained model was trained with TensorFlow version 1.15.2. As such, it is no surprise that the pretrained model was converted by Model Optimizer.

 https://colab.research.google.com/github/luxonis/depthai-ml-training/blob/master/colab-notebooks/Easy_Object_Detection_With_Custom_Data_Demo_Training.ipynb#scrollTo=mJz_ToJtkufM

 

“The model you will use is a pretrained Mobilenet SSD v2 from the Tensorflow Object Detection API model zoo. The framework used for training is TensorFlow 1.15.2”.


We would like to know whether you took the pre-trained model (TF 1.15) from the link above, converted the pre-trained TF1 model to TF2 format before custom training, or took the model template and trained it in TF2 from scratch.

 

Regards,

Munesh


Thanki__Abhishek

Hi @Munesh_Intel 

 

I work with Sayak, and I would like to let you know that this is the specific pre-trained model we used. It's clear from the link that it is a TF2 model. We had no issues converting the pre-trained model to IR format, but, as Sayak stated, we are having issues converting the fine-tuned version of it.

 

As you stated, some topologies might not yet be supported; is there a list we can refer to? From the GitHub PR it seems that MobileNet SSD v2 is supported. But does that mean that custom trained versions are also supported, or only the pre-trained versions?

 

Best regards,

Abhishek

Munesh_Intel
Moderator

Hi Thanki,

Support for the TF2 version of SSD MobileNetV2 has only recently been enabled for the pre-trained model in open-source OpenVINO, and it has just been merged into the master branch.

For this reason, for the time being, we can’t guarantee that it will work for custom trained models. Let me test your model with the latest master branch and get back to you.

 

Regards,

Munesh


Thanki__Abhishek

Hi @Munesh_Intel 

 

Any update? 

 

We also tried training MobileNetSSDv2 using TFOD v1 API but still got the same error message. 

 

Best regards,

Abhishek

Munesh_Intel
Moderator

Hi Thanki,

Regarding your TF2 model issue, we are still investigating it. As for your TF1 model, can you share the command given to Model Optimizer to convert the trained model to Intermediate Representation (IR)?

If possible, please share the trained model files for us to reproduce your issue.

 

Regards,

Munesh


SPaul19
Innovator

Hi @Munesh_Intel

Please find below the responses to your queries. Note that we are now talking about our custom trained TFOD API model in TF1, since we understand that there may be instabilities in TF2 support as of now.

Here's the command used for exporting the model's frozen graph:

python object_detection/export_inference_graph.py \
    --input_type=image_tensor \
    --pipeline_config_path={PIPELINE_CONFIG_PATH} \
    --output_directory="exported_model" \
    --trained_checkpoint_prefix="/content/models/research/helmet_detector/model.ckpt-10000"

 

where `PIPELINE_CONFIG_PATH` is the absolute path to the training configuration file (which can be accessed here). Additionally, inside the attached zip you will find the trained model files (both the checkpoints and the frozen inference graph). The `export_inference_graph.py` script is from the TFOD API library and is used as per the instructions provided here.
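
As a sanity check on the exported graph, the following sketch lists the frozen graph's placeholder (input) nodes and its last few nodes, which are usually the outputs (assuming TF 2.x with the v1 compat API and the paths from the command above):

import tensorflow.compat.v1 as tf

# Read the frozen GraphDef produced by export_inference_graph.py.
graph_def = tf.GraphDef()
with tf.gfile.GFile("./exported_model/frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Placeholders are the graph inputs; the final nodes are typically the outputs.
print([n.name for n in graph_def.node if n.op == "Placeholder"])
print([n.name for n in graph_def.node][-5:])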

Here's the command used to optimize the model:

python mo_tf.py \
    --input_model ./exported_model/frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json \
    --tensorflow_object_detection_api_pipeline_config ./helment_detector_tf1.config \
    --input_shape [1,300,300,3] \
    --reverse_input_channels \
    --output_dir output_ncs \
    --data_type FP16

 

Note that we directly clone the openvino repository, install the TF2 prerequisites, and then use the `mo_tf.py` script provided inside `openvino/model-optimizer`. Let me know if you would like to know anything else.

Munesh_Intel
Moderator

Hi Thanki,

Thank you for waiting and apologies for the delay.

We’ve tested the SSD MobileNet V2 FPNLite 320x320 model with the latest master branch and are able to convert it successfully. However, when attempting to convert your custom model, we face the same errors that you do.


Meanwhile, SSD MobileNet V2 COCO from the TensorFlow Object Detection API is a validated and supported model for OpenVINO. The Model Optimizer conversion arguments can be obtained here:

https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/ssd_mobilenet_v2_coco/model.yml#L36


The TF2 custom model conversion error is probably due to incompatibility of the model with Model Optimizer. As we’ve mentioned previously, not all topologies are supported by the OpenVINO Model Optimizer for TensorFlow.


Besides that, we suspect another probable cause: incorrect freezing of the model. Here is a similar case, in which the error was caused by an incorrect command given when freezing the model:

https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Tensorflow-error-Unexpected-exception-happened-during-extracting/td-p/1165196


More information on freezing the model is available here:

https://docs.openvinotoolkit.org/2021.2/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#freeze-the-tensorflow-model
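
For illustration, freezing essentially restores a checkpoint and folds its variables into constants. Below is a minimal TF1-style sketch (the checkpoint prefix and output node names are hypothetical; for Object Detection API models, use export_inference_graph.py instead):

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Hypothetical checkpoint prefix; restore the graph definition and weights.
ckpt = "model.ckpt-10000"
saver = tf.train.import_meta_graph(ckpt + ".meta")
with tf.Session() as sess:
    saver.restore(sess, ckpt)
    # Fold variables into constants; the output node names are model-specific.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["detection_boxes", "detection_scores"])

with tf.gfile.GFile("frozen_inference_graph.pb", "wb") as f:
    f.write(frozen.SerializeToString())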

 

Regards,

Munesh

 


Munesh_Intel
Moderator

Hi Sayak,

I’m able to convert your custom model using the following command:

 

python3 ./openvino/model-optimizer/mo_tf.py --input_model ./detector/exported_model/frozen_inference_graph.pb --tensorflow_use_custom_operations_config ./openvino/model-optimizer/extensions/front/tf/ssd_support_api_v1.15.json --tensorflow_object_detection_api_pipeline_config ./detector/helment_detector_tf1.config --input_shape [1,300,300,3] --reverse_input_channels --data_type FP16
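
As a quick check that the generated IR is usable, it can be read and loaded with the Inference Engine (a sketch assuming the 2021.x Python API; the IR files are named after the input model, so frozen_inference_graph.xml/.bin here):

from openvino.inference_engine import IECore

# Read the IR produced by Model Optimizer and load it on CPU.
ie = IECore()
net = ie.read_network(model="frozen_inference_graph.xml",
                      weights="frozen_inference_graph.bin")
print("inputs:", list(net.input_info.keys()))
print("outputs:", list(net.outputs.keys()))
exec_net = ie.load_network(network=net, device_name="CPU")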

 

Regards,

Munesh


SPaul19
Innovator

Thank you, Munesh, for your help. We can confirm that with the command you provided, the conversion now works perfectly.

Should we keep this thread open so that TF2 conversion errors can be tracked? What do you suggest in that regard?

Munesh_Intel
Moderator

Hi Sayak,

Glad to hear that you've successfully converted your model.


Regarding your suggestion to keep this thread open: as per current practice, if our involvement is no longer needed, we close the thread. If our assistance is still required, we can keep the case open for ten days, so that if you face other issues/errors within this timeframe, we can help. If you face issues/errors outside of this timeframe, you can simply create a new follow-up thread.

 

If you want to keep the thread open "just in case", without anything happening within the case, then I’m afraid we are unable to accommodate that request.

 

As of now, TensorFlow 2 support is in preview mode, and, as we’ve mentioned previously, not all topologies are supported by OpenVINO. More features will be enabled in upcoming OpenVINO releases.


Regards,

Munesh


Munesh_Intel
Moderator

Hi Sayak,

This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.


Regards,

Munesh

