Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Model Optimizer SSD_mobilenet_v2 custom dataset

Saibro__Güinther
745 Views

Hi,

I have been trying to run the Model Optimizer on ssd_mobilenet_v2 from the Google Object Detection API, trained with my own dataset. I am able to run the optimizer with the models provided by the Model Zoo, but when trying to use the exact same model fine-tuned with my own dataset I get the following error:

The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ]  FusedBatchNorm doesn't support is_training=True. Node FeatureExtractor/MobilenetV2/Conv_1/BatchNorm/FusedBatchNorm
Exception occurred during running replacer "Fused_Batch_Norm_is_training_true_catcher" (<class 'extensions.middle.FusedBatchNormTrainingCatch.FusedBatchNormTrainingCatch'>): FusedBatchNorm doesn't support is_training=True. Node FeatureExtractor/MobilenetV2/Conv_1/BatchNorm/FusedBatchNorm

I put the following as args for the optimizer:

python mo_tf.py --input_model C:<path>\frozen_inference_graph.pb --tensorflow_use_custom_operations_config "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\extensions\front\tf\ssd_support_api_v1.14.json" --tensorflow_object_detection_api_pipeline_config "<path>\pipeline.config"

Can someone help me with this problem?

My pipeline.config is the same as the one from ssd_mobilenet_v2_coco_2018_03_29; I only replaced num_classes.
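As a sanity check on that edit, here is a small helper I use (my own sketch, not part of the Object Detection API; the function name and sample text are made up for illustration) that patches the num_classes field in a pipeline.config:

```python
import re

def set_num_classes(config_text: str, n: int) -> str:
    # Replace the first num_classes value in a TF Object Detection
    # pipeline.config; everything else in the file is left untouched.
    return re.sub(r"num_classes:\s*\d+", f"num_classes: {n}", config_text, count=1)

# Minimal stand-in for the stock ssd_mobilenet_v2_coco_2018_03_29 config.
sample = "model {\n  ssd {\n    num_classes: 90\n  }\n}"
print(set_num_classes(sample, 5))
```

Apart from that one value, the file matches the stock config byte for byte.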

4 Replies
Cary_P_Intel1
Employee

Hi, Güinther,

Can you try adding "--input is_training=False" while converting the model? I don't know whether it will work or not. Alternatively, please freeze the model for inference rather than for training, so that the training-only nodes are removed.
Saibro__Güinther

Thank you for your answer. I am following the instructions from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/exporting_models.md; would you have any insights on how to force is_training = False? Normally that should already be done at https://github.com/tensorflow/models/blob/ad56514f100de0c724a704db9142af9f3d0dd6e0/research/object_detection/exporter.py#L493.

Saibro__Güinther

The problem seems to come from the TensorFlow Object Detection API: when running export_inference_graph, the flag is_training is not set to False even though it is hard-coded. Have you tested training a model from there with a custom dataset and then running it through the Model Optimizer? If so, is there a step I'm missing?
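For reference, my export step follows the standard export_inference_graph.py invocation from the Object Detection API (the paths and checkpoint number here are placeholders):

```shell
python export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path <path>/pipeline.config \
    --trained_checkpoint_prefix <path>/model.ckpt-XXXX \
    --output_directory <path>/exported_model
```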

Another question, about running the Model Optimizer with --input_meta_graph: in that case I don't need a frozen inference graph, is that right? However, when running it I got the following error:

Model Optimizer version:        2019.3.0-408-gac8584cb7
2019-12-05 16:30:38.736964: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found
2019-12-05 16:30:38.741487: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2019-12-05 16:30:40.646551: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found
2019-12-05 16:30:40.649724: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: UNKNOWN ERROR (303)
2019-12-05 16:30:40.655405: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: DESKTOP-FU1L1K1
2019-12-05 16:30:40.657902: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: DESKTOP-FU1L1K1
2019-12-05 16:30:40.660935: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
[ ERROR ]  Graph contains 0 node after executing <class 'extensions.front.output_cut.OutputCut'>. It considered as error because resulting IR will be empty which is not usual
Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.output_cut.OutputCut'>): Graph contains 0 node after executing <class 'extensions.front.output_cut.OutputCut'>. It considered as error because resulting IR will be empty which is not usual

Regards,

Güinther.

Saibro__Güinther

The problem was on the TensorFlow side; with TF 1.13.1 it works fine.
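For anyone hitting the same error, pinning the TensorFlow version used for training and exporting was enough; this is just the pip pin for the version that worked for me:

```shell
pip install "tensorflow==1.13.1"
```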
