Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Conversion of ssd_EfficientNet b0

GustavoLMourao

Hello everyone.

 

We are trying to convert an ssd_efficientnet b0 model to FP32 and FP16. I have tried two sets of input arguments:

 

1.

```

python3 ~/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo_tf.py --saved_model_dir saved_model/ --output_dir eff_ir --transformations_config ~/intel/openvino_2021.4.689/deployment_tools/model_optimizer/extensions/front/tf/efficient_det_support_api_v2.4.json --tensorflow_use_custom_operations_config pipeline.config --input_shape [1,512,512,3] --input_checkpoint checkpoint

```

2.

```

python3 ~/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo_tf.py --saved_model_dir saved_model/ --output_dir eff_ir --reverse_input_channels --tensorflow_use_custom_operations_config pipeline.config --input_shape [1,512,512,3] --input_checkpoint checkpoint

```
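As a side note on the FP32/FP16 goal: neither command above selects the IR precision, which in this Model Optimizer version is controlled by the --data_type flag (FP32 is the default). A hypothetical Python sketch composing both command variants; the paths mirror the ones used in this thread and the output directory names are illustrative:

```python
# Hypothetical sketch: composing mo_tf.py argument lists for FP32 and FP16 IRs.
# Paths follow this thread's setup and may differ on your machine.
MO = "~/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo_tf.py"

def mo_command(data_type="FP32"):
    """Return the mo_tf.py argument list for the requested IR precision."""
    args = [
        "python3", MO,
        "--saved_model_dir", "saved_model/",
        "--output_dir", f"eff_ir_{data_type.lower()}",
        "--input_shape", "[1,512,512,3]",
    ]
    if data_type != "FP32":  # FP32 is the default, so no flag is needed
        args += ["--data_type", data_type]
    return args

fp32_cmd = mo_command("FP32")
fp16_cmd = mo_command("FP16")
```

Running the FP16 variant on top of a working FP32 conversion should be the only change needed to get both precisions.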

 

For the first command, I got:

 

```

python3 ~/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo_tf.py --saved_model_dir saved_model/ --output_dir eff_ir --transformations_config ~/intel/openvino_2021.4.689/deployment_tools/model_optimizer/extensions/front/tf/efficient_det_support_api_v2.4.json --tensorflow_use_custom_operations_config pipeline.config --input_shape [1,512,512,3] --input_checkpoint checkpoint
[ WARNING ] Use of deprecated cli option --tensorflow_use_custom_operations_config detected. Option use in the following releases will be fatal. Please use --transformations_config cli option instead
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: None
- Path for generated IR: /home/macnicadhw/Documents/ambev-autoML/models/efficientdet-20210824T181239Z-001/efficientdet/eff_ir
- IR output name: saved_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,512,512,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: /home/macnicadhw/Documents/ambev-autoML/models/efficientdet-20210824T181239Z-001/efficientdet/pipeline.config
- Inference Engine found in: /home/macnicadhw/intel/openvino_2021.4.689/python/python3.6/openvino
Inference Engine version: 2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version: 2021.4.1-3926-14e67d86634-releases/2021/4
2021-09-17 10:18:29.610781: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/macnicadhw/intel/openvino_2021.4.689/data_processing/dl_streamer/lib:/home/macnicadhw/intel/openvino_2021.4.689/data_processing/gstreamer/lib:/home/macnicadhw/intel/openvino_2021.4.689/opencv/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/ngraph/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/tbb/lib::/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/hddl/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/omp/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/gna/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/lib/intel64
2021-09-17 10:18:29.610819: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
2021-09-17 10:18:31.594209: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-09-17 10:18:31.594411: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/macnicadhw/intel/openvino_2021.4.689/data_processing/dl_streamer/lib:/home/macnicadhw/intel/openvino_2021.4.689/data_processing/gstreamer/lib:/home/macnicadhw/intel/openvino_2021.4.689/opencv/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/ngraph/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/tbb/lib::/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/hddl/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/omp/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/gna/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/lib/intel64
2021-09-17 10:18:31.594422: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-09-17 10:18:31.594469: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (macnicadhw-NUC8i7HVK): /proc/driver/nvidia/version does not exist
2021-09-17 10:18:31.594670: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-09-17 10:18:31.595036: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-09-17 10:18:45.345639: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2021-09-17 10:18:45.345820: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2021-09-17 10:18:45.346167: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-09-17 10:18:45.363896: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 3099995000 Hz
2021-09-17 10:18:45.699588: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:928] Optimization results for grappler item: graph_to_optimize
function_optimizer: Graph size after: 5909 nodes (5208), 13515 edges (12807), time = 151.709ms.
function_optimizer: Graph size after: 5909 nodes (0), 13515 edges (0), time = 72.863ms.
Optimization results for grappler item: __inference_map_while_Preprocessor_ResizeToRange_cond_false_12929_61013
function_optimizer: function_optimizer did nothing. time = 0.001ms.
function_optimizer: function_optimizer did nothing. time = 0.001ms.
Optimization results for grappler item: __inference_map_while_Preprocessor_ResizeToRange_cond_true_12928_15958
function_optimizer: function_optimizer did nothing. time = 0.001ms.
function_optimizer: function_optimizer did nothing. time = 0.001ms.
Optimization results for grappler item: __inference_map_while_body_12882_61128
function_optimizer: Graph size after: 117 nodes (0), 126 edges (0), time = 1.109ms.
function_optimizer: Graph size after: 117 nodes (0), 126 edges (0), time = 1.224ms.
Optimization results for grappler item: __inference_map_while_cond_12881_14743
function_optimizer: function_optimizer did nothing. time = 0.001ms.
function_optimizer: function_optimizer did nothing. time = 0.001ms.

[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.transformations_config.TransformationsConfig'>): Failed to parse custom replacements configuration file '/home/macnicadhw/Documents/ambev-autoML/models/efficientdet-20210824T181239Z-001/efficientdet/pipeline.config': Expecting value: line 1 column 1 (char 0).
For more information please refer to Model Optimizer FAQ, question #70. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?q...)


```
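The "Expecting value: line 1 column 1 (char 0)" message is a JSON parse failure: --tensorflow_use_custom_operations_config is an alias of --transformations_config (as the deprecation warning in the log says) and expects a JSON replacements file, but pipeline.config is a protobuf text file, so parsing fails on the very first byte. A minimal reproduction with the standard library; the config snippet below is a hypothetical fragment:

```python
import json

# pipeline.config from the TF Object Detection API is protobuf text, not JSON.
# Feeding it to a JSON parser fails immediately, producing the same message
# seen in the Model Optimizer error above.
pipeline_config_snippet = "model {\n  ssd {\n    num_classes: 1\n  }\n}\n"

try:
    json.loads(pipeline_config_snippet)
    message = None
except json.JSONDecodeError as err:
    message = str(err)  # "Expecting value: line 1 column 1 (char 0)"
```

In other words, pipeline.config belongs in the Object Detection API pipeline-config argument, not in the transformations-config one.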

 

For the second one:

```

python3 ~/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo_tf.py --saved_model_dir saved_model/ --output_dir eff_ir --reverse_input_channels --tensorflow_use_custom_operations_config pipeline.config --input_shape [1,512,512,3] --input_checkpoint checkpoint
[ WARNING ] Use of deprecated cli option --tensorflow_use_custom_operations_config detected. Option use in the following releases will be fatal. Please use --transformations_config cli option instead
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: None
- Path for generated IR: /home/macnicadhw/Documents/ambev-autoML/models/efficientdet-20210824T181239Z-001/efficientdet/eff_ir
- IR output name: saved_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,512,512,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: True
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: /home/macnicadhw/Documents/ambev-autoML/models/efficientdet-20210824T181239Z-001/efficientdet/pipeline.config
- Inference Engine found in: /home/macnicadhw/intel/openvino_2021.4.689/python/python3.6/openvino
Inference Engine version: 2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version: 2021.4.1-3926-14e67d86634-releases/2021/4
2021-09-17 10:17:40.112650: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/macnicadhw/intel/openvino_2021.4.689/data_processing/dl_streamer/lib:/home/macnicadhw/intel/openvino_2021.4.689/data_processing/gstreamer/lib:/home/macnicadhw/intel/openvino_2021.4.689/opencv/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/ngraph/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/tbb/lib::/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/hddl/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/omp/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/gna/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/lib/intel64
2021-09-17 10:17:40.112672: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
2021-09-17 10:17:42.059106: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-09-17 10:17:42.059336: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/macnicadhw/intel/openvino_2021.4.689/data_processing/dl_streamer/lib:/home/macnicadhw/intel/openvino_2021.4.689/data_processing/gstreamer/lib:/home/macnicadhw/intel/openvino_2021.4.689/opencv/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/ngraph/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/tbb/lib::/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/hddl/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/omp/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/gna/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/lib/intel64
2021-09-17 10:17:42.059349: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-09-17 10:17:42.059396: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (macnicadhw-NUC8i7HVK): /proc/driver/nvidia/version does not exist
2021-09-17 10:17:42.059650: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-09-17 10:17:42.060013: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-09-17 10:17:55.525629: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2021-09-17 10:17:55.525812: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2021-09-17 10:17:55.526139: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-09-17 10:17:55.543801: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 3099995000 Hz
2021-09-17 10:17:55.882656: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:928] Optimization results for grappler item: graph_to_optimize
function_optimizer: Graph size after: 5909 nodes (5208), 13515 edges (12807), time = 153.121ms.
function_optimizer: Graph size after: 5909 nodes (0), 13515 edges (0), time = 73.813ms.
Optimization results for grappler item: __inference_map_while_Preprocessor_ResizeToRange_cond_false_12929_61013
function_optimizer: function_optimizer did nothing. time = 0.002ms.
function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: __inference_map_while_Preprocessor_ResizeToRange_cond_true_12928_15958
function_optimizer: function_optimizer did nothing. time = 0.001ms.
function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: __inference_map_while_body_12882_61128
function_optimizer: Graph size after: 117 nodes (0), 126 edges (0), time = 1.117ms.
function_optimizer: Graph size after: 117 nodes (0), 126 edges (0), time = 1.194ms.
Optimization results for grappler item: __inference_map_while_cond_12881_14743
function_optimizer: function_optimizer did nothing. time = 0.001ms.
function_optimizer: function_optimizer did nothing. time = 0ms.

[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.transformations_config.TransformationsConfig'>): Failed to parse custom replacements configuration file '/home/macnicadhw/Documents/ambev-autoML/models/efficientdet-20210824T181239Z-001/efficientdet/pipeline.config': Expecting value: line 1 column 1 (char 0).
For more information please refer to Model Optimizer FAQ, question #70. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?q...)

```

Python version: 3.6
TensorFlow version: 2.4.3

 

 

Wan_Intel
Moderator

Hi GustavoLMourao,

Thanks for reaching out.


Could you please share your model with us for further investigation?


Regards,

Wan


GustavoLMourao

Hi Wan,

Thanks for the support.

 

We have now tried the conversion with the following arguments:

 

```

python3 ~/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo_tf.py --saved_model_dir saved_model/ --output_dir eff_ir --transformations_config ~/intel/openvino_2021.4.689/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v2.4.json --tensorflow_use_custom_operations_config pipeline.config

```

 

```

Model Optimizer arguments:
Common parameters:
- Path to the Input Model: None
- Path for generated IR: /home/macnicadhw/Documents/ambev-autoML/models/efficientdet-20210824T181239Z-001/efficientdet/eff_ir
- IR output name: saved_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: /home/macnicadhw/Documents/ambev-autoML/models/efficientdet-20210824T181239Z-001/efficientdet/pipeline.config
- Use the config file: None
- Inference Engine found in: /home/macnicadhw/intel/openvino_2021.4.689/python/python3.6/openvino
Inference Engine version: 2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version: 2021.4.1-3926-14e67d86634-releases/2021/4
2021-09-22 08:29:37.083836: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/macnicadhw/intel/openvino_2021.4.689/data_processing/dl_streamer/lib:/home/macnicadhw/intel/openvino_2021.4.689/data_processing/gstreamer/lib:/home/macnicadhw/intel/openvino_2021.4.689/opencv/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/ngraph/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/tbb/lib::/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/hddl/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/omp/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/gna/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/lib/intel64
2021-09-22 08:29:37.083857: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
2021-09-22 08:29:39.396936: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-09-22 08:29:39.397517: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/macnicadhw/intel/openvino_2021.4.689/data_processing/dl_streamer/lib:/home/macnicadhw/intel/openvino_2021.4.689/data_processing/gstreamer/lib:/home/macnicadhw/intel/openvino_2021.4.689/opencv/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/ngraph/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/tbb/lib::/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/hddl/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/omp/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/gna/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/home/macnicadhw/intel/openvino_2021.4.689/deployment_tools/inference_engine/lib/intel64
2021-09-22 08:29:39.397556: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-09-22 08:29:39.397603: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (macnicadhw-NUC8i7HVK): /proc/driver/nvidia/version does not exist
2021-09-22 08:29:39.397804: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-09-22 08:29:39.398200: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-09-22 08:29:53.911867: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2021-09-22 08:29:53.912032: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2021-09-22 08:29:53.912353: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-09-22 08:29:53.930567: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 3099995000 Hz
2021-09-22 08:29:54.300506: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:928] Optimization results for grappler item: graph_to_optimize
function_optimizer: Graph size after: 5909 nodes (5208), 13515 edges (12807), time = 165.852ms.
function_optimizer: Graph size after: 5909 nodes (0), 13515 edges (0), time = 81.704ms.
Optimization results for grappler item: __inference_map_while_Preprocessor_ResizeToRange_cond_false_12929_61013
function_optimizer: function_optimizer did nothing. time = 0.002ms.
function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: __inference_map_while_Preprocessor_ResizeToRange_cond_true_12928_15958
function_optimizer: function_optimizer did nothing. time = 0.001ms.
function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: __inference_map_while_body_12882_61128
function_optimizer: Graph size after: 117 nodes (0), 126 edges (0), time = 1.143ms.
function_optimizer: Graph size after: 117 nodes (0), 126 edges (0), time = 1.272ms.
Optimization results for grappler item: __inference_map_while_cond_12881_14743
function_optimizer: function_optimizer did nothing. time = 0ms.
function_optimizer: function_optimizer did nothing. time = 0ms.

[ WARNING ] Failed to send event with the following error: HTTPSConnectionPool(host='www.google-analytics.com', port=443): Read timed out. (read timeout=1.0)
[ WARNING ] Failed to send event with the following error: HTTPSConnectionPool(host='www.google-analytics.com', port=443): Read timed out. (read timeout=1.0)
[ WARNING ] Failed to send event with the following error: HTTPSConnectionPool(host='www.google-analytics.com', port=443): Read timed out. (read timeout=1.0)
[ WARNING ] Failed to send event with the following error: HTTPSConnectionPool(host='www.google-analytics.com', port=443): Read timed out. (read timeout=1.0)
[ WARNING ] Failed to send event with the following error: HTTPSConnectionPool(host='www.google-analytics.com', port=443): Read timed out. (read timeout=1.0)
[ WARNING ] Failed to send event with the following error: HTTPSConnectionPool(host='www.google-analytics.com', port=443): Read timed out. (read timeout=1.0)
[ WARNING ] Failed to send event with the following error: HTTPSConnectionPool(host='www.google-analytics.com', port=443): Read timed out. (read timeout=1.0)
[ WARNING ] Failed to send event with the following error: HTTPSConnectionPool(host='www.google-analytics.com', port=443): Read timed out. (read timeout=1.0)
[ WARNING ] Failed to send event with the following error: HTTPSConnectionPool(host='www.google-analytics.com', port=443): Max retries exceeded with url: /collect (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f7a7d3ac8d0>, 'Connection to www.google-analytics.com timed out. (connect timeout=1.0)'))
[ WARNING ] Failed to send event with the following error: HTTPSConnectionPool(host='www.google-analytics.com', port=443): Read timed out. (read timeout=1.0)
[ WARNING ] Failed to send event with the following error: HTTPSConnectionPool(host='www.google-analytics.com', port=443): Read timed out. (read timeout=1.0)
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (512, 512).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/macnicadhw/Documents/ambev-autoML/models/efficientdet-20210824T181239Z-001/efficientdet/eff_ir/saved_model.xml
[ SUCCESS ] BIN file: /home/macnicadhw/Documents/ambev-autoML/models/efficientdet-20210824T181239Z-001/efficientdet/eff_ir/saved_model.bin

```

 

And it worked (which is strange to me, since we passed a config intended for another topology). Could you explain this behavior?
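As a sanity check on the generated IR, the input shape recorded in saved_model.xml can be inspected with the standard library. The XML below is a hypothetical, heavily trimmed fragment of an IR v10 file (a real one has many layers and a weights offset table), just to illustrate the lookup:

```python
import xml.etree.ElementTree as ET

# Hypothetical trimmed fragment of an IR v10 saved_model.xml; in practice
# you would ET.parse() the real file produced by Model Optimizer.
ir_xml = """
<net name="saved_model" version="10">
  <layers>
    <layer id="0" name="input_tensor" type="Parameter" version="opset1">
      <data shape="1,512,512,3" element_type="f32"/>
      <output>
        <port id="0" precision="FP32">
          <dim>1</dim><dim>512</dim><dim>512</dim><dim>3</dim>
        </port>
      </output>
    </layer>
  </layers>
</net>
"""

root = ET.fromstring(ir_xml)
# Parameter layers are the network inputs in IR v10.
inputs = [l for l in root.iter("layer") if l.get("type") == "Parameter"]
input_shape = inputs[0].find("data").get("shape")
```

This confirms whether the fixed 512x512 shape mentioned in the Model Optimizer warning actually ended up in the IR.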

Wan_Intel
Moderator

Hi GustavoLMourao,

 

Glad that you’ve been able to convert your model successfully. 

ssd_support_api_v2.4.json is used for SSD topologies trained with the TensorFlow Object Detection API version 2.4 or higher.

 

Your model is ssd_efficientnet b0, which is an SSD with an EfficientNet backbone.

Therefore, ssd_support_api_v2.4.json is the appropriate config file.

 

Best regards,

Wan


Wan_Intel
Moderator

Hi GustavoLMourao,


This thread will no longer be monitored since we have provided a solution. 

If you need any additional information from Intel, please submit a new question.



Regards,

Wan

