Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Issue converting AutoML TensorFlow model in openvino toolkit

robopp
Beginner

I have an object detection model developed in the Google Cloud AutoML Vision service. That service outputs a single saved_model.pb file (it also outputs tflite and tf.js versions).

Running this file through the OpenVINO Model Optimizer produces the following error.

 

C:\Program Files (x86)\Intel\openvino_2021.4.689\deployment_tools\model_optimizer>py -3.8-64 mo.py --saved_model_dir D:\Google\TFContainer --reverse_input_channels
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: None
- Path for generated IR: C:\Program Files (x86)\Intel\openvino_2021.4.689\deployment_tools\model_optimizer\.
- IR output name: saved_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: True
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: None
- Inference Engine found in: C:\Program Files (x86)\Intel\openvino_2021.4.689\python\python3.8\openvino
Inference Engine version: 2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version: 2021.4.1-3926-14e67d86634-releases/2021/4
2021-10-16 14:44:03.825371: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-10-16 14:44:03.825523: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
C:\Users\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\autograph\impl\api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
2021-10-16 14:44:07.546940: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-10-16 14:44:07.547844: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library nvcuda.dll
2021-10-16 14:44:08.001769: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce 940MX computeCapability: 5.0
coreClock: 1.189GHz coreCount: 3 deviceMemorySize: 2.00GiB deviceMemoryBandwidth: 29.80GiB/s
2021-10-16 14:44:08.003259: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-10-16 14:44:08.004432: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cublas64_11.dll'; dlerror: cublas64_11.dll not found
2021-10-16 14:44:08.008225: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cublasLt64_11.dll'; dlerror: cublasLt64_11.dll not found
2021-10-16 14:44:08.009320: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cufft64_10.dll'; dlerror: cufft64_10.dll not found
2021-10-16 14:44:08.010417: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'curand64_10.dll'; dlerror: curand64_10.dll not found
2021-10-16 14:44:08.011492: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cusolver64_10.dll'; dlerror: cusolver64_10.dll not found
2021-10-16 14:44:08.012598: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cusparse64_11.dll'; dlerror: cusparse64_11.dll not found
2021-10-16 14:44:08.013649: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudnn64_8.dll'; dlerror: cudnn64_8.dll not found
2021-10-16 14:44:08.013799: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1757] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2021-10-16 14:44:08.014388: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-10-16 14:44:08.015056: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-10-16 14:44:08.015246: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267]
2021-10-16 14:44:08.015639: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-10-16 14:44:08.040262: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
2021-10-16 14:44:09.855246: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2021-10-16 14:44:09.855670: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2021-10-16 14:44:09.862338: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce 940MX computeCapability: 5.0
coreClock: 1.189GHz coreCount: 3 deviceMemorySize: 2.00GiB deviceMemoryBandwidth: 29.80GiB/s
2021-10-16 14:44:09.863536: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-10-16 14:44:09.864626: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cublas64_11.dll'; dlerror: cublas64_11.dll not found
2021-10-16 14:44:09.865692: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cublasLt64_11.dll'; dlerror: cublasLt64_11.dll not found
2021-10-16 14:44:09.866756: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cufft64_10.dll'; dlerror: cufft64_10.dll not found
2021-10-16 14:44:09.867850: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'curand64_10.dll'; dlerror: curand64_10.dll not found
2021-10-16 14:44:09.868954: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cusolver64_10.dll'; dlerror: cusolver64_10.dll not found
2021-10-16 14:44:09.870755: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cusparse64_11.dll'; dlerror: cusparse64_11.dll not found
2021-10-16 14:44:09.872893: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudnn64_8.dll'; dlerror: cudnn64_8.dll not found
2021-10-16 14:44:09.873083: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1757] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2021-10-16 14:44:09.935418: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-10-16 14:44:09.935605: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267] 0
2021-10-16 14:44:09.936633: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0: N
2021-10-16 14:44:09.940320: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-10-16 14:44:09.992214: E tensorflow/core/grappler/grappler_item_builder.cc:669] Init node index_to_string/table_init/LookupTableImportV2 doesn't exist in graph
[ FRAMEWORK ERROR ] Cannot load input model: SavedModel format load failure: Failed to import metagraph, check error log for more info.

 

I understand from other posts that I may need a pipeline.config file, but I'm not sure this model aligns with any of the pipeline.config samples, and I'm unsure how to assemble one from scratch.

Thanks

2 Replies
Iffa_Intel
Moderator

Hi,

 

Generally, these are the steps for optimizing and deploying a model that was trained with the TensorFlow* framework:

 

  1. Configure the Model Optimizer for TensorFlow* (the framework used to train your model).
  2. Freeze the TensorFlow model if it is not already frozen, or skip this step and follow the instructions for converting a non-frozen model (see the sketches below).
  3. Convert the TensorFlow* model to produce an optimized Intermediate Representation (IR) based on the trained network topology, weights, and biases.
  4. Test the model in Intermediate Representation format using the Inference Engine in the target environment via the provided sample applications.
  5. Integrate the Inference Engine into your application to deploy the model in the target environment.
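
Before freezing (step #2), it can help to check what the AutoML SavedModel actually exposes; this also gives you the input and output tensor names that the Model Optimizer may need later. A minimal sketch, assuming the export loads in TensorFlow 2 and provides a serving_default signature (the signature name is an assumption, not taken from your log):

import tensorflow as tf

saved_model_dir = r"D:\Google\TFContainer"  # the folder that contains saved_model.pb

loaded = tf.saved_model.load(saved_model_dir)
print("Available signatures:", list(loaded.signatures.keys()))

# Most exports use 'serving_default'; adjust if yours differs.
sig = loaded.signatures["serving_default"]
print("Inputs :", sig.inputs)
print("Outputs:", sig.outputs)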

 

Step #2 is really important: the OpenVINO Model Optimizer needs to be given a frozen model.
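
For a TensorFlow 2 SavedModel such as the AutoML export, one way to obtain a frozen graph is to fold the variables into constants and write a single .pb file. This is only a minimal sketch, not an official AutoML recipe, and models that rely on lookup tables (as the LookupTableImportV2 error in your log suggests) may not freeze cleanly:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

saved_model_dir = r"D:\Google\TFContainer"  # the folder with saved_model.pb

loaded = tf.saved_model.load(saved_model_dir)

# 'serving_default' is an assumption; pick the signature your export actually provides.
concrete_func = loaded.signatures["serving_default"]
frozen_func = convert_variables_to_constants_v2(concrete_func)

# Write the frozen GraphDef next to the original model (the file name is arbitrary).
tf.io.write_graph(frozen_func.graph.as_graph_def(),
                  logdir=saved_model_dir,
                  name="frozen_model.pb",
                  as_text=False)

If freezing succeeds, you would then point mo.py at the frozen graph with --input_model (usually together with an explicit --input_shape) instead of --saved_model_dir.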

Besides that, your TensorFlow model's topology must be listed in the supported topologies section (if it is not, your model is not supported):

https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#supported_topologies
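
Once you do get an IR (the .xml/.bin pair that the Model Optimizer writes), a quick way to perform step #4 above is the Inference Engine Python API that ships with OpenVINO 2021.4. A minimal sketch, assuming the IR files are named saved_model.xml/saved_model.bin and using a dummy input just to confirm the network loads and runs:

import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="saved_model.xml", weights="saved_model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

# Query the first input's name and shape from the network itself.
input_name = next(iter(net.input_info))
input_shape = net.input_info[input_name].input_data.shape

# Run a dummy inference just to confirm the IR executes end to end.
dummy = np.zeros(input_shape, dtype=np.float32)
results = exec_net.infer({input_name: dummy})
print({name: out.shape for name, out in results.items()})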

 

 

 

Sincerely,

Iffa

 

Iffa_Intel
Moderator

Greetings,


Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question. 


Sincerely,

Iffa

