Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Same error when generating IR from multiple models

Diego_Bima
Beginner

Hello, I am unable to convert my custom models to IR; with all of them I get the same error.
They were trained with TensorFlow 2.4.1.
I tried with:
faster_rcnn_inception_resnet_v2_640x640_coco17_tpu-8
ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8
ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8
faster_rcnn_resnet50_v1_640x640_coco17_tpu-8
faster_rcnn_resnet101_v1

In each case I set the --transformations_config option accordingly.
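For reference, the kind of invocation I have been using looks roughly like the sketch below. The paths and the specific `ssd_support_api_v2.0.json` config name are illustrative placeholders, not my real layout; the transformations config is swapped to match each model:

```shell
# Sketch of a Model Optimizer invocation for a TF2 Object Detection API model.
# Every path below is a hypothetical placeholder; substitute your own.
MO_ROOT="/opt/intel/openvino_2021/deployment_tools/model_optimizer"
MODEL_DIR="/path/to/exported-model"

# Build the command piece by piece so each flag is visible.
CMD="python $MO_ROOT/mo_tf.py"
CMD="$CMD --saved_model_dir $MODEL_DIR/saved_model"
CMD="$CMD --transformations_config $MO_ROOT/extensions/front/tf/ssd_support_api_v2.0.json"
CMD="$CMD --tensorflow_object_detection_api_pipeline_config $MODEL_DIR/pipeline.config"
CMD="$CMD --reverse_input_channels"
echo "$CMD"
```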

The error log I receive is the following:

[setupvars.bat] OpenVINO environment initialized
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      None
        - Path for generated IR:        D:\Tensorflow2\workspace\Patentes\exported-OpenVino-models\my_ssd_mobilenet_v1_fpn_640x640_coco17
        - IR output name:       BiarticModel
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       None
        - Reverse input channels:       True
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  D:\Tensorflow2\workspace\Patentes\exported-models\my_ssd_mobilenet_v1_fpn_640x640_coco17\pipeline.config
        - Use the config file:  None
        - Inference Engine found in:    C:\Program Files (x86)\IntelSWTools\openvino_2021.3.394\python\python3.7\openvino
Inference Engine version:       2.1.2021.3.0-2787-60059f2c755-releases/2021/3
Model Optimizer version:            2021.3.0-2787-60059f2c755-releases/2021/3
2021-05-19 10:26:57.817898: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-05-19 10:27:00.971445: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-05-19 10:27:00.973349: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library nvcuda.dll
2021-05-19 10:27:00.999818: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: NVIDIA GeForce RTX 2070 Super with Max-Q Design computeCapability: 7.5
coreClock: 1.08GHz coreCount: 40 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 327.88GiB/s
2021-05-19 10:27:00.999933: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-05-19 10:27:01.007445: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2021-05-19 10:27:01.007544: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2021-05-19 10:27:01.011344: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cufft64_10.dll
2021-05-19 10:27:01.015244: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library curand64_10.dll
2021-05-19 10:27:01.019860: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusolver64_10.dll
2021-05-19 10:27:01.040467: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusparse64_11.dll
2021-05-19 10:27:01.041865: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudnn64_8.dll
2021-05-19 10:27:01.042095: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-05-19 10:27:01.042482: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-05-19 10:27:01.043991: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: NVIDIA GeForce RTX 2070 Super with Max-Q Design computeCapability: 7.5
coreClock: 1.08GHz coreCount: 40 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 327.88GiB/s
2021-05-19 10:27:01.044075: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-05-19 10:27:01.044322: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2021-05-19 10:27:01.044556: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2021-05-19 10:27:01.044925: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cufft64_10.dll
2021-05-19 10:27:01.045235: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library curand64_10.dll
2021-05-19 10:27:01.045513: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusolver64_10.dll
2021-05-19 10:27:01.045742: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusparse64_11.dll
2021-05-19 10:27:01.046020: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudnn64_8.dll
2021-05-19 10:27:01.046269: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-05-19 10:27:01.513826: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-05-19 10:27:01.513959: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267]      0
2021-05-19 10:27:01.514228: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0:   N
2021-05-19 10:27:01.514467: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6611 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2070 Super with Max-Q Design, pci bus id: 0000:01:00.0, compute capability: 7.5)
2021-05-19 10:27:01.515041: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-05-19 10:27:07.325037: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2021-05-19 10:27:07.325378: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2021-05-19 10:27:07.326499: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: NVIDIA GeForce RTX 2070 Super with Max-Q Design computeCapability: 7.5
coreClock: 1.08GHz coreCount: 40 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 327.88GiB/s
2021-05-19 10:27:07.326590: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-05-19 10:27:07.326629: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2021-05-19 10:27:07.326658: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2021-05-19 10:27:07.326686: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cufft64_10.dll
2021-05-19 10:27:07.326713: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library curand64_10.dll
2021-05-19 10:27:07.326740: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusolver64_10.dll
2021-05-19 10:27:07.326767: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusparse64_11.dll
2021-05-19 10:27:07.326794: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudnn64_8.dll
2021-05-19 10:27:07.326849: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-05-19 10:27:07.326907: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-05-19 10:27:07.326935: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267]      0
2021-05-19 10:27:07.326956: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0:   N
2021-05-19 10:27:07.327083: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6611 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2070 Super with Max-Q Design, pci bus id: 0000:01:00.0, compute capability: 7.5)
2021-05-19 10:27:07.327135: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-05-19 10:27:07.547366: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:928] Optimization results for grappler item: graph_to_optimize
  function_optimizer: Graph size after: 3426 nodes (3068), 7254 edges (6889), time = 111.956ms.
  function_optimizer: Graph size after: 3426 nodes (0), 7254 edges (0), time = 38.062ms.
Optimization results for grappler item: __inference_map_while_body_6529_8991
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: __inference_map_while_cond_6528_12447
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.

[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  Exception occurred during running replacer "ObjectDetectionAPIOutputReplacement (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIOutputReplacement'>)": 'inputs'
[ ERROR ]  Traceback (most recent call last):
  File "C:\Program Files (x86)\IntelSWTools\openvino_2021.3.394\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 290, in apply_transform
    for_graph_and_each_sub_graph_recursively(graph, replacer.find_and_replace_pattern)
  File "C:\Program Files (x86)\IntelSWTools\openvino_2021.3.394\deployment_tools\model_optimizer\mo\middle\pattern_match.py", line 60, in for_graph_and_each_sub_graph_recursively
    for_each_sub_graph_recursively(graph, func)
  File "C:\Program Files (x86)\IntelSWTools\openvino_2021.3.394\deployment_tools\model_optimizer\mo\middle\pattern_match.py", line 54, in for_each_sub_graph_recursively
    for_each_sub_graph(graph, recursive_helper)
  File "C:\Program Files (x86)\IntelSWTools\openvino_2021.3.394\deployment_tools\model_optimizer\mo\middle\pattern_match.py", line 39, in for_each_sub_graph
    func(node[sub_graph_name])
  File "C:\Program Files (x86)\IntelSWTools\openvino_2021.3.394\deployment_tools\model_optimizer\mo\middle\pattern_match.py", line 50, in recursive_helper
    func(sub_graph)
  File "C:\Program Files (x86)\IntelSWTools\openvino_2021.3.394\deployment_tools\model_optimizer\mo\front\tf\replacement.py", line 48, in find_and_replace_pattern
    self.transform_graph(graph, desc._replacement_desc['custom_attributes'])
  File "C:\Program Files (x86)\IntelSWTools\openvino_2021.3.394\deployment_tools\model_optimizer\extensions\front\tf\ObjectDetectionAPI.py", line 1222, in transform_graph
    add_output_ops(graph, _outputs, graph.graph['inputs'])
KeyError: 'inputs'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Program Files (x86)\IntelSWTools\openvino_2021.3.394\deployment_tools\model_optimizer\mo\main.py", line 345, in main
    ret_code = driver(argv)
  File "C:\Program Files (x86)\IntelSWTools\openvino_2021.3.394\deployment_tools\model_optimizer\mo\main.py", line 309, in driver
    ret_res = emit_ir(prepare_ir(argv), argv)
  File "C:\Program Files (x86)\IntelSWTools\openvino_2021.3.394\deployment_tools\model_optimizer\mo\main.py", line 252, in prepare_ir
    graph = unified_pipeline(argv)
  File "C:\Program Files (x86)\IntelSWTools\openvino_2021.3.394\deployment_tools\model_optimizer\mo\pipeline\unified.py", line 29, in unified_pipeline
    class_registration.ClassType.BACK_REPLACER
  File "C:\Program Files (x86)\IntelSWTools\openvino_2021.3.394\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 340, in apply_replacements
    apply_replacements_list(graph, replacers_order)
  File "C:\Program Files (x86)\IntelSWTools\openvino_2021.3.394\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 330, in apply_replacements_list
    num_transforms=len(replacers_order))
  File "C:\Program Files (x86)\IntelSWTools\openvino_2021.3.394\deployment_tools\model_optimizer\mo\utils\logger.py", line 124, in wrapper
    function(*args, **kwargs)
  File "C:\Program Files (x86)\IntelSWTools\openvino_2021.3.394\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 318, in apply_transform
    )) from err
Exception: Exception occurred during running replacer "ObjectDetectionAPIOutputReplacement (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIOutputReplacement'>)": 'inputs'

[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------

Any idea what is happening?

Tip: when I freeze the model in TensorFlow 2, the process shows several warnings saying:

Skipping full serialization of Keras layer <object_detection.meta_architectures.ssd_meta_arch.SSDMetaArch object at 0x000001F13105E1C8>, because it is not built

IntelSupport
Community Manager

Hi Diego_Bima,

Thanks for reaching out. These kinds of issues usually arise when an unsupported model is used. Since you are using a custom model, there might be an issue with the configuration file (faster_rcnn_support_api_v2.0.json); some parts of the model may also have changed, causing the .json file to become incompatible. In any case, you tested with OpenVINO and got the same error, which may be because the frozen file is not completely built, as the warning you get suggests.

You can refer to the Freezing Custom Models documentation to freeze your native model before converting it to IR with the Model Optimizer.
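As an illustration, for TF2 Object Detection API models the SavedModel is typically produced with the `exporter_main_v2.py` script that ships with the Object Detection API, rather than a manual freeze. A sketch of that export step (every path below is a placeholder, not a path from your setup):

```shell
# Sketch: exporting a TF2 Object Detection API checkpoint to a SavedModel.
# exporter_main_v2.py ships with the TensorFlow Object Detection API;
# all paths below are hypothetical placeholders.
EXPORT_CMD="python exporter_main_v2.py"
EXPORT_CMD="$EXPORT_CMD --input_type image_tensor"
EXPORT_CMD="$EXPORT_CMD --pipeline_config_path /path/to/pipeline.config"
EXPORT_CMD="$EXPORT_CMD --trained_checkpoint_dir /path/to/checkpoint"
EXPORT_CMD="$EXPORT_CMD --output_directory /path/to/exported-model"
echo "$EXPORT_CMD"
```

The resulting `exported-model/saved_model` directory is what the Model Optimizer conversion guides expect as input.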

Have a look at the Convert TensorFlow 2 Models documentation for the supported model formats.

You can also check your model against the TensorFlow 2 Keras Supported Operations list using Netron; some framework layers might not be supported, which can also lead to errors.

Regards,

Aznie


Diego_Bima
Beginner

Hi Aznie! Thanks for your answer; I will try what you suggest.
One question... when you say:

"You can refer to these Freezing Custom Models to freeze your native model to convert them to IR using Model Optimizer."

does that refer to the TensorFlow 1 freezing method? Does it matter that I'm using TensorFlow 2?

IntelSupport
Community Manager

Hi Diego_Bima,

I believe you can use the same method, since it comes from the official OpenVINO documentation. Then you can refer to the Convert TensorFlow 2 Models documentation for the conversion.

Regards,

Aznie


IntelSupport
Community Manager

Hi Diego Bima,

This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.


Regards,

Aznie

