Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

How to offload OpenVINO non-compliant layer to Tensorflow (undefined symbol: _ZN15InferenceEngine10TensorDescC1Ev)

Hyodo__Katsuya
Innovator
Hello experts. I am attempting accuracy tuning of tiny-YoloV3 in a way that differs from Intel's tutorial: I would like OpenVINO to offload unsupported layers to Tensorflow. Following the tutorial below, I built the latest Tensorflow 1.12, but I get an error when loading the model (undefined symbol: _ZN15InferenceEngine10TensorDescC1Ev).

- Intel's Tutorial
https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer#offloading-computations-tensorflow

However, I have confirmed that it does not build properly even with the Tensorflow r1.4 described in Intel's tutorial. How can I successfully offload to Tensorflow?

I have already succeeded in implementing tiny-YoloV3 in my own Github repository:
https://github.com/PINTO0309/OpenVINO-YoloV3.git
However, it is too inaccurate to be usable, so I am looking for another way. I do not want to use "--tensorflow_use_custom_operations_config yolo_v3_changed.json".
https://github.com/PINTO0309/OpenVINO-YoloV3/blob/master/yolo_v3_tiny_changed.json
https://github.com/PINTO0309/OpenVINO-YoloV3/blob/master/script.txt

- Environment
Tensorflow 1.12 (or Tensorflow r1.4)
bazel 0.21
protobuf 3.6.1
OpenVINO R5

- Installation procedure
$ git clone -b v1.12.0 https://github.com/tensorflow/tensorflow.git
$ cd tensorflow
$ git checkout v1.12.0
$ sudo -E /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/tf_call_ie_layer/build.sh
$ cp bazel-bin/tensorflow/cc/inference_engine_layer/libtensorflow_call_layer.so /home/xxx/git/tiny-yolo-tensorflow

- summarize_graph of .pb
Found 1 possible inputs: (name=YOLO/input, type=float(1), shape=[1,416,416,3])
No variables spotted.
Found 2 possible outputs: (name=YOLO/output1, op=Identity) (name=YOLO/output2, op=Identity)
Found 8854542 (8.85M) const parameters, 0 (0) variable parameters, and 0 control_edges
Op types used: 201 Const, 63 Identity, 50 Mul, 36 Add, 30 StridedSlice, 24 Sigmoid, 22 Mean, 13 Conv2D, 12 RealDiv, 11 Sub, 11 StopGradient, 11 SquaredDifference, 11 Rsqrt, 11 Maximum, 6 MaxPool, 6 Exp, 3 ConcatV2, 2 Split, 1 Placeholder, 1 Fill, 1 Conv2DBackpropInput
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- \
  --graph=/home/xxxx/git/tiny-yolo-tensorflow/train_graph/tiny-yolo-final.pb \
  --show_flops \
  --input_layer=YOLO/input \
  --input_layer_type=float \
  --input_layer_shape=1,416,416,3 \
  --output_layer=YOLO/output1,YOLO/output2

- Convert to IR
$ sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
  --input_model train_graph/tiny-yolo-final.pb \
  --output_dir lrmodels/tiny-YoloV3/FP32 \
  --data_type FP32 \
  --batch 1 \
  --offload_unsupported_operations_to_tf

- Convert Log
Model Optimizer arguments:
Common parameters:
  - Path to the Input Model: /home/xxxx/git/tiny-yolo-tensorflow/train_graph/tiny-yolo-final.pb
  - Path for generated IR: /home/xxxx/git/tiny-yolo-tensorflow/lrmodels/tiny-YoloV3/FP32
  - IR output name: tiny-yolo-final
  - Log level: ERROR
  - Batch: 1
  - Input layers: Not specified, inherited from the model
  - Output layers: Not specified, inherited from the model
  - Input shapes: Not specified, inherited from the model
  - Mean values: Not specified
  - Scale values: Not specified
  - Scale factor: Not specified
  - Precision of IR: FP32
  - Enable fusing: True
  - Enable grouped convolutions fusing: True
  - Move mean values to preprocess section: False
  - Reverse input channels: False
TensorFlow specific parameters:
  - Input model in text protobuf format: False
  - Offload unsupported operations: True
  - Path to model dump for TensorBoard: None
  - List of shared libraries with TensorFlow custom layers implementation: None
  - Update the configuration file with input/output node names: None
  - Use configuration file used to generate the model with Object Detection API: None
  - Operations to offload: None
  - Patterns to offload: None
  - Use the config file: None
Model Optimizer version: 1.5.12.49d067a0
After 1 iteration there are 6 unsupported ops
[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/xxxx/git/tiny-yolo-tensorflow/lrmodels/tiny-YoloV3/FP32/tiny-yolo-final.xml
[ SUCCESS ] BIN file: /home/xxxx/git/tiny-yolo-tensorflow/lrmodels/tiny-YoloV3/FP32/tiny-yolo-final.bin
[ SUCCESS ] Total execution time: 12.87 seconds.

- Test program
##
## python3 openvino_modelload_test.py
##
import sys
import cv2
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

model_xml = "lrmodels/tiny-YoloV3/FP32/tiny-yolo-final.xml"
model_bin = "lrmodels/tiny-YoloV3/FP32/tiny-yolo-final.bin"

net = IENetwork(model=model_xml, weights=model_bin)
plugin = IEPlugin(device="CPU")
plugin.add_cpu_extension("libtensorflow_call_layer.so")  #--- Where the error occurred.
exec_net = plugin.load(network=net)
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))

- Error occurring at runtime
Traceback (most recent call last):
  File "openvino_modelload_test.py", line 16, in <module>
    plugin.add_cpu_extension("libtensorflow_call_layer.so")
  File "ie_api.pyx", line 423, in openvino.inference_engine.ie_api.IEPlugin.add_cpu_extension
  File "ie_api.pyx", line 427, in openvino.inference_engine.ie_api.IEPlugin.add_cpu_extension
RuntimeError: Cannot load library 'libtensorflow_call_layer.so': libtensorflow_call_layer.so: undefined symbol: _ZN15InferenceEngine10TensorDescC1Ev
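
For reference, the mangled name _ZN15InferenceEngine10TensorDescC1Ev demangles to InferenceEngine::TensorDesc::TensorDesc(), which should come from the Inference Engine core library. A quick ctypes check like the sketch below can confirm whether libinference_engine.so actually exports it (the path is only an assumption based on a default R5 install on Ubuntu 16.04; adjust for your environment):

import ctypes

# Assumed default OpenVINO 2018 R5 install path on Ubuntu 16.04 (adjust as needed).
IE_LIB = ("/opt/intel/computer_vision_sdk/deployment_tools/"
          "inference_engine/lib/ubuntu_16.04/intel64/libinference_engine.so")

lib = ctypes.CDLL(IE_LIB)
# Attribute access on a CDLL performs a dlsym() lookup, so this prints True
# if the library exports the missing symbol.
print(hasattr(lib, "_ZN15InferenceEngine10TensorDescC1Ev"))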
nikos1
Valued Contributor I

Hello Katsuya-san,

Looks like some dependency is not met. What is the output of

ldd -r libtensorflow_call_layer.so

Thanks,

Nikos

Hyodo__Katsuya
Innovator
Thank you as always, nikos! The command execution result is as follows. I did not know about the command you suggested until now. From this result, can I see what modules are missing?

- Result of "ldd -r libtensorflow_call_layer.so"
xxxx@ubuntu:~/git/tiny-yolo-tensorflow$ ldd -r libtensorflow_call_layer.so
    linux-vdso.so.1 => (0x00007ffea1995000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f942bc01000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f942b9e4000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f942b6db000)
    librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f942b4d3000)
    libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f942b151000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f942af3b000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f942ab71000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f9432b23000)
undefined symbol: _ZN15InferenceEngine10TensorDescC1Ev (./libtensorflow_call_layer.so)
undefined symbol: _ZNK15InferenceEngine4Data13getTensorDescEv (./libtensorflow_call_layer.so)
undefined symbol: _ZN15InferenceEngine12BlockingDescC1ERKSt6vectorImSaImEES5_ (./libtensorflow_call_layer.so)
undefined symbol: _ZN15InferenceEngine10TensorDescC1ERKNS_9PrecisionESt6vectorImSaImEERKNS_12BlockingDescE (./libtensorflow_call_layer.so)
Hyodo__Katsuya
Innovator
I examined it as far as I could, and I suspect the symbol errors occur in the following part. The four undefined symbols correspond to the TensorDesc and BlockingDesc constructors and Data::getTensorDesc() used here.

- File
/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/tf_call_ie_layer/layer_sources/tensorflow_layer.cpp

- Part of the program

StatusCode TensorflowImplementation::getSupportedConfigurations(std::vector<LayerConfig>& conf, ResponseDesc *resp) noexcept {
    if (!errorMsg.empty()) {
        if (resp)
            errorMsg.copy(resp->msg, sizeof(resp->msg) - 1);
        return GENERAL_ERROR;
    }
    LayerConfig config;
    config.dynBatchSupport = false;
    for (size_t i = 0; i < _layer.insData.size(); i++) {
        DataConfig dataConfig;
        dataConfig.inPlace = -1;
        dataConfig.constant = false;
        std::vector<size_t> order;
        for (size_t j = 0; j < _layer.insData[i].lock()->getTensorDesc().getDims().size(); j++)
            order.push_back(j);
        dataConfig.desc = TensorDesc(InferenceEngine::Precision::FP32,
                                     _layer.insData[i].lock()->getTensorDesc().getDims(),
                                     {_layer.insData[i].lock()->getTensorDesc().getDims(), order});
        config.inConfs.push_back(dataConfig);
    }
    for (size_t i = 0; i < _layer.outData.size(); i++) {
        DataConfig dataConfig;
        dataConfig.inPlace = -1;
        dataConfig.constant = false;
        std::vector<size_t> order;
        for (size_t j = 0; j < _layer.outData[i]->getTensorDesc().getDims().size(); j++)
            order.push_back(j);
        dataConfig.desc = TensorDesc(InferenceEngine::Precision::FP32,
                                     _layer.outData[i]->getTensorDesc().getDims(),
                                     {_layer.outData[i]->getTensorDesc().getDims(), order});
        config.outConfs.push_back(dataConfig);
    }
    conf.push_back(config);
    return OK;
}
nikos1
Valued Contributor I

Hi Katsuya-san,

> From this result, can I see what modules are missing?

Yes, I believe so; it looks like a mismatch between OpenVino and Tensorflow is causing this issue. Note that libinference_engine.so does not even appear in the dependency list, so the InferenceEngine symbols can only be resolved from a library that is already loaded.

Ideally, when LD_LIBRARY_PATH is set properly, ldd -r should not show any undefined symbols.

I am also getting the same issue and trying to find a solution. I will try other TF versions.
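
As a quick sanity check, something like the sketch below can verify that the Inference Engine core library is reachable through the loader path (assuming the environment was set up, e.g. with the setupvars.sh script shipped in the R5 package):

import ctypes
import os

# LD_LIBRARY_PATH should include the Inference Engine library directory
# after sourcing setupvars.sh.
print(os.environ.get("LD_LIBRARY_PATH", "<not set>"))

# dlopen by name alone; raises OSError if the library is not on the path.
ctypes.CDLL("libinference_engine.so")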

nikos1
Valued Contributor I

Question to the Intel OpenVino team please: 

RE: https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer#offloading-computations-tensorflow

Is the documentation still valid for OpenVino SDK R5 (computer_vision_sdk_2018.5.445)?

      Clone the TensorFlow* r1.4 Git repository.

or do we need to check out a different version of Tensorflow?

Thank you!

Hyodo__Katsuya
Innovator

Question to the Intel OpenVino team please:

I tried all versions from Tensorflow v1.4.1 to v1.12.0.

Although the installation procedure differs somewhat between versions, the same error is displayed in all of them.

- Inspection result (r1.4 - r1.12)

https://github.com/PINTO0309/OpenVINO-YoloV3/wiki/How-to-offload-OpenVINO-non-compliant-layer-to-Tensorflow

Hyodo__Katsuya
Innovator
I have discovered another new tutorial. It seems necessary to build Tensorflow from source. It is very confusing that the tutorials are scattered across so many places...

https://docs.openvinotoolkit.org/R5/_docs_MO_DG_prepare_model_customize_model_optimizer_Offloading_Sub_Graph_Inference.html
nikos1
Valued Contributor I

Thank you Katsuya-san, good find!!

I will read the new docs you found and try the new technique.

(BTW, I also tried other versions of Tensorflow, but no luck.)

 

Hyodo__Katsuya
Innovator
I noticed that the modules listed in the tutorial do not exist. For example, the following is an article from 2017, when "OpenVINO" was still called the "Deep Learning Deployment Toolkit". It uses a program ("extensibility_sample") that does not exist in the OpenVINO samples.

- For example, "extensibility_sample"
https://gist.github.com/juliensimon/7dd4ee1a9e091e0f1332a18fd3464af5

The Deep Learning Deployment Toolkit seems to be called "dldt" now, but somehow the information from around 2017 seems to be lost from the current Github repository.

https://github.com/opencv/dldt/tree/2018/model-optimizer

The layer offloading feature was more valuable than creating a custom layer in terms of work cost, so I hope Intel will revive it. With the current OpenVINO specification, I have to write a program that offloads to Tensorflow on my own, as in my repository below. It is not very beautiful.

https://github.com/PINTO0309/OpenVINO-DeeplabV3.git

Also, I am very unhappy that unusable functions are left in the tutorial.
Xiaojun_H_Intel
Employee

Hi @Hyodo, Katsuya,

Any progress on this issue? I have encountered the same issue as you.

Hyodo__Katsuya
Innovator
@Xiaojun H. (Intel) No. I am waiting for the Intel OpenVINO team to correct the documentation or to fix the library packaging. If this function is restored, I will be able to verify quite a diverse set of models, so I am looking forward to it very much. This problem seems to have been neglected for quite a long time, so Intel may not plan to improve it.
Lee__Terry
Beginner

I'm waiting for the solution as well. I'm hoping Intel could build and verify a working tiny-YoloV3 model. I need to decide whether to wait for a solution or to look for an alternative.

Shaoqiang_C_Intel

The undefined symbols belong to libinference_engine; here is a workaround:

import ctypes

...

# Load the Inference Engine core library with RTLD_GLOBAL so that its symbols
# are exported to extension libraries loaded afterwards.
ctypes.CDLL("libinference_engine.so", ctypes.RTLD_GLOBAL)
plugin.add_cpu_extension(plugin_dirs + "libtensorflow_call_layer.so")
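
Because libtensorflow_call_layer.so is not linked against libinference_engine, the preload with RTLD_GLOBAL places the InferenceEngine symbols in the process's global scope, where the subsequent dlopen of the extension can resolve them. Applied to the test script from the first post, it would look like this (a sketch assuming the same file layout as above):

import ctypes

# Export the Inference Engine symbols globally before loading the extension.
ctypes.CDLL("libinference_engine.so", ctypes.RTLD_GLOBAL)

from openvino.inference_engine import IENetwork, IEPlugin

model_xml = "lrmodels/tiny-YoloV3/FP32/tiny-yolo-final.xml"
model_bin = "lrmodels/tiny-YoloV3/FP32/tiny-yolo-final.bin"

net = IENetwork(model=model_xml, weights=model_bin)
plugin = IEPlugin(device="CPU")
plugin.add_cpu_extension("libtensorflow_call_layer.so")  # now resolves
exec_net = plugin.load(network=net)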

Hyodo__Katsuya
Innovator

@Shaoqiang C. (Intel)

I appreciate it very much!! The error no longer occurs. I will continue to verify the operation.

(Attached screenshot: terminal output)
