Hi, when I use OpenVINO's calibration tool, I get the error below:
```
IE version: 1.6.custom_releases/2019/R1_c9b66a26e4d65bb986bb740e73f58c6e9e84c7c2
Loaded CPU plugin version: 1.6.22443
Traceback (most recent call last):
  File "/opt/intel/openvino_2019.1.094/python/python3.5/openvino/tools/calibration/base_calibrator.py", line 436, in _infer
    model_evaluator.launcher.exec_network = model_evaluator.launcher.plugin.load(network)
  File "ie_api.pyx", line 395, in openvino.inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 406, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: Pattern when one layer pass data to several eltwise layers are not supported in int8 quantization
```
I think this means one of my layers passes data to several Eltwise layers, but the error doesn't point out which layers are unsupported. Is there any tool that can help me find the layer causing the problem?

The original FP32 topology was converted from TensorFlow, but the layer names differ between OpenVINO and TensorFlow.
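In case it helps anyone hitting the same error: since the offending pattern is structural, you can locate it by scanning the IR `.xml` yourself for layers whose output feeds two or more Eltwise layers. Below is a minimal sketch (not an official OpenVINO utility); it assumes the 2019 R1 IR layout with `<layers>/<layer>` and `<edges>/<edge from-layer=... to-layer=...>` elements, so adjust the element names if your IR version differs.

```python
# Sketch: find layers in an OpenVINO IR .xml that feed >= 2 Eltwise
# layers -- the pattern the INT8 calibrator rejects. Assumes the
# 2019 R1 IR schema (<layers>/<layer>, <edges>/<edge>).
import xml.etree.ElementTree as ET
from collections import defaultdict

def find_multi_eltwise_feeders(ir_xml_path):
    root = ET.parse(ir_xml_path).getroot()
    # Map layer id -> (name, type) from the <layers> section.
    layers = {l.get("id"): (l.get("name"), l.get("type"))
              for l in root.find("layers")}
    # For each source layer, collect the Eltwise layers it feeds.
    eltwise_consumers = defaultdict(set)
    for edge in root.find("edges"):
        src, dst = edge.get("from-layer"), edge.get("to-layer")
        if layers[dst][1] == "Eltwise":
            eltwise_consumers[src].add(dst)
    # Report layer names that feed more than one distinct Eltwise layer.
    return [layers[src][0]
            for src, dsts in eltwise_consumers.items() if len(dsts) > 1]
```

Running this on the FP32 IR that failed calibration should print the OpenVINO-side names of the problem layers, which you can then map back to the TensorFlow graph by inspecting the surrounding layers in the `.xml`.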
