Hi guys,
I am trying to convert our deep learning model to a VPU-supported format that our end product can consume.
Normally, the process consists of IR conversion with OpenVINO followed by a second conversion step with the product's software SDK.
The ONNX model passes the IR conversion without error but fails the second step with the error shown below.
Inference Engine:
API version ............ 2.1
Build .................. custom_HEAD_18e83a217702c650280c6abfc43f3285a3aadb61
Description ....... API
Network batch size: 1
[Warning][VPU][Config] Deprecated option was used : VPU_MYRIAD_PLATFORM
duplicateData error: while duplicating Conv_110/reshape_begin Const data got different desc and content byte sizes (1500 and 300 respectively)
We tried standardizing the network interface data types, for example to int32 or int64, but the conversion still gave us exactly the same error as above.
I guess it is probably related to some inner layer and reshape design. But if "Conv_110" just refers to the 110th convolution layer in the network, it is hard to locate precisely.
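If "Conv_110" is an actual node name rather than a positional index, something like the following sketch should be able to find it in model.onnx (just an illustration, assuming the exported node names are preserved):

import onnx

model = onnx.load("model.onnx")
# Print every node whose name mentions Conv_110, with its op type and tensors.
for node in model.graph.node:
    if "Conv_110" in node.name:
        print(node.name, node.op_type, list(node.input), list(node.output))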
The IR conversion process report is as below.
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: model.onnx
- Path for generated IR: .
- IR output name: somename
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,3,68,136]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: False
ONNX specific parameters:
fatal: not a git repository (or any of the parent directories): .git
- Inference Engine found in: some place
Inference Engine version: 2.1.2021.3.0-3029-a8827a2ec28-mryzhov/centos_py38
Model Optimizer version: unknown version
[ WARNING ] Model Optimizer and Inference Engine versions do no match.
[ WARNING ] Consider building the Inference Engine Python API from sources or reinstall OpenVINO (TM) toolkit using "pip install openvino" (may be incompatible with the current Model Optimizer version)
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: model.xml
[ SUCCESS ] BIN file: model.bin
[ SUCCESS ] Total execution time: 28.94 seconds.
[ SUCCESS ] Memory consumed: 156 MB.
It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2021_bu_IOTG_OpenVINO-2021-3&content=upg_all&medium=organic or on the GitHub*
I am not sure whether the direction I am following based on this error is correct. Are there any ideas on this point that could be helpful?
Thank you!
Hi Wenchi,
Thank you for reaching out to us.
From my end, I've managed to find a GitHub thread which also discusses the "duplicateData error: while duplicating Conv_110/reshape_begin" issue. According to the thread, the workaround is to explicitly specify the input data type with "--input <input_name>{i64}".
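For illustration, assuming the int64 data enters your network through an input named "indices" (a placeholder, please substitute the real input name from your model), the Model Optimizer call would look something like this:

python3 mo.py --input_model model.onnx --input "indices{i64}"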
I'd also suggest trying out the latest version of OpenVINO to see if the issue is resolved.
On another note, please do share your model files, OpenVINO version, and system information with us for further investigation of this matter.
Regards,
Hairul
Hi Hairul,
Thanks for your reply. Let me list the version information here.
- OpenVINO: 2021.3.0
- Operating System / Platform: Ubuntu 20.04
- Problem classification: Model inference
- Framework: ONNX
Our production SDK restricts the OpenVINO version we can use to 2021.3.
I think your suggestion to explicitly specify the input data type with "--input <input_name>{i64}" is quite a good idea.
The current situation with our custom model is very similar to the one in the GitHub thread.
I am a bit confused about the "<input_name>" mentioned above, which I believe should be the exact node name. Could you explain more about the "node name"?
Regards,
Wenchi
Hi Wenchi,
As explained in this portion of the GitHub thread discussion, the reason for the workaround is that not all OpenVINO plugins natively support int64 operations, so Model Optimizer converts them to int32 instead.
You might want to check the model nodes for any discrepancies in data type, or for complete removal of said node, after conversion by Model Optimizer. As in the thread, you can use Netron or any neural network visualization tool to view the node properties.
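If you prefer a scripted check over Netron, a small sketch like the one below (assuming you still have the original model.onnx) prints each graph input's name and element type; the name printed here is also the exact <input_name> to pass to --input:

import onnx
from onnx import TensorProto

model = onnx.load("model.onnx")
# Each graph input's name is what Model Optimizer expects in the --input option.
for inp in model.graph.input:
    elem_type = inp.type.tensor_type.elem_type
    print(inp.name, TensorProto.DataType.Name(elem_type))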
Regards,
Hairul
Hi Hairul,
Thanks for the suggestion!
We solved the issue by removing the node that required the data format conversion during inference from the network's operation structure, and feeding that data in at the network input interface instead, so that we could explicitly specify its data type as "i64".
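For anyone hitting the same problem, the graph edit can be sketched roughly as below with the onnx Python package (the node name "Cast_109" and the shape are placeholders, not our actual values):

import onnx
from onnx import helper, TensorProto

model = onnx.load("model.onnx")
graph = model.graph

# Placeholder for the node that produced the int64 data inside the network.
node_to_drop = next(n for n in graph.node if n.name == "Cast_109")
promoted_tensor = node_to_drop.output[0]

# Remove the node and expose its output tensor as a new int64 graph input instead.
graph.node.remove(node_to_drop)
graph.input.append(
    helper.make_tensor_value_info(promoted_tensor, TensorProto.INT64, [1, 4]))

onnx.save(model, "model_patched.onnx")

After that, Model Optimizer accepts --input "<that tensor name>{i64}" for the new input, and the data is fed in at inference time instead of being computed inside the network.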
Regards,
Wenchi
Hi Wenchi,
This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.
Regards,
Peh
