We use a CenterNet model without the regression heads for detecting object positions: the width, height, and offset regression parts are omitted, and the input was changed to a single channel. We tested this model with MXNet inference and get the expected output, which looks like the image below.
After successfully converting this model to OpenVINO format with OpenVINO 2023.0.1, we tried OpenVINO inference without success. The resulting heatmap is all zeros, or sometimes nearly all zeros, with just two pixels in the top-right corner being nearly zero (for example 1.22*10^-14).
We ran OpenVINO inference both from Python and from C++; both produce the same empty heatmap. This is why I think something is wrong with either the converted model or the OpenVINO runtime. We also tested OpenVINO 2022.3.1 with the identical bad result.
Please find the model attached.
Do you have any idea why the model is not returning the same values after conversion?
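To make the symptom concrete, a quick NumPy-only sanity check along these lines is what we use to tell a healthy heatmap apart from the degenerate OpenVINO output (the near-zero threshold is an arbitrary choice, not part of the model):

```python
import numpy as np

def heatmap_stats(heatmap: np.ndarray, eps: float = 1e-12) -> dict:
    """Summarize a detection heatmap to diagnose 'all zeros' outputs.

    Returns the maximum value, the fraction of entries below eps,
    and the coordinates of the peak.
    """
    flat_peak = int(np.argmax(heatmap))
    return {
        "max": float(heatmap.max()),
        "near_zero_fraction": float(np.mean(np.abs(heatmap) < eps)),
        "peak_index": np.unravel_index(flat_peak, heatmap.shape),
    }

# A healthy heatmap has a clear peak ...
healthy = np.zeros((512, 512), dtype=np.float32)
healthy[100, 200] = 0.97
print(heatmap_stats(healthy))

# ... while the broken output looks like this: almost everything zero,
# just two near-zero pixels in the top-right corner.
broken = np.zeros((512, 512), dtype=np.float32)
broken[0, 510:512] = 1.22e-14
print(heatmap_stats(broken))
```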
Hi ahewic1,
Thanks for sharing your model with us.
However, looking into your model alone is not sufficient for us to understand the issue.
Could you share your native model (.params or .json), the command used to convert the MXNet model, and the script used to test inferencing with us for further investigation?
In addition, did you obtain the incorrect inference results when running on CPU only, on GPU only, or on both?
Regards,
Peh
Thanks for your reply Peh!
Please find the source model attached; it is in the original MXNet format.
For conversion we used the following command:
(openvino_env) C:\01_Data\kdl-training-trunk-svn>mo --input_model "C:/01_Data/kdl-training-trunk-svn/nets/2023-07-13_15-32_BaseGvS126_02-03-2023_11-13_base_resnet18/model-0019.params" --input_shape (1,1,512,512)
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: C:\01_Data\kdl-training-trunk-svn\model-0019.xml
[ SUCCESS ] BIN file: C:\01_Data\kdl-training-trunk-svn\model-0019.bin
The problem is identical on CPU and GPU.
The inference script is attached.
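One thing worth double-checking in both runtimes is that the input tensor reaches the network in the same layout and scale as it does in MXNet, since a layout or normalization mismatch can silently produce degenerate outputs. A minimal sketch of building the (1, 1, 512, 512) blob the converted model expects (the normalization constants here are placeholders, not our actual training values):

```python
import numpy as np

def to_nchw(gray: np.ndarray, mean: float = 0.0, std: float = 1.0) -> np.ndarray:
    """Convert an HxW grayscale image into the (1, 1, 512, 512) float32
    NCHW tensor matching the --input_shape used during conversion.

    mean/std are illustrative; use the values the model was trained with.
    """
    assert gray.shape == (512, 512), "model expects a 512x512 input"
    x = gray.astype(np.float32)
    x = (x - mean) / std
    return x[np.newaxis, np.newaxis, :, :]  # HW -> NCHW

blob = to_nchw(np.zeros((512, 512), dtype=np.uint8))
print(blob.shape)  # (1, 1, 512, 512)
```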
Hi ahewic1,
Thanks for sharing the models and inferencing script with us.
However, I was not able to run the inferencing script due to the following errors.
First error:
generator = lambda: [(yield self._batchify_fn([self._dataset[idx] for idx in batch]))
SyntaxError: 'yield' inside list comprehension
Did you encounter this error? Which Python version are you using?
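For reference, `yield` inside a list comprehension was deprecated in Python 3.7 and became a SyntaxError in Python 3.8, which is why this line fails on recent interpreters. An equivalent rewrite as a plain generator function would look roughly like this (the `_batchify_fn`/`_dataset` names are taken from the traceback; the standalone function here is just a sketch):

```python
def batch_generator(dataset, batchify_fn, batches):
    """Equivalent of the broken
    `lambda: [(yield self._batchify_fn([self._dataset[idx] for idx in batch])) ...]`
    written as a plain generator function, valid on all Python 3 versions.
    """
    for batch in batches:
        yield batchify_fn([dataset[idx] for idx in batch])

# Usage with toy data: identity batchify_fn, two batches of indices.
dataset = list(range(10))
batches = [[0, 1], [2, 3]]
out = list(batch_generator(dataset, lambda samples: samples, batches))
print(out)  # [[0, 1], [2, 3]]
```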
Second error:
ModuleNotFoundError: No module named 'kdl.data'
Where can we get the kdl.data, kdl.util, kdl.metrics and kdl.training modules?
Besides, please also share some example images so that we can obtain the same results.
Regards,
Peh
Hi ahewic1,
We have not heard back from you. Thank you for your question. If you need any additional information from Intel, please submit a new question as this thread is no longer being monitored.
Regards,
Peh