Hello,
I am using the ESANet semantic segmentation model (https://github.com/TUI-NICR/ESANet) with OpenVINO. The original model was in PyTorch, which I converted to ONNX and then to OpenVINO IR in both FP32 and FP16 precision. I run inference on a real-time RGB-D stream using the Intel integrated GPU, the CPU, and both together (MULTI).
The results are correct in the following scenarios:
1) The model without the PrePostProcessor block (specifically without mean and scale) in FP16 format. I perform the normalization manually with custom preprocessing code before passing the input to the model. This test was done on CPU, integrated GPU, and MULTI.
2) The model with the PrePostProcessor block (specifically with mean and scale) in FP32 format. This test was done on CPU, integrated GPU, and MULTI.
3) The model with the PrePostProcessor block (specifically with mean and scale) in FP16 format. This test was done on CPU only.
Since latency is a major concern, I want to embed the preprocessing (normalization and resizing) in the model using the PrePostProcessor module and use the FP16 format. In other words, I want to use both FP16 and the GPU to minimize latency. However, this scheme currently produces wrong results.
OpenVINO 2022.3.0 and Python 3.7 are used.
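For context, the per-channel mean/scale operation that the PrePostProcessor block folds into the model is just `(pixel - mean) / scale`. Below is a minimal plain-Python sketch of that step; the mean/std values are the common ImageNet ones and are an assumption here, not taken from the actual ESANet configuration:

```python
# Sketch of the per-channel normalization that OpenVINO's PrePostProcessor
# mean()/scale() preprocessing steps apply: out = (in - mean) / scale.
# MEAN/SCALE are the common ImageNet values in the 0-255 range; they are
# assumed for illustration, not taken from the ESANet configuration.

MEAN = [123.675, 116.28, 103.53]
SCALE = [58.395, 57.12, 57.375]

def normalize_pixel(rgb):
    """Normalize one RGB pixel the way mean()/scale() would inside the model."""
    return [(value - m) / s for value, m, s in zip(rgb, MEAN, SCALE)]

# A pixel exactly equal to the mean normalizes to zero:
print(normalize_pixel([123.675, 116.28, 103.53]))  # → [0.0, 0.0, 0.0]
```

When this step is embedded in a model compiled for FP16, the subtraction and division may also execute in reduced precision, which is one place where FP32 and FP16 results can start to diverge.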
Feel free to ask any other questions that might be relevant.
Thank you
Hi,
could you share:
- Relevant model files (ONNX, IR, etc)
- Custom code involved (if any)
- Steps/commands that you use up to the point of the issue
- Expected "correct" result (e.g. you get latency 8.35, expected latency 7.00)
Cordially,
Iffa
Thanks for your response. Is it possible to share these things with you individually instead of posting them publicly here?
Yes, you could email it to me or share it through private links (e.g. Google Drive with private access).
Cordially,
Iffa
Hello,
I shared with you the necessary info and files. I look forward to hearing from you soon.
Thank you
Hi,
I noticed that you have two yml files, so I tried both of them (note that I had also installed requirements_jetson).
However, some packages were not found for both of your yml files.
Is there anything else that needs to be done beforehand?
Cordially,
Iffa
Hello,
As mentioned in the email, you only have to install environment.yaml (the others are for running it without OpenVINO and pyrealsense2). Installing both may cause dependency conflicts.
Can you indicate which packages were not found? I cannot see them in the posted images.
Thank you
Hello,
Can you try removing those package names from the environment.yaml file? I created it on Ubuntu 20.04, so there could be system dependency conflicts.
If that environment still doesn't work, try creating a conda environment from rgbd_segmentation.yaml and installing the OpenVINO runtime and pyrealsense2 on top. Please let me know if it works.
Looking forward to hearing from you soon.
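The fallback setup described above could be sketched roughly as follows. The yaml filename comes from the thread; the conda environment name and the exact package versions are assumptions:

```shell
# Create the conda environment from the provided spec file.
conda env create -f rgbd_segmentation.yaml

# Activate it (the name is defined inside the yaml; "rgbd_segmentation" is assumed here).
conda activate rgbd_segmentation

# Install the OpenVINO runtime and RealSense bindings on top.
# The thread reports using OpenVINO 2022.3.0.
pip install openvino==2022.3.0 pyrealsense2
```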
We are looking into this issue.
Thank you for your patience.
Cordially,
Iffa
Your custom files required quite a bit of tweaking.
However, I managed to set up the custom environment with OpenVINO and checked your model's latency, which looks good.
As for your custom inference code, there are also some issues (you may refer to the attachment).
I'm using Ubuntu 20.04 with a RealSense D435 camera, the same setup as yours.
There's also a chance that the issue stems from your software design itself.
To ease this validation process, could you share (screenshots, etc.):
1. Your results showing correct predictions for the three scenarios:
1) The model without the PrePostProcessor block (without mean and scale) in FP16 format, with manual normalization, on CPU, integrated GPU, and MULTI.
2) The model with the PrePostProcessor block (with mean and scale) in FP32 format, on CPU, integrated GPU, and MULTI.
3) The model with the PrePostProcessor block (with mean and scale) in FP16 format, on CPU only.
2. Your result showing the issue (wrong prediction).
This would help us narrow down the specific part of the issue.
Thank you.
Cordially,
Iffa
Thank you for your question. If you need any additional information from Intel, please submit a new question as Intel is no longer monitoring this thread.
Cordially,
Iffa