I created and trained a Keras YOLO v3 model and tested it on the CPU and the NCS1, with similar results but a big difference in speed. Since the NCS1 is discontinued, I got an NCS2 to test the same IR and, surprisingly, the results changed dramatically.
I'm using OpenVINO 2020.4 on macOS Catalina with Python 3.7.7 and TensorFlow 1.15.
For freezing the model I used the code:
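The original code block doesn't seem to have come through with the post. A minimal sketch of freezing a TF 1.x graph to a .pb file, in the spirit of what was described, could look like this (a toy two-op graph stands in for the real Keras YOLO model, and the node names `input`/`output` are placeholders, not the actual names from the model):

```python
# Sketch: freeze a TF 1.x graph into a single .pb file.
# A toy graph stands in for the real Keras YOLO model; the
# node names 'input' and 'output' are illustrative placeholders.
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

graph = tf1.Graph()
with graph.as_default():
    x = tf1.placeholder(tf.float32, [None, 4], name='input')
    w = tf1.get_variable('w', shape=[4, 2])
    y = tf1.identity(tf1.matmul(x, w), name='output')

    with tf1.Session() as sess:
        sess.run(tf1.global_variables_initializer())
        # Replace every Variable with a Const holding its current value
        frozen = tf1.graph_util.convert_variables_to_constants(
            sess, graph.as_graph_def(), ['output'])

# Serialize the frozen GraphDef to disk
with tf1.gfile.GFile('frozen_model.pb', 'wb') as f:
    f.write(frozen.SerializeToString())
```

The resulting `frozen_model.pb` is what gets fed to the Model Optimizer in the next step.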
I converted the .pb model with:
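The exact command didn't make it into the post; the general form of a Model Optimizer invocation for a frozen TF model on OpenVINO 2020.4 would be something like this (the paths, input shape, and precision below are illustrative placeholders, not the exact flags used):

```shell
# Illustrative mo_tf.py invocation -- paths and values are placeholders.
# FP16 is the usual choice when targeting the MYRIAD (NCS/NCS2) plugin.
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
    --input_model frozen_model.pb \
    --input_shape "[1,416,416,3]" \
    --data_type FP16 \
    --output_dir ir_model/
```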
Is there any special reason for this difference?
Is there any way to get the same results?
Hi pjvazquez,
Thanks for reaching out. Please try adding --reverse_input_channels to your Model Optimizer command. Could you provide the output on the Intel Movidius Neural Compute Stick, the Intel Neural Compute Stick 2, and the CPU for comparison? Also, would it be possible to share your frozen TensorFlow model so I can reproduce the issue on my end?
Regards,
Jesus
Thanks, I included --reverse_input_channels in the mo command and I'm testing it.
I can't give you both outputs right now; I'm still testing the system.
This is the link to the TF frozen .pb file (200 MB).
Hi pjvazquez,
Thanks for sharing the TensorFlow frozen model; I was able to convert it to OpenVINO IR format. However, I was not able to test it with our YOLO v3 demo application, as the model seems to have a different architecture than the supported Darknet implementation of YOLO v3.
For debugging purposes, could you try turning off VPU_HW_STAGES_OPTIMIZATION? In Python, place the following line before loading the network:
ie.set_config({'VPU_HW_STAGES_OPTIMIZATION': 'NO'}, "MYRIAD")
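In full context, that would look something like the sketch below with the 2020.4 Python API (the IR file names are placeholders for your converted model):

```python
# Sketch: disable HW stages on the MYRIAD plugin before loading a
# network (OpenVINO 2020.4 Python API). "model.xml"/"model.bin" are
# placeholder names for your converted IR files.
from openvino.inference_engine import IECore

ie = IECore()
# Must be set before load_network() for the option to take effect
ie.set_config({'VPU_HW_STAGES_OPTIMIZATION': 'NO'}, "MYRIAD")

net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")
```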
Regards,
Jesus
Hi Jesus, yes, I'll do that.
You are right: the YOLO model was implemented in Keras, based on this repo:
https://github.com/pjvazquez/head-detection-using-yolo
Next week I'll be able to obtain the outputs from both sticks and will have data to compare them.
Thanks
I turned off VPU_HW_STAGES_OPTIMIZATION and it looks like everything is OK now.
Could you please tell me where I can find a description of what these parameters do? I found this, but it is not very clarifying:
https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_supported_plugins_VPU.html
Thanks a lot
Hi pjvazquez,
VPU_HW_STAGES_OPTIMIZATION is only meant for internal debugging purposes. We are working on updating the documentation and API. A model that only runs correctly after turning VPU_HW_STAGES_OPTIMIZATION off is likely hitting a bug. However, in this case, the YOLO v3 model you are using has not been validated with OpenVINO.
Hope this answers your question.
Regards,
Jesus
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.