Live Object Detection with OpenVINO™
Hardware: Raspberry Pi 4B
OpenVINO 2022.1
Trying to run the above code with my own trained IR model on an mp4 video, I get:
input_img shape= (1, 512, 3, 3)
output_layer= <ConstOutput: names[output0] shape{1,16128,12} type: f32>
The input blob size is not equal to the network input size: got 4608 expecting 786432
Any responses to resolve the above error?
Hi Farhad,
Live Object Detection with OpenVINO™ demonstrates live object detection using the SSDLite MobileNetV2 from Open Model Zoo.
If you use a model other than the supported one (SSDLite MobileNetV2), the demo is expected to fail with errors like the one you encountered.
You can use downloader.py and converter.py from Open Model Zoo to get the SSDLite MobileNetV2 model. For your convenience, I have attached the SSDLite MobileNetV2 model as well.
Regards,
Peh
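For reference, once the model has been downloaded and converted to IR, loading it with the OpenVINO 2022.1 Python API could look roughly like the sketch below; the file path and the "CPU" device string are assumptions and may differ on your Raspberry Pi setup.

```python
import cv2
import numpy as np
from openvino.runtime import Core

# Load the converted SSDLite MobileNetV2 IR (path is an assumption).
core = Core()
model = core.read_model("ssdlite_mobilenet_v2/FP16/ssdlite_mobilenet_v2.xml")
compiled_model = core.compile_model(model, "CPU")  # device string may differ on your setup

input_layer = compiled_model.input(0)
output_layer = compiled_model.output(0)

# Query the expected input shape instead of hard-coding it (assuming NCHW layout).
n, c, h, w = input_layer.shape

frame = cv2.imread("frame.jpg")                       # one frame / test image
resized = cv2.resize(frame, (w, h))                   # resize to the network input size
input_data = np.expand_dims(resized.transpose(2, 0, 1), 0).astype(np.float32)

# SSD-style IRs return detections as [image_id, label, conf, x_min, y_min, x_max, y_max].
results = compiled_model([input_data])[output_layer]
print(results.shape)
```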
Thanks.
Do you mean there is no way to use my own trained model for live object detection? That doesn't make sense. There must be a way to use my Raspberry Pi 4B camera to detect the objects and classes I trained my model for.
I think the error I encounter is due to a dimensionality mismatch:
input_img shape= (1, 512, 3, 3)
output_layer= <ConstOutput: names[output0] shape{1,16128,12} type: f32>
As you can see above, the input has 4 dimensions but the output has 3. I need to find out why.
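Looking at the numbers: 4608 = 512 × 3 × 3 is the size of what I am feeding in, while 786432 = 3 × 512 × 512 is what the network expects, so the frame is probably being transposed/reshaped incorrectly before inference. A minimal preprocessing sketch of what I believe it should be (the file path, device string, and NCHW layout are my assumptions):

```python
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("yolov5_custom.xml")      # path to my trained IR (placeholder)
compiled_model = core.compile_model(model, "CPU")
input_layer = compiled_model.input(0)
output_layer = compiled_model.output(0)

n, c, h, w = input_layer.shape                    # should be (1, 3, 512, 512) here

frame = cv2.imread("frame.jpg")                   # one frame decoded from the mp4
resized = cv2.resize(frame, (w, h))               # resize the image, do not reshape it
rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)    # YOLOv5 is usually trained on RGB
blob = np.expand_dims(rgb.transpose(2, 0, 1), 0)  # HWC -> NCHW, add batch dimension
blob = blob.astype(np.float32) / 255.0            # YOLOv5 expects pixel values in 0..1

print("input_img shape =", blob.shape)            # (1, 3, 512, 512) -> 786432 values
result = compiled_model([blob])[output_layer]
print("output shape =", result.shape)             # (1, 16128, 12)
```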
Hi Farhad,
As I understand, you are using your own trained YOLOv5 model. The YOLOv5 model has a different architecture compared to SSDLite MobileNetV2.
Since we do not have any Object Detection demo available for YOLOv5, I would recommend you try the Object Detection demo available in this GitHub repository: bethusaisampath/YOLOv5_Openvino
Regards,
Peh
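For reference, each of the 16128 rows in that (1, 16128, 12) output typically holds 4 box values, 1 objectness score and 7 class scores. A rough post-processing sketch is shown below; the thresholds are assumptions, and the linked repository shows the complete decoding used in the demo.

```python
import cv2
import numpy as np

def decode_yolov5(output, conf_threshold=0.25, nms_threshold=0.45):
    """Decode a (1, N, 5 + num_classes) YOLOv5 prediction tensor (sketch)."""
    boxes, scores, class_ids = [], [], []
    for row in output[0]:                       # iterate over the 16128 candidate boxes
        objectness = float(row[4])
        if objectness < conf_threshold:
            continue
        class_scores = row[5:]
        class_id = int(np.argmax(class_scores))
        confidence = objectness * float(class_scores[class_id])
        if confidence < conf_threshold:
            continue
        cx, cy, w, h = row[:4]                  # box centre and size in input pixels
        boxes.append([int(cx - w / 2), int(cy - h / 2), int(w), int(h)])
        scores.append(confidence)
        class_ids.append(class_id)

    if not boxes:
        return []

    # Non-maximum suppression removes overlapping candidates for the same object.
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_threshold, nms_threshold)
    return [(boxes[i], scores[i], class_ids[i]) for i in np.array(keep).flatten()]
```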
Hi Farhad,
This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.
Regards,
Peh