Hello @Karmeo
Thank you for posting on the Intel® communities.
We understand that you have some inquiries regarding the OpenVINO™ toolkit. We have a dedicated forum for those products and questions, so we are moving this thread to the Intel® Distribution of OpenVINO™ Toolkit Forum where it can be answered more quickly.
Best regards,
Andrew G.
Intel Customer Support Technician
Hi Karmeo,
Thank you for reaching out to us.
To answer your question, you need to check whether the SPP and LSTM operations are supported by the Model Optimizer as well as the Inference Engine VPU plugin.
For the Model Optimizer, the TensorFlow 2 Keras LSTM and LSTMCell operations are supported. You can refer here:
For the Inference Engine, LSTMCell and LSTMSequence are supported by the VPU plugin, which supports the Intel® Neural Compute Stick 2.
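For illustration, here is a minimal sketch of exporting a TensorFlow 2 Keras model that contains an LSTM layer so the Model Optimizer can pick it up; the layer sizes, file names, and the Model Optimizer command in the comment are assumptions for the example, not details from this thread.

```python
# Minimal sketch: build a tiny TensorFlow 2 Keras model with an LSTM layer and save it
# in SavedModel format. The Model Optimizer can then convert it to IR, for example:
#   python3 mo.py --saved_model_dir lstm_saved_model --data_type FP16
# (paths and sizes here are illustrative assumptions)
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10, 8)),   # 10 timesteps, 8 features per step
    tf.keras.layers.LSTM(32),                # LSTM op supported by the Model Optimizer
    tf.keras.layers.Dense(4, activation="softmax"),
])

# Saving without an .h5 extension produces the TensorFlow SavedModel format
model.save("lstm_saved_model")
```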
On another note, I have downloaded yolov3-spp.weights and converted it into frozen_darknet_yolov3_model.pb by following the guide below:
https://github.com/mystic123/tensorflow-yolo-v3
I have successfully converted the YOLOv3-SPP model to the Intermediate Representation format. I have also validated that the model works fine with the Object Detection C++ Demo using the Intel® Neural Compute Stick 2.
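For reference, before running the C++ demo you can quickly confirm that the converted IR loads and runs on the stick with the Inference Engine Python API. This is a minimal sketch assuming an OpenVINO 2021.x installation, an FP16 IR named after the frozen model above, and an attached Neural Compute Stick 2.

```python
# Minimal sketch: load the converted IR on the MYRIAD plugin (Neural Compute Stick 2)
# and run one dummy inference to confirm the model executes on the device.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="frozen_darknet_yolov3_model.xml",
                      weights="frozen_darknet_yolov3_model.bin")

# MYRIAD is the device name of the Intel Neural Compute Stick 2
exec_net = ie.load_network(network=net, device_name="MYRIAD")

input_name = next(iter(net.input_info))
input_shape = net.input_info[input_name].input_data.shape

# A zero input is enough to verify that all layers execute on the VPU
results = exec_net.infer({input_name: np.zeros(input_shape, dtype=np.float32)})
print({name: out.shape for name, out in results.items()})
```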
Regards,
Wan
Hey! I tried to convert my weights, but it gives an error:
File "/home/deep/tensorflow-yolo-v3/utils.py", line 115, in load_weights (form [3], form [2], form [0], form [1]))
ValueError: Unable to convert array of size 3114055 to form (1024,512,3,3)
I tried to fix it, but without success. Could you try converting these files to a .pb file?
Link: https://drive.google.com/drive/folders/1_F-aRfIxbXXYV_p4rAYq4FcnRFZYvuwg?usp=sharing
I solved the problem of freezing custom Darknet models on my own.
To solve this problem, use the instructions at this link: https://github.com/TNTWEN/OpenVINO-YOLO-Automatic-Generation
After that, the conversion of the model for OpenVINO continues according to Intel's instructions, as sketched below.
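For completeness, the Intel conversion step referred to above is typically a Model Optimizer call along these lines. This is a hedged sketch: the OpenVINO install path, config file location, and flag values are assumptions to adjust to your own setup.

```python
# Minimal sketch: run the Model Optimizer on the frozen TensorFlow graph to produce an IR.
# MO_ROOT and the file names below are assumptions; adjust them to your installation.
import subprocess

MO_ROOT = "/opt/intel/openvino_2021/deployment_tools/model_optimizer"

subprocess.run(
    [
        "python3", f"{MO_ROOT}/mo_tf.py",
        "--input_model", "frozen_darknet_yolov3_model.pb",
        "--transformations_config", f"{MO_ROOT}/extensions/front/tf/yolo_v3.json",
        "--batch", "1",
        "--data_type", "FP16",   # an FP16 IR is needed for the MYRIAD (NCS2) plugin
    ],
    check=True,
)
```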
Hi Karmeo,
Glad to know you have solved it. This thread will no longer be monitored since the issue has been resolved. If you need any additional information from Intel, please submit a new question.
Regards,
Wan
