Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

OpenVINO 2021.1 'Model Downloader and other automation tools' fails

dougworld
Beginner

OpenVINO 2021.1
https://docs.openvinotoolkit.org/latest/omz_tools_downloader_README.html
'Model Downloader and other automation tools'
Model converter usage
The basic usage is to run the script like this:
./converter.py --all

This fails (after about 2 hours of downloading). Here are the head and tail of the console output for the session:

dh@ubuntu:~/intel/openvino_2021.1.110/deployment_tools/open_model_zoo/tools/downloader$ ./downloader.py --all --output_dir ~/Downloads/OpenVINO/Samples/
################|| Downloading action-recognition-0001-decoder ||################

========== Downloading /home/dh/Downloads/OpenVINO/Samples/intel/action-recognition-0001/action-recognition-0001-decoder/FP32/action-recognition-0001-decoder.xml
... 100%, 178 KB, 83938 KB/s, 0 seconds passed

...
...hundreds more lines like that
...
################|| Downloading yolo-v3-tiny-tf ||################

========== Downloading /home/dh/Downloads/OpenVINO/Samples/public/yolo-v3-tiny-tf/yolo-v3-tiny-tf.zip
... 100%, 32066 KB, 5113 KB/s, 6 seconds passed

========== Unpacking /home/dh/Downloads/OpenVINO/Samples/public/yolo-v3-tiny-tf/yolo-v3-tiny-tf.zip

FAILED:
bert-large-uncased-whole-word-masking-squad-fp32-0001
bert-large-uncased-whole-word-masking-squad-emb-0001
vgg19

I'm running Ubuntu 18.04 LTS and trying to use an Intel Neural Compute Stick 2.

Any clues? Thanks.

Maria_R_Intel
Moderator

Hello dougworld,


Thank you for posting on the Intel® Community Forum.


To better assist you, we will move this thread to the proper sub-forum. Please expect a response soon.


Best regards,

Maria R.

Intel Customer Support Technician


Vladimir_Dudnik
Employee

Hello, this kind of issue during model downloading has already been discussed; I would refer you to the solution provided in this thread.

Regards,
  Vladimir

IntelSupport
Community Manager

Hi Dougworld,

 

Thanks for reaching out.


The size of the BERT large model is more than 1 GB, so the converter might experience networking issues. The converter.py model converter converts models that are not in the Inference Engine IR format into that format using Model Optimizer. However, I believe the BERT large model is already in IR format when downloaded, so I would suggest just downloading it with the model downloader.

To download the model, use the following command:

./downloader.py --name "bert*" --num_attempts 5
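
As a sketch of a possible follow-up (assuming the same output directory from your earlier downloader run, and that only the three models listed under FAILED need to be retried), you could re-download just those models with extra attempts and then convert only the public vgg19 model, since the BERT models should already be in IR format:

# retry only the models that failed, with extra download attempts
./downloader.py --name bert-large-uncased-whole-word-masking-squad-fp32-0001,bert-large-uncased-whole-word-masking-squad-emb-0001,vgg19 --num_attempts 5 --output_dir ~/Downloads/OpenVINO/Samples/

# only vgg19 is a public model that needs Model Optimizer conversion
./converter.py --name vgg19 --download_dir ~/Downloads/OpenVINO/Samples/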

 

Regards,

Aznie


IntelSupport
Community Manager

Hi Dougworld,


This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.


Regards,

Aznie

