OpenVINO 2021.1
https://docs.openvinotoolkit.org/latest/omz_tools_downloader_README.html
'Model Downloader and other automation tools'
Model converter usage
The basic usage is to run the script like this:
./converter.py --all
This fails (after about 2 hours of downloading). Here are the head and tail of the console output from the session:
dh@ubuntu:~/intel/openvino_2021.1.110/deployment_tools/open_model_zoo/tools/downloader$ ./downloader.py --all --output_dir ~/Downloads/OpenVINO/Samples/
################|| Downloading action-recognition-0001-decoder ||################
========== Downloading /home/dh/Downloads/OpenVINO/Samples/intel/action-recognition-0001/action-recognition-0001-decoder/FP32/action-recognition-0001-decoder.xml
... 100%, 178 KB, 83938 KB/s, 0 seconds passed
...
...hundreds more lines like that
...
################|| Downloading yolo-v3-tiny-tf ||################
========== Downloading /home/dh/Downloads/OpenVINO/Samples/public/yolo-v3-tiny-tf/yolo-v3-tiny-tf.zip
... 100%, 32066 KB, 5113 KB/s, 6 seconds passed
========== Unpacking /home/dh/Downloads/OpenVINO/Samples/public/yolo-v3-tiny-tf/yolo-v3-tiny-tf.zip
FAILED:
bert-large-uncased-whole-word-masking-squad-fp32-0001
bert-large-uncased-whole-word-masking-squad-emb-0001
vgg19
I'm running Ubuntu 18.04 LTS and trying to use an Intel Neural Compute Stick 2.
Any clues? Thanks.
Hello dougworld,
Thank you for posting on the Intel Community Forum.
To better assist you, we will move this thread to the proper sub-forum. Please expect a response soon.
Best regards,
Maria R.
Intel Customer Support Technician
Hello, this kind of issue during model downloading has already been discussed; I would refer you to the solution provided in this thread.
Regards,
Vladimir
Hi Dougworld,
Thanks for reaching out.
The BERT large model is more than 1 GB in size, so the download might experience networking issues. The converter.py script converts models that are not in the Inference Engine IR format into that format using the Model Optimizer. However, the BERT large model is already in IR format when downloaded, so I would suggest simply fetching it with the Model Downloader.
To download the model, use the following command:
./downloader.py --name "bert*" --num_attempts 5
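If the downloader still fails intermittently, a small retry wrapper can help. This is a minimal sketch: the `retry` helper is not part of OpenVINO, the model name is taken from the FAILED list earlier in the thread, and the exact downloader flags depend on your OpenVINO version.

```shell
#!/bin/sh
# retry: run a command up to N times, stopping at the first success.
# Usage: retry <attempts> <command> [args...]
retry() {
    attempts=$1
    shift
    i=1
    while [ "$i" -le "$attempts" ]; do
        "$@" && return 0
        echo "Attempt $i of $attempts failed; retrying..." >&2
        i=$((i + 1))
    done
    return 1
}

# Example (assumed paths/names): re-download one of the failed models,
# giving the whole downloader invocation up to 5 tries.
# retry 5 ./downloader.py --name bert-large-uncased-whole-word-masking-squad-fp32-0001
```

Because each invocation of the downloader resumes from scratch for the failed model only, retrying the named model is usually much faster than re-running `--all`.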
Regards,
Aznie
Hi Dougworld,
This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Regards,
Aznie