Intel® DevCloud
Help for those needing help starting or connecting to the Intel® DevCloud

Error downloading models

pastorsoto
Beginner

I am testing the pneumonia detection notebook. I want to test the product by running different experiments with different models.

 

I used the code snippet explorer to get the code to download a model:

 

 

#download mobilenet-ssd model using omz_downloader
MODEL_NAME='mobilenet-ssd'
OUTPUT_FOLDER ='raw_models'
!omz_downloader --name $MODEL_NAME -o $OUTPUT_FOLDER
!mkdir -p raw_models/public
!echo "\nAll files that were downloaded:"
!find ./raw_models

 

I got the following error message:

 

========== Downloading raw_models/public/mobilenet-ssd/mobilenet-ssd.prototxt
Traceback (most recent call last):
  File "/data/venv/openvino_2022.3.0/lib/python3.8/site-packages/urllib3/connection.py", line 174, in _new_conn
    conn = connection.create_connection(
  File "/data/venv/openvino_2022.3.0/lib/python3.8/site-packages/urllib3/util/connection.py", line 72, in create_connection
    for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
  File "/usr/lib/python3.8/socket.py", line 914, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/data/venv/openvino_2022.3.0/lib/python3.8/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/data/venv/openvino_2022.3.0/lib/python3.8/site-packages/urllib3/connectionpool.py", line 386, in _make_request
    self._validate_conn(conn)
  File "/data/venv/openvino_2022.3.0/lib/python3.8/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
    conn.connect()
  File "/data/venv/openvino_2022.3.0/lib/python3.8/site-packages/urllib3/connection.py", line 358, in connect
    self.sock = conn = self._new_conn()
  File "/data/venv/openvino_2022.3.0/lib/python3.8/site-packages/urllib3/connection.py", line 186, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7fe08510f400>: Failed to establish a new connection: [Errno -2] Name or service not known

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/data/venv/openvino_2022.3.0/lib/python3.8/site-packages/requests/adapters.py", line 489, in send
    resp = conn.urlopen(
  File "/data/venv/openvino_2022.3.0/lib/python3.8/site-packages/urllib3/connectionpool.py", line 787, in urlopen
    retries = retries.increment(
  File "/data/venv/openvino_2022.3.0/lib/python3.8/site-packages/urllib3/util/retry.py", line 592, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /chuanqi305/MobileNet-SSD/ba00fc987b3eb0ba87bb99e89bf0298a2fd10765/MobileNetSSD_deploy.prototxt (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fe08510f400>: Failed to establish a new connection: [Errno -2] Name or service not known'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/data/venv/openvino_2022.3.0/lib/python3.8/site-packages/openvino/model_zoo/download_engine/downloader.py", line 116, in _try_download
    chunk_iterable, continue_offset = start_download(offset=progress.size, timeout=self.timeout, size=size, checksum=hasher)
  File "/data/venv/openvino_2022.3.0/lib/python3.8/site-packages/openvino/model_zoo/download_engine/file_source.py", line 71, in start_download
    response = session.get(self.url, stream=True, timeout=timeout,
  File "/data/venv/openvino_2022.3.0/lib/python3.8/site-packages/requests/sessions.py", line 600, in get
    return self.request("GET", url, **kwargs)
  File "/data/venv/openvino_2022.3.0/lib/python3.8/site-packages/requests/sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "/data/venv/openvino_2022.3.0/lib/python3.8/site-packages/requests/sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "/data/venv/openvino_2022.3.0/lib/python3.8/site-packages/requests/adapters.py", line 565, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /chuanqi305/MobileNet-SSD/ba00fc987b3eb0ba87bb99e89bf0298a2fd10765/MobileNetSSD_deploy.prototxt (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fe08510f400>: Failed to establish a new connection: [Errno -2] Name or service not known'))
########## Error: Download failed

FAILED:
mobilenet-ssd
\nAll files that were downloaded:
./raw_models
./raw_models/public
./raw_models/public/bert-base-ner
./raw_models/public/mobilenet-ssd

 

1 Solution
IntelSupport
Community Manager

Hi Pastorsoto,

 

Thanks for reaching out.

 

I am able to download and convert mobilenet-ssd into OpenVINO IR format using the commands below:

 

omz_downloader --name mobilenet-ssd

omz_converter --name mobilenet-ssd

 

Refer below:

ssd.jpg

 

Please try and see if the same issue persists.

 

 

Regards,

Aznie

 

6 Replies
pastorsoto
Beginner

Yes, I was able to download the model, but it is not able to run inference.

 

I am using the Pneumonia classification notebook from the website, which already includes a model. That model was converted to IR files using:

 

# Create FP16 IR files
!mo \
--input_model model.pb \
--input_shape=[1,224,224,3] \
--data_type FP16 \
-o models/FP16/ \
--mean_values [123.75,116.28,103.58] \
--scale_values [58.395,57.12,57.375] 

# Create FP32 IR files
!mo \
--input_model model.pb \
--input_shape=[1,224,224,3] \
--data_type FP32 \
-o models/FP32/ \
--mean_values [123.75,116.28,103.58] \
--scale_values [58.395,57.12,57.375] 

# find all resulting IR files
!echo "\nAll IR files that were downloaded or created:"
!find ./models -name "*.xml" -o -name "*.bin"

 

So, I successfully downloaded the model and converted it into IR files using:

 

# Run the following cell to use the Model Optimizer to create the FP16 and FP32 model IR files
!mo \
--input_model raw_models/public/mobilenet-ssd/mobilenet-ssd.caffemodel \
--input_shape=[1,3,300,300] \
--data_type FP16 \
--output_dir models/mobilenet-ssd/FP16 \
--mean_values [123.75,116.28,103.58] \
--scale_values [58.395,57.12,57.375] 

# Create FP32 IR files
!mo \
--input_model raw_models/public/mobilenet-ssd/mobilenet-ssd.caffemodel \
--input_shape=[1,3,300,300] \
--data_type FP32 \
--output_dir models/mobilenet-ssd/FP32 \
--mean_values [123.75,116.28,103.58] \
--scale_values [58.395,57.12,57.375] 

# find all resulting IR files
!echo "\nAll IR files that were created:"
!find ./models -name "*.xml" -o -name "*.bin"

 

But when I run inference, it does nothing with the model I downloaded, although it works with model.xml.

 

%%writefile classification_pneumonia_job.sh

# Store input arguments: <output_directory> <device> <fp_precision> <input_file>
OUTPUT_FILE=$1
DEVICE=$2
FP_MODEL=$3
INPUT_FILES=("${@:4}")

# The default path for the job is the user's home directory,
#  change directory to where the files are.
echo VENV_PATH=$VENV_PATH
echo OPENVINO_RUNTIME=$OPENVINO_RUNTIME
echo INPUT_FILES="${INPUT_FILES[@]}"
echo FP_MODEL=$FP_MODEL
echo INPUT_TILE=$INPUT_FILES
echo NUM_REQS=$NUM_REQS

# Follow this order of setting up environment for openVINO 2022.1.0.553
echo "Activating a Python virtual environment from ${VENV_PATH}..."
source ${VENV_PATH}/bin/activate
echo "Activating OpenVINO variables from ${OPENVINO_RUNTIME}..."
source ${OPENVINO_RUNTIME}/setupvars.sh


cd $PBS_O_WORKDIR

# Make sure that the output directory exists.
mkdir -p $OUTPUT_FILE

# Set inference model IR files using specified precision
MODELPATH=models/mobilenet-ssd/${FP_MODEL}/mobilenet-ssd.xml
#MODELPATH=models/${FP_MODEL}/model.xml

pip3 install Pillow
# Run the pneumonia detection code
python3 classification_pneumonia.py -m $MODELPATH \
                                    -i "${INPUT_FILES[@]}" \
                                    -o $OUTPUT_FILE \
                                    -d $DEVICE

 

This is the last file, which runs the inference. When I run it with model.xml it works fine, but with mobilenet-ssd.xml it doesn't work. I tried changing the input_shape, and I went into the classification_pneumonia.py file and changed the target_size, but none of that worked. Any idea how to run inference with a model other than the one already loaded?

IntelSupport
Community Manager

Hi Pastorsoto,

 

Please share your repository details or the source of your Jupyter notebook for further checking.

 

 

Regards,

Aznie


pastorsoto
Beginner

This is the link of the notebook I am trying to run:

 

https://cdrdv2.intel.com/v1/dl/getContent/678632?explicitVersion=true

 

The notebook runs fine with the default settings; however, when I try to use a different downloaded model, it doesn't run the inference.

 

The environment is Python 3.8 (OpenVINO 2022.3.0).

This is the code I used:

 

import json
import sys
from pathlib import Path

from IPython.display import Markdown, display
from openvino.runtime import Core

sys.path.append("../utils")
#from notebook_utils import DeviceNotFoundAlert, NotebookAlert

base_model_dir = Path("model")
omz_cache_dir = Path("cache")
precision = "FP16"
precision_2 = "FP32"

# Check if an iGPU is available on this system to use with Benchmark App.
ie = Core()
gpu_available = "GPU" in ie.available_devices

print(
    f"base_model_dir: {base_model_dir}, omz_cache_dir: {omz_cache_dir}, gpu_availble: {gpu_available}"
)


model_name = "mobilenet-ssd"

# download the model
download_command = (
    f"omz_downloader --name {model_name} --output_dir {base_model_dir} --cache_dir {omz_cache_dir}"
)
display(Markdown(f"Download command: `{download_command}`"))
display(Markdown(f"Downloading {model_name}..."))
! $download_command



model_info_output = %sx omz_info_dumper --name $model_name
model_info = json.loads(model_info_output.get_nlstr())

if len(model_info) > 1:
    NotebookAlert(
        f"There are multiple IR files for the {model_name} model. The first model in the "
        "omz_info_dumper output will be used for benchmarking. Change "
        "`selected_model_info` in the cell below to select a different model from the list.",
        "warning",
    )

model_info


selected_model_info = model_info[0]
MODEL_PATH = (
    base_model_dir
    / Path(selected_model_info["subdirectory"])
    / Path(f"{precision_2}/{selected_model_info['name']}.xml")
)
print(MODEL_PATH, "exists:", MODEL_PATH.exists())


# Run the following cell to use the Model Optimizer to create the FP16 and FP32 model IR files
!mo \
--input_model raw_models/public/mobilenet-ssd/mobilenet-ssd.caffemodel \
--input_shape=[1,3,300,300] \
--data_type FP16 \
--output_dir models/mobilenet-ssd/FP16 \
--mean_values [127.5,127.5,127.5] \
--scale_values [0.007843,0.007843,0.007843]

# Create FP32 IR files
!mo \
--input_model raw_models/public/mobilenet-ssd/mobilenet-ssd.caffemodel \
--input_shape=[1,3,300,300] \
--data_type FP32 \
--output_dir models/mobilenet-ssd/FP32 \
--mean_values [127.5,127.5,127.5] \
--scale_values [0.007843,0.007843,0.007843]

# find all resulting IR files
!echo "\nAll IR files that were created:"
!find ./models -name "*.xml" -o -name "*.bin"


%%writefile classification_pneumonia_job.sh

# Store input arguments: <output_directory> <device> <fp_precision> <input_file>
OUTPUT_FILE=$1
DEVICE=$2
FP_MODEL=$3
INPUT_FILE="$4"

# The default path for the job is the user's home directory,
#  change directory to where the files are.
echo VENV_PATH=$VENV_PATH
echo OPENVINO_RUNTIME=$OPENVINO_RUNTIME
echo INPUT_FILE=$INPUT_FILE
echo FP_MODEL=$FP_MODEL
echo INPUT_TILE=$INPUT_FILE
echo NUM_REQS=$NUM_REQS

# Follow this order of setting up environment for openVINO 2022.1.0.553
echo "Activating a Python virtual environment from ${VENV_PATH}..."
source ${VENV_PATH}/bin/activate
echo "Activating OpenVINO variables from ${OPENVINO_RUNTIME}..."
source ${OPENVINO_RUNTIME}/setupvars.sh


cd $PBS_O_WORKDIR

# Make sure that the output directory exists.
mkdir -p $OUTPUT_FILE

# Set inference model IR files using specified precision
MODELPATH=models/${FP_MODEL}/mobilenet-ssd.xml
pip3 install Pillow
# Run the pneumonia detection code
python3 classification_pneumonia.py -m $MODELPATH \
                                    -i "$INPUT_FILE" \
                                    -o $OUTPUT_FILE \
                                    -d $DEVICE

 

My guess is that it could be something related to the input shape, but I am not really sure.
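For what it's worth, the two conversions in this thread do use different layouts: the notebook's model.pb was converted with --input_shape=[1,224,224,3] (channels last), while mobilenet-ssd was converted with [1,3,300,300] (channels first). A minimal sketch of that layout difference in plain Python, using a hypothetical hwc_to_chw helper and no OpenVINO dependency:

```python
def hwc_to_chw(image):
    """Convert an H x W x C nested list (channels-last layout) to C x H x W (channels-first)."""
    h, w, c = len(image), len(image[0]), len(image[0][0])
    return [[[image[y][x][ch] for x in range(w)] for y in range(h)] for ch in range(c)]

# A tiny 2x2 RGB "image": each pixel is [R, G, B].
img = [[[1, 2, 3], [4, 5, 6]],
       [[7, 8, 9], [10, 11, 12]]]

chw = hwc_to_chw(img)
print(len(chw), len(chw[0]), len(chw[0][0]))  # 3 2 2 -> channels now come first
print(chw[0])                                 # the red channel: [[1, 4], [7, 10]]
```

If the preprocessing in classification_pneumonia.py only resizes to 224x224 and keeps channels last, a 300x300 channels-first model would receive a tensor in both the wrong size and the wrong layout.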

JesusE_Intel
Moderator

Hi pastorsoto,


The Pneumonia Detection sample was put together for this specific model. Using a different model may require many changes to the OpenVINO inference code, as the model architecture may be different. Take a look at the Jupyter notebook; it contains some background information about the specific model. There are other OpenVINO samples that handle MobileNet-SSD architectures. Take a look at this sample code:


https://docs.openvino.ai/latest/omz_demos_single_human_pose_estimation_demo_python.html#doxid-omz-demos-single-human-pose-estimation-demo-python
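On top of the architecture difference, the output tensors are parsed differently: a classifier such as the pneumonia model emits one score per class, while an SSD detector emits rows of the form [image_id, label, confidence, x_min, y_min, x_max, y_max]. A rough illustration in plain Python with made-up numbers (not the notebook's actual postprocessing):

```python
# Classification output: one score per class; the result is the argmax.
class_scores = [0.12, 0.85, 0.03]
predicted = max(range(len(class_scores)), key=class_scores.__getitem__)
print("class id:", predicted)  # class id: 1

# SSD detection output: each row is
# [image_id, label, confidence, x_min, y_min, x_max, y_max] (normalized coordinates).
detections = [
    [0, 15, 0.92, 0.10, 0.20, 0.40, 0.60],
    [0, 15, 0.30, 0.50, 0.50, 0.70, 0.90],  # below threshold, discarded
]
kept = [d for d in detections if d[2] >= 0.5]
print("boxes kept:", len(kept))  # boxes kept: 1
```

Postprocessing written for the first shape simply has nothing meaningful to read in the second, which would explain inference appearing to "do nothing" with mobilenet-ssd.xml.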


Regards,

Jesus


JesusE_Intel
Moderator

If you need any additional information, please submit a new question as this thread will no longer be monitored.

