Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Error running model inference with OpenVINO C++ API

ps2023
Beginner

Hi,

I have a Docker image of DL Workbench 2022.1 on Ubuntu 20.04 LTS and am using the yolo-v4-tiny-tf model (downloaded from the Open Model Zoo) in DL Workbench to understand model inference with the C++ API.

I tried the sample Jupyter notebook from my project:

Learn OpenVINO tab in the Project -> Model Inference with OpenVINO API -> C++ API

 

I provided the appropriate path for the .xml file in the Jupyter notebook and ran Step 3 (Build the executable), which succeeded. I then get an error while executing Step 4 (Execute the Application):

# Executable application accepts several arguments:
# Path to the model (.xml)
MODEL="/home/workbench/.workbench/models/1/original/yolo-v4-tiny-tf.xml" #"model/public/squeezenet1.1/FP16/squeezenet1.1.xml"
# Batch size - how many images should be fed to the network at one time
BATCH_SIZE=1
# Number of streams
STREAMS=4
# Device
DEVICE="CPU"

# Usage: ./sample_app path_to_model_xml number_of_batches number_of_streams
./sample_app ${MODEL} ${BATCH_SIZE} ${STREAMS} ${DEVICE}

 

Following is the error message:

I understand the sample expects the model to have only one output, but YOLO produces two outputs, hence the error:

PrePostProcessor::output() - Model must have exactly one output, got 2
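
For reference, the same exception can be reproduced outside the notebook with a few lines against the 2022.1 C++ API (a minimal sketch, not the sample's exact code):

#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // yolo-v4-tiny-tf IR has two output nodes
    auto model = core.read_model(
        "/home/workbench/.workbench/models/1/original/yolo-v4-tiny-tf.xml");

    ov::preprocess::PrePostProcessor ppp(model);
    // The no-argument overload requires the model to have exactly one output,
    // so this call throws "Model must have exactly one output, got 2":
    ppp.output().tensor().set_element_type(ov::element::f32);
    return 0;
}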

 

Any idea how to change the expected number of outputs, and where to change it?

 

Thank you for your time.

 

 

6 Replies
Hairul_Intel
Moderator

Hi ps2023,

Thank you for reaching out to us.


We're investigating this matter and will get back to you as soon as possible.

 

 

Regards,

Hairul


Hairul_Intel
Moderator

Hi ps2023,

Thank you for your patience.

 

I've encountered a similar error message when running the Jupyter notebook for Model Inference with OpenVINO API using the yolo-v4-tiny-tf model.

 

For your information, that sample is only validated for classification models such as squeezenet1.1.

 

However, the yolo-v4-tiny-tf model is used for object detection and requires different sample code. I'd suggest you use the Object Detection C++ Demo to try out the yolo-v4-tiny-tf model.
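
If you'd like to confirm this from the model itself, one quick check is to list its outputs before running the sample (a minimal sketch, assuming the OpenVINO 2022.1 C++ API; replace the path with the one from your notebook):

#include <openvino/openvino.hpp>
#include <iostream>

int main() {
    ov::Core core;
    // Use the same .xml path you pass to the sample application
    auto model = core.read_model("yolo-v4-tiny-tf.xml");

    // The notebook's sample assumes a single classification output;
    // detection models such as yolo-v4-tiny-tf expose more than one.
    std::cout << "Model has " << model->outputs().size() << " output(s)" << std::endl;
    for (const auto& output : model->outputs()) {
        std::cout << "  " << output.get_any_name()
                  << " " << output.get_partial_shape() << std::endl;
    }
    return 0;
}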

 

 

Regards,

Hairul


ps2023
Beginner

Hi Hairul,

 

Thank you for looking into it. 

 

I have two follow-up questions:

1) Can I modify the C++ code to handle the two outputs from YOLOv4?

2) I am getting an error when running the sample code:

 

source /tmp/virtualenvs/tutorial_sample_application/bin/activate

omz_converter \
--name squeezenet1.1 \
-d raw_model \
-o model

 

========== Converting squeezenet1.1 to IR (FP16)
Conversion command: /tmp/virtualenvs/tutorial_sample_application/bin/python -- /tmp/virtualenvs/tutorial_sample_application/bin/mo --framework=caffe --data_type=FP16 --output_dir=model/public/squeezenet1.1/FP16 --model_name=squeezenet1.1 --input=data '--mean_values=data[104.0,117.0,123.0]' --output=prob --input_model=raw_model/public/squeezenet1.1/squeezenet1.1.caffemodel --input_proto=raw_model/public/squeezenet1.1/squeezenet1.1.prototxt '--layout=data(NCHW)' '--input_shape=[1, 3, 227, 227]'

Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/workbench/.workbench/tutorials/sample_application/raw_model/public/squeezenet1.1/squeezenet1.1.caffemodel
- Path for generated IR: /home/workbench/.workbench/tutorials/sample_application/model/public/squeezenet1.1/FP16
- IR output name: squeezenet1.1
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: data
- Output layers: prob
- Input shapes: [1, 3, 227, 227]
- Source layout: Not specified
- Target layout: Not specified
- Layout: data(NCHW)
- Mean values: data[104.0,117.0,123.0]
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- User transformations: Not specified
- Reverse input channels: False
- Enable IR generation for fixed input shape: False
- Use the transformations config file: None
Advanced parameters:
- Force the usage of legacy Frontend of Model Optimizer for model conversion into IR: False
- Force the usage of new Frontend of Model Optimizer for model conversion into IR: False
Caffe specific parameters:
- Path to Python Caffe* parser generated from caffe.proto: /tmp/virtualenvs/tutorial_sample_application/lib/python3.8/site-packages/openvino/tools/mo/utils/../front/caffe/proto
- Enable resnet optimization: True
- Path to the Input prototxt: /home/workbench/.workbench/tutorials/sample_application/raw_model/public/squeezenet1.1/squeezenet1.1.prototxt
- Path to CustomLayersMapping.xml: /tmp/virtualenvs/tutorial_sample_application/lib/python3.8/site-packages/openvino/tools/mo/utils/../../extensions/front/caffe/CustomLayersMapping.xml
- Path to a mean file: Not specified
- Offsets for a mean file: Not specified
OpenVINO runtime found in: /opt/intel/openvino/python/python3.8/openvino
OpenVINO runtime version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
Model Optimizer version: 2022.1.0-7019-cdb9bec7210-releases/2022/1

[ ERROR ] -------------------------------------------------
[ ERROR ] ----------------- INTERNAL ERROR ----------------
[ ERROR ] Unexpected exception happened.
[ ERROR ] Please contact Model Optimizer developers and forward the following information:
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID (<class 'openvino.tools.mo.load.caffe.loader.CaffeLoader'>)": Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
[ ERROR ] Traceback (most recent call last):
File "/tmp/virtualenvs/tutorial_sample_application/lib/python3.8/site-packages/openvino/tools/mo/utils/class_registration.py", line 278, in apply_transform
for_graph_and_each_sub_graph_recursively(graph, replacer.find_and_replace_pattern)
File "/tmp/virtualenvs/tutorial_sample_application/lib/python3.8/site-packages/openvino/tools/mo/middle/pattern_match.py", line 46, in for_graph_and_each_sub_graph_recursively
func(graph)
File "/tmp/virtualenvs/tutorial_sample_application/lib/python3.8/site-packages/openvino/tools/mo/load/loader.py", line 14, in find_and_replace_pattern
self.load(graph)
File "/tmp/virtualenvs/tutorial_sample_application/lib/python3.8/site-packages/openvino/tools/mo/load/caffe/loader.py", line 20, in load
caffe_pb2 = loader.import_caffe_pb2(argv.caffe_parser_path)
File "/tmp/virtualenvs/tutorial_sample_application/lib/python3.8/site-packages/openvino/tools/mo/front/caffe/loader.py", line 24, in import_caffe_pb2
caffe_pb2 = importlib.import_module("caffe_pb2")
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 848, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/tmp/virtualenvs/tutorial_sample_application/lib/python3.8/site-packages/openvino/tools/mo/utils/../front/caffe/proto/caffe_pb2.py", line 32, in <module>
_descriptor.EnumValueDescriptor(
File "/tmp/virtualenvs/tutorial_sample_application/lib/python3.8/site-packages/google/protobuf/descriptor.py", line 796, in __new__
_message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/tmp/virtualenvs/tutorial_sample_application/lib/python3.8/site-packages/openvino/tools/mo/main.py", line 533, in main
ret_code = driver(argv)
File "/tmp/virtualenvs/tutorial_sample_application/lib/python3.8/site-packages/openvino/tools/mo/main.py", line 489, in driver
graph, ngraph_function = prepare_ir(argv)
File "/tmp/virtualenvs/tutorial_sample_application/lib/python3.8/site-packages/openvino/tools/mo/main.py", line 407, in prepare_ir
graph = unified_pipeline(argv)
File "/tmp/virtualenvs/tutorial_sample_application/lib/python3.8/site-packages/openvino/tools/mo/pipeline/unified.py", line 13, in unified_pipeline
class_registration.apply_replacements(graph, [
File "/tmp/virtualenvs/tutorial_sample_application/lib/python3.8/site-packages/openvino/tools/mo/utils/class_registration.py", line 328, in apply_replacements
apply_replacements_list(graph, replacers_order)
File "/tmp/virtualenvs/tutorial_sample_application/lib/python3.8/site-packages/openvino/tools/mo/utils/class_registration.py", line 314, in apply_replacements_list
apply_transform(
File "/tmp/virtualenvs/tutorial_sample_application/lib/python3.8/site-packages/openvino/tools/mo/utils/logger.py", line 112, in wrapper
function(*args, **kwargs)
File "/tmp/virtualenvs/tutorial_sample_application/lib/python3.8/site-packages/openvino/tools/mo/utils/class_registration.py", line 302, in apply_transform
raise Exception('Exception occurred during running replacer "{} ({})": {}'.format(
Exception: Exception occurred during running replacer "REPLACEMENT_ID (<class 'openvino.tools.mo.load.caffe.loader.CaffeLoader'>)": Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates

[ ERROR ] ---------------- END OF BUG REPORT --------------
[ ERROR ] -------------------------------------------------

FAILED:
squeezenet1.1

---------------------------------------------------------------------------
CalledProcessError Traceback (most recent call last)
Input In [3], in <cell line: 1>()
----> 1 get_ipython().run_cell_magic('bash', '', 'source /tmp/virtualenvs/tutorial_sample_application/bin/activate\n\nomz_converter \\\n --name squeezenet1.1 \\\n -d raw_model \\\n -o model\n')

File /opt/intel/openvino_2022.1.0.643/tools/workbench/wb/data/jupyter_env/lib/python3.8/site-packages/IPython/core/interactiveshell.py:2338, in InteractiveShell.run_cell_magic(self, magic_name, line, cell)
2336 with self.builtin_trap:
2337 args = (magic_arg_s, cell)
-> 2338 result = fn(*args, **kwargs)
2339 return result

File /opt/intel/openvino_2022.1.0.643/tools/workbench/wb/data/jupyter_env/lib/python3.8/site-packages/IPython/core/magics/script.py:153, in ScriptMagics._make_script_magic.<locals>.named_script_magic(line, cell)
151 else:
152 line = script
--> 153 return self.shebang(line, cell)

File /opt/intel/openvino_2022.1.0.643/tools/workbench/wb/data/jupyter_env/lib/python3.8/site-packages/IPython/core/magics/script.py:305, in ScriptMagics.shebang(self, line, cell)
300 if args.raise_error and p.returncode != 0:
301 # If we get here and p.returncode is still None, we must have
302 # killed it but not yet seen its return code. We don't wait for it,
303 # in case it's stuck in uninterruptible sleep. -9 = SIGKILL
304 rc = p.returncode or -9
--> 305 raise CalledProcessError(rc, cell)

CalledProcessError: Command 'b'source /tmp/virtualenvs/tutorial_sample_application/bin/activate\n\nomz_converter \\\n --name squeezenet1.1 \\\n -d raw_model \\\n -o model\n'' returned non-zero exit status 1.

 

 

Hairul_Intel
Moderator

Hi ps2023,

 

Here are the answers to your questions:

1) You can refer to the Object Detection C++ Demo code and try modifying your code to accept the YOLOv4 model. However, it is not recommended to do this inside DL Workbench, as the Object Detection C++ Demo requires additional dependencies to work.

 

I'd suggest trying this method using OpenVINO Development Tools and cloning the Open Model Zoo repository to get the Object Detection C++ Demo.
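
For illustration only, the two changes your current sample would roughly need are: address each output in the PrePostProcessor by index instead of calling output() once, and read every output tensor after inference. A simplified sketch against the 2022.1 C++ API (the YOLO-specific decoding of the two output blobs, anchors and NMS, is what the Object Detection C++ Demo implements and is not shown here):

#include <openvino/openvino.hpp>
#include <iostream>

int main() {
    ov::Core core;
    auto model = core.read_model("yolo-v4-tiny-tf.xml");  // same path as in the notebook

    // Configure preprocessing for every output instead of the single-output call
    ov::preprocess::PrePostProcessor ppp(model);
    for (size_t i = 0; i < model->outputs().size(); ++i) {
        ppp.output(i).tensor().set_element_type(ov::element::f32);
    }
    model = ppp.build();

    ov::CompiledModel compiled_model = core.compile_model(model, "CPU");
    ov::InferRequest infer_request = compiled_model.create_infer_request();
    // ... fill the input tensor and call infer_request.infer() here ...

    // Read back both output tensors by index
    for (size_t i = 0; i < compiled_model.outputs().size(); ++i) {
        ov::Tensor out = infer_request.get_output_tensor(i);
        std::cout << "output " << i << " shape: " << out.get_shape() << std::endl;
    }
    return 0;
}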

 

 

2) I've encountered a similar error when running the script for downloading the squeezenet1.1 model. A workaround is to download and convert the model into Intermediate Representation (IR) format independently using OpenVINO Development Tools instead of DL Workbench.

 

Once the model is converted, you can upload the IR model into DL Workbench using the options below:

[Screenshot: model.png]

 

Next, copy the path for the model into the Jupyter Notebook:

[Screenshot: path.png]

 

 

 

Hope this helps.

 

 

Regards,

Hairul

 

 

ps2023
Beginner
Hairul_Intel
Moderator

Hi ps2023,

I'm happy to help.

 

This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.

 

 

Regards,

Hairul

