Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Run Object Detection YOLO* V3 C++ Demo

DVrta
Novice

Dear all,

 

I'm trying to run the Object Detection YOLO* V3 C++ Demo, Async API Performance Showcase:

https://docs.openvinotoolkit.org/2020.2/_demos_object_detection_demo_yolov3_async_README.html

 

I used the Model Downloader to download a pre-trained model:

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader>python downloader.py --name yolo-v3-tf
################|| Downloading models ||################

========== Downloading C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\public\yolo-v3-tf\yolo-v3.pb
... 100%, 242313 KB, 4882 KB/s, 49 seconds passed

========== Downloading C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\public\yolo-v3-tf\yolo-v3.json


################|| Post-processing ||################


C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader>

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

1) Is this model OK to use with the Object Detection YOLO* V3 C++ Demo?

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Then I tried to convert the TensorFlow model to IR, but I get an error:

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

2a)

C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer>mo_tf.py --input_model "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\public\yolo-v3-tf\yolo-v3.pb" -b 1 --output_dir "C:\Program Files (x86)\IntelSWTools\openvino_2020.2.117\deployment_tools\model_optimizer\test\." --tensorflow_use_custom_operations_config "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\public\yolo-v3-tf\yolo-v3.json"
[ WARNING ]  Use of deprecated cli option --tensorflow_use_custom_operations_config detected. Option use in the following releases will be fatal. Please use --transformations_config cli option instead
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\public\yolo-v3-tf\yolo-v3.pb
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino_2020.2.117\deployment_tools\model_optimizer\test\.
        - IR output name:       yolo-v3
        - Log level:    ERROR
        - Batch:        1
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Use the config file:  C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\public\yolo-v3-tf\yolo-v3.json
Model Optimizer version:        2020.2.0-60-g0bc66e26ff
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
ImportError: numpy.core.multiarray failed to import

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 968, in _find_and_load
SystemError: <class '_frozen_importlib._ModuleLockManager'> returned a result with an error set
ImportError: numpy.core._multiarray_umath failed to import
ImportError: numpy.core.umath failed to import
2020-06-02 13:53:38.134078: F tensorflow/python/lib/core/bfloat16.cc:675] Check failed: PyBfloat16_Type.tp_base != nullptr

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

2b)

C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer>mo_tf.py --input_model "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\public\yolo-v3-tf\yolo-v3.pb" -b 1 --output_dir "C:\Program Files (x86)\IntelSWTools\openvino_2020.2.117\deployment_tools\model_optimizer\test\." --tensorflow_custom_operations_config_update "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\public\yolo-v3-tf\yolo-v3.json"
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\public\yolo-v3-tf\yolo-v3.pb
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino_2020.2.117\deployment_tools\model_optimizer\test\.
        - IR output name:       yolo-v3
        - Log level:    ERROR
        - Batch:        1
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\public\yolo-v3-tf\yolo-v3.json
        - Use configuration file used to generate the model with Object Detection API:  None
        - Use the config file:  None
Model Optimizer version:        2020.2.0-60-g0bc66e26ff
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
ImportError: numpy.core.multiarray failed to import

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 968, in _find_and_load
SystemError: <class '_frozen_importlib._ModuleLockManager'> returned a result with an error set
ImportError: numpy.core._multiarray_umath failed to import
ImportError: numpy.core.umath failed to import
2020-06-02 13:53:23.319731: F tensorflow/python/lib/core/bfloat16.cc:675] Check failed: PyBfloat16_Type.tp_base != nullptr

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#freeze-the-tensorflow-model

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Here is also the output from install_prerequisites_tf.bat:

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\install_prerequisites>install_prerequisites_tf.bat
Python 3.6.5
ECHO is off.
Requirement already satisfied: tensorflow<2.0.0,>=1.2.0 in c:\users\david\appdata\roaming\python\python36\site-packages (from -r ..\requirements_tf.txt (line 1)) (1.15.3)
Requirement already satisfied: networkx>=1.11 in c:\users\david\appdata\roaming\python\python36\site-packages (from -r ..\requirements_tf.txt (line 2)) (2.4)
Requirement already satisfied: numpy>=1.12.0 in c:\users\david\appdata\roaming\python\python36\site-packages (from -r ..\requirements_tf.txt (line 3)) (1.13.0)
Requirement already satisfied: defusedxml>=0.5.0 in c:\users\david\appdata\roaming\python\python36\site-packages (from -r ..\requirements_tf.txt (line 4)) (0.6.0)
Requirement already satisfied: grpcio>=1.8.6 in c:\users\david\appdata\roaming\python\python36\site-packages (from tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (1.29.0)
Requirement already satisfied: gast==0.2.2 in c:\users\david\appdata\roaming\python\python36\site-packages (from tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (0.2.2)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in c:\users\david\appdata\roaming\python\python36\site-packages (from tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (0.34.2)
Requirement already satisfied: tensorflow-estimator==1.15.1 in c:\users\david\appdata\roaming\python\python36\site-packages (from tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (1.15.1)
Requirement already satisfied: absl-py>=0.7.0 in c:\users\david\appdata\roaming\python\python36\site-packages (from tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (0.9.0)
Requirement already satisfied: google-pasta>=0.1.6 in c:\users\david\appdata\roaming\python\python36\site-packages (from tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (0.2.0)
Requirement already satisfied: tensorboard<1.16.0,>=1.15.0 in c:\users\david\appdata\roaming\python\python36\site-packages (from tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (1.15.0)
Requirement already satisfied: astor>=0.6.0 in c:\users\david\appdata\roaming\python\python36\site-packages (from tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (0.8.1)
Requirement already satisfied: six>=1.10.0 in c:\users\david\appdata\roaming\python\python36\site-packages (from tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (1.15.0)
Requirement already satisfied: protobuf>=3.6.1 in c:\users\david\appdata\roaming\python\python36\site-packages (from tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (3.6.1)
Requirement already satisfied: termcolor>=1.1.0 in c:\users\david\appdata\roaming\python\python36\site-packages (from tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (1.1.0)
Requirement already satisfied: wrapt>=1.11.1 in c:\users\david\appdata\roaming\python\python36\site-packages (from tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (1.12.1)
Requirement already satisfied: opt-einsum>=2.3.2 in c:\users\david\appdata\roaming\python\python36\site-packages (from tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (3.2.1)
Requirement already satisfied: keras-applications>=1.0.8 in c:\users\david\appdata\roaming\python\python36\site-packages (from tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (1.0.8)
Requirement already satisfied: keras-preprocessing>=1.0.5 in c:\users\david\appdata\roaming\python\python36\site-packages (from tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (1.1.2)
Requirement already satisfied: decorator>=4.3.0 in c:\users\david\appdata\roaming\python\python36\site-packages (from networkx>=1.11->-r ..\requirements_tf.txt (line 2)) (4.4.2)
Requirement already satisfied: markdown>=2.6.8 in c:\users\david\appdata\roaming\python\python36\site-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (3.2.2)
Requirement already satisfied: werkzeug>=0.11.15 in c:\users\david\appdata\roaming\python\python36\site-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (1.0.1)
Requirement already satisfied: setuptools>=41.0.0 in c:\users\david\appdata\roaming\python\python36\site-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (47.1.1)
Requirement already satisfied: h5py in c:\users\david\appdata\roaming\python\python36\site-packages (from keras-applications>=1.0.8->tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (2.10.0)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in c:\users\david\appdata\roaming\python\python36\site-packages (from markdown>=2.6.8->tensorboard<1.16.0,>=1.15.0->tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (1.6.0)
Requirement already satisfied: zipp>=0.5 in c:\users\david\appdata\roaming\python\python36\site-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard<1.16.0,>=1.15.0->tensorflow<2.0.0,>=1.2.0->-r ..\requirements_tf.txt (line 1)) (3.1.0)
*****************************************************************************************
Warning: please expect that Model Optimizer conversion might be slow.
You can boost conversion speed by installing protobuf-*.egg located in the
"model-optimizer\install_prerequisites" folder or building protobuf library from sources.
For more information please refer to Model Optimizer FAQ, question #80.

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Please can you help?

 

Br.

 

Vladimir_Dudnik
Employee

Hello,

Here is what I did on Windows:

1. From Start\Visual Studio 2015 launch command line window "VS2015 x64 Native Tools Command Prompt"

2. In this command line window, create a Python 3.5 environment with the command: conda create -n py3.5-openvino python=3.5, then run activate py3.5-openvino

3. Call the OpenVINO environment setup with the command: "c:\Program Files (x86)\IntelSWTools\openvino_2019.2.117\bin\setupvars.bat"

4. Install the Open Model Zoo Python requirements with the command: conda install --yes --file "c:\Program Files (x86)\IntelSWTools\openvino_2019.2.117\deployment_tools\open_model_zoo\tools\downloader\requirements.in"

5. Download the model with the command: python "c:\Program Files (x86)\IntelSWTools\openvino_2019.2.117\deployment_tools\open_model_zoo\tools\downloader\downloader.py" --name yolo-v3-tf -o <download folder>

6. Convert the model to IR with the command: python "c:\Program Files (x86)\IntelSWTools\openvino_2019.2.117\deployment_tools\open_model_zoo\tools\downloader\converter.py" --name yolo-v3-tf -d <download folder> -o <conversion folder>

7. Build the Open Model Zoo demos with the command: "c:\Program Files (x86)\IntelSWTools\openvino_2019.2.117\deployment_tools\open_model_zoo\demos\build_demos_msvc.bat" VS2015

8. Launch the C++ demo with the command: object_detection_demo_yolov3_async.exe -i <sample video file> -m <conversion folder>\public\yolo-v3-tf\FP16\yolo-v3-tf.xml -d CPU, and I got detections on the video.

Regards,
  Vladimir

DVrta
Novice

Hi Vladimir,

thank you for the reply, I will try the converter.py script.

 

I thought that the Model Optimizer is used to convert from Caffe, TensorFlow, etc. to the IR format, as described here:

https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Converting_Model.html

 

So my question is: when do we use mo.py and when converter.py?

 

Br.

DVrta
Novice

Hi Vladimir,

I solved the issue with:

python -m pip install --upgrade numpy

and successfully ran the demo with:

object_detection_demo_yolov3_async.exe -i cam -m "C:\Program Files (x86)\IntelSWTools\openvino_2020.2.117\deployment_tools\open_model_zoo\tools\downloader\public\yolo-v3-tf\FP32\yolo-v3-tf.xml" -d MYRIAD

 

Now I'm only missing the labels; the detections only show a bounding box and then it says label #56: 0.576.

 

How can I solve this? When I check object_detection_demo_yolov3_async.exe -h, there is no --labels option.

 

In the demo README.md it says:

 

## Demo Output

The demo uses OpenCV to display the resulting frame with detections (rendered as bounding boxes and labels, if provided).

 

How can I provide the labels?

 

Br.

Vladimir_Dudnik
Employee

Hi David,

Conversion of a model from its source framework format is done by the OpenVINO Model Optimizer tool, which is called from the command line as mo.py. For the conversion, MO may require a number of different parameters, which you have to supply on the command line. Those parameters vary between different models and different source frameworks.

Open Model Zoo's converter.py is a tool which simplifies the use of the Model Optimizer for Open Model Zoo models. We test all public OMZ models internally and choose the required Model Optimizer parameters for each model. The parameters are kept in the OMZ model configuration files (all those model.yml files), located in each OMZ model folder. The purpose of the model.yml file is to configure the OMZ model downloader (it provides links where to download the model from and checksums to verify the download was not corrupted) and additionally, for public models which require conversion to IR, it also contains the Model Optimizer parameters for that model.

So, when you need to convert an OMZ public model to OpenVINO IR, it is more convenient to use the OMZ converter tool, so you do not have to look up (or remember) which Model Optimizer parameters should be specified for that particular model. The converter tool will call the Model Optimizer with all the necessary parameters for you.

But when you need to convert a model which is not part of the Open Model Zoo, it is your responsibility to study the Model Optimizer options for the model's source framework and the desired IR features, and to run mo.py on your own.
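For reference, here is a rough sketch of the two invocations, using the default 2020.2 install path from earlier in this thread (the exact Model Optimizer flags for yolo-v3 come from its model.yml, so the mo_tf.py line below is only illustrative):

rem OMZ public model: converter.py reads the required Model Optimizer flags from the model's model.yml
python "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\open_model_zoo\tools\downloader\converter.py" --name yolo-v3-tf -d <download folder> -o <output folder>

rem Custom (non-OMZ) model: you call the Model Optimizer yourself and supply every flag
python "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo_tf.py" --input_model yolo-v3.pb --transformations_config yolo-v3.json -b 1 --output_dir <output folder>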

Regards,
  Vladimir

Vladimir_Dudnik
Employee

The labels file for this demo should be named <model-name>.labels, so that it matches the model file name. We will replace this approach with a command line option to specify a labels file with an arbitrary name.
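For example, a sketch based on the model path you used above (it assumes you have a plain text file, here called coco.names, with the 80 COCO class names, one per line; that file is not part of the OpenVINO install):

rem coco.names is a hypothetical file with one class name per line, in the order the model was trained on
copy coco.names "C:\Program Files (x86)\IntelSWTools\openvino_2020.2.117\deployment_tools\open_model_zoo\tools\downloader\public\yolo-v3-tf\FP32\yolo-v3-tf.labels"

The demo should then pick up the class names, since the .labels file shares its base name and folder with yolo-v3-tf.xml.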

Regards,
  Vladimir
