Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Problem while following the OpenVINO 2019_R3.1 documentation

neilyoung
Beginner
3,157 Views

Hi,

I'm a total newbie to this material. I have a brand new Neural Compute Stick 2 here and I'm trying to find a use case for it.

After a failed installation on macOS Catalina ("Detected operating system is not supported. Supported operating systems for this release are macOS 1.13 and 10.14" -- welcome to 2020, Intel) and a complete failure of the installation on a "mature" (aka messed up) Ubuntu 16.04, I created an 18.04 VM and was able to install everything.

I was also able to run the first couple of demo apps on the CPU. I'm referring to this document: http://docs.openvinotoolkit.org/2019_R3.1/_docs_install_guides_installing_openvino_linux.html#additional-NCS-steps.

I blindly followed all the advice given in the section "Steps for Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2" and finally came to the step "Set Up a Neural Network Model". I created the required directory and ran the Python script:

The result is:

decades@decades-VirtualBox:/opt/intel/openvino/deployment_tools/demo$ python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model /home/decades/openvino_models/models/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP16 --output_dir .
[ ERROR ]  The "/home/decades/openvino_models/models/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel" is not existing file

And the script is right: there is no such directory/file, although all the previous steps completed successfully. What am I doing wrong?

TIA

 

 

0 Kudos
14 Replies
Max_L_Intel
Moderator

Hi Neil.

We've tested this section, and it seems the user guide on our website is outdated: the latest version of the model downloader component within the OpenVINO toolkit puts the SqueezeNet model into the folder /home/<user>/openvino_models/models/public/squeezenet1.1/

Hence the correct command would be:
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model /home/<user>/openvino_models/models/public/squeezenet1.1/squeezenet1.1.caffemodel --data_type FP16 --output_dir .

We will escalate this case so we can correct the folder path in the user guide.
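In case the layout shifts again in a future release, a quick way to locate the downloaded model yourself is to search the download root. This is only a sketch: the openvino_models path below is the default used by the demo scripts and may differ on your machine.

```shell
# Look for the SqueezeNet weights under the default model download root.
# MODEL_ROOT is an assumption -- adjust it if you passed a different
# --output_dir to the model downloader.
MODEL_ROOT="${HOME}/openvino_models/models"
if [ -d "$MODEL_ROOT" ]; then
  find "$MODEL_ROOT" -name 'squeezenet1.1.caffemodel'
else
  echo "No $MODEL_ROOT yet -- run the model downloader first"
fi
```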

Max_L_Intel
Moderator

With regards to macOS Catalina support, unfortunately it is not officially validated with the currently available build (2019 R3.1). However, to suppress this warning message, please use Shift + right-click to launch the installer. You'll then be able to install OpenVINO on macOS Catalina.

nyoun7
Beginner

Hi Max,

top 1: Yes, I also noticed this confusion. But after the installation of the October 2019 R3.1 version of the OpenVINO toolkit I had several folders: one in /opt/intel/openvino, and there was also something in my home directory. I cannot check right now, since I messed my system up and needed to re-install everything. Will let you know.

 

top 2: No, you got me wrong. This is not a Catalina problem. I could launch the installer w/o problem. The error message is given from within the installer.

 

nyoun7
Beginner

Regarding the python script:

Indeed, your changed command line was a bit more successful, but it revealed a new problem:

decades@ubuntu:~/squeezenet1.1_FP16$ python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model /home/decades/openvino_models/models/public/squeezenet1.1/squeezenet1.1.caffemodel --data_type FP16 --output_dir .
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/decades/openvino_models/models/public/squeezenet1.1/squeezenet1.1.caffemodel
	- Path for generated IR: 	/home/decades/squeezenet1.1_FP16/.
	- IR output name: 	squeezenet1.1
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
Caffe specific parameters:
	- Path to Python Caffe* parser generated from caffe.proto: 	/opt/intel/openvino/deployment_tools/model_optimizer/mo/front/caffe/proto
	- Enable resnet optimization: 	True
	- Path to the Input prototxt: 	/home/decades/openvino_models/models/public/squeezenet1.1/squeezenet1.1.prototxt
	- Path to CustomLayersMapping.xml: 	Default
	- Path to a mean file: 	Not specified
	- Offsets for a mean file: 	Not specified
Model Optimizer version: 	2019.3.0-408-gac8584cb7
[ WARNING ]  
Detected not satisfied dependencies:
	protobuf: installed: 3.0.0, required: 3.6.1

Please install required versions of components or use install_prerequisites script
/opt/intel/openvino_2019.3.376/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_caffe.sh
Note that install_prerequisites scripts may install additional components.

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/decades/squeezenet1.1_FP16/./squeezenet1.1.xml
[ SUCCESS ] BIN file: /home/decades/squeezenet1.1_FP16/./squeezenet1.1.bin
[ SUCCESS ] Total execution time: 2.63 seconds. 


I did run the update script

/opt/intel/openvino_2019.3.376/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_caffe.sh

but the problem remains. 

nyoun7
Beginner

BTW: Step 4 fails too of course.

cp /home/<user>/openvino_models/ir/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.labels .

In real life:

decades@ubuntu:~/squeezenet1.1_FP16$ cp /home/decades/openvino_models/ir/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.labels .
cp: cannot stat '/home/decades/openvino_models/ir/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.labels': No such file or directory
 

nyoun7
Beginner

And finally running the sample on an NCS2 gives this:

decades@ubuntu:~/inference_engine_samples_build/intel64/Release$ ./classification_sample_async -i /opt/intel/openvino/deployment_tools/demo/car.png -m /home/decades/squeezenet1.1_FP16/squeezenet1.1.xml -d MYRIAD
[ INFO ] InferenceEngine: 
    API version ............ 2.1
    Build .................. custom_releases/2019/R3_ac8584cb714a697a12f1f30b7a3b78a5b9ac5e05
    Description ....... API
[ INFO ] Parsing input parameters
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /opt/intel/openvino/deployment_tools/demo/car.png
[ INFO ] Creating Inference Engine
    MYRIAD
    myriadPlugin version ......... 2.1
    Build ........... 32974

[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (787, 259) to (227, 227)
[ INFO ] Batch size is 1
[ INFO ] Loading model to the device
E: [ncAPI] [    567270] [classification_] ncDeviceOpen:859    Device doesn't appear after boot
[ ERROR ] Can not init Myriad device: NC_ERROR
 

I have seen things like this a couple of times already with the Intel RealSense T265. Same procedure?

 

nyoun7
Beginner

OK, disregard the last post please (NC_ERROR). I forgot to reboot after the USB updates.

Max_L_Intel
Moderator

Hi Neil.

top 2: No, you got me wrong. This is not a Catalina problem. I could launch the installer w/o problem. The error message is given from within the installer.

As mentioned before, macOS Catalina is not officially supported by the 2019 R3.1 build, so the OpenVINO toolkit installer notifies you about that fact.
Yes, this is not a Catalina OS problem.

BTW: Step 4 fails too of course.

cp /home/<user>/openvino_models/ir/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.labels .

In real life:

decades@ubuntu:~/squeezenet1.1_FP16$ cp /home/decades/openvino_models/ir/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.labels .
cp: cannot stat '/home/decades/openvino_models/ir/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.labels': No such file or directory

With regards to Step 4 in setting up a NN model, of course we'll fix the folder path here as well. Thank you for the heads-up.
However, for macOS installation and setup we recommend using the dedicated macOS guide: https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_macos.html

[ WARNING ] 
Detected not satisfied dependencies:
    protobuf: installed: 3.0.0, required: 3.6.1

Please install required versions of components or use install_prerequisites script
/opt/intel/openvino_2019.3.376/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_caffe.sh
Note that install_prerequisites scripts may install additional components.

With regards to the warning message you get: the protobuf package on your system has not been updated.
If you run the install_prerequisites_caffe.sh or install_prerequisites.sh script, it should update protobuf to version 3.6.1.

If that still doesn't work, try running the specific command from the script:

python3 -m pip install protobuf

If it still doesn't get installed, please share the bash output with us here.
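One common cause of the warning persisting after running the prerequisites script is that the script and the mo.py invocation resolve different Python environments. A small check (a sketch; nothing OpenVINO-specific is assumed) that prints what the interpreter actually imports:

```shell
# Print the protobuf version seen by the same python3 that runs mo.py.
# If this differs from what pip reports, two environments are in play.
python3 - <<'EOF'
try:
    import google.protobuf as pb
    print("protobuf seen by python3:", pb.__version__)
except ImportError:
    print("protobuf seen by python3: none")
EOF
```

Since the Model Optimizer warning asks for exactly 3.6.1, pinning the version (e.g. `python3 -m pip install protobuf==3.6.1`) should match what the prerequisites script installs, rather than pulling the latest release.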

Max_L_Intel
Moderator

top 2: No, you got me wrong. This is not a Catalina problem. I could launch the installer w/o problem. The error message is given from within the installer

Hello Neil.

We would like to inform you that the latest 2020.1 build of the OpenVINO toolkit, released today, officially supports macOS Catalina.
You can download it here https://software.intel.com/en-us/openvino-toolkit/choose-download/free-download-macos 

Best regards, Max.

nyoun7
Beginner

Thanks for the info. Indeed, the install worked. Then I got as far as

Run the Image Classification Verification Script

Go to the Inference Engine demo directory:

cd /opt/intel/openvino/deployment_tools/demo

Run the Image Classification verification script:

./demo_squeezenet_download_convert_run.sh

/opt/intel/openvino/deployment_tools/demo $ ./demo_squeezenet_download_convert_run.sh

target_precision = FP16
[setupvars.sh] OpenVINO environment initialized

###################################################

Downloading the Caffe model and the prototxt
Installing dependencies
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/site-packages (from -r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 1)) (5.3)
Requirement already satisfied: requests in /usr/local/lib/python3.7/site-packages (from -r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (2.22.0)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/site-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (2019.11.28)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.7/site-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (2.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/site-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (1.25.7)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.7/site-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (3.0.4)
Run python3.7 /opt/intel//openvino_2020.1.023/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name squeezenet1.1 --output_dir /Users/decades/openvino_models/models --cache_dir /Users/decades/openvino_models/cache

################|| Downloading models ||################

========== Retrieving /Users/decades/openvino_models/models/public/squeezenet1.1/squeezenet1.1.prototxt from the cache
========== Retrieving /Users/decades/openvino_models/models/public/squeezenet1.1/squeezenet1.1.caffemodel from the cache

################|| Post-processing ||################

========== Replacing text in /Users/decades/openvino_models/models/public/squeezenet1.1/squeezenet1.1.prototxt

Target folder /Users/decades/openvino_models/ir/public/squeezenet1.1/FP16 already exists. Skipping IR generation with Model Optimizer. If you want to convert a model again, remove the entire /Users/decades/openvino_models/ir/public/squeezenet1.1/FP16 folder. Then run the script again

###################################################

Build Inference Engine samples

-- The C compiler identification is AppleClang 11.0.0.11000033
-- The CXX compiler identification is AppleClang 11.0.0.11000033
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for C++ include unistd.h
-- Looking for C++ include unistd.h - found
-- Looking for C++ include stdint.h
-- Looking for C++ include stdint.h - found
-- Looking for C++ include sys/types.h
-- Looking for C++ include sys/types.h - found
-- Looking for C++ include fnmatch.h
-- Looking for C++ include fnmatch.h - found
-- Looking for strtoll
-- Looking for strtoll - found
-- Found InferenceEngine: /opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/lib/intel64/libinference_engine.dylib (Required is at least version "2.1")
-- Configuring done

CMake Warning (dev):
  Policy CMP0042 is not set: MACOSX_RPATH is enabled by default.  Run "cmake
  --help-policy CMP0042" for policy details.  Use the cmake_policy command to
  set the policy and suppress this warning.

  MACOSX_RPATH is not specified for the following targets:

   format_reader

This warning is for project developers.  Use -Wno-dev to suppress it.

-- Generating done
-- Build files have been written to: /Users/decades/inference_engine_samples_build
[ 36%] Built target gflags_nothreads_static
[ 81%] Built target format_reader
[100%] Built target classification_sample_async

###################################################

Run Inference Engine classification sample

Run ./classification_sample_async -d CPU -i /opt/intel/openvino/deployment_tools/demo/car.png -m /Users/decades/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml

[ INFO ] InferenceEngine: 
    API version ............ 2.1
    Build .................. 37988
    Description ....... API
[ INFO ] Parsing input parameters
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /opt/intel/openvino/deployment_tools/demo/car.png
[ INFO ] Creating Inference Engine
[ ERROR ] Failed to create plugin /opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/lib/intel64/libMKLDNNPlugin.dylib for device CPU
Please, check your environment
Cannot load library '/opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/lib/intel64/libMKLDNNPlugin.dylib': dlopen(/opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/lib/intel64/libMKLDNNPlugin.dylib, 1): Library not loaded: @rpath/libmkl_tiny_tbb.dylib
  Referenced from: /opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/lib/intel64/libMKLDNNPlugin.dylib
  Reason: no suitable image found.  Did find:
/opt/intel//openvino_2020.1.023/deployment_tools/inference_engine/external/mkltiny_mac/lib/libmkl_tiny_tbb.dylib: code signature in (/opt/intel//openvino_2020.1.023/deployment_tools/inference_engine/external/mkltiny_mac/lib/libmkl_tiny_tbb.dylib) not valid for use in process using Library Validation: library load disallowed by system policy
/opt/intel//openvino_2020.1.023/deployment_tools/inference_engine/external/mkltiny_mac/lib/libmkl_tiny_tbb.dylib: code signature in (/opt/intel//openvino_2020.1.023/deployment_tools/inference_engine/external/mkltiny_mac/lib/libmkl_tiny_tbb.dylib) not valid for use in process using Library Validation: library load disallowed by system policy

Error on or near line 217; exiting with status 1

/opt/intel/openvino/deployment_tools/demo $

 

This was accompanied by a system dialog saying that the software must be updated, since Apple cannot check it for malicious software.

(Screenshot attached)

 

nyoun7
Beginner

I would really appreciate it if someone could fix the documentation. This part simply does not work.

Set Up a Neural Network Model

If you are running inference on hardware other than VPU-based devices, you already have the required FP32 neural network model converted to an optimized Intermediate Representation (IR). Follow the steps in the Run the Sample Application section to run the sample.

If you want to run inference on a VPU device (Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2 or Intel® Vision Accelerator Design with Intel® Movidius™ VPUs), you'll need an FP16 version of the model, which you will set up in this paragraph.

To convert the FP32 model to a FP16 IR suitable for VPU-based hardware accelerators, follow the steps below:

Make a directory for the FP16 SqueezeNet Model:

mkdir /home/<user>/squeezenet1.1_FP16

Go to /home/<user>/squeezenet1.1_FP16:

cd /home/<user>/squeezenet1.1_FP16

Run the Model Optimizer to convert the FP32 Squeezenet Caffe* model delivered with the installation into an optimized FP16 Intermediate Representation (IR):

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model /home/<user>/openvino_models/models/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP16 --output_dir .

The squeezenet1.1.labels file contains the classes that ImageNet uses. This file is included so that the inference results show text instead of classification numbers. Copy squeezenet1.1.labels to your optimized model location:

cp /home/<user>/openvino_models/ir/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.labels .

Now your neural network setup is complete and you're ready to run the sample application.

While I was able (with your help) to resolve the path problem to the model: Where is the .labels file, which needs to be copied???

 

Then: the requirements installation of 2020.1 clearly stated that protobuf 6.1 was installed (whereas during the installation it said that protobuf 6.13 was installed). Anyway: some script is still complaining that 6.1 is not installed. What a mess...

Model Optimizer version:     2020.1.0-61-gd349c3ba4a
[ WARNING ]  
Detected not satisfied dependencies:
    protobuf: installed: 3.0.0, required: == 3.6.1

 

Max_L_Intel
Moderator

Hello Neil.

While I was able (with your help) to resolve the path problem to the model: Where is the .labels file, which needs to be copied???

The labels file for the squeezenet model should be located at /home/<user>/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.labels
We are also planning to fix this in our documentation.
 

Then: the requirements installation of 2020.1 clearly stated that protobuf 6.1 was installed (whereas during the installation it said that protobuf 6.13 was installed). Anyway: some script is still complaining that 6.1 is not installed. What a mess...

Please make sure that you don't have multiple protobuf versions installed on your system. Run these:

sudo pip3 uninstall -y protobuf 

brew remove protobuf

brew cleanup -s protobuf

Then the following command should show no output if all copies are uninstalled:

pip3 show protobuf 

After that, just reinstall using one of the install_prerequisites scripts.
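The check above can be sketched as a small guard that works the same on Ubuntu and macOS (it assumes nothing beyond python3 being on PATH):

```shell
# After all removals, the import must fail; if it still succeeds, another
# copy of protobuf is lingering somewhere on sys.path.
if python3 -c "import google.protobuf" 2>/dev/null; then
  echo "protobuf still importable -- another copy remains"
else
  echo "protobuf fully removed -- safe to rerun install_prerequisites"
fi
```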
 

This was accompanied by a system dialog saying that the software must be updated, since Apple cannot check it for malicious software.

(Screenshot attached)

For this one you need to manually allow access to libmkl_tiny_tbb.dylib in System Preferences > Security & Privacy > General.

Please use this workaround from Apple under the section "How to open an app that hasn’t been notarized or is from an unidentified developer" - https://support.apple.com/en-us/HT202491

We'll also report this so that it gets fixed. Thank you for letting us know.

nyoun7
Beginner

Thanks

brew remove protobuf

brew cleanup -s protobuf

 

I'm not on a Mac, so this suggestion doesn't apply to me.

For this one you need to manually allow access to libmkl_tiny_tbb.dylib in System Preferences > Security & Privacy > General.

Ok, maybe. Thank you very much. I have abandoned this stick. Some performance checks with ncapzopp (?) did show me, that the stick will not even reach the processing power of my middle class Asus Intel i5 laptop. That was what I wanted to achieve. 

Thanks.

Max_L_Intel
Moderator

Hello Neil.

I'm not on a Mac, so this suggestion doesn't apply to me.

The Homebrew commands were provided for macOS Catalina. On Ubuntu, you could use the following command instead:

sudo apt-get purge --remove libprotobuf* 

Ok, maybe. Thank you very much. I have abandoned this stick. Some performance checks with ncapzopp (?) did show me, that the stick will not even reach the processing power of my middle class Asus Intel i5 laptop. That was what I wanted to achieve. 

The ncappzoo is just a repository of community-created demos that showcase the functionality.
To measure performance, please use the benchmark_app tool instead: https://docs.openvinotoolkit.org/latest/_inference_engine_tools_benchmark_tool_README.html
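For reference, a typical invocation might look like the sketch below. Hedged: the tool path matches a default 2020.1 layout and the model path matches the FP16 IR generated earlier in this thread; both are assumptions about your setup, and the run itself needs the NCS2 plugged in.

```shell
# Benchmark the FP16 SqueezeNet IR on the NCS2 (MYRIAD device).
BENCH=/opt/intel/openvino/deployment_tools/tools/benchmark_tool/benchmark_app.py
MODEL="${HOME}/squeezenet1.1_FP16/squeezenet1.1.xml"
if [ -f "$BENCH" ] && [ -f "$MODEL" ]; then
  python3 "$BENCH" -m "$MODEL" -d MYRIAD -niter 100 -api async
else
  echo "benchmark_app or model not found -- check the OpenVINO install paths"
fi
```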

Intel Neural Compute Sticks target AI inference applications in the low-power segment. You can check a performance comparison (including efficiency in FPS per watt) across different hardware devices here: https://docs.openvinotoolkit.org/latest/_docs_performance_benchmarks.html

Best regards, Max.
