Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Cannot parse future versions -- matching versions between Win 10 and Pi

Alpeyev__Pavel
Beginner

Hi,

The error reads:

terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
  what():  Error reading network: cannot parse future versions: 6
Aborted

I'm creating an IR of a Darkflow model I trained myself on a Windows 10 computer and then trying to run it on a Raspberry Pi.
Some posts suggested it had to do with the version difference between the machines, but trying different installations produced the same error.

The details:
- the Pi is running 2019.3.334 (I'm pretty sure, though I don't know how to confirm exactly)
- the Windows 10 machine had 2019.3.379, but I also tried creating an IR with 2019.3.334 and 2019.2.275
- this is the command I used to create the IR on the Windows machine:

python mo_tf.py --input_model yolo-chubba.pb --batch 1 --tensorflow_use_custom_operations_config C:/"Program Files (x86)"/IntelSWTools/openvino_2019.3.379/deployment_tools/model_optimizer/extensions/front/tf/yolo_v2.json


- each time, the xml that came out still read version="6"
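
For reference, the version shows up in the opening tag of the generated .xml; it looks roughly like this (attributes abbreviated):

<net batch="1" name="yolo-chubba" version="6">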

Could it be that the version mismatch isn't a problem?
Or am I failing somewhere in the installation process on the Windows machine and actually generating the same version over and over?

Any help would be greatly appreciated.

Thank you in advance,

Pasha

Sahira_Intel
Moderator

Hi Pasha,

You mentioned that you tried using different versions of OpenVINO to convert to IR - are you erasing the previous versions when trying a newer one? Try completely erasing every version of OpenVINO except the latest one on both machines and try again. Older versions of the software installed on the same system might be interfering with the latest one. 
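
If the uninstaller leaves folders behind under C:\Program Files (x86)\IntelSWTools, you can remove them by hand before reinstalling. Something along these lines should work (adjust the folder names to whatever is left on your system):

rmdir /s /q "C:\Program Files (x86)\IntelSWTools\openvino_2019.2.275"
rmdir /s /q "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.334"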

If that still does not work, could you please attach your model so I can try to run it on my end (if you would rather not share it publicly, please let me know and I will send you a PM).

Best Regards,

Sahira 

Alpeyev__Pavel
Beginner

Sahira,

Thank you for the response. 

I was just uninstalling from the Windows Add & Remove Programs screen. I did notice that some of the folders remained.
Will try deleting everything completely and report back with the results.

Best,

Pavel

Alpeyev__Pavel
Beginner

Sahira,

Tried again with a clean wipe between OpenVINO installs on Windows -- still getting the same error.

Since it doesn't look like I can attach the files here, here's a link to a Google Drive folder.
It has my IR files for 334 and 275, as well as the initial .pb and .meta.

In case it helps, here are the readouts from running mo_tf.py on Windows.

2.275

C:\Program Files (x86)\IntelSWTools\openvino_2019.2.275\deployment_tools\model_optimizer>python mo_tf.py --input_model yolo-chubba.pb --batch 1 --tensorflow_use_custom_operations_config C:/"Program Files (x86)"/IntelSWTools/openvino_2019.2.275/deployment_tools/model_optimizer/extensions/front/tf/yolo_v2.json
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      C:\Program Files (x86)\IntelSWTools\openvino_2019.2.275\deployment_tools\model_optimizer\yolo-chubba.pb
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino_2019.2.275\deployment_tools\model_optimizer\.
        - IR output name:       yolo-chubba
        - Log level:    ERROR
        - Batch:        1
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  C:/Program Files (x86)/IntelSWTools/openvino_2019.2.275/deployment_tools/model_optimizer/extensions/front/tf/yolo_v2.json
Model Optimizer version:        2019.2.0-436-gf5827d4
2020-01-09 06:24:39.244701: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll


[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: C:\Program Files (x86)\IntelSWTools\openvino_2019.2.275\deployment_tools\model_optimizer\.\yolo-chubba.xml
[ SUCCESS ] BIN file: C:\Program Files (x86)\IntelSWTools\openvino_2019.2.275\deployment_tools\model_optimizer\.\yolo-chubba.bin
[ SUCCESS ] Total execution time: 19.66 seconds.

3.334

C:\Program Files (x86)\IntelSWTools\openvino_2019.3.334\deployment_tools\model_optimizer>python mo_tf.py --input_model yolo-chubba.pb --batch 1 --tensorflow_use_custom_operations_config C:/"Program Files (x86)"/IntelSWTools/openvino_2019.3.334/deployment_tools/model_optimizer/extensions/front/tf/yolo_v2.json
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      C:\Program Files (x86)\IntelSWTools\openvino_2019.3.334\deployment_tools\model_optimizer\yolo-chubba.pb
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino_2019.3.334\deployment_tools\model_optimizer\.
        - IR output name:       yolo-chubba
        - Log level:    ERROR
        - Batch:        1
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  C:/Program Files (x86)/IntelSWTools/openvino_2019.3.334/deployment_tools/model_optimizer/extensions/front/tf/yolo_v2.json
Model Optimizer version:        2019.3.0-375-g332562022
2020-01-09 05:51:51.926704: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll


[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: C:\Program Files (x86)\IntelSWTools\openvino_2019.3.334\deployment_tools\model_optimizer\.\yolo-chubba.xml
[ SUCCESS ] BIN file: C:\Program Files (x86)\IntelSWTools\openvino_2019.3.334\deployment_tools\model_optimizer\.\yolo-chubba.bin
[ SUCCESS ] Total execution time: 18.79 seconds.

I'd appreciate it if you could take a look.

Thank you,

Pasha

Sahira_Intel
Moderator

Hi Pasha,

While I'm still looking into this, I did notice that you didn't specify which data type to convert your model to. This is done with the --data_type flag. The Model Optimizer defaults to FP32 if the flag is not specified, which is the precision used when running inference on the CPU. To run inference on the Raspberry Pi, you will need a Neural Compute Stick (1 or 2) and an FP16 model, converted with the flag --data_type FP16.
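
In your case, that would be the same command you ran before with the data type flag added, something like:

python mo_tf.py --input_model yolo-chubba.pb --batch 1 --data_type FP16 --tensorflow_use_custom_operations_config C:/"Program Files (x86)"/IntelSWTools/openvino_2019.3.379/deployment_tools/model_optimizer/extensions/front/tf/yolo_v2.json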

I hope this is helpful.

Best,
Sahira 

Alpeyev__Pavel
Beginner

Sahira,

That's super helpful!
I do plan to use a Movidius Neural Compute Stick on the Pi. Let me try your suggestion and report back.

Thank you again for the help,

Pasha

Alpeyev__Pavel
Beginner

Turns out I had a much older version of OpenVINO running on my Pi than I thought (/  -)
For other noobs like me, the fastest way to find the version is to look at version.txt in /home/pi/openvino/inference_engine_vpu_arm/deployment_tools/inference_engine.
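
On the Pi, that's a one-liner:

cat /home/pi/openvino/inference_engine_vpu_arm/deployment_tools/inference_engine/version.txt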

I ended up re-flashing the Pi from scratch with a newer version. On to the next problem ;)

Thank you for all the help ~
