Lee__Terry
Beginner
391 Views

OpenVINO 2019 R1 internal error

I got the following error message when generating IR with 2019 R1. The same command runs fine on 2018 R5.

Error message:

Common parameters:
        - Path to the Input Model:      C:\t\y11\output_inference_graph.pb\frozen_inference_graph.pb
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer\.
        - IR output name:       frozen_inference_graph
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         [1,300,300,3]
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP16
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  C:\t\y11\output_inference_graph.pb\pipeline.config
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  C:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json
Model Optimizer version:        2019.1.0-341-gc9b66a2

The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  [Errno 13] Permission denied: 'C:\\Program Files (x86)\\IntelSWTools\\openvino_2019.1.087\\deployment_tools\\model_optimizer\\.\\frozen_inference_graph.bin'
[ ERROR ]  Traceback (most recent call last):
 

Command:

python  mo_tf.py --input_model C:\t\y11\output_inference_graph.pb\frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config C:\t\y11\output_inference_graph.pb\pipeline.config --tensorflow_use_custom_operations_config "C:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json" --input_shape=[1,300,300,3] --data_type FP16
 

Did I miss something simple?

Thanks,

Terry

Shubha_R_Intel
Employee

Dear Terry:

I see the following error:

[Errno 13] Permission denied: 'C:\\Program Files (x86)\\IntelSWTools\\openvino_2019.1.087\\deployment_tools\\model_optimizer\\.\\frozen_inference_graph.bin'

Please open your terminal as Administrator; that should solve the problem. Alternatively, create your IR in a directory where you have write permission by passing the --output_dir argument. You got the error because you're trying to create the IR inside the OpenVINO installation directory, which a non-elevated user cannot write to.
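For example, your original command with --output_dir added could look like this (the output path C:\t\y11\ir here is just an example; point it at any directory you can write to):

```shell
python mo_tf.py ^
  --input_model C:\t\y11\output_inference_graph.pb\frozen_inference_graph.pb ^
  --tensorflow_object_detection_api_pipeline_config C:\t\y11\output_inference_graph.pb\pipeline.config ^
  --tensorflow_use_custom_operations_config "C:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json" ^
  --input_shape=[1,300,300,3] ^
  --data_type FP16 ^
  --output_dir C:\t\y11\ir
```

With --output_dir set, the generated frozen_inference_graph.xml and .bin land in that directory instead of the Model Optimizer's own folder under Program Files.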

Thanks,

Shubha

 

Lee__Terry
Beginner

It was the permission problem. Thanks,
