Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

TensorFlow error: Unexpected exception happened during extracting attributes for node map/while/decode_image/is_jpeg/Substr

Kasinathan__Deepa

Windows 10 Enterprise, Intel Core i7-8650U CPU @ 1.90 GHz
openvino_2019.3.379, TensorFlow 1.14

Hello,

I downloaded the frozen TensorFlow Object Detection API model "faster_rcnn_resnet50_coco_2018_01_28.tar" and was able to convert it successfully. We then retrained the model on our own images. Unfortunately, I am unable to convert the custom-retrained model. This is the error I get:

[ ERROR ]  'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)
Unexpected exception happened during extracting attributes for node map/while/decode_image/is_jpeg/Substr.
Original exception message: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)

Could you please help me debug this error? Thanks. Below I provide the console output from the successful conversion of "faster_rcnn_resnet50_coco_2018_01_28.tar" and from the unsuccessful attempt with the retrained model.

PS C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer> python .\mo_tf.py --input_model=C:\Users\deepa.kasinathan\Downloads\faster_rcnn_resnet50_coco_2018_01_28.tar\faster_rcnn_resnet50_coco_2018_01_28\frozen_inference_graph.pb --tensorflow_use_custom_operations_config  'C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support.json' --tensorflow_object_detection_api_pipeline_config 'C:\Users\deepa.kasinathan\Downloads\faster_rcnn_resnet50_coco_2018_01_28.tar\faster_rcnn_resnet50_coco_2018_01_28\pipeline.config' --reverse_input_channels
Model Optimizer arguments:

Common parameters:
        - Path to the Input Model:      C:\Users\deepa.kasinathan\Downloads\faster_rcnn_resnet50_coco_2018_01_28.tar\faster_rcnn_resnet50_coco_2018_01_28\frozen_inference_graph.pb
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\.
        - IR output name:       frozen_inference_graph
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       True
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  C:\Users\deepa.kasinathan\Downloads\faster_rcnn_resnet50_coco_2018_01_28.tar\faster_rcnn_resnet50_coco_2018_01_28\pipeline.config
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support.json
Model Optimizer version:        2019.3.0-408-gac8584cb7
2020-01-02 09:42:34.782298: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found
2020-01-02 09:42:34.787208: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (600, 600).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
The graph output nodes "num_detections", "detection_boxes", "detection_classes", "detection_scores" have been replaced with a single layer of type "Detection Output". Refer to IR catalogue in the documentation for information about this layer.

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\.\frozen_inference_graph.xml
[ SUCCESS ] BIN file: C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\.\frozen_inference_graph.bin
[ SUCCESS ] Total execution time: 75.58 seconds.

 

PS C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer> python .\mo_tf.py --input_model=C:\Users\deepa.kasinathan\Desktop\frcnn_tassen_new\model_export\frozen_inference_graph.pb --tensorflow_use_custom_operations_config  'C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support_api_v1.10.json' --tensorflow_object_detection_api_pipeline_config 'C:\Users\deepa.kasinathan\Desktop\frcnn_tassen_new\model_export\pipeline.config' --reverse_input_channels
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      C:\Users\deepa.kasinathan\Desktop\frcnn_tassen_new\model_export\frozen_inference_graph.pb
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\.
        - IR output name:       frozen_inference_graph
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       True
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  C:\Users\deepa.kasinathan\Desktop\frcnn_tassen_new\model_export\pipeline.config
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support_api_v1.10.json
Model Optimizer version:        2019.3.0-408-gac8584cb7
2020-01-02 09:55:03.581111: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found
2020-01-02 09:55:03.587276: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
[ ERROR ]  'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)
Unexpected exception happened during extracting attributes for node map/while/decode_image/is_jpeg/Substr.
Original exception message: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)

2 Replies
JesusE_Intel
Moderator

Hi Deepa,

I have not seen this error before. How did you freeze the model? Have you tried using the faster_rcnn_support_api_v1.14.json configuration file?

Have you taken a look at this guide for training a custom Faster R-CNN model?
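In case it helps narrow things down: the failing node sits under map/while/decode_image, which is the image-decoding preprocessing the Object Detection API exporter adds for string inputs (the 0xff byte in the error message is likely the first byte of the JPEG magic number \xff\xd8\xff that this subgraph compares against). You can check how a graph was frozen by inspecting its input placeholder; the following is a minimal one-liner sketch using the TF 1.x API (adjust the .pb path to your model):

python -c "import tensorflow as tf; gd = tf.GraphDef(); gd.ParseFromString(open('frozen_inference_graph.pb', 'rb').read()); print([(n.name, n.attr['dtype'].type) for n in gd.node if n.op == 'Placeholder'])"

A graph exported with --input_type image_tensor should show a single image_tensor placeholder with dtype 4 (DT_UINT8); a placeholder with dtype 7 (DT_STRING), such as encoded_image_string_tensor, means the decoding subgraph is present and Model Optimizer will trip over it.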

Regards,

Jesus

 

Kasinathan__Deepa

Hi Jesus,

Thanks for the reply. From the guide you linked, we found that we had made a mistake with the "--input_type" argument when we froze the model. Now we are able to convert our model.
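For anyone who hits the same error: a typical corrected export command looks like the sketch below, with --input_type image_tensor (paths and checkpoint number are placeholders, not our actual ones; export_inference_graph.py is the standard TF 1.x Object Detection API exporter):

python export_inference_graph.py --input_type image_tensor --pipeline_config_path C:\path\to\pipeline.config --trained_checkpoint_prefix C:\path\to\model.ckpt-NNNN --output_directory C:\path\to\model_export

With a string input type (for example encoded_image_string_tensor), the exporter inserts the map/while/decode_image preprocessing nodes that Model Optimizer failed on.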

Deepa
