Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

[ ERROR ] data [FeatureExtractor/InceptionV2/strided_slice/Output_0/Data__const] doesn't exist on NCS 2

Bhatt__Samir
Beginner

Hi,

I have run mo_tf.py successfully and got the .xml file with the following command: " python3 mo.py --input_model inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config ~/Downloads/Samir/ssd_inception_v2_coco_2018_01_28/pipeline.config "

Now I am running it on the NCS 2. I am able to run the demo code on the NCS 2, but I can't run it with my model files. Below is the error I am getting. Please guide me.

*@XYZ:~/inference_engine_samples_build/intel64/Release$ ./classification_sample_async -d MYRIAD -i /home/ABC/intel/openvino/deployment_tools/demo/car.png -m /opt/intel/openvino_2019.3.376/deployment_tools/model_optimizer/./inference_graph.xml
[ INFO ] InferenceEngine:
    API version ............ 2.1
    Build .................. 37988
    Description ....... API
[ INFO ] Parsing input parameters
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /home/ABC/intel/openvino/deployment_tools/demo/car.png
[ INFO ] Creating Inference Engine
    MYRIAD
    myriadPlugin version ......... 2.1
    Build ........... 37988

[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (787, 259) to (300, 300)
[ INFO ] Batch size is 1
[ INFO ] Loading model to the device
[ ERROR ] data [FeatureExtractor/InceptionV2/strided_slice/Output_0/Data__const] doesn't exist

Specification:

Operating System - Ubuntu 18.04.x, 64-bit, Intel® Core™ i7-8700 CPU @ 3.20GHz × 12

Pre-trained Model - TensorFlow SSD Inception V2

[Note: I am facing this issue with all TensorFlow models. Caffe models are working fine on NCS 2.]

David_C_Intel
Employee

Hi Samir,

Thanks for reaching out.

It seems you are running an object detection model, but you are using the classification_sample_async sample. You should use the object_detection_ssd_async demo.
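For reference, a typical invocation of that demo on the NCS 2 would look roughly like the command below. This is only a sketch: in recent releases the binary is built as object_detection_demo_ssd_async, and its -i option usually expects a video file or camera input rather than a single image, so adjust the name, paths, and input to your own build:

    ./object_detection_demo_ssd_async -d MYRIAD -i <video_file_or_cam> -m <path_to>/inference_graph.xml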

Also, we recommend installing the latest OpenVINO™ toolkit release (version 2020.1), as you are currently using the 2019 R3.1 version.

If you have any additional questions, feel free to ask.

Best Regards,

David

Bhatt__Samir
Beginner

Hi,

I tried these two things, which worked for me:

1. Updated the OpenVINO™ toolkit to the 2020.1 version.

2. Used the object-detection-specific demo (e.g. object_detection_ssd_async).

These were just the initial steps. Now I want to run my custom TensorFlow Inception V2 model on the NCS 2. Will it work?

If yes, then let me know the required files and steps.

If not, then please guide me in the correct direction.

Previously I tried a custom .caffemodel and that worked easily on the NCS 2. Is this issue with custom models specific to TensorFlow?

Thanks in advance.

David_C_Intel
Employee

Hi Samir,

Thanks for your reply. 

If you want to use a TensorFlow model, you will have to get the frozen model, convert it to IR format using the Model Optimizer tool, and then run inference with the Intel® NCS2. You can check this Model Optimizer guide about converting a TensorFlow model; there you can see that the Inception V2 topology is supported and should work.
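As a rough illustration of the last step, a minimal Python sketch for loading the converted IR on the NCS2 through the Inference Engine Python API could look like the one below. The file names are placeholders, the single-image-input assumption matches SSD models converted from the Object Detection API, and ie.read_network may need to be replaced by the IENetwork constructor on older releases:

    from openvino.inference_engine import IECore
    import cv2
    import numpy as np

    # Placeholders for the IR files produced by the Model Optimizer.
    model_xml = "inference_graph.xml"
    model_bin = "inference_graph.bin"

    ie = IECore()
    # On older releases (e.g. 2019 R3), use IENetwork(model=..., weights=...) instead.
    net = ie.read_network(model=model_xml, weights=model_bin)

    # SSD models converted from the Object Detection API have a single image input.
    input_blob = next(iter(net.inputs))
    n, c, h, w = net.inputs[input_blob].shape

    # Compile the network for the Intel NCS2 (MYRIAD plugin).
    exec_net = ie.load_network(network=net, device_name="MYRIAD")

    # Prepare one image: resize to the network input size and convert HWC -> NCHW.
    image = cv2.imread("car.png")
    blob = cv2.resize(image, (w, h)).transpose((2, 0, 1))[np.newaxis, ...]

    # Run inference; for SSD the detection output has shape [1, 1, N, 7].
    results = exec_net.infer(inputs={input_blob: blob})
    print(list(results.keys()))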

Regards,

David

Bhatt__Samir
Beginner

Thanks for the steps. So, according to you, if I train any custom TensorFlow model, it should work on the NCS 2 with the steps you mentioned.

David_C_Intel
Employee

Hi Samir,

Yes, it should work on the NCS 2. If you have additional questions, let us know.

Best regards,

David

Bhatt__Samir
Beginner

Hi,

I am using my custom model (a *.pb file). I am facing the issue below during IR conversion with the latest OpenVINO™ release.

python3 mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config pipeline.config  --reverse_input_channels

[ ERROR ]  Failed to match nodes from custom replacement description with id 'ObjectDetectionAPISSDPostprocessorReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ]  Cannot infer shapes or values for node "Postprocessor/Cast_1".
[ ERROR ]  0
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function Cast.infer at 0x7f6736b2cf28>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ANALYSIS INFO ]  Your model looks like TensorFlow Object Detection API Model.
Check if all parameters are specified:
    --tensorflow_use_custom_operations_config
    --tensorflow_object_detection_api_pipeline_config
    --input_shape (optional)
    --reverse_input_channels (if you convert a model to use with the Inference Engine sample applications)
Detailed information about conversion of this model can be found at
https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "Postprocessor/Cast_1" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.

Also, please let me know the further steps for running it on the NCS 2.

[Note: I have trained the model by following this link: https://github.com/tensorflow/models/issues/4932 ]

David_C_Intel
Employee

Hi Samir,

Thank you for your reply.

  1. You need to use this json file, as you are using a custom model: ssd_support_api_v1.14.json
  2. Regarding the input shape, you have to specify it with the --input_shape flag or use the --batch 1 flag.
  3. As you are trying to run inference on the Intel® NCS2, you have to specify the --data_type FP16 flag (a combined command sketch follows after this list).
  4. Check this documentation for more information: Converting TensorFlow* Object Detection API Models
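Putting those flags together with your earlier command, the conversion would look roughly like this (a sketch only: the file paths are placeholders for your own frozen graph and pipeline config):

    python3 mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support_api_v1.14.json --tensorflow_object_detection_api_pipeline_config pipeline.config --reverse_input_channels --data_type FP16 --batch 1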

Regards,

David
