Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Different results with same IR model in NCS1 and NCS2

pjvazquez
Novice

I created and trained a Keras YOLO v3 model and tested it on CPU and on an NCS1, with similar results but big differences in speed. As the NCS1 is discontinued, I got an NCS2 to test the same IR and, surprisingly, the results changed dramatically.

I'm using OpenVINO 2020 R4 on macOS Catalina with Python 3.7.7 and TensorFlow 1.15.

 

To freeze the model, I used this code:

import os
import tensorflow as tf

def freeze_graph(graph, session, output, pb_dir=save_path, pb_name='frozen_model.pb', pb_as_text=False):
    if not os.path.exists(pb_dir):
        os.makedirs(pb_dir)
    with graph.as_default():
        # Fold variables into constants and strip training-only nodes
        graphdef_frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
            session, session.graph.as_graph_def(), output)
        graphdef_inf = tf.compat.v1.graph_util.remove_training_nodes(graphdef_frozen)
        tf.io.write_graph(graphdef_inf, pb_dir, pb_name, as_text=pb_as_text)
        return graphdef_frozen
 
 

I converted the .pb model with:

/Users/pjvazquez/opt/anaconda3/envs/trackinc_tensorrt/bin/python {mo_tf_path} --input_model {pb_file} --output_dir {output_dir} --input {input_layer} --input_shape {input_shape1_str} --data_type {PRECISSION}

obtaining this result:
 
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /Users/pjvazquez/Documents/TrackIn_Project/trackin_keras_openvino/./Models/2020R3/FP16/960/frozen_model.pb
- Path for generated IR: /Users/pjvazquez/Documents/TrackIn_Project/trackin_keras_openvino/./Models/2020R3/FP16/960/
- IR output name: frozen_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: input_1_1
- Output layers: Not specified, inherited from the model
- Input shapes: [1,416,416,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: None
Model Optimizer version:
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /Users/pjvazquez/Documents/TrackIn_Project/trackin_keras_openvino/./Models/2020R3/FP16/416/frozen_model.xml
[ SUCCESS ] BIN file: /Users/pjvazquez/Documents/TrackIn_Project/trackin_keras_openvino/./Models/2020R3/FP16/416/frozen_model.bin
[ SUCCESS ] Total execution time: 14.10 seconds.
[ SUCCESS ] Memory consumed: 1664 MB.
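For reference, the format-string command above expands to something like the following sketch. The Model Optimizer path is a hypothetical placeholder; the layer name, input shape, and precision are taken from the log above. Substitute your own values:

```python
import shlex

# Hypothetical Model Optimizer install path; adjust to your environment
mo_tf_path = "/opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py"

cmd = [
    "python", mo_tf_path,
    "--input_model", "frozen_model.pb",  # the frozen graph produced above
    "--output_dir", "Models/FP16/416",
    "--input", "input_1_1",              # input layer name from the MO log
    "--input_shape", "[1,416,416,3]",
    "--data_type", "FP16",
]
print(" ".join(shlex.quote(c) for c in cmd))
```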
 
 

Is there any special reason for this difference?

Is there any way to get the same results?

JesusE_Intel
Moderator

Hi pjvazquez,


Thanks for reaching out. Please try to add --reverse_input_channels to your model optimizer command. Could you provide the output on the Intel Movidius Neural Compute Stick, Intel Neural Compute Stick 2 and CPU for comparison? Also, would it be possible to share your frozen TensorFlow model to reproduce on my end?


Regards,

Jesus
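When collecting the CPU, NCS1, and NCS2 outputs for comparison, a quick elementwise diff of the raw output blobs makes the differences concrete. A minimal sketch; the toy numbers below are hypothetical stand-ins for two devices' flattened outputs:

```python
def max_abs_diff(out_a, out_b):
    """Largest elementwise difference between two flattened output blobs."""
    assert len(out_a) == len(out_b), "blobs must have the same shape"
    return max(abs(a - b) for a, b in zip(out_a, out_b))

# Toy data standing in for (flattened) CPU and NCS2 outputs
cpu_out = [0.10, 0.52, 0.90]
ncs2_out = [0.11, 0.50, 0.91]
diff = max_abs_diff(cpu_out, ncs2_out)
print(f"max abs diff: {diff:.3f}")
```

A large diff on real blobs would confirm the devices are diverging rather than, say, a post-processing difference.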


pjvazquez
Novice

Thanks, I included --reverse_input_channels in the mo command and I'm testing it.

I can't give you both outputs yet; I'm still testing the system.

https://synapsestech146-my.sharepoint.com/:u:/g/personal/pj_vazquez_synapses_tech/EZKchywPJ8tDnZgQIL0hkJAB4qH8xjiBmnWLsX7k31gyoA?e=jfEXTK

This is the link to the TF frozen pb file (200MB)

JesusE_Intel
Moderator

Hi pjvazquez,


Thanks for sharing the TensorFlow frozen model; I was able to convert it to OpenVINO IR format. However, I was not able to test it using our Yolo V3 demo application, as the model seems to have a different architecture than the supported Darknet implementation of Yolo V3.


For debug purposes, could you try turning off VPU_HW_STAGES_OPTIMIZATION? In Python, place the following line before loading the network.


ie.set_config({'VPU_HW_STAGES_OPTIMIZATION': 'NO'}, "MYRIAD")


Regards,

Jesus


pjvazquez
Novice

Hi Jesus, yes, I'll do it.

You are right, the YOLO model was implemented in Keras based on this repo:

https://github.com/pjvazquez/head-detection-using-yolo

Next week I'll be able to obtain the outputs from both sticks and have data to compare them.

Thanks

pjvazquez
Novice

I turned off VPU_HW_STAGES_OPTIMIZATION and it looks like everything is OK now.

Could you please tell us where we can find a description of what these parameters do? I found this, but it is not very clarifying:

https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_supported_plugins_VPU.html

 

Thanks a lot

JesusE_Intel
Moderator

Hi pjvazquez,


The VPU_HW_STAGES_OPTIMIZATION option is only meant for internal debug purposes. We are working on updating the documentation and API. A model that runs correctly only after turning VPU_HW_STAGES_OPTIMIZATION off likely indicates a bug. However, in this case, the YOLOv3 model you are using has not been validated with OpenVINO.


Hope this answers your question.


Regards,

Jesus


JesusE_Intel
Moderator

Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question. 

