davius
Beginner

Differences between Keras and OpenVINO results

Hello!

I'm using OpenVINO as part of a self-driving car project.

I've successfully implemented OpenVINO, but I'm getting different results between my original Keras model (or its TensorFlow version) and the OpenVINO version.

 

The model is based on Keras, using the following layer declarations:

    drop = 0.2
    img_in = Input(shape=input_shape, name='img_in')
    x = img_in
    x = conv2d(24, 5, 2, 1)(x)
    x = Dropout(drop)(x)
    x = conv2d(32, 5, 2, 2)(x)
    x = Dropout(drop)(x)
    x = conv2d(64, 5, 2, 3)(x)
    x = Dropout(drop)(x)
    x = conv2d(64, 3, l4_stride, 4)(x)
    x = Dropout(drop)(x)
    x = conv2d(64, 3, 1, 5)(x)
    x = Dropout(drop)(x)
    x = Flatten(name='flattened')(x)
    x = Dense(100, activation='relu', name='dense_1')(x)
    x = Dropout(drop)(x)
    x = Dense(50, activation='relu', name='dense_2')(x)
    x = Dropout(drop)(x)
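Note that `conv2d(...)` above is not the stock Keras layer; its definition is not shown in the post. A hypothetical reconstruction matching the `conv2d(filters, kernel, strides, layer_num)` call signature used above would be:

```python
from tensorflow.keras.layers import Convolution2D

# Hypothetical helper matching the conv2d(filters, kernel, strides, layer_num)
# calls above; the actual definition is not shown in the post.
def conv2d(filters, kernel, strides, layer_num, activation='relu'):
    return Convolution2D(filters=filters,
                         kernel_size=(kernel, kernel),
                         strides=(strides, strides),
                         activation=activation,
                         name='conv2d_' + str(layer_num))
```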

 

The input is an image and the output is two numbers between -1 and 1.

 

Keras values with a test image as input:

0.0007655650842934847
0.4871128797531128

OpenVINO values with the same image as input:

0.6232800483703613 --> Should be close to 0.
0.450412392616272
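One way to judge whether a mismatch like this is mere numerical noise or a real bug is to look at the error magnitude. A minimal sketch using the two vectors above:

```python
import numpy as np

# The two output vectors quoted above.
keras_out = np.array([0.0007655650842934847, 0.4871128797531128])
openvino_out = np.array([0.6232800483703613, 0.450412392616272])

# FP16 rounding and minor numerical differences typically stay well
# below ~1e-2; an error of ~0.62 on one output points at a preprocessing
# or layout bug (channel order, NHWC vs NCHW, scaling) rather than
# precision loss.
abs_err = np.abs(keras_out - openvino_out)
print(abs_err.max() > 1e-2)  # True: far too large to be precision loss
```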

 

Here is the conversion command line I use:

    python mo_tf.py --input_model "model.pb" --batch 1 

 

I've tried various options (--reverse_input_channels, --scale 1, ...), without success.
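For what it's worth, the usual suspects for a mismatch of this size are channel order (RGB vs BGR) and tensor layout (Keras uses NHWC, while the IR input expects NCHW). A minimal sketch, assuming a 224x288 3-channel input, of the conversion a custom inference program needs to perform before feeding the IR:

```python
import numpy as np

def keras_to_ir_input(img_hwc, reverse_channels=False):
    """Turn an HWC image (as fed to Keras) into the (1, C, H, W) NCHW
    batch that an OpenVINO IR input expects.

    Set reverse_channels=True only if the IR was converted WITHOUT
    --reverse_input_channels and the image was loaded in the opposite
    channel order (e.g. BGR via OpenCV vs RGB at training time)."""
    x = np.asarray(img_hwc, dtype=np.float32)
    if reverse_channels:
        x = x[..., ::-1]               # swap RGB <-> BGR
    x = np.transpose(x, (2, 0, 1))     # HWC -> CHW
    return x[np.newaxis, ...]          # add batch dim -> NCHW

# Example with the 224x288x3 shape from this thread:
img = np.zeros((224, 288, 3), dtype=np.uint8)
print(keras_to_ir_input(img).shape)  # (1, 3, 224, 288)
```

If the Keras model was also trained on scaled inputs (e.g. pixels divided by 255), the same scaling has to be applied on the OpenVINO side, or baked into the IR at conversion time.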

I'm using the latest OpenVINO version, on Python 3.7.10 under Windows, with CPU or MYRIAD inference.

 

Any idea where the difference could come from?

 

You can find attached a zip file with the original Keras model frozen as a .pb file, the OpenVINO model, and a test image.

 

Thanks!

David.

Iffa_Intel
Moderator

Hi,


May I know your model's topology?

It would be good if you could provide a link to it, to ensure it is supported.

Also, which OpenVINO Inference Engine demo did you use to run this?



Sincerely,

Iffa



davius
Beginner

Hello,

It's a custom model, using the layers detailed in the first post (all of them seem to be supported).

I'm not using the Inference Engine demo code, but a custom program.

David.

Iffa_Intel
Moderator

Please note that the validated supported topologies for OpenVINO are listed here:

https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Mode...

 

I noticed that you converted your model with --batch 1; the conversion succeeds, but I believe this parameter alone is insufficient.

 

If you run the Model Optimizer without --batch 1 (python3 mo_tf.py --input_model <INPUT_MODEL>.pb), you will see errors such as:

 

[ ERROR ] Shape [ -1 224 288  3] is not fully defined for output 0 of "img_in". Use --input_shape with positive integers to override model input shapes.

[ ERROR ] Cannot infer shapes or values for node "img_in".

[ ERROR ] Not all output shapes were inferred or fully defined for node "img_in".

 

The input shape is really important for your model to work properly.

When using the Model Optimizer, try to specify your input shape and also set your model precision (FP32 or FP16).

 

I'm able to get some output from your converted model, but since I couldn't load the original TF file directly into the sample and benchmark applications in OpenVINO, I couldn't validate your comparison.

 

Sincerely,

Iffa

 

 

 

davius
Beginner

Hello,

Thank you for this feedback.

It's strange; I don't get these errors when I run the Model Optimizer.

I tried the following parameters with the Model Optimizer, but I get the same result:

    python mo_tf.py --input_model ".\models\model.pb" --output_dir "models" --model_name "model-optim" --data_type FP32 --input_shape=[1,224,288,3]

 

The outputs you should get using the provided image.jpg are the following.

Keras values:

0.0007655650842934847
0.4871128797531128

OpenVINO values with the same image as input:

0.6232800483703613 --> Should be close to 0.
0.450412392616272

 

David.

Iffa_Intel
Moderator

Hi,

 

It seems that you managed to convert your model to IR successfully, and the benchmark app generates its report successfully.

 

As for accuracy, it seems that you are using a custom topology which has not been tested or optimized, and thus it might cause some accuracy loss. Our recommendation would be to choose an appropriate NN model/topology from the list of supported TensorFlow topologies, as these topologies have been verified, and then to apply it to your respective use case.

 

Sincerely,

Iffa

 

davius
Beginner

Hello,

 

Thank you for this feedback.

 

David.

Iffa_Intel
Moderator

If you don't have any further inquiries, shall I close this thread?


Sincerely,

Iffa


davius
Beginner

Yes, thank you.

Iffa_Intel
Moderator

Greetings,


Intel will no longer monitor this thread since this issue has been resolved. If you need any additional information from Intel, please submit a new question.


Sincerely,

Iffa

