
Incorrect output from Inference Engine

When using Keras' ResNet50 model with the top (softmax) layer removed and a Dense layer added, the Model Optimizer is able to convert the resulting TensorFlow model.
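For reference, the model described above is built roughly like this (a minimal sketch; the Dense layer size, its activation, and weights=None are illustrative assumptions, since the actual code is in the attached sample):

```python
# Sketch of the model in question: Keras ResNet50 without its top
# (softmax) layer, with a new Dense classifier appended.
# weights=None and the 10-unit Dense layer are assumptions.
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

# Input shape matches the --input_shape "(1,182,182,3)" passed to mo.py.
base = ResNet50(weights=None, include_top=False, pooling='avg',
                input_shape=(182, 182, 3))
outputs = Dense(10, activation='softmax')(base.output)
model = Model(inputs=base.input, outputs=outputs)
```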

The outputs from the Keras model and the TensorFlow model are identical, as expected.

The output from the Inference Engine, however, is quite different.

I have created a code sample to reproduce the issue; please see the attached sample.

Python dependencies are listed in requirements.txt.

To generate the TensorFlow and Keras dummy models:
python generate_dummy_models.py

This will generate the dummy models:
models/tf/dummy.pb
models/keras/dummy.h5

Generate the MO model from models/tf/dummy.pb:
/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer$ sudo python3 mo.py --input_model PATH_TO_EXTRACTED_FOLDER/models/tf/dummy.pb --input_shape "(1,182,182,3)"

Then place the generated model files in models/mo/dummy/FP32.

Run the test script and you will see that the outputs differ:
python test.py
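For context, the comparison in test.py amounts to something like the following (a hypothetical sketch with dummy arrays standing in for the outputs; the real script runs the Keras model and the Inference Engine on the same input):

```python
import numpy as np

# Stand-ins for the real outputs; in test.py these would come from
# running the Keras model and the Inference Engine on the same input.
keras_out = np.array([0.1, 0.7, 0.2])
ie_out = np.array([0.4, 0.3, 0.3])

# Element-wise comparison with a small tolerance; a large maximum
# absolute difference indicates the mismatch described above.
match = np.allclose(keras_out, ie_out, atol=1e-4)
max_diff = np.max(np.abs(keras_out - ie_out))
print(match, max_diff)
```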

This may also be related to:
https://software.intel.com/en-us/forums/computer-vision/topic/798732
https://software.intel.com/en-us/forums/computer-vision/topic/797938

1 Reply

I think I must have made a mistake when copying the generated models.

I can no longer reproduce this issue.
