wsla
Beginner

Converted ResNet50 does not work correctly

Hello,

I have fine-tuned ResNet50 for 4-class classification using Keras and converted it to a frozen TensorFlow model. Inference runs as expected using the "label_image.py" script in TensorFlow with these parameters:

python label_image.py \
  --graph="resnet50_20181022_1_converted.pb" \
  --labels="labels.txt" \
  --image="category1.PNG" \
  --input_layer=resnet50_input \
  --output_layer=dense_1/Softmax \
  --input_width=224 \
  --input_height=224

Then I converted the model to IR format for the Compute Stick using:

python mo_tf.py \
  --input_model "resnet50_20181022_1_converted.pb" \
  --output_dir "FP16" \
  --data_type FP16 \
  --input resnet50_input \
  --input_shape [1,224,224,3] \
  --output dense_1/Softmax

Without the --input_shape parameter I was getting "[ ERROR ]  Shape [ -1 224 224   3] is not fully defined for output 0 of "resnet50_input". Use --input_shape with positive integers to override model input shapes.", so I specified the input shape and the conversion seems to run OK. Now, when using the converted model in the classification sample code with the Compute Stick, I am getting:

Top 4 results:
1 nan label #1
3 nan label #3
0 nan label #0
2 nan label #2

"nan" instead of class confidence. And when running using cpu (model converted to FP32):

Top 4 results:
1 1.0000000 label #1
3 0.0000000 label #3
0 0.0000000 label #0
2 0.0000000 label #2

The values I get on the CPU never change, and neither does the order of the classes, in both FP16 on the Compute Stick and FP32 on the CPU. The sample reads the correct number of classes (4) from the model.

I uploaded two files: resnet50_20181022_1_converted, containing the model before model optimization, and FP16, with the optimized model for the Compute Stick.

Any help would be greatly appreciated.

Severine_H_Intel
Employee

Dear Wsla,

I could reproduce your issue. 

It is not specific to the stick; if you run with FP16 on MYRIAD or GPU, you get NaN results.

If you run with FP32 on CPU or GPU, you get the results you mentioned, with probability 1.0 for label #1.

I need to investigate to find the root cause of this and will come back to you.

Best, 

Severine

wsla
Beginner

Hi Severine,

Thank you for your response; I am awaiting further information.

Regards

wsla
Beginner

Hello Severine,

Did you have a chance to look into this further?

Regards

Severine_H_Intel
Employee

Dear Wsla, 

Yes, I have been trying to nail down the issue, but I cannot find the root cause. I also tried your network on Ubuntu with Myriad, and in that case it does not have the NaN issue but gives the same wrong results as the CPU.

I will escalate the issue to our dev team and I hope I can come back to you soon enough with an answer. 

Best, 

Severine

wsla
Beginner

Hi Severine,

Any update on this?

Today I found this article: https://ai.intel.com/unlocking-aws-deeplens-with-the-openvino-toolkit/. It mentions that the Flatten layer is not supported by the Model Optimizer (although no errors are shown during conversion?), but after replacing that layer with Reshape I am still getting exactly the same result.
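For what it's worth, Flatten and a matching Reshape are numerically equivalent (both emit the same row-major flat vector), which is consistent with the swap changing nothing. A minimal pure-Python sketch, with illustrative shapes rather than ResNet50's real ones:

```python
# Sketch: Flatten on a 2x3x4 feature map and Reshape((24,)) give the same
# flat vector, so swapping one layer for the other cannot change the output.
def flatten(tensor):
    """Recursively flatten a nested list, row-major (like Keras Flatten)."""
    if not isinstance(tensor, list):
        return [tensor]
    out = []
    for item in tensor:
        out.extend(flatten(item))
    return out

def reshape_flat(tensor, length):
    """Reshape to a 1-D vector of the given length (like Reshape((length,)))."""
    flat = flatten(tensor)
    assert len(flat) == length, "element count must match target shape"
    return flat

# A toy 2x3x4 "feature map" filled with the values 0..23
fmap = [[[i * 12 + j * 4 + k for k in range(4)] for j in range(3)]
        for i in range(2)]
print(flatten(fmap) == reshape_flat(fmap, 24))  # True
```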

Severine_H_Intel
Employee

Hi Wsla, 

I escalated your issue to our dev team. Your model is fully supported by the Model Optimizer, as it was converted successfully; the Model Optimizer would throw an error in the presence of unsupported layers.

Best, 

Severine

wsla
Beginner

Hi Severine,

I managed to solve this issue. The problem was a missing --scale (256) and --reverse_input_channels, as Keras uses BGR ordering. I would have thought that OpenVINO by default operates on 0..1 inputs like Keras; it would be good to see more examples showing the parameters used in conversion, instead of just compatible model files.
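The effect of those two Model Optimizer options can be mimicked in plain Python to make the preprocessing mismatch concrete. A sketch with toy pixel values (real inputs are 1x3x224x224, and the scale value 256 is the one used in this thread; other pipelines commonly use 255):

```python
# What the two options that fixed the issue do to the input, roughly:
#   --scale 256              -> every input value is divided by 256
#   --reverse_input_channels -> the R and B planes are swapped (RGB <-> BGR)

def apply_scale(pixels, scale=256.0):
    """Mimic --scale: embed a division by `scale` at the network input."""
    return [[c / scale for c in px] for px in pixels]

def reverse_channels(pixels):
    """Mimic --reverse_input_channels: swap channel 0 and channel 2."""
    return [[px[2], px[1], px[0]] for px in pixels]

# Two toy pixels given as [R, G, B] in 0..255, as a sample app would feed them
rgb_pixels = [[255, 128, 0], [0, 64, 255]]

# What the converted network effectively sees with both options applied
prepared = apply_scale(reverse_channels(rgb_pixels))
print(prepared)  # [[0.0, 0.5, 0.99609375], [0.99609375, 0.25, 0.0]]
```

Without these options the network receives raw 0..255 RGB values where it expects scaled BGR ones, which explains confidences pinned at 1.0/0.0 (and NaN in FP16, where the un-scaled activations can overflow).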

Thanks

lemniscate
Beginner

I am also facing a similar issue, where the Inference Engine result is different from the TensorFlow and Keras results.

I have created a code sample to reproduce the issue. Please see attached sample.

Python dependencies are listed in requirements.txt

To generate Tensorflow and Keras dummy models:
python generate_dummy_models.py

This will generate dummy models:
models/tf/dummy.pb
models/keras/dummy.h5

Generate the MO model from models/tf/dummy.pb:
 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer$ sudo python3 mo.py --input_model PATH_TO_EXTRACTED_FOLDER/models/tf/dummy.pb --input_shape "(1,182,182,3)"

Then place the generated files in models/mo/dummy/FP32.

Run the test script. You can see that the output is different.
python test.py
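A sketch of the kind of element-wise comparison such a test script might do (hypothetical helper and output values, plain Python; FP16 results typically need a looser tolerance than FP32):

```python
import math

def outputs_match(a, b, rel_tol=1e-3, abs_tol=1e-3):
    """Compare two flat probability vectors element-wise with tolerance."""
    if len(a) != len(b):
        return False
    return all(math.isclose(x, y, rel_tol=rel_tol, abs_tol=abs_tol)
               for x, y in zip(a, b))

# Hypothetical softmax outputs from Keras/TF vs. the Inference Engine
keras_out = [0.9501, 0.0301, 0.0148, 0.0050]
ie_out    = [0.9500, 0.0302, 0.0148, 0.0050]

print(outputs_match(keras_out, ie_out))                 # True
print(outputs_match(keras_out, [1.0, 0.0, 0.0, 0.0]))   # False
```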

 

Monique_J_Intel
Employee

Hi Varun,

Please create a new post for your question so that we can answer accordingly.

Kind regards,

Monique Jones

lemniscate
Beginner

Hi Monique,

I have created a new post here:
https://software.intel.com/en-us/forums/computer-vision/topic/799138


Hi, I'm having a similar issue. The same model converted using FP32 works on the CPU but converted using FP16 does not work on the NCS.

Was this ever solved?

Thanks.

nikos1
Valued Contributor I

Hello Lourenço,

This is an old issue and many issues have been fixed in the new SDK.

Could you possibly elaborate on what your FP16 issue is? Even better, why don't you start a new thread?

FWIW, I just ran ResNet50 on FP32 CPU, FP32/FP16 GPU, and FP16 NCS, and the classification results seem reasonable.

nikos
