I have fine-tuned ResNet50 for 4-class classification using Keras and converted it to a frozen TensorFlow model. Inference runs as expected using the "label_image.py" script in TensorFlow with the following parameters:
Then I converted the model to IR format for the Compute Stick using:
Without the --input_shape parameter I was getting "[ ERROR ] Shape [ -1 224 224 3] is not fully defined for output 0 of "resnet50_input". Use --input_shape with positive integers to override model input shapes.", so I specified the input shape and the conversion seemed to run OK. Now, when using the converted model in the classification sample code with the Compute Stick, I get:
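The exact command I used isn't shown above; for reference, a typical Model Optimizer invocation for the NCS would look something like the following (the paths and model filename are illustrative, the flags are standard mo_tf.py options):

```shell
# Illustrative only: adjust paths to your own installation and model.
python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
    --input_model frozen_resnet50.pb \
    --input_shape "[1,224,224,3]" \
    --data_type FP16 \
    --output_dir ir_fp16
```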
Top 4 results:
1 nan label #1
3 nan label #3
0 nan label #0
2 nan label #2
"nan" instead of class confidences. And when running on the CPU (model converted to FP32):
Top 4 results:
1 1.0000000 label #1
3 0.0000000 label #3
0 0.0000000 label #0
2 0.0000000 label #2
The values I get on the CPU never change, and neither does the order of the classes, both with FP16 on the Compute Stick and FP32 on the CPU. The sample reads the correct number of classes (4) from the model.
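For context, here is my own illustration of one way FP16 can produce NaN where FP32 only produces wrong numbers (this is not confirmed as the cause here): float16 can only represent magnitudes up to about 65504, so activations computed from unnormalized 0..255 inputs can overflow to inf inside the network, and inf arithmetic then yields NaN.

```python
import numpy as np

# float16 saturates above ~65504; larger magnitudes become inf.
big = np.float16(1e5)
print(big)               # inf

# Once inf appears, common reductions (e.g. an inf - inf shift
# in a softmax) produce NaN.
with np.errstate(invalid="ignore"):
    print(big - big)     # nan
```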
I have uploaded two files: resnet50_20181022_1_converted, containing the model before model optimization, and FP16, containing the optimized model for the Compute Stick.
Any help would be greatly appreciated.
I could reproduce your issue.
It is not specific to the stick: if you run with FP16 on MYRIAD or GPU, you get NaN results.
If you run with FP32 on CPU or GPU, you get the results you mentioned, with probability 1.0 for label #1.
I need to investigate to find the root cause of this and will come back to you.
Yes, I have been trying to nail down the issue, but I cannot find the root cause. I also tried your network on Ubuntu with Myriad; in that case it does not have the NaN issue, but it gives the same wrong results as the CPU.
I will escalate the issue to our dev team, and I hope to come back to you soon with an answer.
Any update on this?
Today I found this article: https://ai.intel.com/unlocking-aws-deeplens-with-the-openvino-toolkit/. It mentions that the Flatten layer is not supported by the Model Optimizer (although it does not show any errors during conversion?), but after replacing that layer with Reshape I am still getting exactly the same result.
I escalated your issue to our dev team. Your model is fully supported by the Model Optimizer, as shown by the successful conversion; the Model Optimizer would throw an error in the presence of unsupported layers.
I managed to solve this issue. The problem was a missing --scale (256) and --reverse_input_channels, as Keras uses BGR channel ordering. I would have thought that OpenVINO by default operates on 0..1 inputs like Keras; it would be good to see more examples showing the parameters used during conversion instead of just compatible model files.
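To make the fix concrete, here is a small numpy sketch (my own illustration, not from the thread) of the input transform those two flags fold into the IR: the input is divided by the scale, and --reverse_input_channels swaps the channel order so you can feed RGB to a model trained on BGR:

```python
import numpy as np

def folded_preprocess(rgb, scale=256.0):
    """Approximate the input transform baked into the IR by
    --scale 256 --reverse_input_channels: RGB -> BGR, then / scale."""
    return rgb[..., ::-1] / scale

# A dummy HWC image with values in 0..255, as a loader would feed it.
rgb = np.arange(2 * 2 * 3, dtype=np.float32).reshape(2, 2, 3)
bgr_scaled = folded_preprocess(rgb)
print(bgr_scaled.max() <= 1.0)   # True: inputs are now in the 0..1 range
```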
I am also facing a similar issue, where the Inference Engine result differs from the TensorFlow and Keras results.
I have created a code sample to reproduce the issue. Please see attached sample.
Python dependencies are listed in requirements.txt
To generate Tensorflow and Keras dummy models:
This will generate dummy models:
Generate the MO model from models/tf/dummy.pb:
/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer$ sudo python3 mo.py --input_model PATH_TO_EXTRACTED_FOLDER/models/tf/dummy.pb --input_shape "(1,182,182,3)"
Now place the generated model files in models/mo/dummy/FP32.
Run the test script; you will see that the outputs differ.
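For anyone reading along, the core comparison in such a test script (my own sketch, not the attached code) usually reduces to an elementwise tolerance check between the TF/Keras output and the Inference Engine output:

```python
import numpy as np

def outputs_match(reference, candidate, rtol=1e-3, atol=1e-5):
    """True if the Inference Engine output agrees with the TF/Keras
    reference within a small numerical tolerance (FP32 runs should)."""
    reference = np.asarray(reference, dtype=np.float32)
    candidate = np.asarray(candidate, dtype=np.float32)
    return np.allclose(reference, candidate, rtol=rtol, atol=atol)

# Example: identical logits match; a diverging result does not.
print(outputs_match([0.1, 0.7, 0.2], [0.1, 0.7, 0.2]))   # True
print(outputs_match([0.1, 0.7, 0.2], [0.7, 0.1, 0.2]))   # False
```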
This is an old thread, and many issues have been fixed in the newer SDK.
Could you possibly elaborate on what your FP16 issue is? Even better, why don't you start a new thread?
FWIW, I just ran ResNet50 on FP32 CPU, FP32/FP16 GPU, and FP16 NCS, and the classification results seem reasonable.