I am running a test suite against OpenVINO R3 and a pre-trained FaceNet model (from 2017). When I call the R3 LoadNetwork API (see below), I am getting:
UNIT-DO_UNIT> Starting the Unit Test Suite
0 INFO  do_unit_intel_openvino_gpu_inference_1: Starting ...
terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
what(): network allocation failed: ../../../../src/scale.cpp at line: 92
Error has occured for: InceptionResnetV1/Bottleneck/BatchNorm/batchnorm/mul/FusedScaleShift_
Scale feature size(=128) is not equal to: input feature size(=1)
Aborted (core dumped)
The test code is the following:
The command line that I used to generate the Intermediate Representation was:
sudo python3 ./mo_tf.py --input_model ~/Documents/ML/FaceNet/PreTrained2017/20170511-185253/20170511-185253.pb --freeze_placeholder_with_value "phase_train->False" --log_level DEBUG
I am still digging into this, but I have noticed that the FusedScaleShift layer (see the error message) is defined as one of the last layers, just before normalization, in the FaceNet .xml file (20170511-185253.xml):
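For reference, a ScaleShift layer in the IR .xml looks roughly like this (an illustrative snippet with hypothetical layer id and blob offsets, not a dump of the actual file):

```xml
<layer id="417" name="InceptionResnetV1/Bottleneck/BatchNorm/batchnorm/mul/FusedScaleShift_" precision="FP32" type="ScaleShift">
    <input>
        <port id="0">
            <dim>1</dim>
            <dim>128</dim>
        </port>
    </input>
    <output>
        <port id="1">
            <dim>1</dim>
            <dim>128</dim>
        </port>
    </output>
    <blobs>
        <weights offset="..." size="512"/>
        <biases offset="..." size="512"/>
    </blobs>
</layer>
```

The 512-byte blobs correspond to 128 FP32 values, i.e. one scale and one bias per channel of the 128-dimensional bottleneck output.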
I am trying to understand the meaning of "Scale feature size(=128) is not equal to: input feature size(=1)". It must be something fairly obvious, but I cannot quite nail it down yet.
Please share some comments if you have experienced this during your testing.
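My current understanding of the message (a sketch of my reading of it, not OpenVINO's actual implementation): a fused ScaleShift applies one scale/shift pair per channel, so the length of the scale vector must match the channel dimension of the layer's input. Something like:

```python
# Sketch of the consistency check I believe is behind the error message
# (illustrative only; function name and shapes are my own, not OpenVINO's).
def fused_scale_shift(x, scale, shift):
    """Apply y[c][i] = x[c][i] * scale[c] + shift[c] for each channel c."""
    channels = len(x)
    if len(scale) != channels:
        raise ValueError(
            "Scale feature size(=%d) is not equal to: input feature size(=%d)"
            % (len(scale), channels))
    return [[v * scale[c] + shift[c] for v in x[c]] for c in range(channels)]

# A 1-channel input hitting a 128-element scale reproduces the mismatch:
x = [[0.5, 1.0]]        # 1 channel x 2 values
scale = [1.0] * 128     # 128 per-channel scales
shift = [0.0] * 128
try:
    fused_scale_shift(x, scale, shift)
except ValueError as e:
    print(e)  # Scale feature size(=128) is not equal to: input feature size(=1)
```

If that reading is right, the plugin sees the input to this layer as having 1 feature/channel while the folded BatchNorm parameters have 128, which suggests the shape propagated into the fused layer is wrong rather than the weights.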
Adding extra information: I tried the 2018 FaceNet model, and the issue is NOT reproducible there. The network loads properly when plugin.LoadNetwork() is called.
I generated the Intermediate Representation with the following command:
sudo python3 ./mo_tf.py --input_model ~/Documents/ML/FaceNet/PreTrained2018/20180402-114759/20180402-114759.pb --freeze_placeholder_with_value "phase_train->False" --log_level DEBUG
We would like to use the 2017 trained model, as our current pipeline is aligned with its face pre-processing requirements before inference. We are currently using Dlib, not MTCNN.
Any comments are welcome.
The model used is stored here: https://www.dropbox.com/s/rmh1m5fs6fm6mht/20170511-185253.zip?dl=0
Additional note on the official protobuf file in the FaceNet repository: I did a quick comparison between the Intermediate Representation .xml files generated from 20170511-185253.pb and 20170512-110547.pb (official). Both .xml files are similar (same layers, weights, biases) apart from the name attribute of the net element.
The 20170512-110547.pb file is available directly as part of the model .zip on the official FaceNet repository here: https://github.com/davidsandberg/facenet/blob/master/src/download_and_extract.py (see the reference to 20170512-110547).
I have reproduced your issue on my side, and also checked whether the 2017 FaceNet model works on CPU, to rule out a problem with the generation of the .xml and .bin files. It now looks like it may be an issue on the inference engine/MKLDNN plugin side, and we are investigating further. We will respond back when we have found a resolution.
The fix has been released in the latest version (R5) of OpenVINO, which is now available. I've tested it myself, so please let me know if you see success on your side as well.