Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer-vision related on Intel® platforms.

Error with OpenVino R3 and Facenet - network allocation failed/scale size error

Mickael_T
Beginner
870 Views

Hello,

I am running a test suite against OpenVINO R3 and a pre-trained FaceNet model (from 2017). When I call the OpenVINO R3 LoadNetwork API (see below), I am getting:

UNIT-DO_UNIT> Starting the Unit Test Suite
0 INFO  [0] do_unit_intel_openvino_gpu_inference_1: Starting ...
terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
  what():  network allocation failed: ../../../../src/scale.cpp at line: 92
Error has occured for: InceptionResnetV1/Bottleneck/BatchNorm/batchnorm/mul/FusedScaleShift_
Scale feature size(=128) is not equal to: input feature size(=1)

/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/inference_engine/include/details/ie_exception_conversion.hpp:80
Aborted (core dumped)

The test code is the following:

#include <inference_engine.hpp>

InferenceEngine::PluginDispatcher dispatcher({full_path_ie_engine});
InferenceEngine::InferenceEnginePluginPtr enginePtr(dispatcher.getSuitablePlugin(InferenceEngine::TargetDevice::eGPU));
// Create an instance of the plugin so we can load the network
InferenceEngine::InferencePlugin plugin(enginePtr);
 
InferenceEngine::CNNNetReader network_reader;
...
network_reader.ReadNetwork(path_to_files+model_path_to_xml);
network_reader.ReadWeights(path_to_files+model_path_to_bin);
 
// Read the network information
auto network = network_reader.getNetwork();
network.setBatchSize(1);
InferenceEngine::InputsDataMap input_info(network.getInputsInfo());
InferenceEngine::OutputsDataMap output_info(network.getOutputsInfo());
 
// Load the prebuilt network with weights onto the device. This is the call
// that throws the exception shown above.
auto executable_network = plugin.LoadNetwork(network, {});

 

The command line that I used to generate the intermediate representation was:

sudo python3 ./mo_tf.py --input_model ~/Documents/ML/FaceNet/PreTrained2017/20170511-185253/20170511-185253.pb --freeze_placeholder_with_value "phase_train->False" --log_level DEBUG

I am still digging into this, but I have noticed that the FusedScaleShift layer (see the error message) is defined as one of the last layers, just before normalization, in the FaceNet xml file (20170511-185253.xml):

<layer id="314" name="InceptionResnetV1/Bottleneck/BatchNorm/batchnorm/mul/FusedScaleShift_" precision="FP32" type="ScaleShift">
            <input>
                <port id="0">
                    <dim>1</dim>
                    <dim>128</dim>
                </port>
            </input>
            <output>
                <port id="3">
                    <dim>1</dim>
                    <dim>128</dim>
                </port>
            </output>
            <blobs>
                <weights offset="91116736" size="512"/>
                <biases offset="91117248" size="512"/>
            </blobs>
        </layer>

 

I am trying to understand the meaning of "Scale feature size(=128) is not equal to: input feature size(=1)". It probably has to do with something fairly obvious, but I cannot quite nail it down yet.
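For what it's worth, my working assumption (not confirmed) is that a ScaleShift layer computes y[c] = scale[c] * x[c] + shift[c] per feature channel, so the scale vector must have exactly one entry per input feature. The 512-byte FP32 weights blob in the XML holds 512 / 4 = 128 scales, while the plugin apparently resolved the input feature size to 1, hence the mismatch. A self-contained sketch of that per-channel operation and its size check:

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Minimal sketch of a ScaleShift layer: y[c] = scale[c] * x[c] + shift[c],
// applied per feature channel. The size check mirrors the one that appears
// to fail in scale.cpp: the scale vector needs one entry per input feature.
std::vector<float> scale_shift(const std::vector<float>& input,
                               const std::vector<float>& scale,
                               const std::vector<float>& shift) {
    if (scale.size() != input.size()) {
        throw std::runtime_error(
            "Scale feature size is not equal to input feature size");
    }
    std::vector<float> out(input.size());
    for (std::size_t c = 0; c < input.size(); ++c) {
        out[c] = scale[c] * input[c] + shift[c];
    }
    return out;
}
```

Under this reading, the plugin saw an input with feature size 1 but a 128-element scale blob, so the equivalent of this check threw.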

Please share some comments if you experienced this during your testing.

Thank you!

 

7 Replies
Mickael_T
Beginner

Adding extra information. I tried the 2018 Facenet model and the issue is NOT reproducible. The network does load properly when plugin.LoadNetwork() is called.

I did generate the Intermediate Representation with the following command:

sudo python3 ./mo_tf.py --input_model ~/Documents/ML/FaceNet/PreTrained2018/20180402-114759/20180402-114759.pb --freeze_placeholder_with_value "phase_train->False" --log_level DEBUG

We would like to use the 2017 trained model because our current pipeline is aligned with its face pre-processing requirements before inference. We are currently using Dlib, not MTCNN.

Any comments are welcome.

 

Yury_G_Intel
Employee

Hi,

Could you point us to the FaceNet model that is not working, so that we can make sure we are looking at the same file?

Thanks

Mickael_T
Beginner

Hi Yury,

The model used is stored here: https://www.dropbox.com/s/rmh1m5fs6fm6mht/20170511-185253.zip?dl=0

=====================================

Additional note related to the official Protobuf file on the FaceNet repository: I did a quick comparison between the Intermediate Representation .xml files generated from 20170511-185253.pb and 20170512-110547.pb (official). Both .xml files are identical (same layers, weights, biases) except for the name attribute of the net element.

The 20170512-110547.pb is available directly as part of the model .zip file on the official FaceNet repository here: https://github.com/davidsandberg/facenet/blob/master/src/download_and_extract.py (see the reference to 20170512-110547)

======================================

Thanks,

Monique_J_Intel
Employee

Hi Mickael,

I have reproduced your issue on my side and also checked whether the 2017 FaceNet model works on CPU, to rule out the generation of the .xml and .bin files as the cause. It now looks like it may be an issue on the inference engine/MKLDNN plugin side, and we are investigating further. We will respond when we have found the resolution.

Kind Regards,

Monique Jones

 

Mickael_T
Beginner

OK, thank you Monique for the feedback and confirmation. We will wait for the resolution, and also for any possible workaround in the meantime.

Mickael_T
Beginner

Hi Monique, do you have an update on this issue and an ETA for a solution? Thank you!

Monique_J_Intel
Employee

Hi Mickael,

The fix has been released in the latest version (R5) of OpenVINO, which is now available. I've tested it, so please let me know if you see success on your side as well.

Kind Regards,

Monique Jones
