Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

OpenVINO: Primitive descriptor was not found for node dense_1/MatMul

ETrau1
Beginner
1,914 Views

Hello everyone,

I am currently trying to run inference with my own IR of a TensorFlow graph. As I have no experience with OpenVINO, I started with the hello_autoresize_classification sample using the alexnet_fp32.xml model. As long as I use this graph, everything is fine. However, as soon as I try to use my own intermediate representation of a graph, it fails at this command:

ExecutableNetwork executable_network = plugin.LoadNetwork(network, {});

The error reads:

terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
  what():  Primitive descriptor was not found for node dense_1/MatMul.

I attached my code; it seems I cannot attach an .xml file. It would be great if somebody could help me.

 

Edit: I attached my neural net as a .txt file. I also forgot to mention that I want to use OpenVINO for a non-convolutional neural network: I do not do image recognition, but I still want to optimize my CPU usage. I therefore use a residual neural network. Is there a problem with doing so?
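For context, the failing call sits at the end of a load sequence roughly like the following. This is only a minimal sketch against the 2019-era Inference Engine API used by the samples; the model paths are placeholders:

```cpp
#include <inference_engine.hpp>

using namespace InferenceEngine;

int main() {
    // Read the IR produced by the Model Optimizer (placeholder paths)
    CNNNetReader network_reader;
    network_reader.ReadNetwork("tf_model_FPV.xml");
    network_reader.ReadWeights("tf_model_FPV.bin");
    CNNNetwork network = network_reader.getNetwork();

    // Load the network onto the CPU (MKL-DNN) plugin -- this is the call
    // that throws "Primitive descriptor was not found for node ..."
    InferencePlugin plugin = PluginDispatcher({""}).getPluginByDevice("CPU");
    ExecutableNetwork executable_network = plugin.LoadNetwork(network, {});
    return 0;
}
```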

0 Kudos
13 Replies
ETrau1
Beginner
1,914 Views

Hi everyone, here is a first hint at what may be causing the difficulties:

I noticed that the error occurs in the very first layer after the input layer. In the <meta_data> section of my xml file I see:

<placeholder_shapes value="{'input_1': array([1, 3], dtype=int64)}"/>

Well, during the model conversion with the Model Optimizer I set the shape, but I did not set the data type to int64, as every single layer shall use FP32. Now the dense_1/MatMul layer causing the error works with FP32, while I have obviously told the input layer that it somehow works with int64.

Could that possibly cause my error? How can I fix this issue?

Thanks in advance!

ETrau1
Beginner

And yet another finding: the error message seems to come from libMKLDNNPlugin.so

According to my grep search it is the only file containing the respective error message. 

ETrau1
Beginner

In the meantime I also found out that the error happens during this call, in CALL_STATUS_FNC(...) within ie_plugin_cpp.hpp:

 

ExecutableNetwork LoadNetwork(CNNNetwork network, const std::map<std::string, std::string> &config) {
    IExecutableNetwork::Ptr ret;
    CALL_STATUS_FNC(LoadNetwork, ret, network, config);
    if (ret.get() == nullptr) THROW_IE_EXCEPTION << "Internal error: pointer to executable network is null";
    return ExecutableNetwork(ret);
}

 

Unfortunately I am unable to find out where the call goes from there.

Shubha_R_Intel
Employee

Dear Trautner, Elias,

I will investigate your issue and post my findings back on this forum. Using a residual neural network should not be a problem. For instance, OpenVINO supports ResNet models (such as ResNet-50) just fine.

Thanks for attaching your files and most of all, thank you for your patience !

Shubha

ETrau1
Beginner

Hello Shubha,

thanks for your reply and your help. I assume there is a problem with the model, so here is further information on the conversion options used with the Model Optimizer:

1) I had to do the conversion on my Win10 laptop, as my Linux machine (Ubuntu 16.04 LTS, which I am using to run the Inference Engine) does not support AVX and therefore I do not have a local TensorFlow installation.

2) Due to an error reporting that my input shape is [-1,3] (and therefore invalid), I had to use the --input_shape option.

3) The command I used is: python mo.py --input_model tf_model_FPV.pb --input input_1 --output output1_/BiasAdd --data_type=FP32 --input_shape [1,3]

Looking forward to your response, thanks again!

Elias

Shubha_R_Intel
Employee

Dearest Elias,

Sure thing. Thanks for your additional information too ! Will definitely investigate further...

Sincerely,

Shubha

ETrau1
Beginner

Hello Shubha,

I just noticed a major difference between my working networks and the one giving the error. The failing network is the only one in which some layers do not contain the following section in the xml:

<data ... />

This section follows after <layer id = ... /> and is followed by <input> in the working graphs. I am not able to figure out why it is missing. However, the layer throwing the error (dense_1/MatMul) does have the <data .../> section.

Best regards,

Elias

Shubha_R_Intel
Employee

Dear Trautner, Elias,

There is a known issue with MatMul; I filed a bug on it. Please read the github post below.

https://github.com/opencv/dldt/issues/134

Does this issue seem familiar to you ?

Thanks,

Shubha

Shubha_R_Intel
Employee

Dear Trautner, Elias,

As I just now advised this dldt github poster: can you comment out the line network_reader.getNetwork().setBatchSize(batchSize); in your main.cpp and try again?

Thanks,

Shubha

 

ETrau1
Beginner

Hello Shubha,

the error message remains unchanged:

 

terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
  what():  Primitive descriptor was not found for node dense_1/MatMul.

 

Shubha_R_Intel
Employee

Dear Trautner, Elias,

Yeah, other customers (and I as well) reproduced the same. The developer is still working on it.

Thanks !

Shubha

ETrau1
Beginner

If anybody experiences the same issue: it was solved by changing the input layout from HW to NC, see here:

https://github.com/opencv/dldt/issues/157
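For reference, the fix from that issue amounts to forcing the input layout to NC before calling LoadNetwork. A minimal sketch, assuming the 2019-era Inference Engine API and the input name input_1 from the IR above:

```cpp
#include <inference_engine.hpp>

using namespace InferenceEngine;

// Assumes `network` came from CNNNetReader::getNetwork().
void fixInputLayout(CNNNetwork &network) {
    // A 2D input such as [1, 3] may default to Layout::HW in the IR,
    // but the CPU (MKL-DNN) plugin expects NC (batch x channels) here.
    for (auto &item : network.getInputsInfo()) {
        InputInfo::Ptr input_info = item.second;  // e.g. "input_1"
        input_info->setLayout(Layout::NC);
        input_info->setPrecision(Precision::FP32);
    }
}
```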

 

 

Shubha_R_Intel
Employee

Dear Trautner, Elias,

Indeed it was. Thanks for pointing it out to this community. I in fact addressed that Github issue with the fix.

Thanks,

Shubha

 
