Hello everyone,
I am currently trying to use my own intermediate representation (IR) of a TensorFlow graph for inference. As I have no experience with OpenVINO, I started with the hello_autoresize_classification sample using the alexnet_fp32.xml model. As long as I use this graph, everything is fine. However, as soon as I try to use my own intermediate representation of a graph, I fail at this command:
ExecutableNetwork executable_network = plugin.LoadNetwork(network, {});
The error reads:
terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
what(): Primitive descriptor was not found for node dense_1/MatMul.
I have attached my code; it seems I cannot attach an .xml file. It would be great if somebody could help me.
Edit: I have attached my neural net as a .txt file. I also forgot to mention that I want to use OpenVINO for a non-convolutional neural network: I am not doing image recognition, but I still want to optimize my CPU usage. I am therefore using a residual neural network. Is there a problem with doing so?
Hi everyone, a first hint from my side on what may be causing the difficulties:
I noticed that the error occurs right in the first layer after the input layer. In the <meta_data> section of my .xml file I see:
<placeholder_shapes value="{'input_1': array([1, 3], dtype=int64)}"/>
During the model conversion with the Model Optimizer I set the shape, but I did not set the data type to int64, as every single layer is supposed to use FP32. Now the dense_1/MatMul layer causing the error works with FP32, while I have apparently told the input layer that it somehow works with int64.
Could that possibly cause my error? How can I fix this issue?
Thanks in advance!
And yet another finding: the error message seems to come from libMKLDNNPlugin.so. According to my grep search, it is the only file containing the respective error message.
Moreover, I have meanwhile found out that the error happens during this call to CALL_STATUS_FNC(...) within ie_plugin_cpp.hpp:
ExecutableNetwork LoadNetwork(CNNNetwork network, const std::map<std::string, std::string> &config) {
    IExecutableNetwork::Ptr ret;
    CALL_STATUS_FNC(LoadNetwork, ret, network, config);
    if (ret.get() == nullptr) THROW_IE_EXCEPTION << "Internal error: pointer to executable network is null";
    return ExecutableNetwork(ret);
}
Unfortunately, I am unable to find out where the call goes from there.
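For what it is worth, judging from the macro name and the error handling used elsewhere in the samples, CALL_STATUS_FNC presumably expands to something along these lines (this is my guess, not verified against the headers — `actual` standing for the wrapped plugin pointer):

```cpp
// Hypothetical expansion of CALL_STATUS_FNC(LoadNetwork, ret, network, config):
ResponseDesc resp;                                      // receives the plugin's error text
StatusCode status = actual->LoadNetwork(ret, network, config, &resp);
if (status != OK)
    THROW_IE_EXCEPTION << resp.msg;                     // e.g. "Primitive descriptor was not found ..."
```

If that guess is right, the call simply continues into the loaded plugin, i.e. into libMKLDNNPlugin.so, which would match my grep finding above.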
Dear Trautner, Elias,
I will investigate your issue and post my findings back on this forum. Using a residual neural network should not be a problem; for instance, OpenVINO supports ResNet models (such as ResNet-50) just fine.
Thanks for attaching your files and, most of all, thank you for your patience!
Shubha
Hello Shubha,
thanks for your reply and your help. I assume there is a problem with the model, so here is further information on the conversion options used for the Model Optimizer:
1) I had to do the conversion on my Win10 laptop, as my Linux machine (Ubuntu 16.04 LTS, which I am using to run the Inference Engine) does not support AVX, and I therefore do not have a local TensorFlow installation.
2) Due to an error saying that my input_shape is [-1;3] (and therefore incorrect), I had to use the --input_shape option.
3) The command I used is: python mo.py --input_model tf_model_FPV.pb --input input_1 --output output1_/BiasAdd --data_type=FP32 --input_shape [1,3]
Looking forward to your response, thanks again!
Elias
Dearest Elias,
Sure thing, and thanks for the additional information! I will definitely investigate further...
Sincerely,
Shubha
Hello Shubha,
I just noticed a major difference between my working networks and the one giving the error. The network throwing the error is the only one where some layers do not contain the following section in the .xml file:
<data ... />
In the functional graphs, this section follows <layer id = ... /> and is followed by <input>. I am not able to figure out why it is missing here. Note, however, that the layer throwing the error (dense_1/MatMul) does have the <data .../> section.
Best regards,
Elias
Dear Trautner, Elias,
There is a known issue with MatMul, and I have filed a bug on it. Please read the GitHub issue below.
https://github.com/opencv/dldt/issues/134
Does this issue seem familiar to you?
Thanks,
Shubha
Dear Trautner, Elias,
As I just now advised the dldt GitHub poster: can you comment out the line network_reader.getNetwork().setBatchSize(batchSize); in your main.cpp and try again?
Thanks,
Shubha
Hello Shubha,
the error message remains unchanged:
terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException' what(): Primitive descriptor was not found for node dense_1/MatMul.
Dear Trautner, Elias,
Yes, other customers (and I as well) have reproduced the same issue. The developer is still working on it.
Thanks !
Shubha
In case anybody experiences the same issue: it was solved by changing the input layout from HW to NC, see here:
https://github.com/opencv/dldt/issues/157
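For anybody who wants to apply the same fix, it comes down to explicitly declaring the input layout before loading the network onto the plugin. A rough sketch of the relevant part, following the classic Inference Engine API used in the samples (the file names here are just mine):

```cpp
#include <inference_engine.hpp>

using namespace InferenceEngine;

// Read the IR as in the samples ...
CNNNetReader network_reader;
network_reader.ReadNetwork("tf_model_FPV.xml");
network_reader.ReadWeights("tf_model_FPV.bin");
CNNNetwork network = network_reader.getNetwork();

// ... then override the input layout: a [1, 3] input is 2-D, and the
// default layout was deduced as HW (height x width). Declaring it as
// NC (batch x channels) lets the MKL-DNN plugin find a primitive
// descriptor for dense_1/MatMul.
InputInfo::Ptr input_info = network.getInputsInfo().begin()->second;
input_info->setLayout(Layout::NC);
input_info->setPrecision(Precision::FP32);
```

After this, plugin.LoadNetwork(network, {}) went through without the "Primitive descriptor was not found" error.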
Dear Trautner, Elias,
Indeed it was. Thanks for pointing it out to this community. I did in fact address that GitHub issue with the fix.
Thanks,
Shubha