I understand the output of a 1D convolution needs to be a multiple of 8 for GNA.
My input shape is (82, 18, 1). I only have one feature.
18 refers to my input of sequential numbers:
[0.001, 0.002, 0.003, 0.004, ...] -> [0.018]
[0.002, 0.003, 0.004, 0.005, ...] -> [0.019]
It basically learns the next number, so it is essentially a linear regression problem.
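For context, here is a minimal sketch of how this kind of data can be built (the exact values and sizes are illustrative, not my real preprocessing):

```python
import numpy as np

# Illustrative only: a ramp of 100 evenly spaced numbers, split into sliding
# windows of 18 samples, where each window's target is the next number.
series = (np.arange(1, 101) / 1000.0).astype(np.float32)   # 0.001 ... 0.100

window = 18
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = np.array([series[i + window] for i in range(len(series) - window)])

# Add the single feature dimension so the shape matches (82, 18, 1) above.
X = X[..., np.newaxis]
print(X.shape, y.shape)   # (82, 18, 1) (82,)
```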
I thought this would be a simple model to test GNA (2.0).
I could do this with an MLP, but I want to use Conv1D since that is what GNA is truly for.
So I created this model.
If my math is right, the Conv1D layer output is a multiple of 8:
(18 - 3) + 1 = 16
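For reference, a minimal sketch of the kind of model I mean (the filter count and layer choices are placeholders, not my exact code):

```python
import tensorflow as tf

# Conv1D with kernel size 3 and no padding over 18 time steps gives
# (18 - 3) + 1 = 16 output columns, which is a multiple of 8.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(18, 1)),
    tf.keras.layers.Conv1D(filters=8, kernel_size=3, padding="valid"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1),
])
model.summary()   # Conv1D output shape: (None, 16, 8)
```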
I get this:
GNAPluginNS::GNAGraphCompiler::EltwisePrimitive(class std::shared_ptr<class InferenceEngine::CNNLayer>): Eltwise layer : "StatefulPartitionedCall/sequential/conv1d/BiasAdd/Add" : inputs4Bytes->getPrecision().size() == 4
Another error is:
"Number of output columns does not equal output tensor size 4032 vs 4096"
I do not want a solution to the problem; I just want to understand what these two errors mean.
The error reporting in OpenVINO is really hard to understand.
Thanks
Hi bingol_evren,
Thanks for reaching out. We are currently investigating this and will update you with the information soon.
Regards,
Aznie
Hi bingol_evren,
We are trying to understand this better. Did this happen when you were executing the Model Optimizer or during inference?
Meanwhile, your first error might be due to an unsupported layer in the model. According to the Supported Layers documentation, some Eltwise layers are not supported for GNA.
For your second error, I suspect it might arise because of the custom model that you used. I have tried running some OpenVINO samples using GNA and they work fine. You can have a look at this OpenVINO sample (Automatic Speech Recognition C++ Sample) for inference using GNA.
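For reference, the sample can be run on the GNA device roughly like this (the model and input file names below are placeholders):

```
speech_sample -m <path_to_model>.xml -i <path_to_input>.ark -d GNA_AUTO -o scores.ark
```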
Regards,
Aznie
Hi,
My model does not use any layer that is missing from the supported list, not even an experimental or limited-support layer (e.g. 2D conv).
If one of the implemented layers internally used an unsupported layer, that might be a problem, but that was not the case either, as the error would then say that the layer is not supported.
The good news is that when I updated to the latest version of OpenVINO, the error messages were much better and more human readable, and I also did not get the same errors at all. So maybe it had something to do with tensorflow + mo + python3.6.
The latest version fixed my issue.
Thanks
Hi bingol_evren,
I'm glad to hear that, and thank you for sharing the information here in the community. This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.
Regards,
Aznie
