Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

1dConv on GNA

bingol__evren
Novice

I understand the output of 1dconv needs to be a multiple of 8 for GNA.

My input shape is (82, 18, 1). I only have one feature.
18 refers to my input of sequential numbers.

[0.001, 0.002, 0.003, 0.004, .......] [0.018]

[0.002, 0.003, 0.004, 0.005, .......] [0.019]


It basically learns the next number; it is basically a linear regression problem.
I thought it would be a simple model to test GNA (2.0).
I can do this with an MLP, but I want to use the 1dConv since that is what GNA is truly for.

So I created this model.
If my math is right, this gives a multiple-of-8 output from the Conv1D layer:
(18-3)+1 = 16
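A quick sanity check of that arithmetic in plain Python (the helper name is mine, just for illustration):

```python
def conv1d_out_len(n_in, kernel_size, stride=1):
    # Output length of a 1-D convolution with "valid" padding
    return (n_in - kernel_size) // stride + 1

out_len = conv1d_out_len(18, 3)
print(out_len, out_len % 8 == 0)  # 16 True
```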

    import tensorflow as tf
    from tensorflow.keras.layers import Conv1D, MaxPooling1D, Dropout, Dense

    model = tf.keras.Sequential()
    # kernel_size=3 gives the (18 - 3) + 1 = 16 output length above;
    # the filter count was garbled in the original post, 32 is assumed here
    model.add(Conv1D(filters=32, kernel_size=3, activation='relu',
                     input_shape=x_train.shape[1:], padding="valid"))
    model.add(MaxPooling1D(pool_size=1))
    model.add(Dropout(0.25))
    model.add(Dense(128, activation='relu'))
    model.add(Dense(512, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))

    optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)



I get this : 

GNAPluginNS::GNAGraphCompiler::EltwisePrimitive(class std::shared_ptr<class InferenceEngine::CNNLayer>): Eltwise layer : "StatefulPartitionedCall/sequential/conv1d/BiasAdd/Add" : inputs4Bytes->getPrecision().size() == 4

Another error is 


"Number of output columns does not equal output tensor size 4032 vs 4096"
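My guess at where numbers like that could come from (purely illustrative, I have not confirmed this is what the plugin does): if one dimension gets padded up to a hardware alignment while the declared tensor keeps the unpadded size, the two element counts disagree. For instance 63 x 64 = 4032 but 64 x 64 = 4096:

```python
def round_up(n, multiple):
    # Round n up to the next multiple of `multiple`
    return -(-n // multiple) * multiple

rows, cols = 63, 64                 # hypothetical decomposition of the sizes
declared = rows * cols              # 4032, as stated in the error
padded = round_up(rows, 8) * cols   # 4096, after padding rows to a multiple of 8
print(declared, padded)  # 4032 4096
```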


I do not want a solution to the problem; I just want to understand what these 2 errors mean.
The error reporting in OpenVINO is really hard to understand.

 

Thanks

4 Replies
IntelSupport
Community Manager

Hi bingol_evren,

Thanks for reaching out. We are currently investigating this and will update you with the information soon.

 

Regards,

Aznie


IntelSupport
Community Manager

Hi bingol_evren,

We are trying to understand this better. We would like to know: did this happen when you were executing the Model Optimizer, or during inference?

 

Meanwhile, your first error might be due to an unsupported layer in the model. According to the Supported Layers documentation, some Eltwise layers are not supported on GNA.

 

For your second error, I suspect it might arise because of the custom model that you used. I have tried running some OpenVINO samples using GNA and they work fine. You can have a look at this OpenVINO sample (Automatic Speech Recognition C++ Sample) for inference using GNA.

 

Regards,

Aznie


bingol__evren
Novice

Hi. So my model does not have any layer that was not listed as supported, not even an experimental layer (e.g. Conv2D or a limited-support layer).
If one of the implemented layers internally used an unsupported layer, that might be a problem, but that was not the case either, since the error would say which layer is not supported.

The good news is that when I updated to the latest version of OpenVINO, the error messages were much better and more human-readable, and I also did not get the same errors at all. So maybe it had something to do with tensorflow + mo + python3.6.
The latest version fixed my issue.
Thanks

IntelSupport
Community Manager

Hi bingol_evren,

I'm glad to hear that, and thank you for sharing the information here in the community. This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.

 

Regards,

Aznie

