Gouveia__César
New Contributor I

Error running OpenVINO model with /input_layer2 as the output layer

Hi,

I want to output the values from the input layer of my model, so I ran the following Model Optimizer command:

python mo_mxnet.py --input_model <MY_MODEL_TO_CONVERT_DIR>\model_file-0000.params --input_shape (1,3,128,128) --output_dir <MY_CONVERTED_IR_DIR> --output "/input_layer2"

So basically I set /input_layer2 as the last layer of the model (the output layer) in order to read its values.

The model conversion runs successfully; a .bin, .mapping and .xml file are produced as usual, and the following output is given:

Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\model_file-0000.params
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\openvino_model/_input_layer2
        - IR output name:       model_file-0000
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        /input_layer2
        - Input shapes:         (1,3,128,128)
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         1.0
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
MXNet specific parameters:
        - Deploy-ready symbol file:     None
        - Enable MXNet loader for models trained with MXNet version lower than 1.0.0:   False
        - Prefix name for args.nd and argx.nd files:    None
        - Pretrained model to be merged with the .nd files:     None
        - Enable saving built parameters file from .nd files:   False
Model Optimizer version:        2019.3.0-408-gac8584cb7

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\openvino_model/_input_layer2\model_file-0000.xml
[ SUCCESS ] BIN file: C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\openvino_model/_input_layer2\model_file-0000.bin
[ SUCCESS ] Total execution time: 1.32 seconds.

 

The problem comes when I try to run the Inference Engine with this model, using the classification async sample:

%MYPROFILE%\Documents\Intel\OpenVINO\inference_engine_samples_build\intel64\Release\classification_sample_async.exe 
-i <MY_TEST_IMAGES_DIR>\test.png -m <MY_CONVERTED_IR_DIR>\model_file-0000.xml -d CPU -l %USERPROFILE%\Documents\Intel\OpenVINO\inference_engine_samples_build\intel64\Release\cpu_extension.dll

 

Which gives the following error:

[ ERROR ] Sample supports topologies with 1 input only

 

I don't even understand the error message, because I have only one input, which is the input layer.
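One way to see what the sample is complaining about is to count how many layers of type "Input" the generated IR .xml actually declares; cutting a model at an internal layer can leave the IR with more Input layers than the original network had. Below is a minimal, self-contained sketch (the IR fragment and layer names are made up for illustration, not taken from your model; for a real check, read your model_file-0000.xml from disk instead of the inline string):

```python
# Sketch: count the "Input" layers an IR .xml declares.
# The IR fragment is hand-written for illustration; for a real model,
# replace ir_xml with open("model_file-0000.xml").read().
import xml.etree.ElementTree as ET

ir_xml = """<?xml version="1.0"?>
<net name="model_file-0000" version="6">
  <layers>
    <layer id="0" name="input_layer" type="Input" precision="FP32"/>
    <layer id="1" name="input_layer2" type="Input" precision="FP32"/>
  </layers>
</net>
"""

root = ET.fromstring(ir_xml)
inputs = [layer.get("name")
          for layer in root.iter("layer")
          if layer.get("type") == "Input"]
print(len(inputs), inputs)
```

If this reports more than one Input layer for your converted model, the classification sample will refuse it regardless of how many inputs the original MXNet network had, since it checks the IR, not the source model.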

 

Thanks,

César.

JesusE_Intel
Moderator

Hi César,

Could you share your model with me to take a look? I can start a private message to share privately if needed.

Please tell me more about your model:

  • What Topology is your model based on?
  • Is it a pre-trained or custom trained model?

Regards,

Jesus

Gouveia__César
New Contributor I

What Topology is your model based on?

It is based on LeNet; however, I tested with a model with only one convolutional layer and one FC layer, and the same error remains.

Is it a pre-trained or custom trained model?

It is a custom-trained model with just one convolutional layer and one FC layer. I trained it for only one epoch because it is just a dummy model to test the input layer.

I can start a private message to share privately if needed.

Yes, of course we can start a private chat, please send me a PM. If we get it working, I will post the solution and explanation here.

Thanks,

César.

JesusE_Intel
Moderator

Hi César,

I have sent you a private message so you can share the model privately.

Regards,

Jesus
