Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

OpenVINO | PyTorch LSTM Error

pankajrawat
Novice

Environment Details
===================
openvino_fpga_2020.4.287  -- Intel cloud environment
onnx==1.7.0
onnxconverter-common==1.7.0
onnxruntime==1.3.0
torch==1.3.1

Details

It's a custom LSTM model developed using PyTorch. The model runs fine on both the PyTorch and ONNX runtimes; however, when the model is converted from ONNX to OpenVINO IR, we are getting a runtime error.

Torch model - running and giving results
ONNX model  - running and giving results
OpenVINO    - Model Optimizer error while converting ONNX to IR

CPU
[ INFO ] Loading network files:
        models/torch/timeseries_enode.xml
        models/torch/timeseries_enode.bin
Traceback (most recent call last):
  File "openvino_model.py", line 96, in <module>
    openvino_model(model_xml)
  File "openvino_model.py", line 50, in openvino_model
    net = ie.read_network(model=model_xml, weights=model_bin)
  File "ie_api.pyx", line 261, in openvino.inference_engine.ie_api.IECore.read_network
  File "ie_api.pyx", line 293, in openvino.inference_engine.ie_api.IECore.read_network
RuntimeError: Check 'shape_size(get_input_shape(0)) == shape_size(output_shape)' failed at /home/jenkins/agent/workspace/private-ci/ie/build-linux-ubuntu18/b/repos/openvino/ngraph/src/ngraph/op/reshape.cpp:290:
While validating node 'v1::Reshape Reshape_774(Constant_767[0]:f32{1,512,128}, Constant_773[0]:i64{2}) -> (dynamic?)':
Requested output shape Shape{1, 128} is incompatible with input shape Shape{1, 512, 128}
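The failing check is nGraph's invariant that a Reshape must preserve the total element count: Shape{1, 512, 128} holds 1 × 512 × 128 = 65,536 values, while the requested Shape{1, 128} holds only 128. A quick NumPy sketch of the same invariant (illustrative only; nGraph enforces this at graph-validation time, not with NumPy):

```python
import numpy as np

# A reshape may only rearrange elements, never add or drop them.
a = np.zeros((1, 512, 128), dtype=np.float32)   # 65,536 elements
assert a.size == 1 * 512 * 128

try:
    a.reshape(1, 128)   # requests only 128 elements: counts differ
except ValueError as err:
    print("reshape rejected:", err)
```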

Munesh_Intel
Moderator

Hi Pankaj,

You’ve mentioned “Model optimizer error while converting onnx to IR”, but I can see that you are running inference using IR files (.xml and .bin).

Assuming you’ve converted your trained model files to IR successfully, it seems likely to be an issue regarding input shapes not being defined correctly by Model Optimizer.

General information regarding specifying input shapes is available here:

https://docs.openvinotoolkit.org/2020.4/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model_General.html#when_to_specify_input_shapes

Please share more information about your model, command given to Model Optimizer to convert the trained model to Intermediate Representation (IR), and environment details (versions of OS, Python, CMake, etc.). If possible, please share the trained model files for us to reproduce your issue.

Regards,

Munesh

pankajrawat
Novice

Yes, the model converts successfully, but there is an issue when running inference from the IR files.

$ cat /etc/lsb-release 
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.2 LTS"

$ python  --version
Python 3.6.10

$ cmake --version
cmake version 3.10.2

The command below is used to convert the ONNX model to IR files:

(cenv) u47404@s099-n003:~/intelmac$ python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model models/torch/timeseries_enode.onnx --input_shape [128,50,2]
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      /home/u47404/intelmac/models/torch/timeseries_enode.onnx
        - Path for generated IR:        /home/u47404/intelmac/.
        - IR output name:       timeseries_enode
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         [128,50,2]
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
ONNX specific parameters:
Model Optimizer version: 

[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/u47404/intelmac/./timeseries_enode.xml
[ SUCCESS ] BIN file: /home/u47404/intelmac/./timeseries_enode.bin
[ SUCCESS ] Total execution time: 6.63 seconds. 
[ SUCCESS ] Memory consumed: 114 MB. 

During inference, the error occurs while reading the XML file in ie.read_network:

import os
from openvino.inference_engine import IECore

def openvino_model(model_xml):
    # Derive the weights path from the XML path
    model_bin = os.path.splitext(model_xml)[0] + ".bin"

    ie = IECore()
    if args.cpu_extension and 'CPU' in args.device:
        ie.add_extension(args.cpu_extension, "CPU")

    # Read IR
    net = ie.read_network(model=model_xml, weights=model_bin)
    print(net)

Code snippet:

lstm2 = nn.LSTM(hs, hidden_size=hs, batch_first=True)
...
x, (ht, ct) = self.lstm2(ht_, (ht, ct))  # Doesn't work with OpenVINO
x, (ht, ct) = self.lstm2(ht_)            # Works with OpenVINO

As shown in the code snippet above, during the decoder phase, when I pass the previous step's cell state and hidden state, the code doesn't work with OpenVINO; however, if I skip these values, the code works normally.

Munesh_Intel
Moderator

Hi Pankaj,

For the mentioned issue, I suggest you implement the LSTMCell operation to output the hidden state and cell state. More information is available on the following page:

https://docs.openvinotoolkit.org/2020.4/openvino_docs_ops_sequence_LSTMCell_1.html
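A hedged sketch of what that could look like in PyTorch: unroll the decoder with nn.LSTMCell so the hidden and cell state are explicit tensors at every time step, which maps naturally onto OpenVINO's LSTMCell operation. The size hs=128 and the batch-first input layout are assumptions for illustration:

```python
import torch
import torch.nn as nn

class CellDecoder(nn.Module):
    """Stepwise decoder: hidden/cell state are explicit at each step."""
    def __init__(self, hs=128):
        super().__init__()
        self.cell = nn.LSTMCell(hs, hs)

    def forward(self, x, h, c):
        # x: (batch, seq, hs); h, c: (batch, hs)
        outs = []
        for t in range(x.size(1)):
            h, c = self.cell(x[:, t, :], (h, c))
            outs.append(h)
        return torch.stack(outs, dim=1), h, c

dec = CellDecoder()
y, h, c = dec(torch.randn(4, 10, 128),
              torch.zeros(4, 128),
              torch.zeros(4, 128))
print(y.shape)  # torch.Size([4, 10, 128])
```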

Regards,

Munesh

Munesh_Intel
Moderator

Hi Pankaj,

This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.


Regards,

Munesh

