Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

LSTM support on VPU

Buvana_R
Beginner

Hello,

I tried to run a simple LSTM model (the PyTorch code that exports the model to ONNX is pasted below) on the CPU and on the HDDL device using the benchmark app, after converting the ONNX version to IR.

The benchmark app ran to completion successfully on the CPU, but failed on the HDDL with the following error:
Failed to compile layer "LSTM_0/LSTMCell_sequence": AssertionFailed: outputs.size() == 1

$:/opt/intel/openvino/deployment_tools/tools/benchmark_tool# python3 benchmark_app.py -m /opt/converted_models/sample_lstm_model/SimpleLSTM.xml -d HDDL
[Step 1/11] Parsing and validating input arguments
[ WARNING ] -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README.
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
API version............. 2.1.2020.3.0-3467-15f2c61a-releases/2020/3
[ INFO ] Device info
HDDL
HDDLPlugin.............. version 2.1
Build................... 2020.3.0-3467-15f2c61a-releases/2020/3

[Step 3/11] Reading the Intermediate Representation network
[ INFO ] Read network took 3.53 ms
[Step 4/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1
[Step 5/11] Configuring input of the model
[Step 6/11] Setting device configuration
[Step 7/11] Loading the model to the device
[ ERROR ] Failed to compile layer "LSTM_0/LSTMCell_sequence": AssertionFailed: outputs.size() == 1
Traceback (most recent call last):
File "/opt/intel/openvino_2020.3.194/python/python3.6/openvino/tools/benchmark/main.py", line 87, in run
exe_network = benchmark.load_network(ie_network, perf_counts)
File "/opt/intel/openvino_2020.3.194/python/python3.6/openvino/tools/benchmark/benchmark.py", line 138, in load_network
num_requests=1 if self.api_type == 'sync' else self.nireq or 0)
File "ie_api.pyx", line 178, in openvino.inference_engine.ie_api.IECore.load_network
File "ie_api.pyx", line 187, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: Failed to compile layer "LSTM_0/LSTMCell_sequence": AssertionFailed: outputs.size() == 1

My question is: does the VPU support running LSTM models? If so, are there any example models you can point me to that run successfully on the VPU? And if so, what is the reason for the error message above?

Thank you,
Buvana

-----------------

import torch
import torch.nn as nn
import torch.onnx

torch.manual_seed(1)

lstm = nn.LSTM(3, 3, num_layers=1)
lstm.eval()

with torch.no_grad():
    inputs = [torch.randn(1, 3) for _ in range(5)]  # make a sequence of length 5
    inputs = torch.cat(inputs).view(len(inputs), 1, -1)
    h0 = torch.randn(1, 1, 3)
    c0 = torch.randn(1, 1, 3)
    out, (hn, cn) = lstm(inputs, (h0, c0))

input_names = ['input', 'h0', 'c0']
output_names = ['output', 'hn', 'cn']

torch.onnx.export(lstm, (inputs, (h0, c0)), 'SimpleLSTM.onnx', input_names=input_names, output_names=output_names)
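For context on the failing layer: an LSTM step produces more than one output (the sequence output plus the hidden and cell states), which is presumably what the `outputs.size() == 1` assertion is about. Below is a minimal NumPy sketch of one LSTM cell step, using the standard gate equations with random, purely illustrative weights (sizes match the model above: input size 3, hidden size 3). This is not OpenVINO internals, just the cell math.

```python
import numpy as np

def lstm_cell_step(x, h_prev, c_prev, W, U, b):
    """One LSTM cell step (gate order: input, forget, cell candidate, output).

    Note that it returns TWO states (h, c) -- an LSTM layer inherently has
    multiple outputs, unlike a plain feed-forward layer.
    """
    hidden = h_prev.shape[0]
    z = W @ x + U @ h_prev + b                     # pre-activations, shape (4*hidden,)
    i = 1 / (1 + np.exp(-z[0:hidden]))             # input gate
    f = 1 / (1 + np.exp(-z[hidden:2 * hidden]))    # forget gate
    g = np.tanh(z[2 * hidden:3 * hidden])          # candidate cell state
    o = 1 / (1 + np.exp(-z[3 * hidden:4 * hidden]))  # output gate
    c = f * c_prev + i * g                         # new cell state
    h = o * np.tanh(c)                             # new hidden state
    return h, c

# Run a length-5 sequence, like the exported model above (illustrative weights).
rng = np.random.default_rng(1)
W = rng.standard_normal((12, 3))
U = rng.standard_normal((12, 3))
b = rng.standard_normal(12)
h = np.zeros(3)
c = np.zeros(3)
for x in rng.standard_normal((5, 3)):
    h, c = lstm_cell_step(x, h, c, W, U, b)
print(h.shape, c.shape)  # both (3,)
```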

5 Replies
Zulkifli_Intel
Moderator

Hello Buvaneswari Ramanan,


Thank you for contacting us.


Where did you get your original ONNX model? Can you share your model with us?


Regards,

Zulkifli 


Buvana_R
Beginner

Hi,

The ONNX model can be obtained simply by running the Python code I pasted in the original message. In any case, I am sharing a zip of the following:

lstm_model.py - the code that generates the model

SimpleLSTM.onnx - the ONNX model

SimpleLSTM.bin - IR

SimpleLSTM.mapping - IR

SimpleLSTM.xml - IR

Thanks,

Buvana

Zulkifli_Intel
Moderator

Hello Buvaneswari Ramanan,


We will investigate this issue and we will get back to you.


Regards,

Zulkifli


Zulkifli_Intel
Moderator

Hello Buvaneswari Ramanan,


Thank you for your patience. I have successfully executed your sample model without error.


Try installing the IVAD VPU dependencies (install_IVAD_VPU_dependencies) on your system. You can follow these Configuration Steps for installation.
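For reference, on a default Linux installation the dependency script typically lives under the install_dependencies directory; the paths below are illustrative and may differ for your OpenVINO version:

```shell
# Illustrative paths for a default 2020.x install; adjust for your setup.
source /opt/intel/openvino/bin/setupvars.sh
cd /opt/intel/openvino/install_dependencies
sudo -E ./install_IVAD_VPU_dependencies.sh
# Re-plug the HDDL device (or restart hddldaemon) before rerunning benchmark_app.
```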


Regards,

Zulkifli


Zulkifli_Intel
Moderator

Hello,


This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.


Regards,

Zulkifli

