<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: OpenVino | Pytorch Error LSTM in Intel® Distribution of OpenVINO™ Toolkit</title>
    <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/OpenVino-Pytorch-Error-LSTM/m-p/1206057#M20511</link>
    <description>&lt;P&gt;Thanks, that helped.&lt;/P&gt;</description>
    <pubDate>Wed, 02 Sep 2020 09:29:25 GMT</pubDate>
    <dc:creator>pankajrawat</dc:creator>
    <dc:date>2020-09-02T09:29:25Z</dc:date>
    <item>
      <title>OpenVino | Pytorch Error LSTM</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/OpenVino-Pytorch-Error-LSTM/m-p/1201545#M20335</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;Enviroment Details
=====================
openvino_fpga_2020.4.287  -- Intel cloud environment
onnx==1.7.0
onnxconverter-common==1.7.0
onnxruntime==1.3.0
torch==1.3.1&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Details&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;It's a custom LSTM model developed using PyTorch. The model runs fine on both PyTorch and ONNX Runtime; however, when the model is converted from ONNX to OpenVINO IR, we are getting a runtime error.&lt;/P&gt;
&lt;P&gt;Torch model - running and giving results&lt;BR /&gt;ONNX model - running and giving results&lt;BR /&gt;OpenVINO - Model optimizer error while converting onnx to IR&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;CPU
[ INFO ] Loading network files:
        models/torch/timeseries_enode.xml
        models/torch/timeseries_enode.bin
Traceback (most recent call last):
  File "openvino_model.py", line 96, in &amp;lt;module&amp;gt;
    openvino_model(model_xml)
  File "openvino_model.py", line 50, in openvino_model
    net = ie.read_network(model=model_xml, weights=model_bin)
  File "ie_api.pyx", line 261, in openvino.inference_engine.ie_api.IECore.read_network
  File "ie_api.pyx", line 293, in openvino.inference_engine.ie_api.IECore.read_network
RuntimeError: Check 'shape_size(get_input_shape(0)) == shape_size(output_shape)' failed at /home/jenkins/agent/workspace/private-ci/ie/build-linux-ubuntu18/b/repos/openvino/ngraph/src/ngraph/op/reshape.cpp:290:
While validating node 'v1::Reshape Reshape_774(Constant_767[0]:f32{1,512,128}, Constant_773[0]:i64{2}) -&amp;gt; (dynamic?)':
Requested output shape Shape{1, 128} is incompatible with input shape Shape{1, 512, 128}&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 19 Aug 2020 11:23:28 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/OpenVino-Pytorch-Error-LSTM/m-p/1201545#M20335</guid>
      <dc:creator>pankajrawat</dc:creator>
      <dc:date>2020-08-19T11:23:28Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVino | Pytorch Error LSTM</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/OpenVino-Pytorch-Error-LSTM/m-p/1202434#M20373</link>
      <description>&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Hi Pankaj,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;You’ve mentioned “Model optimizer error while converting onnx to IR”, but I can see that you are running inference using IR files (.xml and .bin). &lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Assuming you’ve converted your trained model files to IR successfully, it seems likely to be an issue regarding input shapes not being defined correctly by Model Optimizer. &lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;General information regarding specifying input shapes is available here:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A style="font-size: 16px;" href="https://docs.openvinotoolkit.org/2020.4/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model_General.html#when_to_specify_input_shapes" target="_blank" rel="noopener noreferrer"&gt;https://docs.openvinotoolkit.org/2020.4/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model_General.html#when_to_specify_input_shapes&lt;/A&gt;&lt;/P&gt;
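&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;As a sketch (the model path and shape values below are illustrative, not taken from your setup), an input shape can be stated explicitly when invoking Model Optimizer:&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
    --input_model model.onnx \
    --input_shape [1,50,2]&lt;/LI-CODE&gt;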
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Please share more information about your model, command given to Model Optimizer to convert the&amp;nbsp;trained model to Intermediate Representation (IR), and environment details (versions of OS, Python, CMake, etc.). If possible, please share the trained model files for us to reproduce your issue.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Munesh&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 21 Aug 2020 18:54:30 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/OpenVino-Pytorch-Error-LSTM/m-p/1202434#M20373</guid>
      <dc:creator>Munesh_Intel</dc:creator>
      <dc:date>2020-08-21T18:54:30Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVino | Pytorch Error LSTM</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/OpenVino-Pytorch-Error-LSTM/m-p/1203965#M20409</link>
      <description>&lt;P&gt;&lt;EM&gt;Yes models is converting successfully, but having issue when running inference from IR&lt;/EM&gt;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;$ cat /etc/lsb-release 
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.2 LTS"

$ python  --version
Python 3.6.10

$ cmake --version
cmake version 3.10.2&lt;/LI-CODE&gt;
&lt;P&gt;The command below is used to convert the ONNX model to IR files:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;(cenv) u47404@s099-n003:~/intelmac$ python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model models/torch/timeseries_enode.onnx --input_shape [128,50,2]
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      /home/u47404/intelmac/models/torch/timeseries_enode.onnx
        - Path for generated IR:        /home/u47404/intelmac/.
        - IR output name:       timeseries_enode
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         [128,50,2]
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
ONNX specific parameters:
Model Optimizer version: 

[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/u47404/intelmac/./timeseries_enode.xml
[ SUCCESS ] BIN file: /home/u47404/intelmac/./timeseries_enode.bin
[ SUCCESS ] Total execution time: 6.63 seconds. 
[ SUCCESS ] Memory consumed: 114 MB. &lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;During inference, the error occurs while reading the XML file in &lt;STRONG&gt;ie.read_network&lt;/STRONG&gt;:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;def openvino_model(model_xml):
    model_bin = os.path.splitext(model_xml)[0] + ".bin"

    ie = IECore()
    if args.cpu_extension and 'CPU' in args.device:
        ie.add_extension(args.cpu_extension, "CPU")

    # Read IR
    net = ie.read_network(model=model_xml, weights=model_bin)
    print(net)&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Code snippet:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;lstm2 = nn.LSTM(hs, hidden_size=hs, batch_first=True)
...
x, (ht, ct) = self.lstm2(ht_, (ht, ct)) -- Doesnt work with openvino
x, (ht, ct) = self.lstm2(ht_) -- Works with openvino&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As shown in the code snippet above, during the decoder phase, when I pass the previous step's cell state and hidden values, the code does not work with OpenVINO; however, if I skip these values, the code works normally.&lt;/P&gt;</description>
      <pubDate>Tue, 25 Aug 2020 13:29:37 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/OpenVino-Pytorch-Error-LSTM/m-p/1203965#M20409</guid>
      <dc:creator>pankajrawat</dc:creator>
      <dc:date>2020-08-25T13:29:37Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVino | Pytorch Error LSTM</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/OpenVino-Pytorch-Error-LSTM/m-p/1205845#M20497</link>
      <description>&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Hi Pankaj,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;For the mentioned issue, I suggest you implement the &lt;/SPAN&gt;&lt;I&gt;LSTMCell&lt;/I&gt;&lt;SPAN style="font-size: 16px;"&gt; operation to output the hidden state and cell state. More information is available on the following page:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.openvinotoolkit.org/2020.4/openvino_docs_ops_sequence_LSTMCell_1.html" target="_blank"&gt;https://docs.openvinotoolkit.org/2020.4/openvino_docs_ops_sequence_LSTMCell_1.html&lt;/A&gt;&lt;/P&gt;
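&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;A minimal sketch of one decoder step built on LSTMCell, where the hidden and cell state are passed in and returned explicitly (the sizes and names here are illustrative, not taken from your model):&lt;/SPAN&gt;&lt;/P&gt;

```python
# Illustrative sketch: a single decoder step using nn.LSTMCell instead of
# nn.LSTM, so the hidden/cell state is threaded through explicitly.
# hs, x, h, c are placeholder names, not from the original model.
import torch
import torch.nn as nn

hs = 16                                    # hidden size (illustrative)
cell = nn.LSTMCell(input_size=hs, hidden_size=hs)

x = torch.zeros(1, hs)                     # one decoder input step, batch of 1
h = torch.zeros(1, hs)                     # previous hidden state
c = torch.zeros(1, hs)                     # previous cell state

# One step: state goes in and comes out explicitly, which maps onto the
# OpenVINO LSTMCell operation linked above.
h, c = cell(x, (h, c))
print(h.shape, c.shape)
```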
&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Munesh&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 01 Sep 2020 18:03:06 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/OpenVino-Pytorch-Error-LSTM/m-p/1205845#M20497</guid>
      <dc:creator>Munesh_Intel</dc:creator>
      <dc:date>2020-09-01T18:03:06Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVino | Pytorch Error LSTM</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/OpenVino-Pytorch-Error-LSTM/m-p/1206057#M20511</link>
      <description>&lt;P&gt;Thanks, that helped.&lt;/P&gt;</description>
      <pubDate>Wed, 02 Sep 2020 09:29:25 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/OpenVino-Pytorch-Error-LSTM/m-p/1206057#M20511</guid>
      <dc:creator>pankajrawat</dc:creator>
      <dc:date>2020-09-02T09:29:25Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVino | Pytorch Error LSTM</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/OpenVino-Pytorch-Error-LSTM/m-p/1206623#M20535</link>
      <description>&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Hi Pankaj,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;This thread will no longer be monitored since this issue has been resolved.&amp;nbsp;If you need any additional information from Intel, please submit a new question.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Munesh&lt;/SPAN&gt;&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Sep 2020 02:28:43 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/OpenVino-Pytorch-Error-LSTM/m-p/1206623#M20535</guid>
      <dc:creator>Munesh_Intel</dc:creator>
      <dc:date>2020-09-04T02:28:43Z</dc:date>
    </item>
  </channel>
</rss>

