<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Memory Leak in InferRequest::Infer in GPU mode - Intel® Distribution of OpenVINO™ Toolkit</title>
    <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Memory-Leak-in-InferRequest-Infer-in-GPU-mode/m-p/1130930#M8601</link>
    <description>Memory leak observed in InferRequest::Infer when running in GPU mode with OpenVINO 2018 R5 on a NUC7i3; tracked down to clDNN Drop 9.1 and fixed by updating clDNN to Drop 12.1.</description>
    <pubDate>Thu, 10 Jan 2019 09:50:36 GMT</pubDate>
    <dc:creator>Senfter__Thomas</dc:creator>
    <dc:date>2019-01-10T09:50:36Z</dc:date>
    <item>
      <title>Memory Leak in InferRequest::Infer in GPU mode</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Memory-Leak-in-InferRequest-Infer-in-GPU-mode/m-p/1130930#M8601</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;Assuming we are using the Inference Engine correctly, there has to be a memory leak somewhere in the Python interface or in the Inference Engine itself. This was tested with the following Python script on a NUC7i3 with OpenVINO 2018 R5.&lt;/P&gt;
&lt;PRE class="brush:python; class-name:dark;"&gt;from openvino.inference_engine import IENetwork, IEPlugin
import numpy as np

net = IENetwork(model="/opt/intel/computer_vision_sdk/deployment_tools/intel_models/emotions-recognition-retail-0003/FP32/emotions-recognition-retail-0003.xml",
                weights="/opt/intel/computer_vision_sdk/deployment_tools/intel_models/emotions-recognition-retail-0003/FP32/emotions-recognition-retail-0003.bin")
plugin = IEPlugin(device="GPU")  # GPU plugin (clDNN-based)
network = plugin.load(network=net)
input_data = np.zeros((1, 3, 64, 64), np.float32)  # dummy 1x3x64x64 NCHW input

i = 0
while True:
    out = network.infer({"data": input_data})  # synchronous inference
    i += 1
    print(i)&lt;/PRE&gt;

&lt;P&gt;For the first few thousand calls to infer() the memory consumption is constant at about 75 MB. Then it starts to increase (about 150 MB at i=80000, about 300 MB at i=200000).&lt;/P&gt;
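&lt;P&gt;As a reference for how these numbers can be reproduced, below is a minimal sketch that logs resident memory from inside the loop. It reuses network and input_data from the script above; psutil is an assumption and was not part of the original test.&lt;/P&gt;
&lt;PRE class="brush:python; class-name:dark;"&gt;import os

import psutil  # assumption: psutil is installed; any RSS probe would do

proc = psutil.Process(os.getpid())

i = 0
while True:
    out = network.infer({"data": input_data})  # network/input_data as above
    i += 1
    if i % 10000 == 0:
        # resident set size in MB; stays flat on CPU, grows in GPU mode
        print("i=%d rss=%.1f MB" % (i, proc.memory_info().rss / 1e6))&lt;/PRE&gt;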
&lt;P&gt;We didn't find a Git repository with the source code of R5 to look into the problem. We only found this one: &lt;A href="https://github.com/opencv/dldt/tree/2018/inference-engine"&gt;https://github.com/opencv/dldt/tree/2018/inference-engine&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Thanks&lt;/P&gt;
&lt;P&gt;Thomas&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Update:&lt;/P&gt;
&lt;P&gt;&lt;S&gt;No memory leak was observed with the C++ interface. So the problem is either in the Python script above (is there a function that has to be called to free old inference results, or something like that?) or in the Python interface.&lt;/S&gt;&lt;/P&gt;
&lt;P&gt;The memory leak was tracked down to the InferRequest::Infer function. When running on the CPU no memory leak was observed, but in GPU mode each call to InferRequest::Infer grows the memory consumption. This was tested using the hello_classification sample, with the Infer function executed in an endless loop.&lt;/P&gt;
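&lt;P&gt;For completeness, the CPU-vs-GPU comparison can also be reproduced from Python by making the device a parameter. This is a minimal sketch, not the hello_classification code itself, and the iteration count is only an example:&lt;/P&gt;
&lt;PRE class="brush:python; class-name:dark;"&gt;from openvino.inference_engine import IENetwork, IEPlugin
import numpy as np

# same model as in the script above
MODEL = "/opt/intel/computer_vision_sdk/deployment_tools/intel_models/emotions-recognition-retail-0003/FP32/emotions-recognition-retail-0003"

def infer_loop(device, n):
    net = IENetwork(model=MODEL + ".xml", weights=MODEL + ".bin")
    exec_net = IEPlugin(device=device).load(network=net)
    data = np.zeros((1, 3, 64, 64), np.float32)
    for _ in range(n):
        exec_net.infer({"data": data})

infer_loop("CPU", 200000)  # memory consumption stays flat here...
infer_loop("GPU", 200000)  # ...but grows with clDNN Drop 9.1&lt;/PRE&gt;</description>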
      <pubDate>Thu, 10 Jan 2019 09:50:36 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Memory-Leak-in-InferRequest-Infer-in-GPU-mode/m-p/1130930#M8601</guid>
      <dc:creator>Senfter__Thomas</dc:creator>
      <dc:date>2019-01-10T09:50:36Z</dc:date>
    </item>
    <item>
      <title>me too</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Memory-Leak-in-InferRequest-Infer-in-GPU-mode/m-p/1130931#M8602</link>
      <description>&lt;P&gt;me too&lt;/P&gt;</description>
      <pubDate>Thu, 10 Jan 2019 15:17:42 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Memory-Leak-in-InferRequest-Infer-in-GPU-mode/m-p/1130931#M8602</guid>
      <dc:creator>wong__kix</dc:creator>
      <dc:date>2019-01-10T15:17:42Z</dc:date>
    </item>
    <item>
      <title>I've met the same problem</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Memory-Leak-in-InferRequest-Infer-in-GPU-mode/m-p/1130932#M8603</link>
      <description>&lt;P&gt;I've met the same problem.&lt;/P&gt;</description>
      <pubDate>Thu, 10 Jan 2019 15:24:20 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Memory-Leak-in-InferRequest-Infer-in-GPU-mode/m-p/1130932#M8603</guid>
      <dc:creator>wong__kix</dc:creator>
      <dc:date>2019-01-10T15:24:20Z</dc:date>
    </item>
    <item>
      <title>Updating the clDNN version to</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Memory-Leak-in-InferRequest-Infer-in-GPU-mode/m-p/1130933#M8604</link>
      <description>&lt;P&gt;Updating the clDNN version to Drop 12.1 fixed the problem. In clDNN Drop 9.1 some execution data is stored indefinitely, which causes the growing memory consumption: &lt;A href="https://github.com/opencv/dldt/blob/17e66dc5a6631d630da454506902bd7c25d4170b/inference-engine/thirdparty/clDNN/src/network.cpp#L291"&gt;https://github.com/opencv/dldt/blob/17e66dc5a6631d630da454506902bd7c25d4170b/inference-engine/thirdparty/clDNN/src/network.cpp#L291&lt;/A&gt;&lt;/P&gt;
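&lt;P&gt;For anyone checking which build is actually in use, the plugin version string can be printed from Python. This is a quick sketch assuming the IEPlugin API of this generation exposes a version attribute:&lt;/P&gt;
&lt;PRE class="brush:python; class-name:dark;"&gt;from openvino.inference_engine import IEPlugin

# assumption: IEPlugin exposes a 'version' property in the 2018 R5 / 2019 APIs
plugin = IEPlugin(device="GPU")
print(plugin.version)  # build string of the loaded GPU (clDNN) plugin&lt;/PRE&gt;</description>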
      <pubDate>Sat, 09 Feb 2019 09:59:27 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Memory-Leak-in-InferRequest-Infer-in-GPU-mode/m-p/1130933#M8604</guid>
      <dc:creator>Senfter__Thomas</dc:creator>
      <dc:date>2019-02-09T09:59:27Z</dc:date>
    </item>
    <item>
      <title>Thomas thanks for circling</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Memory-Leak-in-InferRequest-Infer-in-GPU-mode/m-p/1130934#M8605</link>
      <description>&lt;P&gt;Thomas, thanks for circling back and updating the forum. Glad your problem is fixed!&lt;/P&gt;</description>
      <pubDate>Mon, 11 Feb 2019 18:02:15 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Memory-Leak-in-InferRequest-Infer-in-GPU-mode/m-p/1130934#M8605</guid>
      <dc:creator>Shubha_R_Intel</dc:creator>
      <dc:date>2019-02-11T18:02:15Z</dc:date>
    </item>
    <item>
      <title>Quote:Senfter, Thomas wrote:</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Memory-Leak-in-InferRequest-Infer-in-GPU-mode/m-p/1130935#M8606</link>
      <description>&lt;BLOCKQUOTE&gt;&lt;P&gt;Senfter, Thomas wrote:&lt;/P&gt;&lt;P&gt;Updating the clDNN version to Drop 12.1 fixed the problem. In clDNN Drop 9.1 some execution data is stored indefinitely, which causes the growing memory consumption: &lt;A href="https://github.com/opencv/dldt/blob/17e66dc5a6631d630da454506902bd7c25d4170b/inference-engine/thirdparty/clDNN/src/network.cpp#L291" rel="nofollow"&gt;https://github.com/opencv/dldt/blob/17e66dc5a6631d630da454506902bd7c25d4170b/inference-engine/thirdparty/clDNN/src/network.cpp#L291&lt;/A&gt;&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;Hi, I updated the clDNN version to Drop 13.1, but the problem of growing memory consumption still exists. Could you give me some suggestions? Thanks&lt;/P&gt;</description>
      <pubDate>Tue, 26 Feb 2019 02:57:47 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Memory-Leak-in-InferRequest-Infer-in-GPU-mode/m-p/1130935#M8606</guid>
      <dc:creator>mengnan__lmn</dc:creator>
      <dc:date>2019-02-26T02:57:47Z</dc:date>
    </item>
    <item>
      <title>Hi lmn. I'm confused - Thomas</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Memory-Leak-in-InferRequest-Infer-in-GPU-mode/m-p/1130936#M8607</link>
      <description>&lt;P&gt;Hi lmn. I'm confused - Thomas Senfter said the problem is fixed in clDNN Drop 12.1.&lt;/P&gt;</description>
      <pubDate>Mon, 04 Mar 2019 21:00:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Memory-Leak-in-InferRequest-Infer-in-GPU-mode/m-p/1130936#M8607</guid>
      <dc:creator>Shubha_R_Intel</dc:creator>
      <dc:date>2019-03-04T21:00:00Z</dc:date>
    </item>
  </channel>
</rss>

