Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Memory Leak in InferRequest::Infer in GPU mode

Senfter__Thomas
Beginner

Hello

Assuming we are using the Inference Engine correctly, there has to be a memory leak somewhere in the Python interface or in the Inference Engine itself. We tested this with the following Python script on a NUC7i3 with OpenVINO 2018 R5.

from openvino.inference_engine import IENetwork, IEPlugin
import numpy as np

# Load the IR of the emotions-recognition model shipped with the SDK
net = IENetwork(model="/opt/intel/computer_vision_sdk/deployment_tools/intel_models/emotions-recognition-retail-0003/FP32/emotions-recognition-retail-0003.xml",
                weights="/opt/intel/computer_vision_sdk/deployment_tools/intel_models/emotions-recognition-retail-0003/FP32/emotions-recognition-retail-0003.bin")
plugin = IEPlugin(device="GPU")
network = plugin.load(network=net)
input_data = np.zeros((1, 3, 64, 64), np.float32)  # dummy input matching the model's 1x3x64x64 shape

# Run inference in an endless loop and watch the process's memory consumption
i = 0
while True:
    out = network.infer({"data": input_data})
    i += 1
    print(i)

For the first few thousand calls to infer the memory consumption is constant at about 75 MB. Then it starts to increase (about 150 MB at i=80000, about 300 MB at i=200000).
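For reference, these figures can be sampled from inside the loop itself; below is a minimal sketch (Linux only, reading VmRSS from /proc/self/status; it reuses network and input_data from the script above):

# Linux-only sketch for sampling the current RSS of this process;
# reuses network/input_data from the script above
def rss_kb():
    # /proc/self/status contains a line like "VmRSS:   75432 kB"
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])

i = 0
while True:
    out = network.infer({"data": input_data})
    i += 1
    if i % 10000 == 0:
        print("i=%d, RSS=%d kB" % (i, rss_kb()))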

We didn't find a Git repository with the source code of R5 to look into the problem; the only one we found is this: https://github.com/opencv/dldt/tree/2018/inference-engine

Thanks

Thomas


Update:

No memory leak was observed with the C++ interface. So the problem is either in the Python script above (is there a function that has to be called to free old inference results, or something like that?) or in the Python interface.
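To rule out the wrapper, one can also drive a single InferRequest explicitly instead of going through network.infer(). A sketch, assuming the 2018 R5 Python API exposes a requests list on the executable network the way the async samples use it:

# Sketch only: reuse one explicit InferRequest instead of network.infer().
# Assumes IEPlugin.load(..., num_requests=1) and ExecutableNetwork.requests
# as used by the 2018 R5 async samples.
network = plugin.load(network=net, num_requests=1)
request = network.requests[0]
while True:
    request.infer({"data": input_data})  # synchronous call on the same request
    out = request.outputs                # output blobs are overwritten in place

If memory still grows with a single reused request, the leak is below the Python layer.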

The memory leak was tracked down to the InferRequest::Infer function. When running on the CPU, no memory leak was observed, but in GPU mode executing InferRequest::Infer leads to growing memory consumption. This was tested using the hello_classification sample, with the Infer call executed in an endless loop.

wong__kix
Beginner

me too

wong__kix
Beginner

I've run into the same problem.

Senfter__Thomas
Beginner

Updating the clDNN version to Drop 12.1 fixed the problem. In clDNN Drop 9.1 some execution data is retained indefinitely, which causes the growing memory consumption: https://github.com/opencv/dldt/blob/17e66dc5a6631d630da454506902bd7c25d4170b/inference-engine/thirdparty/clDNN/src/network.cpp#L291

Shubha_R_Intel
Employee

Thomas, thanks for circling back and updating the forum. Glad your problem is fixed!

mengnan__lmn
Beginner

Senfter, Thomas wrote:

Updating the clDNN version to Drop 12.1 fixed the problem. In clDNN Drop 9.1 some execution data is stored infinitely which causes the growing memory consumption: https://github.com/opencv/dldt/blob/17e66dc5a6631d630da454506902bd7c25d4170b/inference-engine/thirdparty/clDNN/src/network.cpp#L291

Hi, I updated the clDNN version to Drop 13.1, but the problem of growing memory consumption still exists. Could you give me some suggestions? Thanks

Shubha_R_Intel
Employee

Hi lmn, I'm confused - Thomas Senfter said the problem was fixed by clDNN Drop 12.1.
