Hello
Assuming we are using the Inference Engine correctly, there must be a memory leak somewhere in the Python interface or the Inference Engine itself. This was tested with the following Python script on a NUC7i3 with OpenVINO 2018 R5.
from openvino.inference_engine import IENetwork, IEPlugin
import numpy as np

net = IENetwork(model="/opt/intel/computer_vision_sdk/deployment_tools/intel_models/emotions-recognition-retail-0003/FP32/emotions-recognition-retail-0003.xml",
                weights="/opt/intel/computer_vision_sdk/deployment_tools/intel_models/emotions-recognition-retail-0003/FP32/emotions-recognition-retail-0003.bin")
plugin = IEPlugin(device="GPU")
network = plugin.load(network=net)

input_data = np.zeros((1, 3, 64, 64), np.float32)
i = 0
while True:
    out = network.infer({"data": input_data})
    i += 1
    print(i)
For a few thousand calls to infer the memory consumption is constant at about 75 MB. Then it starts to increase (about 150 MB at i=80000, about 300 MB at i=200000).
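For anyone wanting to reproduce the measurement, the growth can be tracked from inside the loop itself rather than with an external tool. This is a sketch using only the standard library (Unix only; on Linux, ru_maxrss is reported in kilobytes):

```python
import resource

def peak_rss_kb():
    """Peak resident set size of this process, in KB on Linux."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

# In the inference loop above, print every 10000 iterations, e.g.:
# if i % 10000 == 0:
#     print("i=%d peak_rss=%d KB" % (i, peak_rss_kb()))
```

Note this reports the peak, not the current RSS, so it shows monotonic growth cleanly but will not show memory being freed.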
We couldn't find a Git repository with the R5 source code to look into the problem. We only found this one: https://github.com/opencv/dldt/tree/2018/inference-engine
Thanks
Thomas
Update:
No memory leak was observed with the C++ interface. So the problem is either in the Python script above (is there a function that has to be called to free old inference results or something like that?) or in the Python interface itself.
The memory leak was tracked down to the InferRequest::Infer function. When running on the CPU no memory leak was observed, but in GPU mode executing InferRequest::Infer leads to growing memory consumption. This was tested using the hello_classification sample, with the Infer function executed in an endless loop.
me too
I've met the same problem.
Updating the clDNN version to Drop 12.1 fixed the problem. In clDNN Drop 9.1 some execution data is stored indefinitely, which causes the growing memory consumption: https://github.com/opencv/dldt/blob/17e66dc5a6631d630da454506902bd7c25d4170b/inference-engine/thirdparty/clDNN/src/network.cpp#L291
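For readers who can't follow the link, the bug class described here is per-execution bookkeeping appended to a member container that is never cleared between executions. This is an illustrative Python sketch of that pattern, not the actual clDNN C++ code:

```python
# Illustrative sketch (hypothetical classes, not real clDNN code):
# data recorded per execution but never released grows without bound.
class LeakyNetwork:
    def __init__(self):
        self._events = []  # survives across execute() calls

    def execute(self):
        self._events.append(object())  # one entry per execution, never freed

class FixedNetwork:
    def __init__(self):
        self._events = []

    def execute(self):
        self._events.clear()  # drop bookkeeping from the previous execution
        self._events.append(object())

leaky, fixed = LeakyNetwork(), FixedNetwork()
for _ in range(1000):
    leaky.execute()
    fixed.execute()
# leaky._events now holds 1000 entries; fixed._events holds only 1
```

A leak like this is invisible for the first few thousand iterations (the per-entry cost is small), which matches the delayed growth reported in the original post.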
Thomas, thanks for circling back and updating the forum. Glad your problem is fixed!
Senfter, Thomas wrote: Updating the clDNN version to Drop 12.1 fixed the problem. In clDNN Drop 9.1 some execution data is stored indefinitely, which causes the growing memory consumption: https://github.com/opencv/dldt/blob/17e66dc5a6631d630da454506902bd7c25d4170b/inference-engine/thirdparty/clDNN/src/network.cpp#L291
Hi, I updated the clDNN version to Drop 13.1, but the problem of growing memory consumption still exists. Could you give me some suggestions? Thanks
Hi Imn. I'm confused: Thomas Senfter said the problem was fixed in clDNN Drop 12.1.