Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Memory Leak OpenVino 2021.4 LTS Linux

Tye__Stephen
Beginner
1,285 Views

There appears to be a small but persistent memory leak in OpenVINO 2021.4 LTS on Linux. When I run the benchmark_app using the mobilenet-v2 model from the public model zoo for an extended period of time, the memory usage slowly increases. It is more evident with a fast model like mobilenet, as it completes more iterations in a shorter period of time.

 

I have been running a test for over a month and have run inference on over 60 million images; in that time the memory consumption has increased from 1.7% to 22.5% (on my 8 GB machine). I assume that if it keeps running it will eventually consume all the memory on the machine and crash.

 

We are also seeing the same behaviour in our Linux C++ integration of OpenVINO in our production product.

 

To reproduce the issue, run the benchmark_app for an extended period of time with the mobilenet-v2 public model; the memory consumption increases by roughly 0.1% every hour.

 

Are you able to reproduce this issue and fix it as soon as possible? It is basically a ticking time bomb until it causes a crash.

 

Regards,

Stephen

0 Kudos
7 Replies
Iffa_Intel
Moderator
1,204 Views

Greetings,

 

Have you ever tried the Post-Training Optimization tool? It might help you, as it provides compression for different hardware targets such as CPU and GPU, and an API that helps apply optimization methods within a custom inference script written with the OpenVINO Python* API.

 

Also, could you share what hardware you are using (CPU model, etc.)?

 

0 Kudos
Tye__Stephen
Beginner
1,195 Views

Sorry, I am not sure what that has to do with the memory leak issue highlighted above. But yes, I have used the Post-Training Optimization tools; the problem occurs with both FP32 and INT8 models (I have not tried FP16).

 

I am running the test on an AWS c5.xlarge instance, which has the following specs:

Custom 2nd generation Intel Xeon Scalable Processors (Cascade Lake) with a sustained all core Turbo frequency of 3.6GHz and single core turbo frequency of up to 3.9GHz or 1st generation Intel Xeon Platinum 8000 series (Skylake-SP) processor with a sustained all core Turbo frequency of up to 3.4GHz, and single core turbo frequency of up to 3.5 GHz. Intel AVX†, Intel AVX2†, Intel AVX-512, Intel Turbo

 

The issue is that on Linux, during inference, OpenVINO's memory usage appears to grow slowly but persistently. You should be able to reproduce the issue using the benchmark_app as described above.

 

Stephen

0 Kudos
Tye__Stephen
Beginner
1,162 Views

I restarted the benchmark_app test, and you can see in the attached image that the memory usage has doubled in a 10-hour period, from 1.5% of 8 GB (left) to 3.2% (right). If it keeps increasing at this rate, all the memory will be consumed within 30 days.

 

This is a problem because on Linux OpenVINO will often be running as a service, and it will eventually consume all the memory.

 

Stephen 

0 Kudos
Iffa_Intel
Moderator
1,137 Views

Generally, a small amount of memory growth is not a leak in itself. benchmark_app stores 8 bytes for each inference iteration for statistics.


For instance, at an inference speed of 1600 FPS, benchmark_app additionally uses about 50 MB per hour, and this small memory growth (8 bytes per iteration) is design intent.


This is specific to benchmark_app's internal statistics collection, which runs for the entire working period.

 

Also, it is not recommended to use a benchmark_app-like application for stress testing, as it uses a private std::vector<double> _latencies in the InferRequestsQueue class, which collects every latency value while benchmark_app is running and uses them to calculate the median latency over the whole run.

 


Sincerely,

Iffa


0 Kudos
Tye__Stephen
Beginner
1,096 Views

Hi Iffa,

 

Thank you very much for the clarification, it was very helpful.

 

Regards

Stephen

0 Kudos
Iffa_Intel
Moderator
1,082 Views

Glad to hear that.

If you don't have any other inquiries, shall I close this thread?



Sincerely,

Iffa


0 Kudos
Iffa_Intel
Moderator
1,055 Views

Greetings,


Intel will no longer monitor this thread since this issue has been resolved. If you need any additional information from Intel, please submit a new question.



Sincerely,

Iffa


0 Kudos
Reply