Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Memory constantly increasing during inference

verma__Ashish
Beginner

Hello,

I have trained a model using MXNet and converted it into .xml and .bin files using the OpenVINO Model Optimizer.

I have created an infer request (based on the Inference Engine samples). When I call infer_request.Infer() and run my program, I keep monitoring the memory usage with top, and my process's memory usage is constantly increasing. Do I have to free something, or do anything else, after each inference?

Thanks

Ashish

14 Replies
Monique_J_Intel
Employee

Hi Ashish,

Can you give more information so that I can reproduce the issue on my side and see what it is, such as the model IR files, the application source files, the version of OpenVINO you are using, and the hardware you are deploying your application on?

Kind Regards,

Monique Jones

 

verma__Ashish
Beginner

Hi Jones,

I am using OpenVINO version 2018.3.343 and running it on Ubuntu 16.04 on CPU. I have converted my MXNet model (.params file) to the optimized OpenVINO .xml and .bin files. For the Inference Engine side I followed the Hello Classification sample, and that is where I am facing this issue.

My application source files are proprietary, so I cannot share them here.

infer_request.Infer(); //If I comment this line then memory is not increasing
// --------------------------- 8. Process output ------------------------------------------------------
Blob::Ptr output = infer_request.GetBlob(output_name);
auto output_data = output->buffer().as<PrecisionTrait<Precision::FP32>::value_type*>();

Do I have to release the output pointer, or free anything else, after Infer()?
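
For reference, this is roughly how the output handle is held (a sketch, not the actual proprietary code). As far as I understand, Blob::Ptr is a std::shared_ptr, so scoping it like this should drop my reference every iteration:

// Sketch only (not the actual application code): scope the output handle so the
// shared_ptr reference is dropped at the end of every iteration. The underlying
// blob memory stays owned by the InferRequest.
{
    Blob::Ptr output = infer_request.GetBlob(output_name);
    const auto *output_data =
        output->buffer().as<PrecisionTrait<Precision::FP32>::value_type *>();
    // ... read results from output_data ...
}   // 'output' goes out of scope here; nothing is freed manually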

Regards

Ashish

Monique_J_Intel
Employee

Hi Ashish,

It is best practice to set the pointers for instances of the following classes to NULL after loading the model to the plugin: CNNNetReader, CNNNetwork, InputsDataMap, and OutputsDataMap.
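
A minimal sketch of that pattern, assuming the 2018 R3 C++ API and that input_model and plugin are set up as in the Hello Classification sample, is to let the loading objects go out of scope once the network has been compiled:

// Sketch, assuming the 2018 R3 Inference Engine C++ API; input_model and plugin
// are assumed to be set up as in the Hello Classification sample. The idea is to
// drop the graph-loading objects once the ExecutableNetwork exists.
std::shared_ptr<ExecutableNetwork> executable_network;
{
    CNNNetReader network_reader;
    network_reader.ReadNetwork(input_model);
    network_reader.ReadWeights(input_model.substr(0, input_model.size() - 4) + ".bin");
    CNNNetwork network = network_reader.getNetwork();
    network.setBatchSize(1);
    executable_network =
        std::make_shared<ExecutableNetwork>(plugin.LoadNetwork(network, {}));
}   // network_reader and network are destroyed here, i.e. effectively "set to NULL"
InferRequest infer_request = executable_network->CreateInferRequest();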

Kind Regards,

Monique Jones

 

 

verma__Ashish
Beginner

Hi Jones,

I have set them all up like that; instead of using InputsDataMap and OutputsDataMap, I am using InputInfo::Ptr and DataPtr.

For example:

CNNNetReader network_reader;
CNNNetwork network = network_reader.getNetwork();
InferenceEngine::InputInfo::Ptr input_info;
DataPtr output_info = network.getOutputsInfo().begin()->second;

Can you correct me if I am wrong somewhere?

Regards

Ashish

Monique_J_Intel
Employee

Hi Ashish,

I corrected my comment above; I meant to say set the instances of those classes to NULL. The backend source code handles the full release of the memory that the pointers point to when you do this.

Kind Regards,

Monique Jones

verma__Ashish
Beginner

Hi Jones,

I have initialized all of these instances in my constructor, as shown in the samples. Since I initialize them directly, I think setting them to NULL is not required. Kindly correct me if I am wrong.

If the backend source code handles the full release of memory, why am I seeing the memory increase when I call infer_request.Infer(), while there is no increase in memory usage if I comment out that line? I am not able to figure that out. Any help will be appreciated.

Regards

Ashish

Shubha_R_Intel
Employee

Hi Ashish. I have reproduced your issue on Windows 10/Visual Studio 2017 using the classification sample. I am curious, which sample did you use? Our internal OpenVINO engineering team is currently investigating the issue.

verma__Ashish
Beginner

Hi Shubha,

Just to reproduce this issue, I ran the hello_classification sample code in an infinite while loop, and after around 20 hours it started using swap memory, which keeps increasing continuously. I am running it on Ubuntu 16.04 on CPU, using the person-reidentification-retail-0031/FP32/person-reidentification-retail-0031.xml model from the Intel models shipped in the computer_vision_sdk_2018.3.343 directory. I ran it on a single-person .jpg file. Here is the command I used:

./hello_classification /opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/intel_models/person-reidentification-retail-0031/FP32/person-reidentification-retail-0031.xml  test/00.jpg

Here is the code:

#include <iomanip>
#include <vector>
#include <memory>
#include <string>
#include <cstdlib>

#include <opencv2/opencv.hpp>
#include <inference_engine.hpp>

using namespace InferenceEngine;

int main(int argc, char *argv[]) {
    try {
        // ------------------------------ Parsing and validation of input args ---------------------------------
        if (argc != 3) {
            std::cout << "Usage : ./hello_classification <path_to_model> <path_to_image>" << std::endl;
            return EXIT_FAILURE;
        }

        const std::string input_model{argv[1]};
        const std::string input_image_path{argv[2]};
        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 1. Load Plugin for inference engine -------------------------------------
        PluginDispatcher dispatcher({"/opt/intel/computer_vision_sdk_2018.3.343/inference_engine/lib/ubuntu_16.04/intel64", ""});
        InferencePlugin plugin(dispatcher.getSuitablePlugin(TargetDevice::eCPU));
        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
        CNNNetReader network_reader;
        network_reader.ReadNetwork(input_model);
        network_reader.ReadWeights(input_model.substr(0, input_model.size() - 4) + ".bin");
        network_reader.getNetwork().setBatchSize(1);
        CNNNetwork network = network_reader.getNetwork();
        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 3. Configure input & output ---------------------------------------------
        // --------------------------- Prepare input blobs -----------------------------------------------------
        InputInfo::Ptr input_info = network.getInputsInfo().begin()->second;
        std::string input_name = network.getInputsInfo().begin()->first;

        input_info->setLayout(Layout::NCHW);
        input_info->setPrecision(Precision::U8);

        // --------------------------- Prepare output blobs ----------------------------------------------------
        DataPtr output_info = network.getOutputsInfo().begin()->second;
        std::string output_name = network.getOutputsInfo().begin()->first;

        output_info->setPrecision(Precision::FP32);
        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 4. Loading model to the plugin ------------------------------------------
        ExecutableNetwork executable_network = plugin.LoadNetwork(network, {});
        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 5. Create infer request -------------------------------------------------
        InferRequest infer_request = executable_network.CreateInferRequest();
        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 6. Prepare input --------------------------------------------------------
    while(1)
    {
        cv::Mat image = cv::imread(input_image_path);

        /* Resize manually and copy data from the image to the input blob */
        Blob::Ptr input = infer_request.GetBlob(input_name);
        auto input_data = input->buffer().as<PrecisionTrait<Precision::U8>::value_type *>();

        cv::resize(image, image, cv::Size(input_info->getTensorDesc().getDims()[3], input_info->getTensorDesc().getDims()[2]));

        size_t channels_number = input->getTensorDesc().getDims()[1];
        size_t image_size = input->getTensorDesc().getDims()[3] * input->getTensorDesc().getDims()[2];

        for (size_t pid = 0; pid < image_size; ++pid) {
            for (size_t ch = 0; ch < channels_number; ++ch) {
                input_data[ch * image_size + pid] = image.at<cv::Vec3b>(pid)[ch];
            }
        }
        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 7. Do inference --------------------------------------------------------
        /* Running the request synchronously */
        infer_request.Infer();
        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 8. Process output ------------------------------------------------------
        Blob::Ptr output = infer_request.GetBlob(output_name);
        auto output_data = output->buffer().as<PrecisionTrait<Precision::FP32>::value_type*>();

        std::vector<unsigned> results;
        /*  This is to sort output probabilities and put it to results vector */
        TopResults(10, *output, results);

        std::cout << std::endl << "Top 10 results:" << std::endl << std::endl;
        for (size_t id = 0; id < 10; ++id) {
            std::cout.precision(7);
            auto result = output_data[results[id]];
            std::cout << std::left << std::fixed << result << " label #" << results[id] << std::endl;
        }
    }
        // -----------------------------------------------------------------------------------------------------
    } catch (const std::exception & ex) {
        std::cerr << ex.what() << std::endl;
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
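
To put numbers on the growth instead of watching top by hand, a small helper along these lines could be dropped into the loop (a sketch only, not part of the original sample; it assumes Linux and parses VmRSS from /proc/self/status):

#include <fstream>
#include <string>

// Sketch (Linux-only, not part of the original sample): returns the process's
// resident set size in kB by parsing the "VmRSS:" line of /proc/self/status,
// or 0 if the value cannot be read.
static long current_rss_kb() {
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line)) {
        if (line.compare(0, 6, "VmRSS:") == 0) {
            return std::stol(line.substr(6));  // the value is reported in kB
        }
    }
    return 0;
}

// Example use inside the while(1) loop, right after infer_request.Infer():
//     std::cout << "RSS: " << current_rss_kb() << " kB" << std::endl;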

I think this will help you reproduce it on your side.

Regards

Ashish

Monique_J_Intel
Employee

Hi Ashish,

We have identified the memory leak issue as a bug on our side, and it will be fixed in the next release, which will be out in a few weeks. Please stay tuned for the announcement of the next release, re-test your application with it, and let us know if there are any more issues.

Kind Regards,

Monique Jones

mengnan__lmn
Beginner

Hi, I'm using OpenVINO R5 with clDNN Drop 13.1, and the memory usage of object_detection_demo_ssd_async is constantly increasing on GPU.

Huang__Luke
Beginner

Hi, I'm using OpenVINO R5 too, running the classification sample on Ubuntu 16.04. I use the "new" and "delete" operators before doing inference and use top to monitor the memory, but the memory is not freed after the delete. Here is my code.


        // -----------------------------------------------------------------------------------------------------
        getchar();        
        std::cout << "new" << std::endl;        
        double *test = new double[1000000];
        for (int i = 0; i < 1000000; i++)   // touch every element so all the pages are committed
            test[i] = i;
        getchar();
        std::cout << "delete" << std::endl;
        delete[] test;
        getchar();
        // --------------------------- 7. Do inference ---------------------------------------------------------
        slog::info << "Starting inference (" << FLAGS_ni << " iterations)" << slog::endl;

        typedef std::chrono::high_resolution_clock Time;
        typedef std::chrono::duration<double, std::ratio<1, 1000>> ms;
        typedef std::chrono::duration<float> fsec;
        double total = 0.0;
        /** Start inference & calc performance **/
        for (int iter = 0; iter < FLAGS_ni; ++iter) {
            auto t0 = Time::now();
            infer_request.Infer();
            auto t1 = Time::now();
            fsec fs = t1 - t0;
            ms d = std::chrono::duration_cast<ms>(fs);
            total += d.count();
        }
        // -----------------------------------------------------------------------------------------------------

        

Mounagurusamy__Guruv

Hi Folks,

Can you please let me know if the memory leak issue has been fixed?

Thanks

Guru

verma__Ashish
Beginner

I have checked with the latest OpenVINO release, 2019 R2, and it is fixed.

Regards

Ashish

Shubha_R_Intel
Employee

Dear Ashish,

This is great. Thanks for reporting back to the OpenVINO community!

Shubha
