Intel® Distribution of OpenVINO™ Toolkit

Inference is stuck when using shared_ptr for ExecutableNetwork

Ohad_M_Intel
Employee

Hi.

I'm trying to optimize my code, and I found that loading the network takes a lot of time, so I want to do it only once, in the constructor.

So I have this code in the constructor:

OpenVinoInference::OpenVinoInference(const std::string &modelPath)
{
    // --------------------------- 1. Load Plugin for inference engine -------------------------------------
    mInferencePlugin = std::make_shared<InferencePlugin>(PluginDispatcher().getSuitablePlugin(TargetDevice::eCPU));

    // --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
    CNNNetReader network_reader;
    network_reader.ReadNetwork(modelPath + ".xml");
    network_reader.ReadWeights(modelPath + ".bin");
    network_reader.getNetwork().setBatchSize(1);
    mCNNNetwork = std::make_shared<CNNNetwork>(network_reader.getNetwork());
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- 3. Configure input & output ---------------------------------------------
    // --------------------------- Prepare input blobs -----------------------------------------------------
    InputInfo::Ptr input_info = mCNNNetwork->getInputsInfo().begin()->second;
    std::string input_name = mCNNNetwork->getInputsInfo().begin()->first;

    input_info->setLayout(Layout::NCHW);
    input_info->setPrecision(Precision::FP32);

    // --------------------------- Prepare output blobs ----------------------------------------------------
    DataPtr output_info = mCNNNetwork->getOutputsInfo().begin()->second;
    std::string output_name = mCNNNetwork->getOutputsInfo().begin()->first;

    output_info->setPrecision(Precision::FP32);
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- 4. Loading model to the plugin ------------------------------------------
    mExecutableNetwork = std::make_shared<ExecutableNetwork>(mInferencePlugin->LoadNetwork(*mCNNNetwork, {}));
    // -----------------------------------------------------------------------------------------------------

    // Set private members
    mInputName = input_name;
    mOutputName = output_name;
}
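
For context, the class members this constructor fills in are declared roughly like this (a sketch reconstructed only from the types used above):

class OpenVinoInference
{
public:
    explicit OpenVinoInference(const std::string &modelPath);
    std::shared_ptr<InferenceEngine::Blob> Infer(const cv::Mat &image);

private:
    std::shared_ptr<InferenceEngine::InferencePlugin>   mInferencePlugin;
    std::shared_ptr<InferenceEngine::CNNNetwork>        mCNNNetwork;
    std::shared_ptr<InferenceEngine::ExecutableNetwork> mExecutableNetwork;
    InferenceEngine::InferRequest::Ptr                  mInferRequest;
    std::string                                         mInputName;
    std::string                                         mOutputName;
};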

And I have this Infer function:

std::shared_ptr<InferenceEngine::Blob>
OpenVinoInference::Infer(const cv::Mat& image)
{
    auto t = TimeMeasurement();

    // --------------------------- 5. Create infer request -------------------------------------------------
    mInferRequest = mExecutableNetwork->CreateInferRequestPtr();
    // -----------------------------------------------------------------------------------------------------
        
    // --------------------------- 6. Prepare input --------------------------------------------------------
    Blob::Ptr input = mInferRequest->GetBlob(mInputName);
    auto input_data = input->buffer().as<PrecisionTrait<Precision::FP32>::value_type *>();

    int image_size = image.cols * image.rows;
    for (size_t pid = 0; pid < image_size; ++pid) {
        for (size_t ch = 0; ch < 1; ++ch) {
            input_data[ch * image_size + pid] = image.at<cv::Vec3b>(pid)[ch];
        }
    }
    // -----------------------------------------------------------------------------------------------------

    std::cout << "OpenVinoInference::Infer 6. "; t.Test();

    // --------------------------- 7. Do inference --------------------------------------------------------
    /* Running the request synchronously */
    mInferRequest->Infer();
    // -----------------------------------------------------------------------------------------------------

    std::cout << "OpenVinoInference::Infer 7. "; t.Test();

    // --------------------------- 8. Process output ------------------------------------------------------
    Blob::Ptr output = mInferRequest->GetBlob(mOutputName);
    // -----------------------------------------------------------------------------------------------------

    std::cout << "OpenVinoInference::Infer 8. "; t.Test();

    return output;
}
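
The intended usage is to construct the object once and then call Infer repeatedly, something like this (the model path here is just a placeholder):

OpenVinoInference inference("/path/to/model");  // pays the LoadNetwork cost once
Blob::Ptr output = inference.Infer(image);      // per call: create request, fill input, run, read output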

The problem I face is that when I create the ExecutableNetwork in the constructor, inference gets stuck: mInferRequest->Infer() never returns.
This doesn't happen when I create the ExecutableNetwork inside the Infer function instead, but then each inference takes far too long (see the sketch below).
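
For clarity, the slow-but-working variant simply moves step 4 into Infer, roughly like this:

std::shared_ptr<InferenceEngine::Blob>
OpenVinoInference::Infer(const cv::Mat& image)
{
    // Works, but every call pays the full LoadNetwork cost again
    mExecutableNetwork = std::make_shared<ExecutableNetwork>(mInferencePlugin->LoadNetwork(*mCNNNetwork, {}));
    mInferRequest = mExecutableNetwork->CreateInferRequestPtr();
    // ... steps 6-8 exactly as above ...
}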

 

Any advice?

nikos1
Valued Contributor I

What happens if you add the following?

if (InferenceEngine::OK == mInferRequest->Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY))
{
    // --------------------------- 8. Process output ------------------------------------------------------
    Blob::Ptr output = mInferRequest->GetBlob(mOutputName);
}
    
 

 

Ohad_M_Intel
Employee

Thanks for the reply.

I tried it as suggested:
 

// --------------------------- 8. Process output ------------------------------------------------------
    if (InferenceEngine::OK == mInferRequest->Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY))
    {
        Blob::Ptr output = mInferRequest->GetBlob(mOutputName);

        std::cout << "OpenVinoInference::Infer 8. "; t.Test();

        return output;
    }

With no luck, unfortunately.

I also tried using async:
 

mInferRequest->StartAsync();
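
(For reference, the usual async pattern in this API pairs StartAsync with the same Wait call as above, roughly:)

mInferRequest->StartAsync();
if (InferenceEngine::OK == mInferRequest->Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY))
{
    Blob::Ptr output = mInferRequest->GetBlob(mOutputName);
}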

 

nikos1
Valued Contributor I

Interesting issue! I'm wondering if it is related to some other issues we have seen on this forum around Inference Engine resource allocation. Could it be related to https://software.intel.com/en-us/forums/computer-vision/topic/804912?

In the meantime, are you able to reproduce this on the GPU path too, or just on CPU? That could help narrow it down to MKLDNN vs. clDNN.
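
Switching the plugin selection in step 1 should be enough to try the GPU path, roughly like this (assuming the clDNN plugin is installed on your system):

mInferencePlugin = std::make_shared<InferencePlugin>(PluginDispatcher().getSuitablePlugin(TargetDevice::eGPU));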

Cheers,

Nikos

 

Ohad_M_Intel
Employee

Please ignore this issue for now, as there is a good chance it was happening because of flaws in my app, probably multi-threading issues.
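
In case it helps anyone hitting the same thing: one pattern that can produce exactly this kind of hang is calling Infer on the same instance from several threads at once. A quick way to rule that out (just a sketch, not a confirmed fix, using a hypothetical mInferMutex member) is to serialize the call:

// requires #include <mutex> and a std::mutex mInferMutex member in OpenVinoInference
std::shared_ptr<InferenceEngine::Blob>
OpenVinoInference::Infer(const cv::Mat& image)
{
    std::lock_guard<std::mutex> lock(mInferMutex);  // only one thread runs inference at a time
    // ... steps 5-8 as before ...
}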

_riki_
Beginner

Ohad M. (Intel) wrote:

Please ignore this issue for now, as there is a good chance it was happening because of flaws in my app, probably multi-threading issues.

Ohad, how did you solve the multi-threading issue? I think I'm stuck in the same situation you were in.

Catastrophe
Novice

Hi! I'm wondering if you were able to solve this issue. I would also like to load the network only once.
