Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Forked process doesn't finish after calling ov::InferRequest::infer()

Hula
Beginner

Hello,

I have a simple C++ program where I run inference with some models in IR format. The models were converted successfully from the pretrained Resnet18 and Resnet100 models downloaded via the link in the InsightFace GitHub repository here:

https://1drv.ms/u/s!AswpsDO2toNKq0lWY69vN58GR6mw?e=p9Ov5d

 

In my C++ program, I create a forked process. The forked process runs the model inference while the main process waits for it to finish. However, the forked process never finishes. It just goes to sleep.

I tested this program with both Resnet18 and Resnet100, on Ubuntu 18.04 and Ubuntu 20.04, with the OpenVINO 2024 and 2023 binaries as well as binaries I compiled from the latest OpenVINO GitHub source code. In all cases the forked process doesn't end, which makes the main process wait forever. Here is my program:

#include <memory>
#include <string>
#include <iostream>
#include <openvino/openvino.hpp>


// For fork and friends
#include <iterator>
#include <sys/wait.h>
#include <unistd.h>
#include <csignal>

using namespace std;

int main() {    
    ov::Core core;
    string model_path = "path-to-converted-openvino-model/iresnet18.xml";
    shared_ptr<ov::Model> model = core.read_model(model_path);
    ov::CompiledModel compiled_model = core.compile_model(model, "CPU");
    ov::Shape input_shape = {1,3, 112, 112};
    auto input_port = compiled_model.input();
    ov::Tensor input_tensor = ov::Tensor(input_port.get_element_type(), input_shape);
    // Fill in data
    float* input_data = input_tensor.data<float>();
    size_t input_size = input_tensor.get_size();
    for (size_t i = 0; i < input_size; i++){
        input_data[i] = 0.5;
    }

    ov::InferRequest infer_request = compiled_model.create_infer_request();    
    bool parent = false;
    // Let's try fork
    switch(fork()) {
        case 0: // Child
        {
            cout << "Inside child process " << endl;            
            infer_request.set_input_tensor(input_tensor);
            infer_request.infer();
            const ov::Tensor& output_tensor = infer_request.get_output_tensor();

            // Print out output tensor
            float* output_data = output_tensor.data<float>();
            for (size_t i = 0; i < output_tensor.get_size(); i++) {
                cout  << output_data[i] << " ";
            }
            cout << endl;            
            return 0;
        }
        case -1: // Error
            cerr << "Problem forking" << endl;
            break;
        default: // Parent
            parent = true;
            break;
    }

    if(parent) {
        cout << "Wait for the child process to finish" << endl;
        int stat_val;
        pid_t cpid;

        cout << "Before wait ................................... " << endl;
        cpid = wait(&stat_val);
        cout << "After wait ......................................... " << endl;
    }    
    return 0;
}

 

 

What should I do to make the forked process finish?

Thanks a lot in advance.

Best regards,

Dan

3 Replies
Iffa_Intel
Moderator

Hi,


Is there any particular reason for you to use a forked process instead of running the inference code directly?

Are you trying to run some other programs simultaneously with the inference code?


Cordially,

Iffa


Hula
Beginner

Hi,

 

Thanks a lot for your reply. Yes, I have to use fork because the real program does other tasks too. Anyway, I figured out the reason: the compiled model most likely runs multiple worker threads, and since fork() duplicates only the calling thread, the child appears to wait forever on thread state that was never copied into it.

 

Best regards,

Dan

Iffa_Intel
Moderator

Hi,


Intel will no longer monitor this thread since this issue has been resolved. If you need any additional information from Intel, please submit a new question. 



Cordially,

Iffa

