
[Question] Possible approach for Multi-thread, Multi-model and Multi-stick


I am working with multiple threads (5), multiple models (5), and multiple sticks (3).

In my case, each model takes independent input from a different camera and produces independent results (multi-task), so each model should run in its own thread.

However, it unfortunately seems that a thread cannot access a stick already taken by another thread, and I have only 3 sticks.

How can I implement a pipeline under these restrictive conditions?

Please let me know if you have any suggestions.

 

Thanks.

Sangyun Lee.

Beginner

Hello, I have a similar problem: I want to use one NCS2 with four independent cameras, each producing independent output. Can the NCS2's performance meet this requirement?

Employee

Dear Lee, Sangyun and Monica, Zhao,

It is definitely possible to access an NCS stick taken by another thread, so I'm not sure why you're having these difficulties. For instance, an NCS2 has 16 SHAVEs (logical cores), and you can run a model per SHAVE, each in its own thread, per NCS2.
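As a toy sketch of the threading structure I mean (pure Python; FakeStick and its lock are stand-ins of my own, not OpenVINO API), several threads can each own a model while sharing one stick, and the device simply serializes their requests:

```python
import threading

class FakeStick:
    """Stand-in for one NCS device; the lock serializes inferences on it."""
    def __init__(self):
        self._lock = threading.Lock()
        self.completed = []

    def infer(self, model, frame):
        with self._lock:  # only one inference runs on the device at a time
            self.completed.append((model, frame))

stick = FakeStick()

def worker(model):
    # Each thread owns one model but shares the same stick.
    for frame in range(3):
        stick.infer(model, frame)

threads = [threading.Thread(target=worker, args=(m,)) for m in ("A", "B", "C")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All nine inferences complete; each thread's frames stay in order, and no thread needs exclusive ownership of the stick.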

Thanks for using OpenVINO,

Shubha


Dear Shubha,

 

Thanks for your comment.

Actually, I successfully loaded multiple networks onto a single stick and called the Infer() function from each thread.

Here I have two questions:

1. When thread A calls the inference function while the stick is already running an inference for thread B, does the stick carry out the two requests together, or is thread A's request postponed until thread B's inference finishes?

2. Can I allocate models to specific SHAVEs (model A to SHAVEs 0-5, model B to SHAVEs 6-15)? If not, does the Inference Engine allocate them automatically?

 

Thanks and regards,

Sangyun Lee

Employee (Accepted Solution)

Dear Lee, Sangyun,

For question 1), you can look at the Async API. Please study this document:

http://docs.openvinotoolkit.org/latest/_docs_IE_DG_Integrate_with_customer_application_new_API.html

And the following code snippet:

infer_request->StartAsync();

infer_request->Wait(IInferRequest::WaitMode::RESULT_READY);

Also look at our samples (both Python and C++) which end with _async in the title.
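Those samples all rely on the same double-buffered request rotation: start inference on one request while you consume the other's previous result, then swap IDs each frame. Stripped of the OpenVINO calls (the start/wait tuples below only stand in for StartAsync and Wait), the rotation looks like this:

```python
# Double-buffered request rotation, as used in the *_async demos.
# While one request runs on the device, the host consumes the other's
# previous result, so device and host work overlap.
cur_id, next_id = 0, 1
trace = []
for frame in range(4):
    trace.append(("start", next_id, frame))  # stands in for StartAsync on next_id
    if frame > 0:
        trace.append(("wait", cur_id))       # stands in for Wait(RESULT_READY) on cur_id
    cur_id, next_id = next_id, cur_id        # swap roles for the next frame
```

The two request IDs simply alternate, which is why the demos keep a cur_request_id/next_request_id pair and swap them every frame.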

For question 2), you cannot allocate threads (models) to specific SHAVEs. You are correct: the Inference Engine allocates them automatically.

Hope this helps.

Thanks,

Shubha

 


Dear Shubha,

 

Thanks for your comment.

I studied the documents on async inference, and I have some further questions.

 

1. The mechanism of the getSuitablePlugin() function

I implemented the pseudocode below to load 5 models onto 3 sticks:

-------------------------------------------------------------------

<1+1+3 allocation> the code below works:

get plugin1 using .getSuitablePlugin()

load network1 to plugin1

get plugin2 using .getSuitablePlugin()

load network2 to plugin2

get plugin3 using .getSuitablePlugin()

load network3 to plugin3

load network4 to plugin3

load network5 to plugin3

-------------------------------------------------------------------

<2+2+1 allocation> the code below throws an error ("cannot find NCS"):

get plugin1 using .getSuitablePlugin()

load network1 to plugin1

load network2 to plugin1

get plugin2 using .getSuitablePlugin()

load network3 to plugin2

load network4 to plugin2

get plugin3 using .getSuitablePlugin()

load network5 to plugin3

-------------------------------------------------------------------

 

In both cases I called getSuitablePlugin() three times, yet one configuration throws the NCS error while the other works properly.

I tested various configurations, but I couldn't find any consistent pattern.

I thought getSuitablePlugin() returns a new stick that has no model loaded yet. Is that right?

 

How can I load specific models onto specific sticks? Please let me know if there is a safe way to do this.

 

Regards,

Sangyun Lee

Beginner
from NCS2.yolo.yolov2_only import NcsWorker
import threading
from openvino.inference_engine import IENetwork, IEPlugin
import cv2
import numpy as np


net1 = IENetwork(
        model='/home/model_data/yolov2/FP16_new/yolov2.xml',
        weights='/home/sdu/model_data/yolov2/FP16_new/yolov2.bin')

video_capture01 = "/home/sdu/视频/output02.avi"
video_capture02 = "/home/sdu/视频/output_<VideoCapture 0x7f79007195d0>.avi"
video_capture03 = "/home/sdu/output03.avi"
video_capture04 = "/home/output_<VideoCapture 0x7fc23d680f30>.avi"

plugin = IEPlugin(device="MYRIAD")
# plugin.set_config({"KEY_VPU_FORCE_RESET": "NO"})
# lock = threading.Lock()


class myThread(threading.Thread):
    def __init__(self, threadID, name, video_path, window_name, net):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.name = name
        self.video_path = video_path
        self.window_name = window_name
        self.net = net

    def run(self):
        # save_object=False: do not save detected objects by default
        # lock.acquire()
        video_capture = cv2.VideoCapture(self.video_path)

        detection = NcsWorker(self.net, plugin)

        # cv2.namedWindow(self.window_name, flags=cv2.WINDOW_FREERATIO)
        cur_request_id = 0
        next_request_id = 1
        # lock.release()
        while True:

            try:
                ret, frame = video_capture.read()
                if ret != True:
                    break
                boxs = detection.detect(frame, next_request_id, cur_request_id)
                cur_request_id, next_request_id = next_request_id, cur_request_id
                print('-----------------------box----------------')
                print(boxs)
                for box in boxs:
                    box_xmin = (box[0] - box[2] / 2.0)
                    box_xmax = (box[0] + box[2] / 2.0)
                    box_ymin = (box[1] - box[3] / 2.0)
                    box_ymax = (box[1] + box[3] / 2.0)

                    center_x = box[0]
                    center_y = box[1]
                    cv2.circle(frame, (int(center_x), int(center_y)), 3, (255, 0, 0), 2)

                    cv2.rectangle(frame, (int(box_xmin), int(box_ymin)), (int(box_xmax), int(box_ymax)), (255, 0, 0), 2)

                # cv2.imshow(self.window_name, frame)
                height, width = frame.shape[:2]
                resized_show = cv2.resize(frame, (int(width / 2), int(height / 2)), interpolation=cv2.INTER_CUBIC)
                cv2.imshow(self.window_name, resized_show)
                # cv2.waitKey(0)

                # Press Q to stop!
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break

            except Exception as e:
                pass
        cv2.destroyAllWindows()


# Create new threads
thread1 = myThread(1, 'Thread-1', video_capture01, '01', net1)
thread2 = myThread(2, 'Thread-2', video_capture02, '02', net1)
thread3 = myThread(3, 'Thread-3', video_capture03, '03', net1)
thread4 = myThread(4, 'Thread-4', video_capture04, '04', net1)
# Start the threads
thread1.start()
thread2.start()
thread3.start()
# thread4.start()
------------------------------------------------------------------------------------------------

When I executed the above code, there was an error:

[xcb] Unknown request in queue while dequeuing
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
python3: ../../src/xcb_io.c:179: dequeue_pending_request: Assertion '!xcb_xlib_unknown_req_in_deq' failed.


-----------------------------------------------------------------------------------

Is this because the NCS2 uses threads? How should I solve it? I look forward to your reply.

 

Employee

Dear Monica,

Where did you get this: from NCS2.yolo.yolov2_only import NcsWorker?

That is not part of the OpenVINO code.

Thanks,

Shubha

Beginner

Hello, this is my own YOLO parsing function; you can ignore it, as it has nothing to do with the problem I encountered. The problem occurs when I enable four threads and get the error above.

Employee

Dear Monica,

It's difficult to pinpoint exactly what's wrong by reading your code. Here is my suggestion:

Rebuild a debug version of the Inference Engine from the DLDT GitHub repository. Follow this README:

https://github.com/opencv/dldt/blob/2019/inference-engine/README.md

Also regenerate your IR using the dldt GitHub sources.

Then step through your code.

I would reduce it down to one thread first and see if that works, then add the second thread. Step through, debug, and figure out where the error is. With https://github.com/opencv/dldt/ you get access not only to the full Inference Engine source code but also to the VPU plugin source.
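One more hint: the xcb abort is the classic symptom of calling X11 GUI functions (here cv2.imshow) from multiple threads without XInitThreads. A common workaround is to keep inference in the worker threads and do all display from the main thread. A minimal sketch of that pattern, with the display call stubbed out (the queue layout and names are my own, not an OpenVINO requirement):

```python
import queue
import threading

frames = queue.Queue()   # worker threads produce, the main thread displays
SENTINEL = object()      # marks the end of one worker's stream

def capture_and_infer(window_name, n_frames=3):
    # Runs in a worker thread: read frames, run inference, hand results over.
    for i in range(n_frames):
        result = f"{window_name}-frame{i}"   # stands in for a detection result
        frames.put((window_name, result))
    frames.put((window_name, SENTINEL))

workers = [threading.Thread(target=capture_and_infer, args=(name,))
           for name in ("01", "02")]
for w in workers:
    w.start()

done = 0
shown = []
while done < len(workers):
    window_name, item = frames.get()
    if item is SENTINEL:
        done += 1
        continue
    shown.append((window_name, item))
    # In the real program, this is the ONLY thread touching the GUI:
    # cv2.imshow(window_name, item); cv2.waitKey(1)

for w in workers:
    w.join()
```

Each camera thread keeps its own frame order, but every imshow call happens in one thread, which avoids the xcb abort.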

I wish you luck, and thank you for using OpenVINO!

Shubha
