Hi.
I'm definitely going to run into this problem in the near future.
I want to know whether the drivers for the NCS2 (and other targets, like OpenCL or OpenCL_FP16) are thread/process safe when doing inference. Is there some sort of thread/process-safe queue in the drivers that lets me simply write data to perform inference (on one device, without ever changing the model loaded on the device after the first inference)?
Or do I have to use some sort of semaphores/mutexes (I'm coding in Python currently, maybe later in C++) in my application?
I'm asking with the case in mind where I'll need multiprocessing or multithreading to speed up frame processing when using the Inference Engine backend.
Thank you in advance.
Dear Sapiain, Roberto,
I hope the Integrate the Inference Engine documentation answers your questions. InferRequest, for example, is completely thread-safe, and in general the Inference Engine calls you'd want to make are thread-safe. What does thread-safe mean? It means you can use the core Inference Engine API from different threads without mutexes, semaphores, or other synchronization primitives.
Please see this excerpt from the doc:
Both request types are thread-safe: they can be called from different threads without fear of corruption or failure.
Multiple requests for a single ExecutableNetwork are executed sequentially, one by one, in FIFO order.
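To make that guarantee concrete, here is a minimal sketch using the openvino.inference_engine Python API. The IR paths, the input shape, and the number of requests are placeholders, and "MYRIAD" is the device name for the NCS2; older releases expose net.inputs instead of net.input_info. Each thread owns one InferRequest from a shared ExecutableNetwork, so no application-level locking is needed:

from threading import Thread
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
# Placeholder IR paths for your model
net = ie.read_network(model="model.xml", weights="model.bin")
input_name = next(iter(net.input_info))

# One ExecutableNetwork, loaded once on the device and shared by all
# threads; num_requests pre-allocates one InferRequest per worker.
exec_net = ie.load_network(network=net, device_name="MYRIAD", num_requests=4)

def worker(request_id, frame):
    # Each thread uses its own InferRequest; the Inference Engine
    # serializes access to the device internally (FIFO), so no
    # mutex is needed in the application.
    request = exec_net.requests[request_id]
    request.infer({input_name: frame})
    return request.output_blobs  # per-request output buffers

# Placeholder input shape; use your network's actual input dimensions
frames = [np.zeros((1, 3, 224, 224), dtype=np.float32) for _ in range(4)]
threads = [Thread(target=worker, args=(i, f)) for i, f in enumerate(frames)]
for t in threads:
    t.start()
for t in threads:
    t.join()

For higher throughput you could replace the blocking infer() call with the asynchronous API (exec_net.start_async(request_id=i, inputs=...) followed by request.wait()), which lets a thread queue work on the device without blocking.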
Hope it helps,
Shubha
Hi Shubha.
Thank you very much for the reply.
Kind regards.