Greetings,
We are not sure that manually attaching each model to a separate thread is the correct usage; the Inference Engine should handle this automatically.
Please clarify the following details so we can test this on our side:
1) Operating system
2) Your custom application code for replication
3) Execution command
Sincerely,
Iffa
Hi @Iffa_Intel,
Thanks for the reply. It's surprising to hear that running models in separate threads is discouraged; my pipeline requires threading. I'm using Ubuntu 18.04. When my custom SDK runs on an Intel CPU I cannot reproduce the issue, whereas deploying it on the NCS2 does produce it. To reproduce the issue I have modified the Interactive Face Detection Demo application code (an Intel OpenVINO sample demo). I pass a zero matrix (cv::Mat) to every model and print the number of times each model has run. Initially all the model loops printed their debug output, but after some time only one model kept printing. I'm also using FP16 model files for all models. Please find the modified file attached and refer to the build steps below.
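In essence, the modification is along these lines (a simplified sketch only; model paths, input handling, and names below are placeholders, and the attached main.cpp is the authoritative version):

// Simplified sketch: one std::thread per model, each looping a synchronous
// Infer() on a zero-filled input and printing how many iterations completed.
#include <inference_engine.hpp>

#include <cstring>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

void runModelLoop(const std::string& modelXml, const std::string& device, const std::string& tag) {
    InferenceEngine::Core ie;
    InferenceEngine::CNNNetwork network = ie.ReadNetwork(modelXml);
    InferenceEngine::ExecutableNetwork executableNetwork = ie.LoadNetwork(network, device);
    InferenceEngine::InferRequest request = executableNetwork.CreateInferRequest();
    const std::string inputName = network.getInputsInfo().begin()->first;

    for (size_t iteration = 1;; ++iteration) {
        // Stand-in for the dummy zero cv::Mat: zero-fill the input blob directly.
        auto blob = InferenceEngine::as<InferenceEngine::MemoryBlob>(request.GetBlob(inputName));
        auto mapped = blob->wmap();
        std::memset(mapped.as<unsigned char*>(), 0, blob->byteSize());

        request.Infer();  // the call that eventually hangs on MYRIAD
        std::cout << tag << " ran " << iteration << " times" << std::endl;
    }
}

int main() {
    std::vector<std::thread> workers;
    workers.emplace_back(runModelLoop, "face-detection-0200.xml", "MYRIAD", "face");
    workers.emplace_back(runModelLoop, "head-pose-estimation-adas-0001.xml", "MYRIAD", "head-pose");
    workers.emplace_back(runModelLoop, "facial-landmarks-35-adas-0002.xml", "MYRIAD", "landmarks");
    workers.emplace_back(runModelLoop, "age-gender-recognition-retail-0013.xml", "MYRIAD", "age-gender");
    for (auto& worker : workers) worker.join();  // loops never return; runs until it freezes
    return 0;
}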
Steps to build the application code -
1. git clone https://github.com/openvinotoolkit/open_model_zoo.git
2. git checkout 2021.2
3. cd into open_model_zoo/demos/interactive_face_detection_demo/
4. Replace main.cpp with the attached main.cpp
5. mkdir build && cd build
6. cmake -DCMAKE_BUILD_TYPE=Release ../
7. make interactive_face_detection_demo
Execution command-
./interactive_face_detection_demo -m ./face-detection-0200.xml -m_lm ./facial-landmarks-35-adas-0002.xml -m_hp ./head-pose-estimation-adas-0001.xml -m_ag ./age-gender-recognition-retail-0013.xml -d MYRIAD -d_hp MYRIAD -d_lm MYRIAD -d_ag MYRIAD -i ./test.mp4
Note - the input video file is not used inside the modified application, so you can pass any video; a dummy zero image matrix is created for each model.
@Iffa_Intel Sorry, please check out 2021.1 instead of 2021.2.
Hi,
We have validated that the original Interactive Face Detection Demo works well with those models. The problem must lie within your modified code; please cross-check it against the original main file.
Sincerely,
Iffa
Hi,
As I mentioned, the issue does not occur when using the original interactive face detection demo application. I have only added basic threading functionality to main.cpp. Kindly check my modified main.cpp; it is just four models running in a loop on a dummy image, each in a separate thread. It randomly gets stuck while calling Infer() on a model. The custom application runs properly on CPU; only when deployed on the NCS2 do I get this freeze, and it occurs randomly, sometimes within minutes and sometimes only after hours. If something were wrong with my code, it would not have run properly on CPU either. Do I have to follow a specific threading method when deploying on the NCS2? I am not able to debug any further than the Infer() call.
We managed to compile your code successfully with the 2021.1 version of OpenVINO and run it on CPU only without any issue; the application also runs without freezing.
From what we've seen, you are using MYRIAD and performing multithreading on the VPU.
The GCC we have on Linux cannot support VPU threading; so far GCC supports CPU threading only.
So our advice is to avoid multithreading on the VPU, as OpenVINO has the capability to assign the threading automatically.
Otherwise, change the devices to CPU if threading is required (-d CPU -d_hp CPU -d_lm CPU -d_ag CPU).
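For example, the execution command from the earlier post would then become:
./interactive_face_detection_demo -m ./face-detection-0200.xml -m_lm ./facial-landmarks-35-adas-0002.xml -m_hp ./head-pose-estimation-adas-0001.xml -m_ag ./age-gender-recognition-retail-0013.xml -d CPU -d_hp CPU -d_lm CPU -d_ag CPU -i ./test.mp4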
https://docs.openvinotoolkit.org/latest/ie_plugin_api/group__ie__dev__api__threading.html
Besides, you can also use the Inference Engine Async API. The Object Detection SSD Demo would be a good reference, and this is the Optimization Guide: https://docs.openvinotoolkit.org/latest/openvino_docs_optimization_guide_dldt_optimization_guide.html
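A minimal sketch of that Async API pattern (the model path and device below are placeholders; see the linked demo for a complete implementation):

// Several in-flight infer requests on one executable network keep the device
// busy without application-side threads; the plugin schedules them.
#include <inference_engine.hpp>

#include <iostream>
#include <vector>

int main() {
    InferenceEngine::Core ie;
    InferenceEngine::CNNNetwork network = ie.ReadNetwork("face-detection-0200.xml");
    InferenceEngine::ExecutableNetwork executableNetwork = ie.LoadNetwork(network, "MYRIAD");

    std::vector<InferenceEngine::InferRequest> requests;
    for (int i = 0; i < 4; ++i) {
        requests.push_back(executableNetwork.CreateInferRequest());
    }

    for (auto& request : requests) {
        request.StartAsync();  // returns immediately
    }
    for (auto& request : requests) {
        request.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);  // block until done
    }

    std::cout << "all requests completed" << std::endl;
    return 0;
}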
Sincerely,
Iffa
Greetings,
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Sincerely,
Iffa
