Intel® Distribution of OpenVINO™ Toolkit

Inference Engine in a custom DLL called from a MinGW project gives no result

dnl_anoj
Beginner

Hello.

We are developing a cross-platform application with Qt/MinGW and we want to use a custom Mask R-CNN model. The model was trained with the TensorFlow Object Detection API and then integrated into our application using OpenCV, and everything works fine. However, inference was a bit slow, so we decided to use the OpenVINO Inference Engine. Here again we had no particular issues integrating it on Mac and Linux, but since our Windows project is built with MinGW we had to find a workaround.

We decided to build a dynamic library with MSVC that uses the Inference Engine to perform segmentation on an image, and to link this library to our MinGW project through a C interface.
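For context, the exported interface boils down to something like this (a minimal sketch with illustrative names, not the actual attached header):

// Illustrative sketch of the C interface exported by the MSVC DLL.
// All names here are placeholders, not the real library.h.
#ifdef SEGMENTATOR_EXPORTS
#define SEG_API extern "C" __declspec(dllexport)
#else
#define SEG_API extern "C" __declspec(dllimport)
#endif

// Opaque handle so the MinGW side never sees MSVC C++ types.
typedef void* SegmentatorHandle;

// Creates the segmentator and loads the IR model; returns nullptr on failure.
SEG_API SegmentatorHandle seg_create(const char* model_xml_path);

// Runs segmentation on a BGR image buffer; returns 0 on success.
SEG_API int seg_infer(SegmentatorHandle handle,
                      const unsigned char* bgr_data, int width, int height);

// Releases the segmentator and the loaded network.
SEG_API void seg_destroy(SegmentatorHandle handle);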

To test the code we have an MSVC project that communicates with our application through QLocalSockets and sends back the results, and it works perfectly. However, if we instead call the dynamic library to perform the segmentation, we get no results, just as if it wasn't able to detect anything when it should. I checked the preprocessing steps several times, checked the dimensions of every object, and they are fine. The main difference between the DLL and the MSVC project at runtime is that loading the network to the target (CPU) takes around 15 seconds in the DLL and around 1 second in the MSVC project.

Hence I suspect that 'core.LoadNetwork(...)' is failing and all the values I get thereafter are just uninitialized.
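To check that suspicion, the next thing I will try is wrapping the loading calls in a try/catch so a failure cannot pass silently; roughly (a minimal sketch, assuming the standard InferenceEngine C++ API, with model_xml_path being whatever path we pass in):

#include <inference_engine.hpp>
#include <iostream>

try {
    InferenceEngine::Core core;
    // Read the IR and load it on the CPU; both calls throw on failure.
    InferenceEngine::CNNNetwork network = core.ReadNetwork(model_xml_path);
    InferenceEngine::ExecutableNetwork exec_network = core.LoadNetwork(network, "CPU");
    // If we get here, loading succeeded and the results should be valid.
} catch (const std::exception& ex) {
    std::cerr << "Inference Engine error: " << ex.what() << std::endl;
}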

Would anyone have an idea of why it is failing, some suggestions on how to debug it, or any other kind of help?
I'm attaching two files: library.h, which contains the base class/functions to be exported for MinGW, and Segmentator_OpenVINO.h, which contains the functions that perform inference for the Mask R-CNN (this works perfectly in a standalone project).

Kind regards,

Iffa_Intel
Moderator

Greetings,


First of all, please refer to this guide, the "Inference Engine Developer Guide": https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html

And especially the section "Integrate the Inference Engine with Your Application": https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_Integrate_with_customer_application_new_API.html

These pages describe the recommended methods for using the Inference Engine in a custom application.
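For reference, the integration flow those pages describe is roughly the following (a minimal sketch, assuming a single-input, single-output IR model and the synchronous API; blob filling and post-processing are omitted):

#include <inference_engine.hpp>

InferenceEngine::Core core;
InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

// Names of the first input and output of the network.
std::string input_name = network.getInputsInfo().begin()->first;
std::string output_name = network.getOutputsInfo().begin()->first;

// Load the network on the target device and create an inference request.
InferenceEngine::ExecutableNetwork exec_network = core.LoadNetwork(network, "CPU");
InferenceEngine::InferRequest infer_request = exec_network.CreateInferRequest();

// Fill the input blob owned by the request, then run inference.
InferenceEngine::Blob::Ptr input_blob = infer_request.GetBlob(input_name);
// ... copy the preprocessed image data into input_blob ...
infer_request.Infer();

// Read back the results.
InferenceEngine::Blob::Ptr output_blob = infer_request.GetBlob(output_name);
// ... post-process output_blob ...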

 

Also, if you would like to debug this use case further, I believe you could build the open-source version of OpenVINO in debug mode using a command like: cmake -DCMAKE_BUILD_TYPE=Debug -G "Visual Studio 16 2019" ..

Here is a Linux example for your reference on how to build OpenVINO from source: https://www.intel.com/content/www/us/en/support/articles/000057448/software/development-software.html

Or you could also try to debug this from the MSVC side using the Visual Studio debugger (https://docs.microsoft.com/en-us/visualstudio/debugger/?view=vs-2019) in order to understand where exactly the problem is.



Sincerely,

Iffa


dnl_anoj
Beginner

Hi,

Thank you for your reply. I managed to build my library using the Release DLLs of OpenVINO and I have successfully used it in Qt/MinGW. My problem was that the Debug library couldn't load the network on the CPU.

However, I have another issue that seems quite weird, so I think I should report it. My application crashes when running inference on the created InferRequest; this happens about once every 10 times after the graph is loaded on my CPU.

What happens is:

    - Create the Core application

    - Read the network via the xml file, set input precisions

    - Load the Network on the CPU

    - Create InferRequest

    - Set Input Blobs

    - Finally the call to infer_request_.Infer() crashes after a few seconds (same behaviour in asynchronous mode) 

I have an AMD Ryzen 5 processor, so I know that I am not really fulfilling the requirements, but it usually works really well. I think the problem might come from loading the network to the CPU, which for some reason gets corrupted, but as I can't build OpenVINO in Debug I can't really tell.

I haven't found much in the documentation about that, so I guess my question is:

Is there a way to determine whether the ExecutableNetwork is corrupted or, as the documentation puts it, "when simultaneously loaded networks not expected"? I'd like to call InferenceEngine::ExecutableNetwork::reset in those cases.

I am using OpenVINO 2020.3.341.

Thank you for your time,

Kind regards, 

Iffa_Intel
Moderator

Greetings,


Unfortunately, OpenVINO does not support the AMD Ryzen 5 processor.

You may refer here for the supported hardware and their performance: https://docs.openvinotoolkit.org/latest/openvino_docs_performance_benchmarks.html

The method that we provided previously has been validated to work. Is there any chance for you to try out the latest version of OpenVINO, or the latest git master branch? However, back to my first point, the AMD Ryzen 5 processor is not supported.



Sincerely,

Iffa


dnl_anoj
Beginner

Greetings,

As I said, OpenVINO works great on my AMD Ryzen 5 processor; only this crash was bothering me, but I updated from version 2020.3.341 to version 2021.2.185 and it doesn't happen anymore!

I just get an exception "[PARAMETER_MISMATCH] Failed to set input blob. Blocking descriptor mismatch." during the SetBlob call, but it doesn't affect the results so I'm ignoring it.
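If it ever does become a problem, my plan would be to stop passing my own blob to SetBlob and instead fill the blob the request already owns, so that its descriptor is guaranteed to match (a sketch, assuming an FP32 input whose name is stored in input_name):

// Fill the request's own input blob instead of calling SetBlob with a
// separately created blob whose TensorDesc may not match.
InferenceEngine::Blob::Ptr input_blob = infer_request_.GetBlob(input_name);
auto minput = InferenceEngine::as<InferenceEngine::MemoryBlob>(input_blob);
auto locked = minput->wmap();              // writable mapping of the blob memory
float* blob_data = locked.as<float*>();
// ... copy the preprocessed image into blob_data ...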

Thank you very much for your time. I've marked your post as the solution, as updating OpenVINO solved my problem.

Kind regards.

 

Iffa_Intel
Moderator

Glad to know that helped!


Intel will no longer monitor this thread since this issue has been resolved. If you need any additional information from Intel, please submit a new question.


Sincerely,

Iffa

