Intel® DevCloud
Help for those needing help starting or connecting to the Intel® DevCloud

HeadDetector SDK uses only 1 core while the Intel example uses all physical cores

Demiray__Baris

Hello,

We have been testing the HeadDetector example on https://devcloud.intel.com/edge/advanced/licensed_applications/ and are happy with the results, so we went ahead and did a quick integration of it into our SDK, following the example code, before making the final purchase.

And yet when we run an inference in our own application, it takes around 750 ms while the Intel example takes only 60 ms. We noticed that the Intel example uses all 6 cores (on a machine with 6 physical and 12 logical cores), but our integration uses only 1 core on the same machine, and when we launch around 100 inferences it even jumps between cores. We have tried:

vas::hd::HeadDetector::Builder hd_builder;
hd_builder.ie_config["CPU_BIND_THREAD"] = "YES"; // pin inference threads to cores
hd_builder.ie_config["CPU_THREADS_NUM"] = "6";   // one thread per physical core
and we have also tried setting (KEY_)CPU_BIND_THREAD on the command line before running the app,
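As an aside, the "6" passed to CPU_THREADS_NUM can be derived at runtime instead of being hard-coded. A minimal sketch in plain C++, independent of the HeadDetector SDK; the halving heuristic assumes hyper-threading is enabled and is not a guarantee:

```cpp
#include <thread>

// Heuristic: with hyper-threading enabled, the number of physical cores is
// half the logical count reported by the standard library. This is an
// assumption, not a guarantee; lscpu or /proc/cpuinfo give the exact topology.
unsigned physical_from_logical(unsigned logical) {
    // std::thread::hardware_concurrency() may return 0 when the value is
    // not computable, so fall back to a single thread in that case.
    return logical > 1 ? logical / 2 : 1;
}

// Usage: the result is what would be handed to ie_config["CPU_THREADS_NUM"]:
//   unsigned n = physical_from_logical(std::thread::hardware_concurrency());
```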
 
and yet the result is the same. We are using the CPU backend, by the way. We have compared almost everything between the two applications: the CMake configuration, the libraries linked and loaded at runtime, and the environment variables at runtime.
 
What could we be missing, please?
 
Thanks a lot.
3 Replies
Demiray__Baris

Hello, an update on this problem.

When we moved the model initialization (basically, running vas::hd::HeadDetector::Build()) into the same method that runs the inference on the vas::hd::HeadDetector returned by that call, inference times went down from around 750 ms to 6 ms.

So we have come to the conclusion that the unique_ptr returned by Build() (which we keep as a class member) is actually being reset somewhere between our initialization method and the inference method, and that the excessive time is caused by the model being loaded on every inference (we measured the model load times, and the numbers match).

I tried the code below, yet it is still not working (the m_ prefix denotes class members):

// Inside our Init()
// Build() returns a std::unique_ptr by value, so std::move() is redundant but harmless
m_hd = std::move(m_hd_builder.Build(model.c_str()));

// Then inside our Inference(); m_hd is a unique_ptr, so access goes through ->
m_hd->Detect(...)

I even tried getting the internal vas::hd::HeadDetector pointer via get() and release(), yet still no luck.
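One way to narrow this down is to check the pointer for null at the top of the inference method, so a reset detector fails loudly instead of silently reloading the model. A minimal sketch with a stand-in Detector class (the real vas::hd::HeadDetector is part of the licensed SDK, so everything here is mocked):

```cpp
#include <iostream>
#include <memory>

// Stand-in for vas::hd::HeadDetector; the real class is in the licensed SDK.
struct Detector {
    int Detect() { return 42; }  // dummy result
};

struct Pipeline {
    std::unique_ptr<Detector> m_hd;

    void Init() {
        // Mirrors m_hd = m_hd_builder.Build(model.c_str());
        m_hd = std::make_unique<Detector>();
    }

    int Inference() {
        // Guard: if m_hd was reset between Init() and here, fail loudly
        // instead of re-building (or crashing on a null dereference).
        if (!m_hd) {
            std::cerr << "m_hd is null: detector was reset after Init()\n";
            return -1;
        }
        return m_hd->Detect();
    }
};
```

If the guard never fires but the slowdown persists, the unique_ptr is intact and the extra time is coming from somewhere else, e.g. the builder or its configuration.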

Any tips or ideas, please? Why would the model be disappearing between these two calls? 

Sahira_Intel
Moderator

Hi Demiray__Baris,

Thank you for providing the update. I am looking into this issue further and will let you know when I have a solution.

Best Regards,

Sahira 

Demiray__Baris

Hello Sahira,

More updates: we have found the problem. Our configuration was getting corrupted, and we were passing an invalid value as the detection threshold.
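For anyone hitting similar symptoms: rejecting out-of-range configuration values before they reach the builder turns this kind of corruption into an immediate error instead of a mysterious slowdown. A hypothetical sketch (the [0.0, 1.0] range and the validate_threshold helper are my assumptions, not part of the HeadDetector API):

```cpp
#include <stdexcept>
#include <string>

// Hypothetical helper: reject thresholds outside [0.0, 1.0] before they
// reach the detector builder. The valid range is an assumption here;
// consult the HeadDetector documentation for the real one. A NaN from a
// corrupted config also fails the comparison and is rejected.
double validate_threshold(double value) {
    if (!(value >= 0.0 && value <= 1.0)) {
        throw std::invalid_argument(
            "invalid detection threshold: " + std::to_string(value));
    }
    return value;
}
```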

Cheers,
