Intel® DevCloud
Help for those needing help starting or connecting to the Intel® DevCloud

Running a custom network with HDDL plugin fails

Biomegas
New Contributor I

This is the original post about the HDDL plugin. It kept being moved to the DevCloud for the Edge forum, which is read-only, so I have had to keep making new posts here. In any case, the attachment is the ONNX model; to run it, use the following: @JesusE_Intel

 

import time
import numpy as np
from openvino.inference_engine import IECore

onnx_path = "model.onnx"  # placeholder: path to the attached ONNX model
input_image = np.random.random((1, 1, 10130))  # random input in the model's expected shape

ie = IECore()
net_onnx = ie.read_network(model=onnx_path)
exec_net_onnx = ie.load_network(network=net_onnx, device_name="CPU")

input_layer_onnx = next(iter(exec_net_onnx.input_info))
output_layer_onnx = next(iter(exec_net_onnx.outputs))

start = time.perf_counter()
res_onnx = exec_net_onnx.infer(inputs={input_layer_onnx: input_image})
print(f"Inference took {time.perf_counter() - start:.3f} s")

 

15 Replies
JesusE_Intel
Moderator

Hi Biomegas,


Apologies for the discussions moving to DevCloud for the Edge forum. I've reached out to the team to understand why that is happening. For now, I moved the discussion back to the DevCloud forum.


I can see the HDDL device time out when running inference with your model. Please allow me some time to look into the model and root cause the issue.


Regards,

Jesus


Biomegas
New Contributor I

Hey,

 

I just want to follow up on this topic. Were you able to find out what causes the timeout, or what the problem with the network is? Another question: how should I change the plugin config, such as the timeout value and custom kernels? Please let me know. Thanks!

JesusE_Intel
Moderator

Hi Biomegas,


Apologies for the delay. I've been looking into your model and checked that all layers appear to be supported by the HDDL plugin. However, I have not been able to determine why the model won't run inference on HDDL. I've reached out to the development team to debug the issue further and to confirm that all layers are indeed supported by the HDDL plugin.


Increasing the timeout is not possible on DevCloud. I tested on a local system with an HDDL device and increasing the common timeout to 60000 did not work. I will keep you posted on what I find out.


Regards,

Jesus


Biomegas
New Contributor I

Hey,

 

Thanks for the reply. I was making some modifications to the network and found that if I remove the dilation from the Conv1D layers, the network compiles and runs properly on the MYRIAD and HDDL plugins. Maybe you can let the dev team know about that, since dilation is core functionality of the model. The other issue is that inference time on HDDL is an order of magnitude slower than on CPU and GPU, which leads me to believe that many operations are unsupported and fall back to the CPU (with a large data-transfer overhead). Is it possible to get per-layer benchmark results, just like with the CPU plugin? Right now it gives me an assertion error. Please let me know if there is any update; I really appreciate your help. Thanks!

 

Biomegas
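For context on the symptom described above: dilation spaces the kernel taps apart, widening the receptive field without adding weights. A minimal NumPy sketch of a dilated 1-D convolution (illustrative only, not the OpenVINO kernel, and the function name is made up here) shows the effect:

```python
import numpy as np

def conv1d_dilated(x, w, dilation=1):
    """Valid 1-D convolution where kernel taps are spaced `dilation` apart."""
    # Effective kernel span once gaps are inserted between taps
    span = (len(w) - 1) * dilation + 1
    out_len = len(x) - span + 1
    return np.array([
        sum(w[k] * x[i + k * dilation] for k in range(len(w)))
        for i in range(out_len)
    ])

x = np.arange(8, dtype=float)   # [0, 1, ..., 7]
w = np.array([1.0, 1.0, 1.0])   # 3-tap kernel

print(conv1d_dilated(x, w, dilation=1))  # taps at i, i+1, i+2
print(conv1d_dilated(x, w, dilation=2))  # taps at i, i+2, i+4: wider receptive field
```

With dilation=2 the same 3-tap kernel covers 5 input samples, which is why removing dilation changes what the plugin has to support.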

JesusE_Intel
Moderator

Hi Biomegas,


Thanks for the update; I'm still waiting to hear back from the dev team. Regarding your other question, have you tried using -pc and -report_type with the benchmark_app? I believe this will display per-layer information for the model.


-report_type "<type>"  Optional. Enable collecting a statistics report:
  "no_counters": contains the configuration options specified, resulting FPS and latency.
  "average_counters": extends the "no_counters" report with average PM counter values for each layer in the network.
  "detailed_counters": extends the "average_counters" report with per-layer PM counters and latency for each executed infer request.


-pc Optional. Report performance counters.


Regards,

Jesus


Biomegas
New Contributor I

Hi,

 

I just want to follow up on the problem with dilation in convolution. Was the dev team able to find out what is going on? This functionality is crucial to my project, so I would appreciate any update or advice on a workaround. Thanks!

 

Biomegas

Biomegas
New Contributor I

Also, I would like to know if there is any FPGA node available for me to run CNN IR inference. I recall that the plugin is only available in the 2020 version of OpenVINO, but maybe I am wrong. Please let me know. Thanks!

JesusE_Intel
Moderator

Hi Biomegas,


Apologies for the delay in my response; I have not heard back from the development team on the Conv1D dilation issue. I just followed up this morning and will let you know what I find out.


Correct, FPGA is only available on OpenVINO toolkit 2020.3.X LTS release. Please take a look at the following page for available accelerators. Please note support for FPGA devices is on a separate forum topic: https://forums.intel.com/s/topic/0TO0P0000001AUUWA2/intel-high-level-design


FPGA Acceleration with Intel® DevCloud


Regards,

Jesus


Biomegas
New Contributor I

Hello,

 

Thanks for the reply. Regarding HDDL: since the update to 2022.1, running my network on the HDDL plugin has not been possible. Here is the message (this is running on idc002 with myriadx-8). Please let me know how I should resolve this. Thanks!

 

[ ERROR ] _client->query(QUERY_TYPE_DEVICE, &query) failed: HDDL_NOT_INITIALIZED
Traceback (most recent call last):
  File "/opt/intel/openvino_2021/python/python3.6/openvino/tools/benchmark/main.py", line 255, in run
    exe_network = benchmark.load_network(ie_network)
  File "/opt/intel/openvino_2021/python/python3.6/openvino/tools/benchmark/benchmark.py", line 63, in load_network
    num_requests=1 if self.api_type == 'sync' else self.nireq or 0)
  File "ie_api.pyx", line 403, in openvino.inference_engine.ie_api.IECore.load_network
  File "ie_api.pyx", line 442, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: _client->query(QUERY_TYPE_DEVICE, &query) failed: HDDL_NOT_INITIALIZED
[ INFO ] Statistics report is stored to /home/u119369/Benchmark_WaveNet/Wavenet2D_ND/results/hddl/benchmark_report.csv
[09:50:45.9726][24694]ERROR[GlobalMutex_linux.cpp:23] Error: Open GlobalMutex /var/tmp/hddl_start_exit.mutex failed. errno = 13 [Permission denied]
[09:50:45.9728][24694]ERROR[GlobalMutex_linux.cpp:23] Error: Open GlobalMutex /var/tmp/hddl_service_alive.mutex failed. errno = 13 [Permission denied]
[09:50:45.9728][24694]ERROR[GlobalMutex_linux.cpp:23] Error: Open GlobalMutex /var/tmp/hddl_service_ready.mutex failed. errno = 13 [Permission denied]
[09:50:45.9728][24694]ERROR[GlobalMutex_linux.cpp:23] Error: Open GlobalMutex /var/tmp/hddl_service_failed.mutex failed. errno = 13 [Permission denied]
[09:50:45.9728][24694]ERROR[GlobalMutex_linux.cpp:45] Error: GlobalMutex /var/tmp/hddl_start_exit.mutex is not initialized.
[09:50:45.9729][24694]ERROR[ServiceStarter.cpp:118] Error: Lock StartExitMutex:/var/tmp/hddl_start_exit.mutex failed. errno = 13 [Permission denied]
[09:50:45.9729][24694]ERROR[ServiceStarter.cpp:30] Error: Failed to start HDDL Service
[09:50:45.9729][24694]ERROR[HddlClient.cpp:256] Error: Failed to boot service.
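The errno 13 lines suggest the process cannot open the HDDL service mutex files under /var/tmp, typically because they were created by another user. As a quick, illustrative check (not an official OpenVINO tool; the function name is an assumption here), you can inspect whether those files exist and are writable from your account:

```python
import os

# Mutex files the HDDL service tries to open (paths taken from the log above)
HDDL_MUTEXES = [
    "/var/tmp/hddl_start_exit.mutex",
    "/var/tmp/hddl_service_alive.mutex",
    "/var/tmp/hddl_service_ready.mutex",
    "/var/tmp/hddl_service_failed.mutex",
]

def check_mutexes(paths):
    """Return (path, status) for each mutex file: missing, writable, or not-writable."""
    results = []
    for path in paths:
        if not os.path.exists(path):
            results.append((path, "missing"))
        elif os.access(path, os.W_OK):
            results.append((path, "writable"))
        else:
            results.append((path, "not-writable"))  # matches the errno 13 case in the log
    return results

for path, status in check_mutexes(HDDL_MUTEXES):
    print(path, status)
```

A "not-writable" result would be consistent with stale mutex files left behind by a previous user's session on the node.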

JesusE_Intel
Moderator

Do you know which node you are submitting the job to? Did the same model work on 2021.4.2?


Biomegas
New Contributor I

I have tried all nodes with HDDL. 2021.4.2 works sometimes, but 2022 never works for me. Thanks!

JesusE_Intel
Moderator

Hi Biomegas,


I didn't have any issues running inference on the idc002mx8, idc014, idc023, idc038, or idc044 nodes with HDDL. Could you try your model on one of these? Please also share your latest model; I believe you made some changes to the one you provided in your initial post.


Regards,

Jesus


Biomegas
New Contributor I

Hey,

 

The problem has been resolved. I believe the update just wasn't finished when I was working on it. It is working fine now. Thanks!

JesusE_Intel
Moderator

If you need any additional information, please submit a new question as this thread will no longer be monitored.

