I am trying to run my custom network on the Neural Compute Stick (VPU) on DevCloud, but it does not work. Here is the error message:
** Suggestion: This error is probably caused by the inference requires more time than the set timeout (12000 ms).
You can try to increase the timeout with following steps:
(1) increasing the "common_timeout" value in $HDDL_INSTALL_DIR/hddl_service.config, and restart the service;
(2) kill hddldaemon: [Linux] kill -9 $(pidof hddldaemon); [Windows] kill with Windows Task Manager;
(3) restart your application again.
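For anyone else hitting this, steps (1) and (2) can be scripted. A minimal sketch below, run against a scratch copy of the config so it is safe to try anywhere; on DevCloud the real file is `$HDDL_INSTALL_DIR/hddl_service.config` (path taken from the error message, and the exact JSON layout of the file is an assumption):

```shell
# Step (1): raise "common_timeout" in hddl_service.config.
# Demonstrated on a scratch copy; point CONFIG at
# "$HDDL_INSTALL_DIR/hddl_service.config" on a real system.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
{
    "service_settings": {
        "common_timeout": 12000
    }
}
EOF

# Bump the timeout from 12000 ms to 60000 ms.
sed -i 's/"common_timeout":[[:space:]]*[0-9]*/"common_timeout": 60000/' "$CONFIG"
grep common_timeout "$CONFIG"

# Step (2): then kill the daemon so it re-reads the config on restart:
#   kill -9 $(pidof hddldaemon)
```

After the daemon is restarted, re-run the application as in step (3).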
** OR caused by the graph used doesn't match to firmware running on MyriadX.
In this case, you need to check your network to see whether it's supported by MyriadX.
The network does not use Conv2D but instead Conv1D, and I am wondering if that could be the reason. However, the converted IR model was able to run on CPU and GPU. Also, I don't know where HDDL is installed on DevCloud. Please let me know if you have any insight on getting it to work. Thanks!
Given that your model runs on CPU and GPU, it may contain layers that are unsupported by the VPU plugin. Take a look at the Supported Layers section in this document to verify that all of the model's layers are supported on VPU.
I can also take a look at your model. Could you share your model in its native format (TensorFlow, Caffe, ONNX, etc.) along with your Model Optimizer command?
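If Conv1D does turn out to be the unsupported layer, a common workaround is to re-express it as a Conv2D with one spatial dimension of size 1 (e.g. by inserting a Reshape before conversion). A minimal numpy sketch of why the two are equivalent; the `conv1d`/`conv2d` helpers here are illustrative reference implementations, not OpenVINO API:

```python
import numpy as np

def conv1d(x, w):
    # x: (C_in, L), w: (C_out, C_in, K); valid padding, stride 1.
    c_out, _, k = w.shape
    length = x.shape[1]
    out = np.zeros((c_out, length - k + 1))
    for o in range(c_out):
        for t in range(length - k + 1):
            out[o, t] = np.sum(x[:, t:t + k] * w[o])
    return out

def conv2d(x, w):
    # x: (C_in, H, W), w: (C_out, C_in, KH, KW); valid padding, stride 1.
    c_out, _, kh, kw = w.shape
    h, wid = x.shape[1:]
    out = np.zeros((c_out, h - kh + 1, wid - kw + 1))
    for o in range(c_out):
        for i in range(h - kh + 1):
            for j in range(wid - kw + 1):
                out[o, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[o])
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 16))    # 3 input channels, length 16
w = rng.standard_normal((4, 3, 5))  # 4 filters, kernel size 5

# The same data viewed as 2-D with height 1: Conv1D == Conv2D with a 1xK kernel.
y1 = conv1d(x, w)
y2 = conv2d(x[:, np.newaxis, :], w[:, :, np.newaxis, :])[:, 0, :]
print("match:", np.allclose(y1, y2))  # match: True
```

The same reshape trick applied inside the original model (before running the Model Optimizer) often lets the VPU plugin consume networks that were written with 1-D convolutions.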