OpenVINO is able to run custom models, provided that the models contain only layers supported by the OpenVINO toolkit.
The list of supported layers differs for each supported framework. To see the layers supported by your framework, refer to the following link:
To see the layers that are supported by each device plugin for the Inference Engine, refer to the following link:
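In practice, you can also check support programmatically: the Inference Engine exposes a query API (`QueryNetwork` in the C++ API, `query_model` in the newer Python API) that reports, per layer, which device can execute it. As a minimal language-agnostic sketch of the underlying check, here is a hypothetical comparison of a model's layer types against a device plugin's supported set (the layer names and supported set below are made up for illustration, not taken from any real plugin):

```python
def find_unsupported_layers(model_layer_types, supported_layer_types):
    """Return the layer types that appear in the model but are not
    in the device plugin's supported set."""
    return sorted(set(model_layer_types) - set(supported_layer_types))

# Hypothetical layer types read from a model, and a hypothetical
# supported set for one device plugin.
model_layers = ["Convolution", "ReLU", "MyCustomOp", "SoftMax"]
supported = {"Convolution", "ReLU", "SoftMax", "Pooling"}

# Any layer listed here would need a custom-layer extension before
# the model could run on that device.
print(find_unsupported_layers(model_layers, supported))
```

If the resulting list is non-empty, those layers must either be replaced in the model or implemented as custom layer extensions before inference can run on that device.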
This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.