We developed our own vision acceleration board with an Intel Arria 10 FPGA. I know that OpenVINO can be used to deploy neural network models on Intel boards such as the Intel Vision Accelerator Design. My question is: can the OpenVINO toolkit be used to deploy neural network models to our own custom board and run them alongside the other IP blocks we have implemented on the FPGA?
It sounds like you built your own Arria 10 board. If the question is whether bitstreams for your custom board can be shipped with OpenVINO, the answer is no. If the question is whether OpenVINO can work with your custom board and bitstreams when you distribute the bitstreams separately from OpenVINO, the answer is possibly, but Intel won't be able to validate the system, nor will Intel engineering be able to provide support for it (in general; there may be exceptions depending on whether you have dedicated account support, etc.).
Hope it helps,