I have a general question regarding FPGA acceleration of OpenVINO models. I am not currently doing any OpenCL development; for now I just want to use the OpenVINO-provided bitstreams and models out of the box. Let's say I have a model and I want to know which bitstream would best support it. Currently I can read this document and try to make an educated guess. Then I can manually program the bitstream onto the card, call Core::QueryNetwork, and generate a list of the returned layers. I can look at the percentage of layers that are supported; if it isn't fairly high, that model/bitstream combination probably isn't a good candidate.
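For reference, here is a minimal sketch of that check in the Python Inference Engine API (2019R2-era class names; `query_network` is the Python counterpart of Core::QueryNetwork). The `support_ratio` helper and the `device` argument are my own naming, and the sketch assumes the bitstream has already been programmed onto the card:

```python
# Sketch: estimate how well the currently programmed bitstream supports a
# model by counting the layers the FPGA plugin reports as supported.

def support_ratio(all_layers, supported_layers):
    """Fraction of the network's layers the plugin claims to support."""
    all_layers = set(all_layers)
    if not all_layers:
        return 0.0
    return len(set(supported_layers) & all_layers) / len(all_layers)

def query_fpga_support(model_xml, model_bin, device="FPGA"):
    # Imported here so support_ratio() stays usable without OpenVINO installed.
    from openvino.inference_engine import IECore, IENetwork
    ie = IECore()
    net = IENetwork(model=model_xml, weights=model_bin)
    # query_network returns a dict of layer name -> device for the
    # layers the plugin can execute.
    supported = ie.query_network(network=net, device_name=device)
    return support_ratio(net.layers.keys(), supported.keys())
```

A low ratio from `query_fpga_support` would match the "zero or very few supported layers" result described below.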
Is there a more efficient way to go about this? The handful of combinations I've tried, such as (vehicle-detection-adas-0002, 2019R2_RC_FP16_MobileNet_Clamp.aocx), have resulted in zero or very few supported layers. I am left wondering whether I have a configuration issue (very possible) or the provided bitstreams are written mainly for models I don't use. (I am mostly interested in object detection.)
A matrix of bitstream/model support would seem ideal; does anything like that exist? Or could I generate one with a script?
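Such a matrix could plausibly be scripted along these lines. This is only a sketch: it assumes bitstreams are programmed with `aocl program acl0 <file>.aocx` (adjust the device name for your board), the 2019R2-era Python Inference Engine API, and that each model's `.bin` file sits next to its `.xml`:

```python
import subprocess

def format_matrix(ratios, models, bitstreams):
    """Render {(bitstream, model): ratio} as a simple text table."""
    header = "bitstream".ljust(40) + "".join(m.ljust(30) for m in models)
    rows = [header]
    for b in bitstreams:
        cells = "".join("{:.0%}".format(ratios[(b, m)]).ljust(30) for m in models)
        rows.append(b.ljust(40) + cells)
    return "\n".join(rows)

def build_matrix(bitstreams, models):
    """Program each bitstream in turn and query every model against it."""
    from openvino.inference_engine import IECore, IENetwork
    ie = IECore()
    ratios = {}
    for aocx in bitstreams:
        # Program the card; "acl0" is a typical device name, adjust as needed.
        subprocess.run(["aocl", "program", "acl0", aocx], check=True)
        for xml in models:
            net = IENetwork(model=xml, weights=xml.replace(".xml", ".bin"))
            supported = ie.query_network(network=net, device_name="FPGA")
            ratios[(aocx, xml)] = len(supported) / max(len(net.layers), 1)
    return ratios
```

Reprogramming the card for every bitstream makes this slow, so it would be a one-off survey rather than something to run per deployment.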
Any advice would be most appreciated, including "you're doing it wrong", which is pretty likely the case. :)
In this document https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_FPGA.html, the "Translate from Architecture to FPGA Bitstream Files" section provides information about which layers each bitstream supports in addition to the conv, resample, deconv, pool, and fc layers.
All other layers are not supported by the FPGA plugin.
The model names in the bitstream filenames are simply the ones the DLA team has measured those bitstreams to perform best on. We recommend running your graph on all of the bitstreams to find out which one works best.
I hope this helps,