I have an SSD convolutional network that currently runs on an Android device. How can I run this network on an Intel FPGA? The network is in protobuf (TensorFlow) format.
As I understand it, the first step is to use the Model Optimizer script from OpenVINO to convert the model into the appropriate format. But what should I do next?
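For reference, the conversion step usually looks something like the sketch below. The file names and the exact support-config path are assumptions here; they depend on your OpenVINO version and on how the SSD model was exported from the TensorFlow Object Detection API:

```shell
# Sketch only: convert a frozen TF SSD graph into OpenVINO IR (.xml + .bin).
# frozen_inference_graph.pb and pipeline.config are the typical outputs of
# the TF Object Detection API export step (names assumed, not from this thread).
python3 mo_tf.py \
  --input_model frozen_inference_graph.pb \
  --tensorflow_object_detection_api_pipeline_config pipeline.config \
  --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json \
  --data_type FP16 \
  --output_dir ./ir_model
```

`--data_type FP16` is worth noting for the FPGA case, since FPGA bitstreams generally operate on FP16 precision.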
Does the Intel CNN library for FPGA support the SSD-MobileNet architecture?
Thank you for the answer. My model's weights are in FP32 format. As I understand it, the Model Optimizer converts a TensorFlow model into the Intermediate Representation. But what should I do next to prepare the model for the FPGA?
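After conversion, the IR is loaded by the Inference Engine, and the FPGA is selected through the device flag, typically as a heterogeneous target so that layers the FPGA plugin does not support fall back to the CPU. A hedged sketch using the stock SSD demo shipped with OpenVINO (the bitstream name is an assumption and must match your board and network primitives):

```shell
# Sketch: program the board with a bitstream, then run the SSD sample on it.
# Bitstream file is an assumption; choose the one matching your FPGA card.
aocl program acl0 <path_to_bitstream>.aocx

# HETERO:FPGA,CPU runs supported layers on the FPGA, the rest on the CPU.
./object_detection_sample_ssd \
  -m ir_model/frozen_inference_graph.xml \
  -i input.jpg \
  -d HETERO:FPGA,CPU
```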
Thank you for your answer. Could you please give me a link to step-by-step instructions for deploying an SSD model on an FPGA?
Is it possible to emulate inference of the FPGA model representation on the CPU?
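On this point: the IR itself is device-independent, so the same .xml/.bin pair can be run on the CPU plugin for functional checking simply by changing the device flag. A minimal sketch (note this runs the network natively on the CPU; it does not emulate the FPGA bitstream bit-for-bit):

```shell
# Sketch: run the same IR on the CPU plugin to verify functional correctness
# before deploying to the FPGA.
./object_detection_sample_ssd \
  -m ir_model/frozen_inference_graph.xml \
  -i input.jpg \
  -d CPU
```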