li__lang

My model is based on MobileNet. The inference time on the FPGA platform is about three times that on the CPU platform. Why?

Testing environment:

    OpenVINO version: 2018 R4

    inference engine sample: classification_sample

    parameters when running on the FPGA platform: -d HETERO:FPGA,CPU (IR precision: FP16)

    parameters when running on the CPU platform: -d CPU (IR precision: FP32)

    batch size: 1 to 32
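For reference, the two runs above would look roughly like this. This is a sketch, not the exact commands used: the model and image file names are placeholders, and the sample binary is assumed to be the classification_sample built from the OpenVINO 2018 R4 samples.

```shell
# Hypothetical invocation sketch; model_fp16.xml, model_fp32.xml, and
# image.bmp are placeholder names, not files from the original post.

# FPGA run: FP16 IR, HETERO plugin falls back to CPU for unsupported layers
./classification_sample -i image.bmp -m model_fp16.xml -d HETERO:FPGA,CPU

# CPU run: FP32 IR on the CPU plugin only
./classification_sample -i image.bmp -m model_fp32.xml -d CPU
```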
