liang__heng1
Beginner

Inference costs 57 ms on GPU but only 30 ms on CPU

I have a model that takes only 30 ms per inference on the CPU, but 57 ms on the GPU. These times are averages over 1000 runs.

How can I reduce the execution time on the GPU?
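
For context, below is a minimal sketch of one way such a CPU-vs-GPU latency comparison can be measured, assuming the OpenVINO IECore Python API; the model files (model.xml / model.bin) and the random input are placeholders, not the actual model from this thread.

    import time
    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="model.xml", weights="model.bin")
    input_name = next(iter(net.input_info))
    shape = net.input_info[input_name].input_data.shape
    data = np.random.rand(*shape).astype(np.float32)

    for device in ("CPU", "GPU"):
        exec_net = ie.load_network(network=net, device_name=device)
        exec_net.infer({input_name: data})  # warm-up run so one-time setup is not counted
        start = time.perf_counter()
        for _ in range(1000):
            exec_net.infer({input_name: data})  # synchronous inference
        avg_ms = (time.perf_counter() - start) / 1000 * 1000.0
        print(f"{device}: {avg_ms:.1f} ms per inference")

The benchmark_app tool shipped with OpenVINO can run a similar comparison, e.g. benchmark_app -m model.xml -d GPU -niter 1000.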

2 Replies
liang__heng1
Beginner

There are five "Interp" layers in the model. Is this the main reason that inference on the GPU device costs so much more time?
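
One way to check is to look at the per-layer performance counters on the GPU and see whether the Interp layers dominate the total time. A rough sketch, assuming the IECore Python API with the PERF_COUNT config option enabled; the model paths and the names of the report fields are assumptions on my part:

    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="model.xml", weights="model.bin")
    exec_net = ie.load_network(network=net, device_name="GPU",
                               config={"PERF_COUNT": "YES"})
    input_name = next(iter(net.input_info))
    shape = net.input_info[input_name].input_data.shape
    exec_net.infer({input_name: np.random.rand(*shape).astype(np.float32)})

    # Per-layer timings (microseconds); sort descending to see whether the
    # Interp layers account for most of the GPU time.
    counts = exec_net.requests[0].get_perf_counts()
    top = sorted(counts.items(), key=lambda kv: kv[1]["real_time"], reverse=True)[:10]
    for name, info in top:
        print(name, info["layer_type"], info["real_time"], "us")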

liang__heng1
Beginner

Does anyone know the reason?