chen__bruce
Beginner
201 Views

Performance (FPS) of pre-trained Models

Hi,
 
I have a few questions regarding the pre-trained model performance figures in the documentation.
 
Taking face-reidentification as an example, the performance table is shown below.
1. The value of "Caffe* CPU" means the FPS of the model running without the Inference Engine.
     Is that correct?
 
2. Why is Inference Engine GPU slower than Inference Engine CPU?
     Doesn't the GPU speed up inference?
 
3. Why is Inference Engine MYRIAD so much slower than the Inference Engine on CPU/GPU?
    My understanding is that the VPU is designed for inference, so it should be faster than the GPU and CPU.
 

[Attached 1.png: performance table from the documentation]

 

Thank you,

Bruce

3 Replies
Monique_J_Intel
Employee

Hi Bruce,

1. That's correct; the metric is frames per second (FPS), and that column is just the native Caffe framework running without the Inference Engine.
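As a rough illustration of how an FPS figure like this can be measured (a minimal sketch; `run_inference` is a purely hypothetical stand-in for a real forward pass such as Caffe's `net.forward()`):

```python
import time

def run_inference(frame):
    # Hypothetical stand-in for a real forward pass; it just does
    # a little arithmetic so the loop takes measurable time.
    return sum(x * x for x in frame)

def measure_fps(frames, infer=run_inference):
    # FPS = number of frames processed / total wall-clock time.
    start = time.perf_counter()
    for frame in frames:
        infer(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

frames = [list(range(100))] * 50
print(f"{measure_fps(frames):.1f} FPS")
```

The same harness works for any backend: swap `run_inference` for a call into Caffe or the Inference Engine and the FPS numbers become directly comparable.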

2. GPUs don't speed up all computations; that is a common misconception. It really depends on the type of operations running on the GPU, the amount of data, and whether there is data reuse. The biggest hit you normally see with GPUs is the cost of moving memory to and from the device; those transfers alone can decrease performance enough that the CPU comes out with better numbers.
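A back-of-the-envelope model of this point, with purely illustrative numbers (not measured figures): even if the GPU computes each frame faster, a fixed per-frame transfer cost can make its overall throughput lower than the CPU's.

```python
def effective_fps(transfer_s, compute_s):
    # Per-frame latency = transfer time + compute time;
    # throughput (FPS) is the reciprocal of that latency.
    return 1.0 / (transfer_s + compute_s)

# Hypothetical timings: the GPU computes 4x faster per frame,
# but pays a host<->device memory-transfer cost on every frame.
cpu_fps = effective_fps(transfer_s=0.000, compute_s=0.008)  # 125 FPS
gpu_fps = effective_fps(transfer_s=0.010, compute_s=0.002)  # ~83 FPS

print(f"CPU: {cpu_fps:.0f} FPS, GPU: {gpu_fps:.0f} FPS")
```

With data reuse (e.g. batching, or keeping tensors resident on the device) the transfer cost is amortized and the GPU can pull ahead; with small per-frame workloads it often does not.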

3. I can't fully comment on this, but understand that MYRIAD has great performance for low-power use cases. I encourage you to take a look at the details here.

Kind Regards,

Monique Jones

 

rachel_r_Intel
Employee

Hi all,

Can you give a sample command for running the native Caffe framework without the Inference Engine?

Thank you very much,

Rachel

Monique_J_Intel
Employee

Hi Rachel,

You can run inference with native Caffe using the Python scripts here.

There is a classify.py and a detect.py, depending on what type of model you'd like to use. I also recommend creating a separate post for any further questions.
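A sketch of what such an invocation might look like. All paths and file names below are hypothetical placeholders, and flag names should be checked against `classify.py --help` in your own Caffe checkout:

```python
import subprocess  # only needed if you uncomment the run line below

# Hypothetical paths -- substitute your own Caffe checkout and model files.
cmd = [
    "python", "caffe/python/classify.py",
    "--model_def", "deploy.prototxt",          # network definition (placeholder)
    "--pretrained_model", "model.caffemodel",  # trained weights (placeholder)
    "input.jpg",                               # image to classify (placeholder)
    "output.npy",                              # where predictions are written
]
print(" ".join(cmd))

# To actually run it (requires a working Caffe install):
# subprocess.run(cmd, check=True)
```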

Thanks,

Monique Jones
