1. That's correct: the metric is frames per second (FPS), and those numbers come from running the native Caffe framework alone, without the Inference Engine.
2. A GPU doesn't speed up all computations; assuming it does is a common fallacy. It really depends on the types of operations running on the GPU, the amount of data, and whether there is data reuse. The biggest cost you typically see with GPUs is moving data to and from the device; that transfer overhead alone can reduce performance enough that a CPU comes out ahead.
3. I can't fully comment on this, but note that MYRIAD offers great performance for low-power use cases. I encourage you to take a look at the details here.
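Since FPS came up in point 1, here is a minimal, hedged sketch of how that metric is typically computed: time a batch of repeated inference calls and divide frame count by elapsed time. The `measure_fps` helper and the stand-in workload are my own illustration, not code from Caffe; in practice you would pass something like pycaffe's `net.forward()` as the callable.

```python
import time

def measure_fps(infer, num_frames=100):
    """Call `infer` once per frame and return throughput in frames per second.

    `infer` is any zero-argument callable; with pycaffe this would wrap
    a real forward pass (hypothetical usage, not shown here).
    """
    start = time.perf_counter()
    for _ in range(num_frames):
        infer()
    elapsed = time.perf_counter() - start
    return num_frames / elapsed

# Stand-in workload so the sketch is self-contained:
fps = measure_fps(lambda: sum(range(1000)), num_frames=50)
print(f"{fps:.1f} FPS")
```

The same harness works for comparing CPU, GPU, or MYRIAD runs, as long as each run uses identical inputs and a warm-up pass is done first so one-time initialization doesn't skew the number.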
You can run inference with native Caffe using the Python scripts here.
There is a classify.py and a detect.py, depending on the type of model you'd like to use. I also recommend creating a separate post for any further questions.
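For reference, a typical invocation of the bundled classification script looks roughly like this. The model paths and input image below are placeholders for illustration; substitute your own deploy prototxt, caffemodel, and image.

```shell
# Run from the Caffe repository root (paths are examples, adjust as needed).
python python/classify.py \
  --model_def models/bvlc_reference_caffenet/deploy.prototxt \
  --pretrained_model models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel \
  examples/images/cat.jpg out.npy
```

detect.py follows a similar pattern for detection models; run either script with `--help` to see the full list of options for your Caffe version.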