I have the same problem as the post below.
I converted my own "UNet" model from .pb to OpenVINO IR (.xml/.bin).
$ python3 mo_tf.py \
    --input_model semanticsegmentation_frozen_person_32.pb \
    --output_dir FP16 \
    --input input \
    --output output/BiasAdd \
    --data_type FP16 \
    --batch 1
Then I measured the inference time for each device:
"CPU" vs "Neural Compute Stick v1" vs "Neural Compute Stick v2"
Inference time is longer on MYRIAD than on CPU.
The inference time measurements are summarized below.
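One thing worth checking before comparing devices: how the timing itself is done. On MYRIAD the first inference typically includes uploading the graph to the stick, so if that first run is counted, the device can look much slower than it really is. Below is a minimal, hypothetical timing helper (the `run_once` callable and its `exec_net.infer(...)` body are assumptions, not from my actual script) that discards warm-up runs and averages the rest:

```python
import time

def measure_inference(run_once, warmup=5, iters=50):
    """Time a single-inference callable, excluding warm-up runs.

    run_once: hypothetical callable performing one inference, e.g.
        lambda: exec_net.infer(inputs={"input": frame})
    """
    # Warm-up: excludes graph upload / first-run initialization cost
    for _ in range(warmup):
        run_once()
    start = time.perf_counter()
    for _ in range(iters):
        run_once()
    elapsed = time.perf_counter() - start
    return elapsed / iters  # mean seconds per inference

# Dummy workload standing in for the real inference call
mean_s = measure_inference(lambda: sum(i * i for i in range(10_000)))
print(f"{mean_s * 1000:.3f} ms per inference")
```

If the MYRIAD numbers shrink a lot once warm-up runs are excluded, the gap was mostly load time rather than per-frame compute.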
(1) Are the SHAVE cores not being used to their maximum?
(2) Is there a way to speed up inference?
Please help me.