I'm running the Python text detection demo with the PixelLink-based text-detection-0002 model on a Raspberry Pi with a Neural Compute Stick 2 under OpenVINO.
The model's stated complexity is 51.256 GFLOPs: http://docs.openvinotoolkit.org/latest/_text_detection_0002_description_text_detection_0002.html
I'm seeing inference times of roughly 30 seconds or more per image.
Is this what would be expected of a model of this complexity, or should I continue to investigate possible configuration issues?
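For reference, a rough back-of-envelope check of whether ~30 s is compute-bound: divide the model's workload by an assumed sustained throughput for the device. The throughput figure below is a hypothetical placeholder for illustration, not an official NCS2 spec, so treat the result only as an order-of-magnitude sanity check:

```python
# Back-of-envelope estimate: expected inference time if purely compute-bound.
workload_gflops = 51.256            # from the text-detection-0002 model description
assumed_throughput_gflops = 100.0   # ASSUMED sustained FP16 throughput, not an official figure

expected_seconds = workload_gflops / assumed_throughput_gflops
print(f"Compute-bound estimate: {expected_seconds:.2f} s per inference")
```

Under that assumption the compute-bound estimate comes out well under a second, which would suggest the ~30 s I'm observing is dominated by something else (USB transfer, preprocessing, or a configuration problem) rather than raw model complexity.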