Hello.
I have the same problem as the post below.
https://software.intel.com/en-us/forums/computer-vision/topic/800112
I converted my own "UNet" model from .pb to .bin.
$ python3 mo_tf.py \
    --input_model semanticsegmentation_frozen_person_32.pb \
    --output_dir FP16 \
    --input input \
    --output output/BiasAdd \
    --data_type FP16 \
    --batch 1
Then I measured inference time for each device with OpenVINO:
"CPU" vs "Neural Compute Stick v1" vs "Neural Compute Stick v2"
Inference time is longer on MYRIAD than on CPU.
The inference time measurements are summarized below.
(1) Are the SHAVE cores not being used to their maximum?
(2) Is there a way to speed up inference?
Please help me.
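For reference, here is how the per-device latencies above can be measured consistently (warm-up runs excluded, mean over many iterations). The timing helper is generic and runnable anywhere; the OpenVINO calls in the comment assume the 2019-era Inference Engine Python API (`IECore`) and are only a sketch:

```python
import time

def benchmark(infer_fn, inputs, warmup=5, runs=50):
    """Time a single-input inference callable; return mean latency in ms."""
    for _ in range(warmup):            # warm-up iterations, excluded from timing
        infer_fn(inputs)
    start = time.perf_counter()
    for _ in range(runs):
        infer_fn(inputs)
    elapsed = time.perf_counter() - start
    return elapsed / runs * 1000.0     # mean latency per inference, ms

# Hedged usage sketch with OpenVINO's Inference Engine API (not verified here):
#   from openvino.inference_engine import IECore
#   ie = IECore()
#   net = ie.read_network("semanticsegmentation_frozen_person_32.xml",
#                         "semanticsegmentation_frozen_person_32.bin")
#   exec_net = ie.load_network(net, device_name="MYRIAD")   # or "CPU"
#   latency_ms = benchmark(lambda x: exec_net.infer(x), {"input": input_blob})
```

Running the same helper against each `device_name` keeps the comparison apples-to-apples.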
2 Replies
There is additional discussion of this issue on the official NCS forum.
Thank you, Katsuya, for sharing!
Regards,
Joel