Hello,
I have a neural network consisting of a few basic layers: Conv2Ds, MatMuls, normalizations, etc.
I created a .xml and .bin with the Model Optimizer (version 2.300), but processing the images in my test folder and getting an output for each with the OpenVINO-generated model actually takes longer than running the model without OpenVINO.
Can anyone explain why? My network does not have Dropout layers, but isn't OpenVINO supposed to condense the remaining layers, or at least give similar time performance, when using a .xml/.bin model generated from a regular TensorFlow file?
Best
Mike
Dear Mike,
Can you kindly try your experiments on the newly released OpenVINO 2019 R1? Several fixes and performance improvements have been made in that release.
Also, I am not sure what you mean by this:
getting an output for each with the OpenVINO-generated model actually takes longer than without OpenVINO.
How are you getting an output? Are you running one of the OpenVINO samples on the generated IR?
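One common pitfall when comparing the two runs is timing model loading and preprocessing together with inference. A minimal, framework-agnostic sketch of a fairer comparison is below; the `infer` callable is an assumption you would supply yourself (e.g. a wrapper around an Inference Engine `exec_net.infer(...)` call for the IR, or a session run for the original TensorFlow model), and `warmup` runs are excluded so one-time setup costs do not skew the result:

```python
import time

def average_inference_ms(infer, inputs, warmup=3):
    """Average per-input latency in milliseconds for a callable.

    infer:  function taking one preprocessed input and returning the
            network output (hypothetical wrapper around your OpenVINO
            or TensorFlow inference call).
    inputs: iterable of preprocessed inputs (must be non-empty).
    warmup: number of untimed warm-up runs, so one-time costs such as
            graph compilation or cache warming are excluded.
    """
    inputs = list(inputs)
    for x in inputs[:warmup]:
        infer(x)                      # warm-up, not timed
    start = time.perf_counter()
    for x in inputs:
        infer(x)
    elapsed = time.perf_counter() - start
    return elapsed / len(inputs) * 1000.0
```

Timing both paths this way, over the same preprocessed inputs, tells you whether the slowdown is really in inference or in the surrounding I/O and setup.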
Please read more about our new release here:
Thanks,
Shubha