Folks,
I am working on human activity recognition for a video analytics solution, using TensorFlow and the UCF101 dataset. Training is done on an NVIDIA 1080 Ti, but the solution is deployed on an Intel i3 or i5 8th-gen machine for inference/testing, where performance is lacking.
Can you please let me know if I can use OpenVINO to improve the performance?
Thanks
Guru
Dearest Guruvishnuvardan,
Model Optimizer (MO) supports many, if not most, TensorFlow models, and it does not care whether or how your model was trained. You can even feed an untrained model into Model Optimizer, and sometimes that makes sense for debugging purposes. You must, however, feed a frozen TensorFlow protobuf into MO. Once MO successfully generates IR, there is a very high probability that your inference performance will improve substantially with the OpenVINO Inference Engine, especially if your model is complex and deeply pipelined. I encourage you to download 2019 R1 and give it a test drive!
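For illustration only, freezing a TensorFlow 1.x graph before running Model Optimizer could look roughly like this; the checkpoint paths, the "logits" output node name, and the input shape are placeholders for your own model:

```python
import tensorflow as tf  # assumes TensorFlow 1.x

# Placeholder checkpoint paths and output node name -- substitute your own.
with tf.Session() as sess:
    saver = tf.train.import_meta_graph("activity_model.ckpt.meta")
    saver.restore(sess, "activity_model.ckpt")
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), output_node_names=["logits"])
    with tf.gfile.GFile("frozen_activity_model.pb", "wb") as f:
        f.write(frozen.SerializeToString())

# The frozen .pb is what Model Optimizer consumes, for example
# (the input shape here is a placeholder):
#   python mo_tf.py --input_model frozen_activity_model.pb --input_shape [1,224,224,3]
```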
Here is the Model Optimizer Developer Guide
And here is the Inference Engine Developer Guide
Please peruse those documents and post your questions here.
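Once MO has produced the .xml/.bin IR pair, a minimal Inference Engine run on the i3/i5 CPU could look roughly like this sketch; the file names and dummy input are placeholders, and it assumes the IECore Python API available in that OpenVINO generation:

```python
import numpy as np
from openvino.inference_engine import IENetwork, IECore

# Placeholder IR file names produced by Model Optimizer.
net = IENetwork(model="frozen_activity_model.xml",
                weights="frozen_activity_model.bin")

ie = IECore()
exec_net = ie.load_network(network=net, device_name="CPU")

input_blob = next(iter(net.inputs))
output_blob = next(iter(net.outputs))

# Dummy input shaped like the model's input layer, just to exercise the pipeline.
frame = np.zeros(net.inputs[input_blob].shape, dtype=np.float32)
result = exec_net.infer(inputs={input_blob: frame})
print(result[output_blob].shape)
```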
Thanks for your interest in OpenVINO!
Shubha