Hi,
Is there a plan to support conversion of TensorFlow Lite models to IR using Model Optimizer?
Regards,
Reagan
Hi Reagan,
Thanks for reaching out to us.
Model Optimizer supports several frozen quantized topologies hosted on the TensorFlow Lite site. A frozen model file (.pb) is required for Model Optimizer to convert the model to IR format.
More information is available in the "Supported Frozen Quantized Topologies" section of the Model Optimizer documentation for converting TensorFlow models.
As of now, Model Optimizer does not support native conversion from a .tflite file to IR format. However, we are unable to comment on future support or planned enhancements, as they are subject to change.
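For reference, converting a frozen .pb graph with Model Optimizer typically looks like the sketch below. The installation path, model name, and input shape are placeholders for illustration (assuming a default OpenVINO install location), not the exact values for any specific TF Lite-hosted topology:

```python
# Minimal sketch, assuming OpenVINO's Model Optimizer is installed at the path
# below and "mobilenet_v1_1.0_224_frozen.pb" is a frozen TensorFlow graph you
# have already downloaded; all paths, names, and shapes are placeholders.
import subprocess

mo_script = "/opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py"  # assumed install path
frozen_model = "mobilenet_v1_1.0_224_frozen.pb"                              # placeholder frozen .pb file

subprocess.run(
    [
        "python3", mo_script,
        "--input_model", frozen_model,     # frozen TensorFlow graph, not a .tflite file
        "--input_shape", "[1,224,224,3]",  # NHWC input shape expected by the model (assumed)
        "--output_dir", "ir_output",       # directory for the generated .xml/.bin IR pair
    ],
    check=True,
)
```

A successful run writes the IR files (.xml and .bin) to the specified output directory.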
Regards,
Munesh
Hi Reagan,
This thread will no longer be monitored since we have provided an explanation. If you need any additional information from Intel, please submit a new question.
Regards,
Munesh