Thanks for reaching out to us.
Model Optimizer supports several frozen quantized topologies hosted on the TensorFlow Lite site. Conversion to IR format with Model Optimizer requires the frozen model file (.pb file).
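As a rough sketch, a frozen .pb model is typically converted to IR with the Model Optimizer command-line tool; the model filename and output directory below are placeholders, not files from your setup:

```shell
# Convert a frozen TensorFlow .pb model to OpenVINO IR (.xml/.bin).
# "frozen_model.pb" and "ir_output/" are placeholder names.
mo --input_model frozen_model.pb --output_dir ir_output
```

Depending on your OpenVINO version, the entry point may instead be the mo.py script shipped with the toolkit.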
More information is available in the section “Supported Frozen Quantized Topologies” at the following link:
As of now, Model Optimizer doesn’t support native conversion from a .tflite file to IR format. However, we are unable to comment on future support or planned enhancements, as these are subject to change.
Since we have answered your question, this thread will no longer be monitored. If you need any additional information from Intel, please submit a new question.