Could we please augment the Model Optimizer to accept precompiled TFLite models, such as the ones from mediapipe.dev? Since we don't have access to the underlying TensorFlow models, packages like tflite2onnx do not work because of various limitations (quantization, etc.).
It would be great if mo.py could accept a TFLite model with fp16 quantization and convert it directly to the IR representation. Since platforms like MediaPipe are opening up, this would help Intel target a wider audience with networks that already run efficiently on iOS, Android, and desktop.
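For context on what fp16 quantization means for a converter: weights are stored at half precision (2 bytes instead of 4), so any tool reading such a model must handle the reduced range and precision. A minimal numpy sketch (the weight values are illustrative, not from any real model):

```python
import numpy as np

# Illustrative float32 "weights"; 65504.0 is the largest finite fp16 value
weights = np.array([0.1234567, 1e-5, 65504.0], dtype=np.float32)

fp16 = weights.astype(np.float16)       # quantize: 4 bytes -> 2 bytes per weight
restored = fp16.astype(np.float32)      # dequantize back for comparison

print(fp16.itemsize)                    # bytes per fp16 weight
print(np.abs(weights - restored))       # precision lost by the round-trip
```

Values beyond the fp16 range would overflow to infinity, which is one reason generic TFLite-to-ONNX converters struggle with quantized graphs.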
We have forwarded your request for TF Lite support to our developer team. However, we are unable to comment on future support or any planned enhancements, as they are subject to change.
This thread will no longer be monitored since we have provided an update. If you need any additional information from Intel, please submit a new question.