I posted this question yesterday, and thanks for the answer, @JesusE_Intel. However, the thread became read-only and I can't reply there. The PyTorch model is in the attachment.
Here is the Model Optimizer command:
mo_command = f"""/opt/intel/openvino/deployment_tools/model_optimizer/mo.py
--input_model "{onnx_path}"
--input_shape "[1, 1, {LENGTH}]"
--data_type FP16
--output_dir "{model_path.parent}"
"""
1 Solution
Hi Biomegas,
That's strange; thanks for opening a new thread. I will go ahead and close the other one. Could you provide the exported ONNX model, ready for inference?
Regards,
Jesus