Thank you for using OpenVINO™ Toolkit and Deep Learning Workbench!
Typically, an autoencoder has two components (subnetworks): an encoder and a decoder.
OpenVINO™ toolkit provides several Intel Pre-Trained Models that contain an encoder and a decoder and are available in the Open Model Zoo, for example: handwritten-score-recognition-0003, text-recognition-0012, text-recognition-0014, text-recognition-0015, driver-action-recognition-adas-0002, and text-spotting-0005. You can download these models using the Model Downloader.
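As a quick sketch (assuming the `openvino-dev` package, which ships the Model Downloader as `omz_downloader` in recent releases), downloading one of the models listed above might look like:

```shell
# Install the OpenVINO developer tools, which include the Model Downloader
pip install openvino-dev

# Download one of the Open Model Zoo models in all available precisions
omz_downloader --name text-recognition-0012

# Or restrict the download to a single precision
omz_downloader --name text-recognition-0012 --precisions FP16
```

The downloaded files land in a per-model subdirectory, organized by precision.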
Models downloaded from Intel's Pre-Trained Models are provided in three precisions: FP32, FP16, and FP16-INT8, where FP16-INT8 is an already-optimized variant that can be used with the OpenVINO™ Toolkit and Deep Learning Workbench.
On another note, if you want to optimize a custom autoencoder model, first ensure that the layers of your model are supported by the OpenVINO™ Toolkit; you may refer to the Supported Framework Layers documentation. Then you can convert your custom model to Intermediate Representation (IR) using the Model Optimizer.
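For illustration, assuming a custom autoencoder exported to an ONNX file named `autoencoder.onnx` (a hypothetical filename) with a single 1×3×224×224 input, a Model Optimizer invocation could be sketched as:

```shell
# Convert a custom model to Intermediate Representation (IR);
# this produces an .xml (topology) and a .bin (weights) file.
# --reverse_input_channels is only needed when the color order the
# model was trained on (e.g. RGB) differs from the inference input
# (e.g. BGR).
mo --input_model autoencoder.onnx \
   --input_shape "[1,3,224,224]" \
   --reverse_input_channels \
   --output_dir ir_model
```

The resulting `ir_model/autoencoder.xml` and `.bin` pair is what you then load into the toolkit or import into DL Workbench.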
After that, to further optimize the converted model with the OpenVINO™ Toolkit, you can use the Post-Training Optimization Tool; within Deep Learning Workbench, you can use INT8 Calibration or Winograd Algorithm Tuning.
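As a rough sketch of a Post-Training Optimization Tool configuration for INT8 quantization (the file names and the `calibration_images/` dataset path below are hypothetical), a DefaultQuantization run can be described in a JSON config like this:

```json
{
  "model": {
    "model_name": "autoencoder",
    "model": "ir_model/autoencoder.xml",
    "weights": "ir_model/autoencoder.bin"
  },
  "engine": {
    "type": "simplified",
    "data_source": "calibration_images/"
  },
  "compression": {
    "target_device": "CPU",
    "algorithms": [
      {
        "name": "DefaultQuantization",
        "params": {
          "preset": "performance",
          "stat_subset_size": 300
        }
      }
    ]
  }
}
```

With such a config saved to a file, the tool is typically invoked as `pot -c <config>.json`, producing a quantized IR in its results directory.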
To clarify Wan's reply a bit: support for all of the OpenVINO components described above is integrated into DL Workbench. Therefore, whether you want to try out one of the pre-trained models or test one of your custom ones, you can both download and convert them using DL Workbench without issue.
Assuming, then, that you have a custom model on your hands, you may also refer to Workbench's documentation on model conversion to obtain an IR of your model. Generally speaking, if your model is a simple autoencoder, providing a color space as well as the model inputs and their shapes should be sufficient to convert the model and use it successfully within DL Workbench.
Thank you for providing your explanations in the community!
This thread will no longer be monitored since we have provided a solution.
If you need any additional information from Intel, please submit a new question.