
OpenVINO Deep Learning Workbench Dec. 1st Webinar Q&A

JesusE_Intel
Moderator
  1. Is it possible to have multiple Inference Engines (CPU/GPU/Other supported hardware) execute a single model?

Yes, it is possible. One of the main functions of OpenVINO is to provide a unified application programming interface for executing neural models on different devices. For evaluating performance on different devices, you can use the DL Workbench; for serving in production, you can use the OpenVINO™ Model Server https://docs.openvino.ai/2021.4/openvino_docs_ovms.html.
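For illustration, here is a minimal sketch of running one IR model on two devices through the same unified API, assuming the 2021.4-era Python API; the model paths are placeholders:

```python
# Minimal sketch, assuming the 2021.4-era Python API and a converted IR model;
# "model.xml"/"model.bin" are placeholder paths.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_cpu = ie.load_network(network=net, device_name="CPU")  # execute on the CPU
exec_gpu = ie.load_network(network=net, device_name="GPU")  # execute on an Intel GPU
# The MULTI plugin can also spread requests for one model across devices,
# e.g. device_name="MULTI:GPU,CPU".
```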

 

  2. Is there fine-grained control for selecting the precision (INT8/FP32) of specific layers of the model, to balance performance/accuracy tradeoffs?

In the DL Workbench, you can select the calibration method and preset that allow you to tune the settings: https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Int_8_Quantization.html#int8-calibration-methods. If fine-grained control is required, there is an additional option in the Post-training Optimization Tool that allows you to exclude layers from calibration, leaving them in floating-point precision https://github.com/openvinotoolkit/openvino/blob/master/tools/pot/configs/accuracy_aware_quantization_spec.json#L116
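For illustration, a hedged sketch of what excluding a layer could look like in a POT quantization configuration, written as the equivalent Python dict ("conv_head" is a hypothetical node name):

```python
# Minimal sketch of a POT quantization config; layers listed under
# "ignored"/"scope" are left in their original floating-point precision.
algorithms = [{
    "name": "DefaultQuantization",
    "params": {
        "target_device": "CPU",
        "preset": "performance",
        "stat_subset_size": 300,              # calibration samples
        "ignored": {"scope": ["conv_head"]},  # hypothetical layer to skip
    },
}]
```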

 

  3. For a classification application, can the target image vary over time, or does it have to be static for a minimum amount of time? We have an application where we need to qualify LED color based on the Ethernet link speed being tested, where blinking and static LEDs are normal. Will blinking LEDs pose a problem for classification?

The size of the image can change over time. To work with such images, you need to either resize the images to a single size or reshape the network. OpenVINO also provides support for models with dynamic inputs; the feature is currently available only in the master branch of the GitHub repo https://github.com/openvinotoolkit/openvino. The application qualifying LED color is a very interesting idea. It is difficult to say how much influence blinking will have on the classification. It usually depends on how well the network is trained, so when training the network, we recommend saturating the dataset with similar images (blinking LEDs).
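For illustration, a minimal sketch of reshaping a network to a new input size before loading it, assuming the 2021.4-era Python API; the paths, input name, and target shape are placeholders:

```python
# Minimal sketch: resize the network input instead of resizing every image.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
input_name = next(iter(net.input_info))
net.reshape({input_name: [1, 3, 320, 320]})  # new NCHW shape, if the topology allows it
exec_net = ie.load_network(network=net, device_name="CPU")
```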

 

  4. Can you please explain more about the smart quantization? That sounds extremely interesting!

During model calibration, an intelligent analysis of the model takes place, and, depending on the specified accuracy requirements, the model is partially or completely converted to INT8 precision. You can find more details here: https://docs.openvino.ai/latest/pot_compression_algorithms_quantization_README.html

 

  5. The example software platform I have works only with 11th-generation and higher Intel processors; is there no option for older processors to work?

Please file a ticket on GitHub issues https://github.com/openvinotoolkit/openvino/issues or on the Intel community forum https://community.intel.com/t5/forums

 

  6. When using a model from the OMZ, how do I find out the labels of this model for my dataset annotation?

The labels of the model are set during training. To find out this information, read the model description (including the dataset on which the model was trained) in the Open Model Zoo https://docs.openvino.ai/latest/model_zoo.html

 

  7. Can we deploy the example project on two or three different servers to compare performance?

In the DL Workbench, you can create a project on local devices or test the model on Intel accelerators in the DevCloud https://docs.openvino.ai/2021.4/workbench_docs_Workbench_DG_Start_DL_Workbench_in_DevCloud.html, and compare the resulting performance, as well as model characteristics before and after optimization https://docs.openvino.ai/2021.4/workbench_docs_Workbench_DG_Compare_Performance_between_Two_Versions_of_Models.html

 

  8. Why INT8 calibration and not something like INT16?

Recent Intel hardware architectures introduced special processor instructions (Intel® DL Boost) designed to work with INT8 data, which increase inference performance, especially for optimized models. However, INT8 models are generally faster even on Intel platforms that do not support the VNNI instruction set.

Learn more https://www.intel.com/content/dam/www/public/us/en/documents/product-overviews/dl-boost-product-overview.pdf 
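For reference, INT8 quantization typically maps floating-point values to 8-bit integers with an affine transform, x ≈ s · (q − z), where q is an integer in [−128, 127], s is a scale, and z is a zero point; calibration selects s and z from representative data so that the mapping covers the observed value ranges.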

 

  9. Is there an API or SDK for this whole thing?

The OpenVINO toolkit has APIs available for Python, C, and C++ https://docs.openvino.ai/latest/api/api_reference.html. The DL Workbench can help with learning the OpenVINO™ Python and C++ APIs in a JupyterLab* environment https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Deploy_and_Integrate_Performance_Criteria_into_Application.html#use-streams-and-batches-in-your-application
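For illustration, a minimal end-to-end sketch of the Python API, assuming the 2021.4-era release; the model path and the 1x3x224x224 input shape are placeholders:

```python
# Minimal sketch: load an IR model and run one synchronous inference.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
input_name = next(iter(net.input_info))
dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # stand-in for a real image
results = exec_net.infer({input_name: dummy})         # dict: output name -> ndarray
```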

 

  10. When converting to the IR, can the number of outputs of the classification layer be changed, as is done in transfer learning as a model optimization method? I guess it would be automated depending on the number of classes of the input images (or non-image data).

Converting the model into an IR does not change its topology, but you can change it yourself, for example, by replacing network outputs or replacing a subgraph with another subgraph. Find more details in the Model Optimizer documentation https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model.html

 

  11. What is the difference between FP16-INT8 and INT8? Are they the same?

In the FP16-INT8 case, although the optimized model is executed in INT8 precision, some of the layers can keep their original precision, for example, FP16. This depends on the requirements and configuration that were used for optimization.

 

  12. Can we import multiple models and do some performance comparison between them?

Yes, in the DL Workbench you can compare performance between several models on the Create Project page https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_View_Inference_Results.html or between a model's projects https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Compare_Performance_between_Two_Versions_of_Models.html#compare-performance-between-two-versions-of-a-model

 

  13. Where can I find the spec for IR models?

You can find it in the operation sets documentation https://docs.openvino.ai/latest/openvino_docs_ops_opset.html

 

  14. Can the Workbench run on my own local machine? What are the optimal hardware requirements for it?

Yes, you can install it with Docker and run it locally. Watch the installation video https://www.youtube.com/watch?v=JBDG2g5hsoM, check the prerequisites https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Prerequisites.html, and obtain a command for running the DL Workbench on your machine here https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Run_Locally.html

 

  15. I got an Intel Neural Compute Stick 2 (FPGA over a USB3 connection); how is this hardware supported and integrated with Intel OpenVINO? I also have an Nvidia Jetson Nano (ARM+GPU); is there support for these non-Intel hardware accelerators in OpenVINO?

At the moment, OpenVINO supports Intel accelerators, including the Neural Compute Stick 2 (which is based on the Intel® Movidius™ Myriad™ X VPU rather than an FPGA). You can read more in this document https://docs.openvino.ai/latest/openvino_docs_IE_DG_supported_plugins_MYRIAD.html

 

  16. Heatmaps are a good next step for trust enhancement, but are there other, more advanced ways to increase trust in the decision process?

The DL Workbench currently offers the XAI heatmap method for classification models, enabling users to visualize the decision-making process, which is useful for computer vision tasks. One potential area of future work is extending this to object detection and segmentation tasks.

 

  17. Is there any difference between the Windows and Linux versions of OpenVINO?

There are different delivery channels, deployment specifics, and environment settings for different operating systems, but the programming interface is identical and does not depend on the operating system or accelerator. Learn about the different distributions of OpenVINO here https://docs.openvino.ai/2021.4/get_started.html

 

  18. Which versions of TF are supported for converting your own models to IR?

The Model Optimizer allows you to get an IR for TensorFlow models in different formats (frozen graph, checkpoint, MetaGraph, SavedModel, Keras H5). You can learn more about TF model conversion in this document https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html

 

  19. The main issue with IR is custom layers and custom loss functions (aka custom objects); how are those supported in IR?

The IR is the OpenVINO representation format for models. This intermediate format is used to achieve optimal inference on various accelerators, so the loss function is not applicable in this context. OpenVINO supports custom layers; find more details in the Custom Layers guide https://docs.openvino.ai/latest/openvino_docs_HOWTO_Custom_Layers_Guide.html

 

  20. Can I compare accuracy between my original model and the INT8 version? Is there a big loss?

That is the great part: you can control the loss with the tool, and it is often very minimal, as you can see from this demo. Usually, even with default settings, the accuracy loss is around 1%. There are also settings that let you explicitly limit the acceptable accuracy loss, e.g. to 0.5%. This might come with some tradeoff in the maximum achievable performance, but you have this option. Learn more here https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Int_8_Quantization.html#accuracyaware-method
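For illustration, a hedged sketch of capping the accuracy drop at 0.5% in a POT accuracy-aware configuration, written as the equivalent Python dict; the values are illustrative:

```python
# Minimal sketch: AccuracyAwareQuantization keeps reverting layers to
# floating point until the measured accuracy drop fits under maximal_drop.
algorithms = [{
    "name": "AccuracyAwareQuantization",
    "params": {
        "target_device": "CPU",
        "preset": "performance",
        "stat_subset_size": 300,
        "maximal_drop": 0.005,  # at most 0.5% absolute accuracy drop
    },
}]
```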

 

  21. It's my first session with OpenVINO. Can you tell me how to access the platform?

The easiest way to start working with the OpenVINO toolkit is to use its official GUI, the DL Workbench https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Introduction.html. You can start it on your machine https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Run_Locally.html or in the DevCloud https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Start_DL_Workbench_in_DevCloud.html. If you prefer to work with the OpenVINO CLI, learn how to get started here https://docs.openvino.ai/latest/get_started.html

 

  22. A Linux build environment needs these components: GNU Compiler Collection (GCC)* 3.4 or higher, CMake* 3.10 or higher, Python* 3.6 to 3.8. Does this mean OpenVINO doesn't work on an Intel GPU in Linux, CPU only?

OpenVINO works with Intel GPUs on Linux; those components are requirements for the build toolchain, not a limitation on inference devices. You can evaluate the performance in the DL Workbench, since it supports analyzing the model on an Intel GPU.
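For illustration, a minimal sketch of checking which devices the runtime detects, assuming the 2021.4-era Python API:

```python
# Minimal sketch: list the inference devices OpenVINO sees on this machine.
from openvino.inference_engine import IECore

ie = IECore()
print(ie.available_devices)  # e.g. ['CPU', 'GPU'] once the Intel GPU driver is set up
```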

 

  23. I have already used quantization for a CNN model to predict battery SOC; now I want to use recurrent networks such as LSTM and GRU for the same task. I don't know if they are supported by TensorFlow, or whether it is possible to work with them in OpenVINO.

These networks are supported by TensorFlow and OpenVINO. You can learn more in this document https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_lm_1b_From_Tensorflow.html

 

  24. What is a model format?

A format is a way of describing the architecture of a model and its weights. A neural model trained in one of the deep learning frameworks is represented in the corresponding format, for example, TensorFlow (frozen graph, checkpoint, MetaGraph, SavedModel), ONNX, OpenVINO™ IR, etc. The DL Workbench supports the following formats: https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Select_Models.html#supported-frameworks You can find more info about model formats here: https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model.html

 

  25. Can we input complex models of the same category? If the input is gloomy and blurred, will the model predict accurately?

It really depends on your model. If it makes good predictions in the first place, it will make the same predictions when you run it with OpenVINO, just faster.

 

  26. Is there any course available for this?

The DL Workbench has a series of online tutorials on YouTube:

Intro: https://www.youtube.com/watch?v=on8xSSTKCt8

Installation: locally https://www.youtube.com/watch?v=JBDG2g5hsoM and in the DevCloud https://www.youtube.com/watch?v=rygSRiKn0oY

Get started: https://www.youtube.com/watch?v=gzUFYxomjn8

On the educational resources page, you can find useful links to all videos, articles, and webinars about the DL Workbench https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Additional_Resources.html

For OpenVINO, we have a certification program; you can learn more here: intel.ly/edgeaicert

 

  27. Could you please talk about Explainable AI?

We expose the ability to see the attention map in the form of a heatmap for computer vision classification use cases. You can drag and drop your picture and see the heatmap of the model attention that caused the corresponding prediction. You can learn more in this document https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Select_Model.html#visualize-model-predictions-with-importance-map

 

  28. How does INT8 quantization deal with the training parameters? Does it also transform these?

INT8 quantization is a post-training method for optimizing models and does not transform training parameters; it only requires representative data and, optionally, an accuracy configuration. If you want to optimize the network at the training stage, you can learn about the corresponding tool (NNCF) here: https://github.com/openvinotoolkit/nncf

 

  29. What other compressions does OpenVINO allow, besides INT8 quantization?

In addition to INT8, there is conversion to the FP16 format, which is recommended for all models in OpenVINO. Such models take up less space and are supported by all plugins. Learn more here: https://docs.openvino.ai/latest/pot_docs_LowPrecisionOptimizationGuide.html
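For example, the Model Optimizer can produce an FP16 IR directly through its --data_type FP16 option when converting a model.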

 

  30. Does the Workbench run locally (Docker) on macOS with Apple silicon?

At the moment, validation on macOS with Apple silicon has not been performed. We would be grateful if you shared your feedback with us on the Intel community forum https://community.intel.com/t5/forums

 

  31. Can I install the DL Workbench using pip without installing Docker on Windows? I found this command: python -m pip install -U openvino-workbench. Will it work without having Docker installed and configured?

You cannot install the DL Workbench without installing Docker. Docker is required to run the DL Workbench locally; the pip package only simplifies the process of starting it. Also, you can always work with Intel accelerators inside the DL Workbench in the DevCloud https://docs.openvino.ai/2021.4/workbench_docs_Workbench_DG_Start_DL_Workbench_in_DevCloud.html

 

  32. I have OpenVINO installed in WSL 1. If I install the DL Workbench through OpenVINO, will it be able to open the browser on localhost inside the WSL environment? Do I need to upgrade to WSL 2 for it to work properly, or is WSL 1 also supported?

This has not been tested yet. We would be grateful if you shared your feedback with us on the Intel community forum https://community.intel.com/t5/forums.

  33. What info does the .xml IR contain? Are the throughput/latency values hardware-dependent?

The XML file describes the network topology, while the accompanying .bin file stores the weights. The throughput/latency values are hardware- and model-dependent; therefore, you need to evaluate the model and optimize it for the target you are planning to use.
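For illustration, a minimal sketch of inspecting what an IR describes, assuming the 2021.4-era Python API; the paths are placeholders:

```python
# Minimal sketch: read an IR and print its input/output names and shapes.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
for name, info in net.input_info.items():
    print("input:", name, info.input_data.shape)
for name, data in net.outputs.items():
    print("output:", name, data.shape)
```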

 

  34. Can we use the same INT8 model for different batch sizes?

Yes, if the model, taking into account its architecture, can work with batches of images.
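For illustration, a minimal sketch of reusing the same INT8 IR with a different batch size, assuming the 2021.4-era Python API; the paths are placeholders:

```python
# Minimal sketch: change the batch size before loading the network,
# assuming the topology supports batching.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model_int8.xml", weights="model_int8.bin")
net.batch_size = 4  # the IR itself typically stores batch 1
exec_net = ie.load_network(network=net, device_name="CPU")
```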

 

  35. If I build my own model with PyTorch, will it be compatible with OpenVINO inference?

There are recommendations for working with original models in the DL Workbench and OpenVINO in general: https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Tutorial_Import_Original.html. Following the recommendations, you need to convert your PyTorch model to ONNX https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch.html#export-pytorch-model-to-onnx-format. Once it is in the ONNX format, you will be able to import it into OpenVINO and the DL Workbench.
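For illustration, a minimal sketch of the PyTorch-to-ONNX step; the torchvision model and the input shape are placeholders for your own model:

```python
# Minimal sketch: export a PyTorch model to ONNX for import into OpenVINO.
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # example NCHW input
torch.onnx.export(model, dummy_input, "resnet18.onnx", opset_version=11)
```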

 

  36. It appears the model optimization compares the before and after on the same Intel processor; has any effort been made to quantify the improvement by comparing to a TPU?

No. OpenVINO accelerates neural networks on Intel hardware, so such cross-vendor comparisons are outside its scope.
