Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

NEW RELEASE: Intel® Distribution of OpenVINO™ toolkit 2022.3

Luis_at_Intel
Moderator

What’s New in This Release 

This long-term support (LTS) release provides functional bug fixes and capability changes on top of the previous 2022.2 release. Developers can expect new performance enhancements, support for more deep learning models, broader device portability, and higher inferencing performance with fewer code changes.

 

This update includes: 

Broader model and hardware support - See a performance boost straight away with automatic device discovery, load balancing & dynamic inference parallelism across CPU, GPU, and more. 

  • Full support for 4th Generation Intel® Xeon® Scalable processor family for deep learning inferencing workloads from edge to cloud. 
  • Full support for Intel’s discrete graphics cards, such as the Intel® Data Center GPU Flex Series and Intel® Arc™ GPU, for deep learning inferencing in intelligent cloud, edge, and media analytics workloads.
  • Improved performance when leveraging the throughput hint on the CPU plugin for the 12th and 13th Generation Intel® Core™ processor families.
  • Enhanced “Cumulative throughput” mode and selection of compute modes added to AUTO functionality, enabling multiple accelerators (e.g., multiple GPUs) to be used at once to maximize inferencing performance.
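For illustration, here is a minimal sketch of using the AUTO device with the cumulative throughput hint via the OpenVINO Python API (the model path and the commented CPU variant are placeholders, not part of this announcement):

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder path to an IR model

# Let AUTO discover the available devices and spread inference requests
# across them (e.g. multiple GPUs) using the cumulative throughput hint.
compiled_model = core.compile_model(
    model,
    "AUTO",
    {"PERFORMANCE_HINT": "CUMULATIVE_THROUGHPUT"},
)

# On a single device, the regular throughput hint applies instead, e.g.:
# compiled_model = core.compile_model(model, "CPU", {"PERFORMANCE_HINT": "THROUGHPUT"})
```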

Expanded Model Coverage - Optimize and deploy with ease across an expanded range of deep learning models, including NLP models, and access AI acceleration across a broader range of hardware.

  • New Jupyter notebook tutorials for Stable Diffusion text-to-image generation, YOLOv7 optimization and 3D Point Cloud Segmentation. 
  • Broader support for NLP models and use cases such as text-to-speech and voice recognition.
  • Continued performance enhancements for computer vision models including StyleGAN2, Stable Diffusion, PyTorch RAFT and YOLOv7.
  • Significant quality and model performance improvements on Intel GPUs compared to the previous OpenVINO toolkit release.

Improved API & More Integrations - It’s easier to adopt and maintain your code. These updates require fewer code changes and align better with frameworks to minimize conversions.

  • NEW: Hugging Face Optimum Intel – Gain the performance benefits of OpenVINO (including NNCF) when using Hugging Face Transformers. The initial release supports PyTorch models; a usage sketch follows this list.
  • Preview of TensorFlow Front End – Load TensorFlow models directly into OpenVINO Runtime and easily export to the OpenVINO IR format without offline conversion. The new “--use_new_frontend” flag enables this preview – see further details in the Model Optimizer section of the release notes.
  • Intel® oneAPI Deep Neural Network Library (oneDNN) has been updated to version 2.7, bringing further refinements and significant performance improvements on the latest Intel CPU and GPU processors.
  • Introducing C API 2.0 to support new features introduced in OpenVINO API 2.0, such as dynamic shapes with CPU, pre-processing and post-processing APIs, and unified property definition and usage. The new C API 2.0 shares the same library files as the 1.0 API, but with a different header file.
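As a rough sketch of the Hugging Face Optimum Intel integration mentioned above (assuming the optimum-intel and transformers packages are installed; the model ID is only an example, and argument names may differ between versions):

```python
from optimum.intel.openvino import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

# Example Hugging Face Hub model ID; any supported PyTorch model can be used.
model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# Export the PyTorch model to OpenVINO IR on the fly and load it into the
# OpenVINO Runtime (the export argument name may vary by optimum-intel version).
model = OVModelForSequenceClassification.from_pretrained(model_id, from_transformers=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("OpenVINO makes this model run faster on Intel hardware."))
```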

Download the 2022.3 LTS Release 

Download Latest Release

 

Get all the details
See 2022.3 LTS release notes
See long-term support (LTS) policy


