Intel® Distribution of OpenVINO™ Toolkit

NEW RELEASE: OpenVINO toolkit 2023.0 – Easier to deploy and accelerate AI


What’s new in this release?

Portability and performance 

  • Instant performance boost through automatic device detection, load balancing, and dynamic inference parallelism across processors, including CPU and GPU. 
  • Optimize for performance or for power saving: the CPU plugin now offers thread scheduling on 12th gen Intel® Core™ processors and later, so developers can choose to run inference on E-cores, P-cores, or both, depending on the application's requirements. 
  • Default inference precision: no matter which device you use, OpenVINO toolkit defaults to the format that delivers optimal performance, whether that is BF16 or FP16, so you don't have to worry about it; other formats and options remain available. 
  • Improved model caching on GPU with more efficient model loading/compiling. 
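As a rough sketch, the automatic device detection and performance-versus-power trade-off above can be exercised from Python like this. This is a hedged example, not the definitive API usage: it assumes the OpenVINO 2023.0 Python API, and the model path and hint value are placeholders.

```python
# Hedged sketch: compile a model on the AUTO device with a performance hint.
# Assumes the OpenVINO 2023.0 Python API; "model.xml" is a placeholder path.

def compile_for_auto(model_path, performance_hint="LATENCY"):
    """Let OpenVINO pick the best available device (CPU, GPU, ...)."""
    from openvino.runtime import Core  # imported here so the sketch stays importable

    core = Core()
    model = core.read_model(model_path)
    # AUTO handles device detection and load balancing; the hint steers
    # the runtime toward latency- or throughput-oriented scheduling.
    return core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": performance_hint})
```

On 12th gen and newer hybrid CPUs, the thread-scheduling choice between E-cores and P-cores is exposed as an additional compile-time property (ov::hint::scheduling_core_type in the C++ API); the sketch above leaves the default scheduling in place.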

More integrations, minimizing code changes 

  • TensorFlow models no longer need offline conversion - it happens automatically at runtime. Now you can take a standard TensorFlow model and load it directly in OpenVINO Runtime or OpenVINO Model Server. 
  • The latest release includes support for Python 3.11. 
  • C++ developers can now install OpenVINO Runtime from Conda Forge. 
  • Arm processor support is now included in the OpenVINO CPU plugin, covering dynamic shapes and full processor performance, with broad sample code available in our notebooks. Officially validated on Raspberry Pi 4 and Apple® Mac M1/M2. 
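Because TensorFlow conversion now happens at runtime, a SavedModel directory can be read directly, with no offline Model Optimizer step. A minimal sketch, assuming the 2023.0 TensorFlow frontend (the directory path is a placeholder):

```python
# Hedged sketch: load a TensorFlow SavedModel directly, no offline conversion.
# Assumes OpenVINO 2023.0's TensorFlow frontend; the path is a placeholder.

def load_tensorflow_model(saved_model_dir):
    """Read a TF SavedModel straight into OpenVINO Runtime."""
    from openvino.runtime import Core  # imported here so the sketch stays importable

    core = Core()
    # Conversion happens automatically at read time; the returned object is
    # a regular OpenVINO model, ready to be compiled for any device.
    return core.read_model(saved_model_dir)
```

The same model object can then be passed to compile_model, or served as-is through OpenVINO Model Server.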

Broader model support and more optimizations 

  • Support for generative AI models, including CLIP, BLIP, Stable Diffusion 2.0, text-processing models, and transformer models (e.g., S-BERT, GPT-J). Other notable additions: Detectron2, PaddleSlim, RNN-T, Segment Anything Model (SAM), Whisper, and YOLOv8. 
  • Initial support for dynamic shapes on GPU: developers no longer need to switch to static shapes when leveraging the GPU, giving you more flexibility. 
  • Neural Network Compression Framework (NNCF) is now the quantization tool of choice, making it easier to greatly improve model performance by compressing your model. 
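A post-training quantization pass with NNCF looks roughly like this. This is a hedged sketch, assuming NNCF's nncf.quantize and nncf.Dataset APIs; make_calibration_items is a hypothetical data source you would supply.

```python
# Hedged sketch: 8-bit post-training quantization with NNCF.
# Assumes nncf.quantize() and nncf.Dataset from the NNCF Python API;
# make_calibration_items() is a hypothetical calibration-data source.

def quantize_model(ov_model, make_calibration_items):
    """Compress an OpenVINO model with NNCF post-training quantization."""
    import nncf  # imported here so the sketch stays importable without NNCF

    # NNCF runs calibration samples through the model to pick quantization
    # ranges, then returns a compressed model with (typically) int8 weights.
    calibration_dataset = nncf.Dataset(make_calibration_items())
    return nncf.quantize(ov_model, calibration_dataset)
```

The compressed model can then be compiled and run exactly like the original, usually with a noticeable speedup and a smaller memory footprint.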

Download the 2023.0 Release 
Download Latest Release Now

