
NEW RELEASE: Intel® Distribution of OpenVINO™ toolkit 2021.1

Max_L_Intel
Moderator

Adds TensorFlow 2.x and New Hardware Support, and Goes Beyond Vision

Increase inference performance with 11th Gen Intel® Core™ processors and go beyond computer vision with added support for diverse workloads

  • This major release introduces new and important capabilities, plus breaking and backward-incompatible changes. All users are highly encouraged to upgrade to this version.
  • Introduces official support for models trained in the TensorFlow* 2.x framework (see the conversion sketch after this list).
  • Adds support for the latest hardware, including 11th Gen Intel® Core™ processors (formerly code-named Tiger Lake), along with new inference performance enhancements from Intel® Iris® Xe graphics, Intel® Deep Learning Boost instructions, and Intel® Gaussian & Neural Accelerator 2.0 for low-power speech processing acceleration (see the device-selection sketch after this list).
  • Enables end-to-end capabilities to leverage the Intel® Distribution of OpenVINO™ toolkit for workloads beyond computer vision. These capabilities include audio, speech, language, and recommendation with new pretrained models; support for public models, code samples, and demos; and support for non-vision workloads in the DL Streamer component.
  • Adds a beta release that integrates the Deep Learning Workbench with the Intel DevCloud for the Edge. (The full release is expected in Q4 2020.) Developers can now graphically analyze models using the Deep Learning Workbench on Intel® DevCloud for the Edge (instead of a local machine only) to compare, visualize, and fine-tune a solution against multiple remote hardware configurations.
  • Includes OpenVINO™ Model Server, a scalable microservice add-on to the Intel® Distribution of OpenVINO™ toolkit that provides a gRPC or HTTP/REST endpoint for inference, making it easier to deploy models in cloud or edge server environments. It is now implemented in C++ to enable a reduced container footprint (for example, less than 500 MB) and deliver higher throughput and lower latency (see the client sketch after this list).
  • Now available through Gitee* and PyPI*.
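
Conversion sketch: a minimal, hedged example of converting a TensorFlow* 2.x SavedModel to OpenVINO™ IR by driving Model Optimizer from a Python script. The mo_tf.py entry point and the --saved_model_dir/--output_dir options follow the Model Optimizer documentation, but the script location, the directory names, and any extra options your model may need (for example, input shapes) are assumptions to adapt to your install.

```python
import subprocess
import sys

saved_model_dir = "my_tf2_saved_model"  # hypothetical TF 2.x SavedModel directory
output_dir = "ir_output"                # hypothetical destination for model.xml / model.bin

# Model Optimizer ships under deployment_tools/model_optimizer in the toolkit
# install; here we assume mo_tf.py is reachable from the current directory.
subprocess.run(
    [
        sys.executable, "mo_tf.py",
        "--saved_model_dir", saved_model_dir,
        "--output_dir", output_dir,
    ],
    check=True,
)
```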
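Device-selection sketch: a minimal example, using the Inference Engine Python API, of loading an IR onto one of the device plugins mentioned above. The model files, the chosen device name, and the zero-filled input are placeholders.

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
# "model.xml"/"model.bin" are placeholder IR files produced by Model Optimizer.
net = ie.read_network(model="model.xml", weights="model.bin")

# Typical plugin names: "CPU" (uses Intel DL Boost where available),
# "GPU" (Intel Iris Xe / integrated graphics), "GNA" (low-power speech workloads).
exec_net = ie.load_network(network=net, device_name="GPU")

input_blob = next(iter(net.input_info))
input_shape = net.input_info[input_blob].input_data.shape
result = exec_net.infer({input_blob: np.zeros(input_shape, dtype=np.float32)})
print({name: out.shape for name, out in result.items()})
```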
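Client sketch: a minimal example of querying OpenVINO™ Model Server over its HTTP/REST endpoint using the TensorFlow Serving-style predict API it exposes. The host, port, model name, and input shape are assumptions; they depend on how the server container was started.

```python
import numpy as np
import requests

# Hypothetical endpoint: REST port 9001, model served under the name "my_model".
url = "http://localhost:9001/v1/models/my_model:predict"
dummy_input = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input shape
payload = {"instances": dummy_input.tolist()}

response = requests.post(url, json=payload)
response.raise_for_status()
predictions = np.asarray(response.json()["predictions"])
print(predictions.shape)
```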

