Data is the ever-present core of business growth and insights, and machine learning is becoming exponentially more prevalent across industries as a way to harness that data. As a result, the need for simple yet powerful solutions to optimize data processing has grown in step. Rather than relying on the often challenging and time-consuming process of creating Kubernetes clusters and complicated infrastructure setups, developers can use the Intel® AI Analytics Toolkit (AI Kit) on the Red Hat* OpenShift* Data Science (RHODS) platform to speed up many parts of the AI workflow, for example through drop-in acceleration replacements for frameworks like TensorFlow* and tools such as pandas. In addition, because automatic speech recognition is a major facet of AI persona creation and is often challenging to incorporate properly, the Intel® Distribution of OpenVINO™ toolkit can substantially optimize these models and expedite the deployment process using OpenShift and the OpenVINO Model Server.
The AI Kit provides clear-cut computational improvements that benefit companies by reducing operational expenditures (e.g., on expensive resources like GPUs) and by letting teams iterate on the data science workflow more quickly. When computations finish faster, developers solve more problems in the same amount of time. Accelerating model deployment has a similar effect: it streamlines the development workflow and cuts operational costs by shortening the time expensive computational resources are in use. One example of the AI Kit delivering clear improvements to model performance is the Intel® Neural Compressor sample for TensorFlow*. In that use case, the AI Kit significantly reduced model training time while keeping accuracy within 0.01% of the original version.
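The accuracy-preserving compression that the Neural Compressor sample demonstrates rests on quantization. A minimal pure-Python sketch of generic affine INT8 quantization (a simplified illustration of the idea, not the toolkit's actual implementation) shows why mapping 32-bit floats onto 8-bit integers loses so little accuracy:

```python
import random

# Simulated FP32 weights from a trained layer (illustrative data)
random.seed(0)
weights = [random.gauss(0.0, 0.5) for _ in range(1000)]

# Affine INT8 quantization: map the observed float range onto [-128, 127]
lo, hi = min(weights), max(weights)
scale = (hi - lo) / 255.0
zero_point = round(-128 - lo / scale)

def quantize(w):
    """Encode one float as an 8-bit integer."""
    return max(-128, min(127, round(w / scale + zero_point)))

q = [quantize(w) for w in weights]

# Dequantize to measure how much information the 8-bit encoding lost
deq = [(v - zero_point) * scale for v in q]
max_err = max(abs(w - d) for w, d in zip(weights, deq))

# The reconstruction error is bounded by one quantization step
assert max_err <= scale
print(f"max error: {max_err:.6f}, step size: {scale:.6f}")
```

Because the maximum error is bounded by the quantization step size, an INT8 model can stay within a tiny fraction of the original FP32 accuracy while using a quarter of the memory per weight.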
OpenVINO is a toolkit that has proven to greatly increase model efficiency by optimizing models in several key ways. For example, using a hybrid cloud approach with high-performance model inferencing, OpenVINO trims models down to their most critical components. It also uses quantization to reduce numerical precision, for example representing 32-bit floating-point weights as 8-bit integers. An example of OpenVINO's benefits can be seen in the deployment process and output of the same Neural Compressor model sample, which clearly displays the most useful results, such as the input and output shapes and the simplified text output after decoding.
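Once a model is deployed behind the OpenVINO Model Server, clients can reach it over its TensorFlow-Serving-compatible REST API. The sketch below only constructs the request URL and JSON body (no server is assumed to be running); the host, port, model name, and input tensor name are all hypothetical placeholders:

```python
import json

# Hypothetical endpoint for a model served by the OpenVINO Model Server.
# OVMS exposes a TensorFlow-Serving-compatible REST API of the form:
#   POST http://<host>:<port>/v1/models/<model_name>:predict
host, port, model_name = "ovms.example.com", 8501, "speech_model"  # placeholders
url = f"http://{host}:{port}/v1/models/{model_name}:predict"

# The request body lists input tensors under "instances";
# here, a dummy one-second clip of 16 kHz audio as zeros.
payload = {"instances": [{"input": [0.0] * 16000}]}
body = json.dumps(payload)

# Sending would be something like requests.post(url, data=body),
# omitted here since no live server is assumed.
print(url)
```

The response mirrors this structure, returning the output tensors as JSON, which is where the input/output shapes and decoded text mentioned above would appear.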
With AI and machine learning becoming critical to the advancement of companies across many industries, Intel continues to develop AI tools, technologies, and resources that help developers optimize the complicated workflows behind these evolving systems. Like AI itself, the development process behind AI and machine learning systems is continually evolving.
About the Experts
Audrey Reznik is a Sr. Principal Software Engineer on the Red Hat Cloud Services – OpenShift Data Science team, focusing on managed services, AI/ML workloads, and next-generation platforms. She has worked in the IT industry for over 20 years, in roles ranging from full-stack development to data science.
As a former technical advisor and data scientist, Audrey has been instrumental in educating data scientists and developers about what the OpenShift platform is and how to use OpenShift containers (images) to organize, develop, train, and deploy intelligent applications using MLOps. She is passionate about data science and, in particular, the current opportunities with ML and open source technologies.
Sean Pryor is a Sr. Software Engineer at Red Hat, working on Open Data Hub and Red Hat OpenShift Data Science, integrating ISV partners.