
AI Everywhere: Innovation 2022 Wrap-Up

Jack_Erickson

The Intel Innovation 2022 conference featured a wide array of AI topics throughout keynote presentations, session tracks, demos, workshops, and hackathons. Over the next few weeks, we will post in-depth coverage of many of these. In the meantime, you can access recordings here:

Intel® On 365: Innovation 2022

Looking across all the AI topics, one trend stands out: the maturation of the full AI application lifecycle. In the past, AI conferences often centered on impressive model demonstrations or training-performance benchmarks. Judging by the themes at Innovation 2022, there is now tremendous momentum in moving from pilot to production and applying AI to solve real problems everywhere.

Domain-Specific Problem Solving

In his keynote, Pat Gelsinger spoke about how building and deploying AI models for the real world requires specialized skill sets, yet domain experts are often left out of the loop because applying AI models still requires AI practitioners and data scientists. The Intel® Geti™ demo showed how domain experts can start with a computer vision inspection model and refine it for their application. Via “active learning”, they interactively annotate images, train, correct missed detections, retrain, and then deploy. This simplified transfer learning technique can drastically reduce the amount of training data required.
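
Geti itself is a no- or low-code tool, but the fine-tuning step at its core is easy to sketch. Below is a minimal, hypothetical PyTorch example of transfer learning for a two-class (pass/defect) inspection task; the library, model, and dataset choices are illustrative assumptions, not details from the demo:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    # Start from a model pre-trained on ImageNet; freeze the feature
    # extractor and train only a new, task-specific classification head.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, 2)  # pass / defect

    # FakeData stands in for the small set of images a domain expert
    # would annotate and correct in an active-learning tool.
    data = datasets.FakeData(size=64, image_size=(3, 224, 224),
                             num_classes=2, transform=transforms.ToTensor())
    loader = DataLoader(data, batch_size=16, shuffle=True)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for epoch in range(3):  # a few passes suffice for a small head
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

Because only the small head is trained, a handful of labeled images can be enough, which is what makes the annotate-train-correct-retrain loop practical for non-specialists.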

As part of Greg Lavender’s Day 2 keynote, Intel announced three new AI reference kits for healthcare. These new kits will help clinicians review medical code classifications, identify imaging anomalies, and facilitate claims reviews. They add to the four kits announced in July, with more than 30 planned in the next year. AI reference kits are open source and include AI software, models and algorithms, datasets, Python code, and documentation tailored to a specific domain. The AI for Social Good Hackathon at Innovation attracted over 100 developers, who used the AI reference kits, along with other tools and framework optimizations from Intel’s AI software portfolio, to build applications across classical machine learning, computer vision, and natural language processing.

In his keynote “Unlocking the Full Potential of AI by Leveraging Data-Centric AI”, Andrew Ng said “don’t iterate on the model, improve the data”. In other words, customize the dataset, not the model. For instance, the same visual inspection model can be applied to pills or to semiconductor wafers; the only difference is the data. There is a long tail of small projects that require a vertical-specific focus, and these projects cannot afford to spend millions of dollars on a large team of AI specialists and a year of model training.

Domain-specific transfer learning is becoming the prevalent approach. This disaggregation between “foundation” models and versions customized for a domain via transfer learning shows that the market is maturing.

Democratizing AI

For domain experts to be productive with AI, it needs to be more accessible. In the panel discussion during the Tech Insights presentation, Julien Simon, Chief Evangelist at Hugging Face, summed it up:

“Everybody gets excited about the humungous models, but I'm not. Customers I talk to don't care – they need to solve real-life problems, they can't work on something for twelve months. They want something trained off the shelf, deployable in one line of code. It's a building block in their app, to make their app smarter.”

Hugging Face presented during one of the AI/ML track sessions. Their Transformers models range in size from 100 million to billions of parameters, and models of this size can require months to train from scratch. Hugging Face offers APIs to access pre-trained models and datasets, so domain experts can start with off-the-shelf models and customize them via transfer learning.
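
As a concrete illustration of “deployable in one line of code”, here is a minimal sketch using the transformers pipeline API; the sentiment-analysis task and the example sentence are our own illustrative choices:

    from transformers import pipeline

    # Downloads a default pre-trained sentiment model on first use;
    # no training or model surgery is required.
    classifier = pipeline("sentiment-analysis")
    result = classifier("The new inspection line cut our defect rate in half.")
    print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]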

Intel Geti, described earlier, was designed to democratize AI through a no- or low-code approach for transfer learning with computer vision models. This frees domain experts to focus on improving their data.

When deploying these large models to production, quantizing to INT8, BFloat16, or a mixture of the two can greatly reduce model size and latency. But quantization often requires significant additional code and experimentation. To address this, Neural Coder offers a single-click, no-code plug-in for model compression with Intel Neural Compressor. Pat’s keynote featured a demo of this capability, quantizing an image classification model to INT8. The quantized model performed inference over 10x faster than the original full-precision model while maintaining accuracy. This automation greatly reduces the coding and iteration required for production deployment.
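
Neural Coder generates the Intel Neural Compressor calls for you, but the underlying technique is easy to demonstrate. Below is a generic sketch of post-training INT8 quantization using PyTorch’s built-in dynamic quantization API; it is a stand-in for illustration, not the code the tool actually emits:

    import torch
    import torch.nn as nn

    # A small full-precision network standing in for a real classifier.
    model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

    # Post-training dynamic quantization: weights of the listed module
    # types are stored as INT8; activations are quantized on the fly.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 512)
    print(quantized(x).shape)  # same interface; smaller, faster weights

In practice, the speed/accuracy trade-off has to be validated per model, which is exactly the iteration that tools like Neural Coder automate.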

Enabling End-to-End AI Workflows via Open Ecosystems

Quantization is just one technique required for production deployment. There’s an entire AI application lifecycle: from early exploration and training, to production inference, to application monitoring and retraining. On top of that, there are domain-specific considerations such as explainability, privacy, and security. Innovating across so many areas at the speed of AI requires an open ecosystem approach, which could be seen across the sessions.

Greg Lavender, in his Day 2 keynote, reiterated Intel’s commitment to open source and ecosystem collaboration by showcasing engagements with Red Hat, engineering work with TensorFlow, and our founding membership in the OpenXLA project. He also invited Brian Martin, Head of AI in R&D Information Research at AbbVie, to present how the collaboration between Katana Graph and Intel enabled his organization to connect massive amounts of diverse data via graph neural networks, greatly accelerating their drug discovery and development process.

With such broad and diverse needs spanning the end-to-end AI development lifecycle, rapid innovation requires open ecosystems. This was truly on display at Innovation 2022, which, in addition to the aforementioned collaborations, included presentations and demos covering the following:

  • The AI reference kits were the result of a collaboration between Intel and Accenture.
  • Hugging Face has become the destination for large foundation models and datasets, hosting over 70,000 models and over 10,000 datasets.
  • TruEra offers a solution to measure and improve AI quality, including test harness creation, interactive debugging, reporting, and monitoring.
  • Zeblok Computational introduced its “AI-Optimization-as-a-Service”, a container solution that can run end-to-end AI pipelines on hybrid cloud environments and lets you try different optimization engines.
  • cnvrg.io (now an Intel company) presented its Metacloud offering, a managed cloud platform built on its MLOps solution for end-to-end AI development.
  • Red Hat OpenShift Data Science is a hosted or on-premises solution that brings together a variety of common AI/ML tools, including MLOps from cnvrg.io.
  • AWS SageMaker is a fully managed offering built on top of hybrid hardware architectures and AI platforms.

Looking Ahead

It was encouraging to see the focus on building out full AI solutions that solve business-specific problems. In the coming weeks, we will post deeper insights into several of the topics covered during the conference. We will also cover some of the more forward-looking topics, especially from the AI Tech Insights session “AI and Data Science: Productivity and Performance at Scale” by Kavitha Prasad, which covered topics central to providing production solutions to businesses. The session also included a presentation by Sundar Ranganathan of AWS on their full-stack AI offerings powered by Intel AI technologies, and a panel discussion about developer pain points with panelists from HCL, Hugging Face, Meta, and Zoom. It’s clear that with all the current innovation in AI, broad adoption is becoming a reality.

About the Author
Technical marketing manager for Intel AI/ML products and solutions. Before Intel, I spent 7.5 years at MathWorks in technical marketing for the HDL product line, and 20 years at Cadence Design Systems in various technical and marketing roles for synthesis, simulation, and other verification technologies.