Big Ideas
See how hardware, software, and innovation come together.

Confidential AI: The Convergence of Security, Privacy and AI

Greg_Lavender

Organizations of all sizes face several challenges today when it comes to AI. According to the recent ML Insider survey, respondents ranked compliance and privacy as their greatest concerns when implementing large language models (LLMs) in their businesses. Of course, GenAI is just one slice of the AI landscape, but it is a good example of the industry's excitement around AI.

Data is one of your most valuable assets. Modern organizations need the flexibility to run workloads and process sensitive data on infrastructure that is trustworthy, and they need the freedom to scale across multiple environments.

We recognize that a computing environment outside your control is an environment you may not be able to trust. This is why Intel developed technology that allows you to isolate and process code and data in a trusted execution environment. Intel pioneered data center confidential computing with the introduction of Intel® Software Guard Extensions (Intel® SGX) for application isolation in 2018, followed by Intel® Trust Domain Extensions (Intel® TDX) for virtual machine isolation and, most recently, the independent attestation service Intel® Trust Authority.
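Attestation is the linchpin of that trust model: before sensitive data or secrets are handed to a trusted execution environment, a relying party verifies signed evidence about what is running inside it. As a simplified illustration only — the evidence format, key handling, and function names below are hypothetical stand-ins, not the actual Intel SGX, TDX, or Trust Authority APIs — the flow can be sketched as:

```python
import hashlib
import hmac
import json

# Toy attestation-style flow: the TEE side emits signed "evidence" (a
# measurement of its code identity), and a relying party checks both the
# signature and the expected measurement before trusting the environment.

TRUSTED_KEY = b"shared-verification-key"  # stands in for a real PKI root
EXPECTED_MEASUREMENT = hashlib.sha256(b"enclave-code-v1").hexdigest()

def produce_evidence(code_identity: bytes) -> dict:
    """What the TEE side would emit: a measurement plus a signature."""
    measurement = hashlib.sha256(code_identity).hexdigest()
    payload = json.dumps({"measurement": measurement}).encode()
    signature = hmac.new(TRUSTED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_evidence(evidence: dict) -> bool:
    """What the relying party checks before releasing secrets."""
    expected = hmac.new(TRUSTED_KEY, evidence["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, evidence["signature"]):
        return False  # evidence was tampered with in transit
    claims = json.loads(evidence["payload"])
    return claims["measurement"] == EXPECTED_MEASUREMENT

good = produce_evidence(b"enclave-code-v1")
bad = produce_evidence(b"enclave-code-tampered")
print(verify_evidence(good), verify_evidence(bad))  # True False
```

Real deployments replace the shared HMAC key with hardware-rooted asymmetric signatures and delegate the verification step to an independent service such as Intel Trust Authority, so the workload owner does not have to trust the infrastructure operator's word.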

These foundational technologies help enterprises trust the systems their workloads run on, combining public cloud flexibility with private cloud security. Today, Intel® Xeon® processors support confidential computing, and Intel is leading the industry's efforts, collaborating across semiconductor vendors to extend these protections beyond the CPU to accelerators such as GPUs, FPGAs, and IPUs through technologies like Intel® TDX Connect.

Last year, I had the privilege to speak at the Open Confidential Computing Conference (OC3) and noted that while still nascent, the industry is making steady progress in bringing confidential computing to mainstream status. As an industry, there are three priorities I outlined to accelerate adoption of confidential computing:

  1. Develop tooling for high-volume deployment of confidential computing applications and services.
  2. Advance trusted connectivity between trusted execution environments and PCI Express devices.
  3. Raise awareness of confidential computing.

In parallel, the industry needs to continue innovating to meet the security needs of tomorrow. Rapid AI transformation has focused the attention of enterprises and governments on protecting the confidentiality of the very data sets used to train AI models. Following the U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, the National Institute of Standards and Technology created the U.S. AI Safety Institute Consortium, which Intel has joined to support the development and deployment of policies for the safe and trustworthy use of AI. In Europe, regulations such as GDPR and the EU AI Act address similar concerns.

The need to maintain the privacy and confidentiality of AI models is driving the convergence of AI and confidential computing technologies, creating a new market category called confidential AI.

This year's OC3 conference included an industry panel discussion on the potential of confidential AI, in which I was pleased to participate along with my peers from AMD (Mark Papermaster), Microsoft Azure (Mark Russinovich) and Nvidia (Ian Buck).

Actionable AI Insights Are Dependent on Secure and Privacy-Preserving Access to Broad Data Sets

Data scientists and engineers at organizations, and especially those belonging to regulated industries and the public sector, need safe and trustworthy access to broad data sets to realize the value of their AI investments.

The inability to leverage proprietary data in a secure and privacy-preserving manner is one of the barriers that has kept enterprises from tapping into the bulk of the data they have access to for AI insights.

Confidential AI enables enterprises to implement safe and compliant use of their AI models for training, inferencing, federated learning and tuning. Its significance will be more pronounced as AI models are distributed and deployed in the data center, cloud, end user devices and outside the data center’s security perimeter at the edge. It allows organizations to protect sensitive data and proprietary AI models being processed by CPUs, GPUs and accelerators from unauthorized access. 
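One common pattern for protecting a proprietary model in use is to keep its weights encrypted at rest and release the decryption key only to an environment that has passed attestation. The sketch below is a hedged illustration of that "key broker" idea, assuming hypothetical names and a toy cipher — it is not an actual Intel or cloud-provider API:

```python
import hashlib

# Hypothetical "key broker" pattern for confidential AI: encrypted model
# weights can be decrypted only after the serving environment proves, via
# an attested measurement, that it is running approved code.

APPROVED_MEASUREMENTS = {hashlib.sha256(b"inference-server-v2").hexdigest()}
MODEL_KEY = b"\x13" * 16  # stand-in for a real wrapped AES key

def release_model_key(measurement: str) -> bytes:
    """Key broker: hand out the key only to attested, approved workloads."""
    if measurement not in APPROVED_MEASUREMENTS:
        raise PermissionError("attestation failed: measurement not approved")
    return MODEL_KEY

def xor_cipher(blob: bytes, key: bytes) -> bytes:
    """Toy XOR cipher standing in for real AES-GCM key-unwrap; XOR is
    its own inverse, so the same call encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

weights = b"proprietary-model-weights"
encrypted = xor_cipher(weights, MODEL_KEY)  # stored/shipped encrypted

# Inside an attested environment, the broker releases the key:
attested = hashlib.sha256(b"inference-server-v2").hexdigest()
key = release_model_key(attested)
assert xor_cipher(encrypted, key) == weights
print("model decrypted inside approved environment")
```

The same gate applies symmetrically to the data side: a hospital or bank can require that its records be decryptable only inside an environment whose measurement it has approved, which is what makes multi-party training and federated learning on sensitive data viable.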

I refer to Intel’s robust approach to AI security as one that leverages “AI for Security” — AI enabling security technologies to get smarter and increase product assurance — and “Security for AI” — the use of confidential computing technologies to protect AI models and their confidentiality. Both approaches have a cumulative effect on alleviating barriers to broader AI adoption by building trust.

For example, mistrust and regulatory constraints have impeded the financial industry's adoption of AI using sensitive data. Confidential AI is helping companies like Ant Group develop large language models (LLMs) to offer new financial solutions while keeping both customer data and the AI models themselves protected in use in the cloud.

During the panel, we discussed confidential AI use cases for enterprises across vertical industries and regulated environments such as healthcare, where multi-party collaborative AI has already advanced medical research and diagnosis. Other use cases for confidential computing and confidential AI, and how they can enable your business, are explored in this blog.

Regulations and AI Applications Will Accelerate Use of Confidential AI

Intel strongly believes in the benefits confidential AI offers for realizing the potential of AI. The panelists concurred that confidential AI presents a major economic opportunity, and that the entire industry will need to come together to drive its adoption, including developing and embracing industry standards.

Although confidential AI is a new market category, regulations such as the EU AI Act, President Biden's Executive Order, GDPR and HIPAA, among others, will accelerate its proliferation. AI is having a big moment and, as the panelists concluded, may be the "killer" application that further boosts broad use of confidential AI to meet needs for conformance and protection of compute assets and intellectual property.

All of these together — the industry’s collective efforts, regulations, standards and the broader use of AI — will contribute to confidential AI becoming a default feature for every AI workload in the future.

You can learn more about confidential computing and confidential AI through the many technical talks presented by Intel technologists at OC3, including Intel’s technologies and services.

Bringing “AI Everywhere” cannot be achieved in isolation and without securing AI workloads. Intel is committed to advancing AI technology responsibly. Confidential AI is a major step in the right direction with its promise of helping us realize the potential of AI in a manner that is ethical and conformant to the regulations in place today and in the future.

For additional information resources on Intel’s confidential computing technologies and how enterprises are successfully using them, please visit: Intel Confidential Computing Solutions 

About the Author
Greg Lavender is executive vice president, chief technology officer (CTO) and general manager of the Software and Advanced Technology Group (SATG) at Intel Corporation. As CTO, he is responsible for driving Intel’s future technical innovation through his leadership of Intel Labs, Intel Federal LLC and Intel Information Technology (IT). He is also responsible for defining and executing Intel’s software strategy across artificial intelligence, confidential computing and the growing need for open accelerated computing to support Intel’s range of business and hardware offerings. Lavender joined Intel in June 2021 from VMware, where he served as senior vice president and CTO. He has 40 years of experience in software and hardware product engineering, cloud-scale systems architecture and engineering, and advanced research and development. Prior to his role at VMware, Lavender held executive and technology leadership positions at Citigroup, Cisco Systems and Sun Microsystems. Lavender holds a Bachelor of Science in computer science from the University of Georgia, and a Master of Science and Ph.D. in computer science from Virginia Tech.