Protect AI with confidential computing

Confidential computing is an innovative technology designed to improve the security and privacy of data during processing. By leveraging hardware-based, attested trusted execution environments (TEEs), confidential computing helps ensure that sensitive data remains secure, even while in use.

For artificial intelligence (AI), confidential computing is emerging as a key solution to address growing security and privacy concerns. GPU-accelerated confidential computing is already available for small and medium-sized models, and as the technology advances, Microsoft and NVIDIA plan to offer scalable solutions that support large language models (LLMs).

Address data privacy and sovereignty concerns

Privacy and data sovereignty are among the main concerns of organizations, especially those in the public sector. Governments and institutions that handle sensitive data are wary of using conventional AI services due to potential data breaches and misuse.

Confidential AI is emerging as a crucial solution for these scenarios. Confidential AI fulfills two primary security objectives:

  1. Protection against infrastructure access: Ensure that AI prompts and data are protected from the cloud infrastructure providers, such as Azure, that host the AI services.

  2. Protection against service providers: Ensure that prompts and data remain private from the operators of AI services, such as OpenAI.

Examine potential use cases

Imagine a pension fund working with highly sensitive citizen data when processing applications. AI can speed up the process significantly, but the fund may be reluctant to use existing AI services for fear of data leaks or the information being used for AI training purposes.

Another use case involves large companies that want to analyze board meeting minutes, which contain highly sensitive information. While they might be tempted to use AI, they refrain from applying any existing solutions to such critical data due to privacy concerns.

Confidential AI mitigates these concerns by protecting AI workloads with confidential computing. When applied correctly, confidential computing can effectively prevent access to user prompts. It even becomes possible to ensure that prompts cannot be used to retrain AI models.

Confidential computing achieves this through encryption and runtime memory isolation, combined with remote attestation. Attestation processes use evidence provided by system components such as hardware, firmware, and software to demonstrate the trustworthiness of the confidential computing environment to a remote party. This provides an additional level of security and trust.
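To make the attestation idea concrete, here is a minimal, deliberately simplified sketch in Python. Real TEE attestation reports are cryptographically signed by the hardware vendor; this illustration skips signature verification and only shows the two checks the text describes: the reported software measurement must match a known-good value, and the report must be fresh (bound to a nonce the verifier just chose). All names here are hypothetical.

```python
import hashlib
import secrets

def verify_attestation(report: dict, expected_measurement: str, nonce: bytes) -> bool:
    """Simplified verifier: accept the environment only if the report is
    fresh (echoes our nonce) and the measured software stack matches the
    known-good measurement. Real verifiers also check the vendor signature."""
    # Freshness check: the report must be bound to the nonce we just issued,
    # which protects against replay of an old, previously valid report.
    if report.get("nonce_hash") != hashlib.sha256(nonce).hexdigest():
        return False
    # Integrity check: the measurement of firmware + OS + AI stack must
    # equal the value we expect for the trusted configuration.
    return report.get("measurement") == expected_measurement

# Simulate a TEE producing a report over our fresh nonce.
nonce = secrets.token_bytes(32)
expected = hashlib.sha256(b"trusted-firmware+kernel+ai-stack").hexdigest()
report = {
    "measurement": expected,
    "nonce_hash": hashlib.sha256(nonce).hexdigest(),
}
print(verify_attestation(report, expected, nonce))  # True
```

In a real deployment, the client would release its encrypted prompt to the environment only after a check like this succeeds.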

Overcome barriers in regulated industries

Sensitive and highly regulated industries like banking are particularly cautious about adopting AI due to data privacy concerns. Confidential AI can fill this gap by helping ensure that cloud AI deployments are secure and compliant. With confidential computing, banks and other regulated entities can use AI at scale without compromising data privacy. This allows them to benefit from AI-driven insights while meeting stringent regulatory requirements.

Securely run AI deployments in the cloud

For organizations that prefer not to invest in on-premises hardware, confidential computing offers a viable alternative. Instead of purchasing and managing physical data centers, which can be expensive and complex, companies can use confidential computing to secure their AI deployments in the cloud.

For example, an internal administrator can create a confidential computing environment in Azure using confidential virtual machines (VMs). By installing an open source AI stack and deploying models like Mistral, Llama, or Phi, organizations can manage their AI deployments securely without the need for large hardware investments. This approach eliminates the challenges of managing additional physical infrastructure and provides a scalable solution for AI integration.
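Once such a stack is running inside the confidential VM, applications typically talk to it over an OpenAI-compatible HTTP API. The sketch below only builds such a request; the endpoint URL, port, and model name are assumptions for illustration, not part of the original article.

```python
import json

# Hypothetical endpoint: an open-source inference server (serving e.g.
# Mistral, Llama, or Phi) listening inside the confidential VM. The host,
# port, and path are illustrative assumptions.
ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_request(model: str, prompt: str) -> bytes:
    """Build an OpenAI-compatible chat request body. Because the server
    runs inside a TEE, the plaintext prompt is only ever processed within
    the protected VM memory, not exposed to the cloud operator."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")

body = build_request("mistral-7b-instruct", "Summarize this pension application.")
print(json.loads(body)["model"])  # mistral-7b-instruct
```

The application code stays identical to a conventional cloud deployment; the confidentiality guarantee comes from where the server runs, not from changes to the API.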

Build secure AI services

Confidential computing not only enables the safe migration of self-managed AI deployments to the cloud. It also enables the creation of new services that shield user prompts and workloads from both the cloud infrastructure and the service provider.

For example, Continuum, a new service offered by Edgeless Systems, takes advantage of Azure confidential VMs with NVIDIA H100 GPUs. Inside these confidential VMs, Continuum runs AI code within a sandbox based on the open source gVisor software. This architecture isolates the AI code from the rest of the Continuum service, preventing the AI code from leaking data. Combined with end-to-end remote attestation, this ensures robust protection for user prompts.

Imagine confidential AI as the standard

As confidential AI becomes more widespread, it is likely that such options will be integrated into mainstream AI services, providing a simple and secure way to use AI. This could transform the landscape of AI adoption, making it accessible to a wider range of industries while maintaining high standards of data privacy and security.

Confidential computing offers significant advantages for AI, particularly in addressing data privacy, regulatory compliance, and security concerns. For highly regulated industries, confidential computing will enable entities to harness the full potential of AI more securely and effectively. Confidential AI could even become a standard feature in AI services, paving the way for broader adoption and innovation across industries.

Learn more about GPU-enabled confidential VMs at the Microsoft technology community.

