
Intel Labs AI Tool Research Protects Artist Data and Human Voices from Use by Generative AI

Anthony_Rhodes

Anthony Rhodes is a research scientist at Intel Labs where he develops AI algorithms focusing on enhancing trust and explainability for users.

Highlights

  • The Trusted Media research team at Intel Labs is working on several projects to help artists and content owners protect their data from being used in generative artificial intelligence (AI) applications.
  • An AI art protection tool empowers content owners by protecting their materials from being utilized by diffusion models.
  • A different AI voice protection tool safeguards voices from being misused by generative models.

The use of generative AI is on the rise, enabling anyone to produce realistic content via publicly available interfaces. In guided image generation especially, diffusion models are changing the creator economy by producing high-quality, low-cost content. But these creative technologies come at the expense of artists whose artwork is leveraged, distributed, and disseminated by large generative models without payment or attribution to the originator.

The Trusted Media research team at Intel Labs is engaged in several initiatives to help users protect their data and voices from being used in generative AI applications. Based on research by Intel Labs and collaborators, the AI art protection tool secures ownership of artists' image data by preventing its unauthorized use in AI-based generative image synthesis applications, while a separate AI voice protection tool safeguards audio data from AI-based manipulation and voice cloning. The art protection tool learns to generate adversarial “protected” versions of images that can “break” diffusion models. The voice protection tool is a lightweight adversarial generation approach that learns a protected version of an audio clip to confound voice cloning techniques.

Protecting Artists and Their Data

Since the regulation and policy space is not yet mature enough to protect creative rights, the art protection tool enables artists and content owners to seal their material with adversarial protection, preventing copyrighted images posted online from being exploited by diffusion models. This cross-domain protector is designed to interrupt many diffusion-based tasks, such as personalization, style transfer, and any guided image-to-image translation. When these imperceptibly different versions of original images are fed to diffusion models, the models produce corrupted output (see Figure 1).

Figure 1. The art protection tool aims to learn how to confuse diffusion models by degrading their output across various tasks.
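
To illustrate how such a protector might sit in front of an off-the-shelf diffusion pipeline, the sketch below perturbs an image and then runs a standard image-to-image edit on the protected copy. The protector model, its weights file, and the file names are hypothetical placeholders; only the diffusers img2img pipeline call reflects a real public API, and the actual Intel Labs tool is not reproduced here.

```python
# Hedged sketch: apply a hypothetical trained protector, then run a standard
# img2img diffusion edit on the protected image to check that it degrades.
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor, to_pil_image
from diffusers import StableDiffusionImg2ImgPipeline

# Assumed: a trained protector network saved locally (illustrative name/path).
protector = torch.load("protection_unet.pt").eval()

original = Image.open("artwork.png").convert("RGB")
with torch.no_grad():
    protected = protector(to_tensor(original).unsqueeze(0)).clamp(0, 1)
protected_img = to_pil_image(protected.squeeze(0))

# Any guided image-to-image edit on the protected copy should now come out corrupted.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
edited = pipe(prompt="oil painting in this artist's style",
              image=protected_img, strength=0.6).images[0]
edited.save("edited_from_protected.png")
```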

We frame this problem as an adversarial attack on black-box diffusion models, where only the input/output pairs are known. The tool optimizes for perceptual resemblance between the input and protected images, and for structural and generative atrophy of the diffusion output image. By design, the tool keeps the artist in control: we craft and expose a balance variable that the artist can tune between the fidelity and robustness of the protected image.
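
As a minimal sketch of that trade-off (not the published formulation), the function below weighs a fidelity term on the protected image against a robustness term that pushes the diffusion output away from the original; the name `alpha` for the balance variable and the choice of MSE are assumptions.

```python
import torch
import torch.nn.functional as F

def protection_objective(x, x_prot, y_prot, alpha=0.5):
    """Toy fidelity-vs-robustness trade-off; `alpha` is an assumed name for
    the artist-tunable balance variable described in the post.

    x      -- original image tensor
    x_prot -- protected (imperceptibly perturbed) image
    y_prot -- black-box diffusion output produced from the protected image
    """
    fidelity = F.mse_loss(x_prot, x)        # keep the protected image close to the original
    robustness = -F.mse_loss(y_prot, x)     # drive the model's output away from the original
    # Higher alpha favors visual fidelity; lower alpha favors stronger disruption.
    return alpha * fidelity + (1.0 - alpha) * robustness
```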

The tool employs a simple U-Net architecture to learn this generation process, consisting of blocks of convolutional layers followed by up/downsampling (see Figure 2). It is trained with a multi-objective function that combines reconstruction, content, style, and noise losses, which work in concert to preserve the fidelity of the protected image while maximizing the corruption of the output synthesized from it.

Figure 2. The art protection tool's input/output, generator architecture, and loss formulation.
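
A hedged sketch of how those four terms could be combined is shown below; the feature extractor, the signs, and the weights are assumptions chosen for illustration, not the tool's published loss.

```python
import torch
import torch.nn.functional as F

def gram(feat):
    # Gram matrix over feature maps, as in classic style losses.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def art_protection_loss(x, x_prot, y_prot, feats, w=(1.0, 1.0, 1.0, 0.1)):
    """Illustrative combination of reconstruction, content, style, and noise
    terms. `feats` is any frozen feature extractor (e.g., a VGG backbone).
    """
    w_rec, w_con, w_sty, w_noise = w
    rec = F.mse_loss(x_prot, x)                              # reconstruction: protected image stays faithful
    con = F.mse_loss(feats(y_prot), feats(x))                # content distance of the output from the original
    sty = F.mse_loss(gram(feats(y_prot)), gram(feats(x)))    # style distance of the output from the original
    noise = y_prot.var()                                     # crude proxy for noisy, degenerate output statistics
    # Minimize reconstruction; maximize the output-side corruption terms.
    return w_rec * rec - w_con * con - w_sty * sty - w_noise * noise
```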

The tool can be used in a number of critical protection use cases, including artist style infringement (see Figure 3), deepfake and generative identity infringement, and other image-related manipulation tasks such as inpainting.

Figure 3. Examples of artist style protection.

Protecting Voices

In a similar vein, the Intel Labs Trusted Media team is working on a new AI voice protection tool that gives people control over their voices by preventing their audio signature from being replicated by generative AI. The tool learns to generate adversarial samples that remain as close as possible to the original audio while degrading the quality of any resulting voice clone as much as possible.

Modern voice cloning approaches share common modules such as source separation, a vocoder, a synthesizer, pitch extraction, and an encoder. While attacking these modules separately can also degrade performance, some of them may cancel out the adversarial changes introduced at the input. The tool is therefore designed to attack voice cloning systems in an end-to-end manner, independent of their components. It is architected as a plug-and-play U-Net model flexible enough to target any attacked model (see Figure 4).

Figure 4. Overview. (Left) Without the audio protection tool, the attack model imitates a voice using a speaker embedding; with the tool protecting the voice, the attack model produces distorted audio. (Middle) Our model architecture, operating on input (black), converted input (black with speaker), protected (green), and broken (red) voices. (Right) The loss terms related to the overall system.
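
A minimal sketch of what such a plug-and-play perturbation generator could look like appears below. The 1-D U-Net-style layers, channel counts, and perturbation scale are all assumptions; the point is only that the protector wraps the raw waveform and can be trained against any end-to-end cloning system placed downstream.

```python
import torch
import torch.nn as nn

class VoiceProtector(nn.Module):
    """Sketch of a 1-D U-Net-style perturbation generator (illustrative only).
    It adds a small learned perturbation so the protected waveform sounds like
    the original but confounds whichever cloning system consumes it downstream.
    Expects waveforms of shape (batch, 1, samples) with length divisible by 4.
    """
    def __init__(self, channels=32):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=15, stride=2, padding=7), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=15, stride=2, padding=7), nn.ReLU(),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose1d(channels, channels, kernel_size=16, stride=2, padding=7), nn.ReLU(),
            nn.ConvTranspose1d(channels, 1, kernel_size=16, stride=2, padding=7), nn.Tanh(),
        )
        self.eps = 0.01  # perturbation scale, chosen here only for illustration

    def forward(self, wav):
        delta = self.up(self.down(wav))               # learned additive perturbation
        return (wav + self.eps * delta).clamp(-1, 1)  # keep the change imperceptible
```

During training, the protected waveform would be passed through a surrogate cloning system so that loss terms like those described below can compare its output against the unprotected clone.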

The tool is trained using a multi-objective loss function with several components, including reconstruction, perception, distortion, and opinion loss terms. Together, these objectives preserve the fidelity of the protected audio relative to the original while maximizing the degradation of the output synthesized from the protected audio.
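
The sketch below shows one plausible way to combine four such terms; the exact formulation, the weights, and the quality predictor used for the opinion term are assumptions, not the published loss.

```python
import torch
import torch.nn.functional as F

def spectrogram(wav, n_fft=512):
    # Magnitude STFT over (batch, samples) waveforms.
    return torch.stft(wav, n_fft=n_fft, return_complex=True).abs()

def voice_protection_loss(wav, wav_prot, clone_prot, clone_ref,
                          mos_model=None, w=(1.0, 1.0, 1.0, 0.1)):
    """Illustrative combination of reconstruction, perception, distortion, and
    opinion terms. All waveforms are (batch, samples); `clone_ref`/`clone_prot`
    are cloning-system outputs for the original and protected inputs, and
    `mos_model` is an assumed learned quality (MOS) predictor.
    """
    w_rec, w_per, w_dist, w_op = w
    rec = F.l1_loss(wav_prot, wav)                             # reconstruction: waveform stays close to the original
    per = F.l1_loss(spectrogram(wav_prot), spectrogram(wav))   # perception: spectral content stays close
    dist = -F.l1_loss(clone_prot, clone_ref)                   # distortion: push the clone away from its unprotected version
    op = -mos_model(clone_prot).mean() if mos_model else 0.0   # opinion: lower the predicted quality of the clone
    return w_rec * rec + w_per * per + w_dist * dist + w_op * op
```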

The tool works in several essential audio/identity protection use cases, including spoken and singing voice cloning.

About the Author
AI Research Scientist @ Intel Labs