
Intel and Intel Labs Develop New AI Methods to Restore Trust in Media


Published March 7, 2022

Scott Bair is a key voice at Intel Labs, sharing insights into innovative research for inventing tomorrow’s technology.

 

Highlights

  • Intel Labs’ trusted media research team investigates new approaches to help determine the authenticity of media content.

  • Research areas include using AI and other methods for deepfake detection, deepfake source detection, and media authentication.

  • Intel joins industry leaders to develop standards and combat the rise of media deception and disinformation.

Almost everything we do today is increasingly digital, from working and learning at home to interacting with family and friends. At the same time, our trust in media has eroded due to deliberate deception via disinformation and manipulated content. Intel Labs’ trusted media research team is stepping in with several initiatives to help restore trust in media by enabling users to distinguish between real and fake content.

In partnership with Intel’s security research team and internal business units, Intel Labs is leading and coordinating its Trusted Media research efforts. The team is exploring how to incorporate detection technology and media provenance into Intel products, and how customers can integrate these new technologies into their platforms. Two initial research areas are deepfake detection, which identifies media fabricated or modified using machine learning and AI, and media authentication technology, which confirms the validity of content.

Intel’s Innovation in Deepfake Detection 

As tools for creating video and animation become more sophisticated and easier to use, users can create increasingly realistic fake media. Take, for example, the deepfake “deeptomcruise” channel on TikTok, which millions of viewers visit to watch fake videos of Tom Cruise. While this may seem harmless, there are many instances where deepfakes cause real harm, including illegal activity, identity theft, forgery, and propaganda.

The popular CBS show “60 Minutes” ran a story on deepfakes entitled “Synthetic Media: How deepfakes could soon change our world.” The report highlights the malicious use of deepfakes for criminal purposes and growing governmental concern.

Intel is working to combat this issue and build trust by developing algorithms and architectures that determine whether content has been manipulated using AI techniques. Intel has already incorporated deepfake detection technology into the Intel® Xeon® Scalable processor: FakeCatcher, which uses algorithms designed by Intel Research Scientist Ilke Demir and Professor Umur Ciftci of Binghamton University, State University of New York.

The technology uses remote photoplethysmography (rPPG) techniques to look at the subtle “blood flow” present in the pixels of a video, aggregates these signals across multiple frames, and then runs the resulting signatures through a classifier that determines whether the video in question is real or fake.
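To make the idea concrete, here is a minimal sketch of an rPPG-style check in Python. FakeCatcher’s actual pipeline is far richer, building spatial-temporal PPG maps from multiple face regions and training a dedicated classifier; the function names, the green-channel proxy, and the threshold below are illustrative assumptions, not Intel’s implementation.

```python
import numpy as np

def extract_rppg_signal(frames):
    """Crude rPPG proxy: mean green-channel intensity per frame.

    frames: float array of shape (T, H, W, 3) holding a cropped face
    region; blood-volume changes modulate green light most strongly.
    """
    return frames[..., 1].mean(axis=(1, 2))  # shape (T,)

def pulse_band_energy(signal, fps=30.0, band=(0.7, 4.0)):
    """Fraction of spectral power in the human pulse band (~42-240 bpm).

    Real faces show a clear peak here; many synthesized faces do not.
    """
    centered = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(centered)) ** 2
    freqs = np.fft.rfftfreq(len(centered), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / (spectrum.sum() + 1e-12)

def classify(frames, fps=30.0, threshold=0.35):
    """Toy decision rule standing in for FakeCatcher's trained
    classifier; the threshold is arbitrary, for illustration only."""
    energy = pulse_band_energy(extract_rppg_signal(frames), fps)
    return "real" if energy >= threshold else "fake"

# Example: 10 seconds of noisy 64x64 face crops at 30 fps, with a
# synthetic 1.2 Hz (72 bpm) "pulse" injected into the green channel.
rng = np.random.default_rng(0)
frames = rng.normal(128, 5, size=(300, 64, 64, 3))
t = np.arange(300) / 30.0
frames[..., 1] += 3 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
print(classify(frames))  # -> "real"
```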

Further Research in Deepfake Detection

Further novel deepfake research looks not only at separating deepfakes from real videos, but also at discovering how a deepfake was generated in the first place. Using deep learning (DL) approaches, the same researchers classify a deepfake’s generator with convolutional neural networks (CNNs) that learn the residuals of the generator. The premise is that these residuals contain information about the source that can be disentangled with biological signals. The results indicate that this approach detects fake videos with 97.29 percent accuracy and identifies the source model behind a fake video with 93.39 percent accuracy.
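As a rough illustration of this generator-fingerprinting idea, the sketch below (in PyTorch) runs a small CNN over per-frame residuals to predict which generator family produced a video. The residual extractor, layer sizes, and class count are assumptions made for illustration; the published work additionally interprets the residuals with biological signals, which this sketch omits.

```python
import torch
import torch.nn as nn

class ResidualSourceClassifier(nn.Module):
    """Small CNN over per-frame residuals, predicting a generator
    family (plus an implicit "real" class among the outputs)."""
    def __init__(self, num_sources=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_sources)

    def forward(self, residuals):           # residuals: (B, 3, H, W)
        x = self.features(residuals).flatten(1)
        return self.head(x)                 # logits over source classes

def frame_residual(frames, kernel=5):
    """Hypothetical residual extractor: subtract a box-blurred copy,
    keeping the high-frequency traces a generator tends to imprint."""
    blur = nn.AvgPool2d(kernel, stride=1, padding=kernel // 2)
    return frames - blur(frames)

# Usage with random tensors standing in for face crops:
model = ResidualSourceClassifier(num_sources=5)
frames = torch.rand(8, 3, 128, 128)
logits = model(frame_residual(frames))
print(logits.shape)  # torch.Size([8, 5])
```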

Additional research determines video authenticity using other biological priors. Beyond the biological signals behind FakeCatcher, the team examines several prominent eye and gaze features that deepfakes exhibit differently, analyzing both real and fake videos. Geometric, visual, metric, temporal, and spectral variations of gaze are analyzed and generalized into a gaze signature that a deep neural network (NN) uses to classify videos. This approach achieved 92.48 percent accuracy on the FaceForensics++ dataset and 99.27 percent on DeeperForensics, outperforming most deep and biological fake detectors.
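A simplified view of the gaze-signature approach: collect a gaze track, summarize it with a handful of geometric, temporal, and spectral statistics, and feed the fixed-length signature to a small neural network. The specific features and network below are hypothetical stand-ins for the far richer signature described in the paper.

```python
import torch
import torch.nn as nn

def gaze_signature(gaze_xy):
    """Assemble a toy "gaze signature" from a gaze track.

    gaze_xy: (T, 2) screen-space gaze points for one eye. We take a
    few illustrative statistics of the geometric, temporal, and
    spectral kinds the research analyzes.
    """
    velocity = torch.diff(gaze_xy, dim=0)               # temporal
    speed = velocity.norm(dim=1)
    spread = gaze_xy.std(dim=0)                         # geometric
    spectrum = torch.fft.rfft(gaze_xy[:, 0] - gaze_xy[:, 0].mean()).abs()
    spectral = spectrum[:4] / (spectrum.sum() + 1e-8)   # spectral
    return torch.cat([spread, speed.mean()[None], speed.std()[None], spectral])

# Small NN classifying fixed-length signatures into real vs. fake.
classifier = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, 2),   # logits: [real, fake]
)

track = torch.rand(90, 2)  # 3 seconds of gaze points at 30 fps
logits = classifier(gaze_signature(track))
print(logits)
```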

Industry Efforts in Deepfake Detection

The industry is also working to develop technologies and approaches to solve these challenges. Intel’s Ilke Demir joined industry and academic researchers to contribute to a collaborative white paper presented at an exploratory workshop at the Institute for Pure and Applied Mathematics at the University of California, Los Angeles. The paper, entitled “DEEP FAKERY – An Action Plan,” was written by members of the mathematics, machine learning, cryptography, philosophy, social science, legal, and policy communities, and discusses the impact of deep fakery and how to respond to it.

Intel is also a member of the Coalition for Content Provenance and Authenticity (C2PA), which addresses misleading information online by developing technical standards for certifying the source and history (or provenance) of media content. The standards body is an alliance among Adobe, Arm, Intel, Microsoft, and Truepic that aims to give publishers, creators, and consumers the ability to trace the origin of different types of media.

A specification, informed by work conducted through industry organizations including the Project Origin Alliance and the Content Authenticity Initiative (CAI), has been published to enable global adoption of digital provenance techniques through secure provenance-enabled applications. The specification tackles the problem at the point where media is created; however, since not all media goes through this certification process, other forms of end-to-end authentication are needed to determine the validity of content.
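Conceptually, a provenance chain binds each version of a media file to a signed record of how it was produced. The sketch below illustrates that idea with a hash-linked chain of claims, using a stdlib HMAC as a stand-in for the X.509 certificate signatures and JUMBF containers the actual C2PA specification defines; none of the field names below come from the spec.

```python
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # stand-in for a real signing credential

def make_claim(media_bytes, assertions, prev_claim=None):
    """Build a simplified provenance claim: a content hash, a list of
    assertions (who/what/when), a link to the previous claim, and a
    signature over the whole record."""
    claim = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "assertions": assertions,
        "prev_claim_sha256": (
            hashlib.sha256(json.dumps(prev_claim, sort_keys=True).encode()).hexdigest()
            if prev_claim else None
        ),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_claim(media_bytes, claim):
    """Check the signature and that the media still matches its hash."""
    payload = json.dumps({k: v for k, v in claim.items() if k != "signature"},
                         sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, claim["signature"])
            and claim["content_sha256"] == hashlib.sha256(media_bytes).hexdigest())

# A capture claim, then an edit claim linked back to it:
photo = b"...raw image bytes..."
capture = make_claim(photo, [{"action": "captured", "device": "camera-x"}])
edited = photo + b" (cropped)"
edit = make_claim(edited, [{"action": "cropped"}], prev_claim=capture)
print(verify_claim(edited, edit), verify_claim(photo, edit))  # True False
```

Media that carries no such chain cannot be proven inauthentic, which is why the article pairs provenance with detection: the two approaches cover each other’s gaps.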

Conclusion

While a great deal of disinformation today draws on relatively low-fidelity misuses such as mislabeling and mischaracterization, a growing share of media disinformation relies on widely available video editing tools and AI to manipulate and create believable, life-like media. Left unchecked, manipulated media can have a tangibly detrimental impact on today's society.

Intel is looking to leverage its strengths in algorithm and architecture design to offer solutions that can detect fraudulent media and bolster faith in news media as a reliable source of fact-based events. Be on the lookout for new developments in trusted media as Intel continues to innovate and find ways to provide critical value to our customers and end-users.

Related Papers & Video: 

FakeCatcher: Detection of Synthetic Portrait Videos using Biological Signals
https://ieeexplore.ieee.org/document/9141516

Where Do Deep Fakes Look? Synthetic Face Detection via Gaze Tracking
https://dl.acm.org/doi/10.1145/3448017.3457387  

How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals
https://ieeexplore.ieee.org/document/9304909 

White Paper: DEEP FAKERY—An Action Plan 
http://www.ipam.ucla.edu/wp-content/uploads/2020/01/Whitepaper-Deep-Fakery.pdf

Intel exhibitor session at SIGGRAPH
https://www.youtube.com/watch?v=KE0y3jr8izU

About the Author
Scott Bair is a Senior Technical Creative Director for Intel Labs, chartered with growing awareness for Intel’s leading-edge research activities, like AI, Neuromorphic Computing, and Quantum Computing. Scott is responsible for driving marketing strategy, messaging, and asset creation for Intel Labs and its joint-research activities. In addition to his work at Intel, he has a passion for audio technology and is an active father of 5 children. Scott has over 23 years of experience in the computing industry bringing new products and technology to market. During his 15 years at Intel, he has worked in a variety of roles spanning R&D, architecture, strategic planning, product marketing, and technology evangelism. Scott has an undergraduate degree in Electrical and Computer Engineering and a Master of Business Administration from Brigham Young University.