
Improving Video Understanding Through Graph-Based AI for Better Human Activity Recognition

Subarna_Tripathi

Researchers from Intel and the University of Colorado Boulder have developed an innovative approach to help artificial intelligence (AI) systems better understand complex human activities captured in egocentric, or first-person, videos. The GLEVR framework (graph learning on egocentric videos for keystep recognition) addresses a critical challenge in computer vision: accurately recognizing fine-grained keysteps in procedural tasks, such as cooking recipes or repair procedures, when viewed from a person's perspective. Evaluated on the extensive Ego-Exo4D dataset containing over 76,000 keystep action segments, GLEVR outperforms existing egocentric methods by more than 16% on the validation set. It can also leverage multi-view (egocentric and exocentric, or third-person, views) and multimodal alignment to further improve performance, outperforming existing multi-view methods by more than 19%.

Existing multimodal systems struggle with first-person videos because the camera constantly moves, backgrounds change rapidly, and objects frequently move in and out of view. These challenges make it difficult for an AI system to distinguish between similar actions or understand the sequence of steps in complex tasks. GLEVR solves this by representing each video segment as a node in a graph structure, allowing the system to capture relationships between different time periods and leverage multiple camera angles during training. This advancement has important implications for developing smarter virtual assistants that can provide contextual help during cooking or repairs, training systems that can automatically assess skill development, and accessibility tools that can describe ongoing activities for visually impaired users.

 

The Challenge of Understanding First-Person Video

First-person, or egocentric, videos present unique difficulties for AI systems. Unlike fixed-camera footage, these videos capture the world from a constantly moving viewpoint, creating dynamic backgrounds and frequent occlusions as the person moves their head and interacts with objects. This creates what researchers call a signal-in-noise problem, where the important action information gets buried in visual clutter.

The challenge becomes even more complex when dealing with fine-grained keystep actions that look similar. For example, chopping onions, carrots, or celery involves nearly identical hand movements, so the AI must detect subtle contextual cues in object interactions and surroundings, often while dealing with partial views and motion blur. Traditional AI approaches that work well on fixed-camera videos often fail when applied to these dynamic first-person perspectives.

 

A Novel Graph-Based Approach to Video Understanding

GLEVR addresses these challenges by changing how AI systems process video information. Instead of analyzing videos as continuous streams, the framework breaks them into segments and represents each segment as a node in a graph structure. Think of it like creating a map where each location (video segment) is connected to related locations (other segments) through pathways (graph edges) that indicate relationships.

This graph structure allows the system to capture long-term dependencies in videos more effectively. When someone is cooking, for instance, the system can understand that earlier preparation steps influence later cooking actions, even if they're separated by several minutes. The graph connections help the AI maintain context across the entire activity sequence in long videos.
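The idea can be illustrated with a small sketch. The code below is a minimal, hypothetical illustration rather than the GLEVR implementation: it treats each already-encoded video segment as a graph node and connects segments within a short temporal window, producing the sparse node features and edge list that a graph neural network would consume. The function name, window size, and feature dimension are assumptions for the example.

```python
# Minimal sketch (not the GLEVR implementation): build a sparse graph over
# per-segment features, connecting each segment to its temporal neighbours
# so a graph neural network can pass messages across time.
import torch

def build_temporal_graph(segment_features: torch.Tensor, window: int = 2):
    """segment_features: [num_segments, feat_dim], one row per video segment."""
    num_segments = segment_features.shape[0]
    edges = []
    for i in range(num_segments):
        for offset in range(1, window + 1):
            j = i + offset
            if j < num_segments:
                edges.append((i, j))  # forward temporal edge
                edges.append((j, i))  # backward temporal edge
    edge_index = torch.tensor(edges, dtype=torch.long).t()  # shape [2, num_edges]
    return segment_features, edge_index

# Example: 100 segments with hypothetical 768-dimensional visual features
feats, edge_index = build_temporal_graph(torch.randn(100, 768), window=2)
```

Because each node only connects to a handful of neighbors, the resulting graph stays sparse even for long videos, which is what lets context propagate across an entire activity without processing every frame pair.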

During training, GLEVR can incorporate multiple camera viewpoints of the same activity, creating a richer understanding of each action. However, during actual use, the system only needs input from a single first-person camera, making it practical for real-world deployment in wearable devices or mobile applications.
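As a rough sketch of that training-versus-inference asymmetry (an assumption about the general recipe, not the published code), the snippet below adds exocentric segments as extra nodes and links each one to the egocentric segment covering the same time span only when training; at inference the graph contains egocentric nodes alone.

```python
# Hypothetical sketch: multi-view graph at training time, ego-only at inference.
# Assumes views are temporally aligned, one feature row per time step.
import torch

def build_multiview_graph(ego_feats, exo_feats_per_view, training: bool):
    """ego_feats: [T, D]; exo_feats_per_view: list of [T, D] tensors (one per exo camera)."""
    nodes = [ego_feats]
    edges = [(t, t + 1) for t in range(ego_feats.shape[0] - 1)]  # ego temporal chain
    if training:
        offset = ego_feats.shape[0]
        for exo_feats in exo_feats_per_view:
            for t in range(exo_feats.shape[0]):
                edges.append((t, offset + t))  # cross-view edge at the same time step
            nodes.append(exo_feats)
            offset += exo_feats.shape[0]
    x = torch.cat(nodes, dim=0)
    edge_index = torch.tensor(edges, dtype=torch.long).t()
    edge_index = torch.cat([edge_index, edge_index.flip(0)], dim=1)  # make undirected
    return x, edge_index

# Training graph with two exo cameras; inference graph from the ego view only
x_train, e_train = build_multiview_graph(torch.randn(50, 768),
                                         [torch.randn(50, 768), torch.randn(50, 768)],
                                         training=True)
x_infer, e_infer = build_multiview_graph(torch.randn(50, 768), [], training=False)
```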

 

Advancing Performance Through Multimodal Integration

Beyond visual information, GLEVR can incorporate additional types of data to improve its understanding. The researchers experimented with three additional modalities: automatically generated narrations describing what's happening in the video, depth maps that provide spatial information about the scene, and object detection labels that identify specific items being manipulated.

The system showed particular improvement when incorporating generated narrations, which help distinguish between similar-looking actions by providing textual context. For example, while two cutting actions might look visually similar, the narration can help clarify whether someone is cutting vegetables or meat, providing crucial context for accurate recognition.
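A simplified fusion of these modalities might look like the following sketch. The dimensions, the concatenation-plus-linear fusion, and the module name are illustrative assumptions, not the paper's exact recipe: per-segment visual features are combined with narration text embeddings and multi-hot object-detection labels before becoming graph node features.

```python
# Hypothetical sketch of multimodal node features: concatenate per-segment
# visual features, narration embeddings, and object-label vectors, then
# project them into a single feature vector per graph node.
import torch
import torch.nn as nn

class MultimodalNodeEncoder(nn.Module):
    def __init__(self, vis_dim=768, text_dim=384, num_objects=200, out_dim=512):
        super().__init__()
        self.fuse = nn.Linear(vis_dim + text_dim + num_objects, out_dim)

    def forward(self, vis_feats, narration_embs, object_multihot):
        # vis_feats: [T, vis_dim], narration_embs: [T, text_dim],
        # object_multihot: [T, num_objects]
        fused = torch.cat([vis_feats, narration_embs, object_multihot], dim=-1)
        return torch.relu(self.fuse(fused))  # one fused feature vector per node

encoder = MultimodalNodeEncoder()
node_feats = encoder(torch.randn(100, 768), torch.randn(100, 384), torch.zeros(100, 200))
```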

 

Key Findings and Performance Improvements

Testing on the comprehensive Ego-Exo4D dataset, GLEVR demonstrated substantial improvements over existing methods. The framework achieved 52.36% accuracy on egocentric-only testing, representing a 16% improvement over the previous best method. When multiple viewpoints were available during training, performance increased further to 53.08%.


Figure 1. The graph-based representation shows how GLEVR connects video segments (nodes) across time and multiple viewpoints during training, while inference operates only on the egocentric view.

The system proved particularly effective at leveraging multiple camera angles during training while maintaining efficiency during deployment. Unlike some competing approaches that actually performed worse when additional viewpoints were added, GLEVR successfully extracted complementary information from multiple perspectives to improve single-camera performance.

Importantly, GLEVR achieves these improvements while remaining computationally efficient. The framework can be trained on a single high-end GPU and processes graphs that are sparse and memory-efficient, making it practical for deployment in resource-constrained environments.

 

About the Author
Subarna Tripathi is a Principal Engineer at Intel working in computer vision and machine learning. She leads a team of researchers focusing on visual algorithms, with emphasis on scene graphs, multimodal learning, and video understanding. She received her PhD in Electrical and Computer Engineering from the University of California San Diego. She has served as an area chair for WiML and as a program committee member for venues such as CVPR, ECCV, ICCV, NeurIPS, WACV, AAAI, ICLR, and IEEE journals.