"Responsible_AI" Posts in "Artificial Intelligence (AI)"

Latest Tagged

Intel Labs Open Sources Adversarial Image Injection to Evaluate Risks in Computer-Use AI Agents

Adversarial examples can force computer-use artificial intelligence (AI) agents to execute arbitrary...
0 Kudos
0 Comments

Robots Meet Humans: Intel Labs Extends Robotics Safety to Cover 3D Environments

Intel Labs researchers have developed a new set of safety concepts for mobile and stationary robots ...
0 Kudos
0 Comments

Intel Labs’ Kid Space Conversational AI Facilitates Collaborative Problem-Solving Among Students

Scientists involved in the multi-year research project completed several prototype studies, demonstr...
0 Kudos
0 Comments

New Atlas CLI Open Source Tool Manages Machine Learning Model Provenance and Transparency

Intel Labs offers Atlas CLI, an open source tool for managing machine learning (ML) model provenance...
1 Kudos
0 Comments

Evaluating Trustworthiness of Explanations in Agentic AI Systems

Intel Labs research published at the ACM CHI 2025 Human-Centered Explainable AI Workshop found that ...
0 Kudos
0 Comments

The Secret Inner Lives of AI Agents: Understanding How Evolving AI Behavior Impacts Business Risks

Part 2 in Series on Rethinking AI Alignment and Safety in the Age of Deep Scheming
0 Kudos
0 Comments

The Urgent Need for Intrinsic Alignment Technologies for Responsible Agentic AI

Rethinking AI Alignment and Safety in the Age of Deep Scheming
1 Kudos
0 Comments

Intel Labs AI Tool Research Protects Artist Data and Human Voices from Use by Generative AI

The Trusted Media research team at Intel Labs is working on several projects to help artists and con...
0 Kudos
0 Comments

Understanding and Addressing Bias in Conversational AI

Conversational artificial intelligence (AI) is becoming deeply embedded in everyday life. This wides...
2 Kudos
0 Comments

LLMart: Intel Labs' Large Language Model Adversarial Robustness Toolkit Improves Security in GenAI

Intel Labs open sources LLMart, a toolkit for evaluating the robustness of generative artificial int...
2 Kudos
0 Comments

Exploring Using AI for Early Detection of Climate Change Signals

ClimDetect, a benchmark dataset of over 816,000 climate data samples, enables the standardization of...
0 Kudos
0 Comments

From FLOPs to Watts: Energy Measurement Skills for Sustainable AI in Data Centers

Energy transparency is increasingly a priority for policymakers in the responsible deployment and us...
1 Kudos
1 Comment

CLIP-InterpreT: Paving the Way for Transparent and Responsible AI in Vision-Language Models

CLIP-InterpreT offers a suite of five interpretability analyses to understand the inner workings of ...
0 Kudos
0 Comments

LVLM-Interpret: Explaining Decision-Making Processes in Large Vision-Language Models

Understanding the internal mechanisms of large vision-language models (LVLMs) is a complex task. LVL...
1 Kudos
0 Comments

Building Trust in AI: An End-to-End Approach for the Machine Learning Model Lifecycle

At Intel Labs, we believe that responsible AI begins with ensuring the integrity and transparency of...
0 Kudos
0 Comments