Intel Labs Open Sources Adversarial Image Injection to Evaluate Risks in Computer-Use AI Agents
07-28-2025
Adversarial examples can force computer-use artificial intelligence (AI) agents to execute arbitrary...
0 Kudos | 0 Comments
Robots Meet Humans: Intel Labs Extends Robotics Safety to Cover 3D Environments
07-21-2025
Intel Labs researchers have developed a new set of safety concepts for mobile and stationary robots ...
0 Kudos | 0 Comments
Intel Labs’ Kid Space Conversational AI Facilitates Collaborative Problem-Solving Among Students
06-17-2025
Scientists involved in the multi-year research project completed several prototype studies, demonstr...
0 Kudos | 0 Comments
New Atlas CLI Open Source Tool Manages Machine Learning Model Provenance and Transparency
06-12-2025
Intel Labs offers Atlas CLI, an open source tool for managing machine learning (ML) model provenance...
1 Kudos | 0 Comments
Evaluating Trustworthiness of Explanations in Agentic AI Systems
05-20-2025
Intel Labs research published at the ACM CHI 2025 Human-Centered Explainable AI Workshop found that ...
0 Kudos | 0 Comments
The Secret Inner Lives of AI Agents: Understanding How Evolving AI Behavior Impacts Business Risks
04-29-2025
Part 2 in Series on Rethinking AI Alignment and Safety in the Age of Deep Scheming
0 Kudos | 0 Comments
The Urgent Need for Intrinsic Alignment Technologies for Responsible Agentic AI
03-07-2025
Rethinking AI Alignment and Safety in the Age of Deep Scheming
1 Kudos | 0 Comments
Intel Labs AI Tool Research Protects Artist Data and Human Voices from Use by Generative AI
02-09-2025
The Trusted Media research team at Intel Labs is working on several projects to help artists and con...
0 Kudos | 0 Comments
Understanding and Addressing Bias in Conversational AI
01-30-2025
Conversational artificial intelligence (AI) is becoming deeply embedded in everyday life. This wides...
2 Kudos | 0 Comments
LLMart: Intel Labs' Large Language Model Adversarial Robustness Toolkit Improves Security in GenAI
01-27-2025
Intel Labs open sources LLMart, a toolkit for evaluating the robustness of generative artificial int...
2 Kudos | 0 Comments
Exploring Using AI for Early Detection of Climate Change Signals
01-21-2025
ClimDetect, a benchmark dataset of over 816,000 climate data samples, enables the standardization of...
0 Kudos | 0 Comments
From FLOPs to Watts: Energy Measurement Skills for Sustainable AI in Data Centers
01-15-2025
Energy transparency is increasingly a priority for policymakers in the responsible deployment and us...
1 Kudos | 1 Comment
CLIP-InterpreT: Paving the Way for Transparent and Responsible AI in Vision-Language Models
12-17-2024
CLIP-InterpreT offers a suite of five interpretability analyses to understand the inner workings of ...
0 Kudos | 0 Comments
LVLM-Interpret: Explaining Decision-Making Processes in Large Vision-Language Models
12-13-2024
Understanding the internal mechanisms of large vision-language models (LVLMs) is a complex task. LVL...
1 Kudos | 0 Comments
Building Trust in AI: An End-to-End Approach for the Machine Learning Model Lifecycle
12-11-2024
At Intel Labs, we believe that responsible AI begins with ensuring the integrity and transparency of...
0 Kudos | 0 Comments