
How Intel Builds AI with People in Mind


Elizabeth Anne Watkins is a research scientist focused on the social science of artificial intelligence at Intel Labs’ Intelligent Systems Research Lab.

 

Highlights:

  • Intel’s Intelligent Systems Research Lab is working to understand the interplay between technical and social systems to ensure a people-first approach to AI.
  • In a recent research study with Princeton University, Intel found that users are most interested in learning how to be a good collaborator with AI.
  • To support users’ needs, we look closely at the work they are trying to accomplish, how a prototype of an AI system works, and the relationship between the two.
  • Check out my recent appearance on the MIT Sloan Management Review podcast, “Me, Myself, and AI: A Podcast on Artificial Intelligence and Business” for more information on how Intel takes a social scientific approach to AI.

 


New AI tools have been making headlines, passing written exams, writing code, and drafting essays. The performance of ChatGPT and DALL-E on particularly humanistic endeavors like writing poems and creating paintings has left many wondering what’s left for humans. Where do we fit in when AI is so capable? And how do we ensure a people-first approach to AI?

At Intel, we believe that the role of humans is so foundational that we take a scientific approach to understanding people. We use sociology, anthropology, and psychology to help us understand human lives and society. We look at what problems humans are trying to solve, what tasks they excel at, and in what kinds of environments – workplaces, schools, communities – they are undertaking these tasks.

 

One Lab’s Approach

Social science draws attention to the ways that different people see and know the world. Here in Intel’s Intelligent Systems Research Lab, we apply that knowledge to AI. We set priorities for technical work through a deep understanding of a technology’s societal value and impact. Through this lens, we can also anticipate hurdles to implementation and avoid possible negative outcomes. Social science is a crucial element of both our development process and our responsible AI governance. What technology is for, and how it solves a problem, are not purely technical issues but result from an interplay between technical and social systems. Understanding that interplay is at the center of our work.

One area in which we do this work is human-AI collaboration. Human-AI collaboration is a process in which humans and AI systems work together to complete a job, each doing the types of tasks best suited to their abilities. For example, medical decision-making, complex financial markets, and even grammar-suggestion tools increasingly include AI systems. In most instances, AI systems analyze data and look for patterns to identify potential insights. When combined with what humans are good at, the effect is powerful: people can make decisions based on a much broader set of historical patterns than before. But that means we need a better understanding of what people want to achieve, which requires us to acknowledge and respect the wide variety of values and priorities that people have—all of which the social sciences analyze.

 

Explanation for Whom?

One research area we are focused on is explainable AI and AI transparency. Social science research has shown that socially effective explanations differ from the ones engineers tend to be interested in. For example, Intel recently worked with Princeton University to study the kinds of explanations people want from the computer vision used in birding apps. It turns out that what these users want to know is how to be a good collaborator with AI – for example, how to take a picture so the app will recognize the item they want more information about. Essentially, people don’t just want explanations; they want support for taking action.

Intel is applying this lesson to its AI research by observing and understanding what requires explanations, and what actions those explanations support. We go beyond questionnaires and spend extended periods of time with users to understand their mental models – what a person comprehends about how a system works. We look closely at the work they are trying to accomplish, how a prototype of an AI system works, and the relationship between the two. Based on what we find, we collaborate closely with engineering and design teams to make sure that what we build matches people’s expectations and provides supportive information about the system, its functions, and its purpose, to enable learning and discovery.

 

From Annotation to Partnership

The collaboration between engineers and social scientists can do more than identify needed features. It can also lead to novel approaches to solving tough technical problems. One of the great bottlenecks of AI system development today is data quality. To train an AI system on a new domain or put it to work on a new problem, large amounts of data need to be collected, cleaned, and labeled so that intelligent systems have material to learn from.

One way we’re addressing this bottleneck is to flip how we think about the role of humans in the AI development pipeline. We’re reimagining the data work of AI: when it’s done, how it’s done, and, most importantly, who does it. Most often, the labor-intensive data work happens prior to deployment and relies on third-party annotators, who usually have little visibility into the system their labels end up producing. Instead, our lab builds AI systems that partner with users who are experts in specialized fields where AI can help, like manufacturing. This means that more “learning” happens during deployment. There is more give and take between the user and the system, and the user has a stronger say over what the system does and does not learn. This changes the dynamic of a tough tech challenge: what developers typically see as a “data labeling problem,” we see as an opportunity for richer interaction with users, one that also makes data labels better aligned with the sociotechnical realities of deployment domains.

According to Intel Fellow and Intelligent Systems Research Lab Director Lama Nachman, “When we shift the AI training problem to learning from domain experts while they are doing their work, we need to address new challenges such as how domain experts teach the AI these skills without writing code as well as how they understand, correct, and manipulate what the system learned to improve it. In other words, we must innovate if we are going to support people’s priorities while also gathering the data machines need, as seamlessly as possible. Typically, these are tasks that are accomplished through coding or data labeling, but at Intel, we are addressing them through natural and physically grounded interaction in realistic environments.”
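To make this shift concrete, here is a minimal, purely illustrative sketch of the kind of loop described above: a deployed system defers to the domain expert when its confidence is low and keeps the expert’s answer as a new training example gathered in the flow of their work. The class, method names, and confidence threshold are hypothetical, not Intel’s implementation.

from dataclasses import dataclass, field

@dataclass
class DeployedLearner:
    # Purely illustrative sketch of learning during deployment; not Intel's system.
    labeled_examples: list = field(default_factory=list)
    confidence_threshold: float = 0.8

    def predict(self, item):
        # Stub: a real system would run model inference here and
        # return a label along with a confidence score.
        return ("unknown", 0.0)

    def observe(self, item, ask_expert):
        label, confidence = self.predict(item)
        if confidence < self.confidence_threshold:
            # Low confidence: defer to the domain expert rather than guessing,
            # and keep their answer as a new training example collected
            # while they do their normal work.
            label = ask_expert(item)
            self.labeled_examples.append((item, label))
        return label

# Stand-in for an expert correcting the system while doing their own job.
learner = DeployedLearner()
print(learner.observe("part_scan_042", ask_expert=lambda item: "defective"))
print(len(learner.labeled_examples))  # one new example gathered during deployment

In a real system, the “ask the expert” step would be the kind of natural, physically grounded interaction Nachman describes, rather than a function call.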

 

Looking Ahead

The truth is, we have more questions than answers on almost all these topics. There’s a lot of work ahead of us, from the social aspects of trusted media and how humans trust and verify AI system outputs, to making AI more sustainable, to the fair application of AI in manufacturing, to what goes awry when responsibility for an AI system gets distributed across organizations. So if the question is where humans fit in amid the cascade of recent AI developments, chances are pretty good that someone at Intel is working hard to put them at the center.

For more on how Intel takes a social scientific approach to AI, check out my recent appearance on the MIT Sloan Management Review podcast, “Me, Myself, and AI: A Podcast on Artificial Intelligence and Business.”

About the Author
In her role as a Research Scientist in the Social Science of AI at Intel Labs’ Intelligent Systems Research Lab and a member of Intel's Responsible AI Advisory Council, Elizabeth Anne Watkins drives strategy and execution of social science methods to amplify human potential in Human-AI Collaboration. Prior to joining Intel, she received her PhD at Columbia University and her MS at MIT, and was a Postdoctoral Fellow at Princeton University with dual appointments at the Center for Information Technology Policy and the Human-Computer Interaction group. She has worked, consulted, and collaborated with research centers across academia and industry, including the Princeton Visual AI Lab, Parity.AI, Perceptive Automata, Harvard Business School, MIT, and Google.