Human-Agent Partnerships
2020-present
We expect everyday products to respond to our actions: when we open the tap on a sink, we expect the water to run. This simple interaction can change considerably, however, once such everyday products become connected (via IoT) and smart (via AI). Our mundane tap can start to talk back, judge or give advice on our water consumption, or even spy on our guests to check whether they washed their hands after using the toilet. What kinds of experiences are evoked, and which consequences arise, when a tap (or any other everyday product, for that matter) becomes more thoughtful, nosy, social, suggestive, and/or judgemental?
This project focuses on such potential symbiotic relationships between humans and agents. The word “agent” refers here to embodied artifacts equipped with the abilities to sense their environment, act on this information autonomously, communicate with users and each other, learn, and evolve (i.e., they involve AI). In this sense, agents are envisioned as new forms of domestic and social robotic products. The term “Human-Agent Partnership” (H-A P) is introduced to refer to interactions in which people collaborate with their products (e.g., a tap) to tackle practical challenges (e.g., reducing the water bill), foster shared values (e.g., limiting the use of environmental resources), or learn new knowledge and skills (e.g., practicing the COVID-19 way of washing hands). H-A P requires humans and agents to be aware of each other’s strengths and limitations and to negotiate and align goals, intentions, and the implications of actions, while also taking personal, ethical, and societal values into consideration. When designed properly, H-A P can amplify the human capacity for reasoning, learning, decision-making, and problem solving, all leading to human flourishing and empowerment.
The overarching goal of this research is to reveal and investigate what it takes to build successful human-agent partnerships. The project particularly focuses on three key concepts required for building such partnerships: Control, Trust, and Context. Each concept raises multiple research questions, and each question can be addressed through multiple experiments. These research questions are:
Control: Under which conditions, and to what extent, are people willing to delegate control to agents? How can human input be included in the design of agents (i.e., hybrid intelligence)? How can we design for negotiations between humans and agents when their agendas differ?
Trust: What does it take for people to develop trust in agents, and what causes that trust to be betrayed? How can agent behaviors be made transparent and predictable? What are the causes and manifestations of people’s intentions, so that agents can ascertain them? How can agents calibrate people’s expectations regarding agents’ performance?
Context: What can, will, and should happen when agents and people interact over many months, and how will this change the partnership dynamics? In the context of a smart home or city, where multiple agents work with each other and with people, how can we design beyond the “product as a single entity” paradigm, toward wider ecologies of humans and nonhumans?
Related Publications
Cila, N. (2022). Designing Human-Agent Collaborations: Commitment, responsiveness, and support. In CHI Conference on Human Factors in Computing Systems (CHI '22). Association for Computing Machinery, New York, NY, USA, Article 420, 1–18. https://doi.org/10.1145/3491102.3517500