Seminar
The event has passed

Natural and artificial trust for decision-making in human-machine teamwork

Speaker: Carolina Centeio Jorge, PhD student, TU Delft, the Netherlands. Organised by: CHAIR X-AI, Network on Human-Centered Collaborative Autonomy.

Overview

  • Date: 1 December 2023, 10:00–11:00
  • Seats available: 30
  • Location: E2 Room 3364, EDIT-rummet
  • Language: English
  • Last sign up date: 1 December 2023

Abstract, Carolina Centeio Jorge:

Human-machine teams rely on humans and artificial agents working together collaboratively. In human-human teams, we use trust to make decisions, such as which teammate should take on which task, based on what we believe is likely to succeed. Our prediction of task success rests on our beliefs about others' trustworthiness, which can be divided into several dimensions, such as competence, willingness, and external factors.

As artificial teammates’ autonomy increases, the variety of interdependencies in human-machine teams increases too. Team members therefore need to consider the different ways of achieving task success as a team, making the best use of human-machine collaboration. It is thus important that all members involved, both humans and machines, hold the trust and trustworthiness beliefs needed to make decisions that secure the team’s goal and mitigate possible risks. By formalizing trust and trustworthiness beliefs, we can increase the transparency of decisions, whether made by humans or by machines.

In this talk, I will go over notions of trust, trustworthiness, interdependence and coactive design, all in the context of decision-making in human-machine teams. I will present our multidisciplinary user studies and results, which allow us (and the machine) to better understand the human teammate when collaborating with a machine and, consequently, make the machine teammate more understandable to the human.