Seminar
The event has passed

Toward Concept-Based Explanations for Intelligent Systems

Organised by the CHAIR theme Interpretable AI.

Speaker: Sonia Chernova, Georgia Institute of Technology.

Overview

  • Date: 27 June 2024, 10:00–11:00
  • Location: EC, Campus Johanneberg
  • Language: English

Abstract, Sonia Chernova:

Black-box AI systems are increasingly being deployed to help end-users with commonplace tasks. Examples include doctors leveraging decision support systems to aid in diagnosis, robots providing assistance in hospitals and nursing homes, and drivers using autonomous vehicles for assisted driving. To increase the transparency of these black-box models, researchers have developed numerous techniques for explaining agent decision making. A popular approach to non-expert-friendly explanations has been to attribute higher-level “concepts” to an agent’s decision making.

In this talk, I discuss the application of concept-based explanations to sequential decision-making systems in particular, which are relatively underexplored in the literature. Given that sequential decision-making agents engage in long-term interaction with their environment, I posit that the scope of concept-based explanations should extend beyond representing preconditions, action costs, and control logic. Specifically, I show that concept-based explanations in sequential decision making should operate at a much higher level of abstraction, highlighting knowledge that is applicable across multiple states and, most importantly, expressing a positive or negative influence on the agent’s goal. Additionally, I will discuss how concept-based explanations may benefit not only the end-user but also the AI agent itself.

Bio:

Sonia Chernova is an Associate Professor in the College of Computing at Georgia Tech. She directs the Robot Autonomy and Interactive Learning lab, where her research centres on the development of intelligent and interactive robotic systems, with a focus on robot learning, human-robot interaction, and explainable AI. She also leads the NSF AI-CARING Institute, a collaboration between eight universities to develop collaborative AI systems that help support independent home life for older adults experiencing cognitive decline.


Interpretable AI

Interpretable AI is an emerging field, focused on developing AI systems that are transparent and understandable to humans.