Organised by the CHAIR theme Interpretable AI.
Speaker: Nadia Figueroa, University of Pennsylvania.
Overview
- Date: 27 May 2024, 10:00–11:00
- Location: EC, Campus Johanneberg
- Language: English
Abstract (Nadia Figueroa):
For decades, we have lived with the promise of one day owning a robot that can coexist, collaborate and cooperate with humans in our everyday lives. This promise has motivated a vast amount of research on robot control, motion planning, machine learning, perception and physical human-robot interaction (pHRI). However, we have yet to see robots fluidly collaborating with humans and other robots in the human-centric, dynamic spaces we inhabit. This deployment bottleneck stems from traditionalist views of how robot tasks and behaviors should be specified and controlled.
For collaborative robots to be truly adopted in such dynamic, ever-changing environments, they must be adaptive, compliant, reactive, safe and easy to teach or program. Combining these objectives is challenging, as providing a single optimal solution can be intractable or even infeasible due to problem complexity, time- and safety-critical requirements, and conflicting goals.
In this talk, I will show that with a Dynamical Systems (DS) approach to motion planning and pHRI we can achieve reactive, provably safe and stable robot behaviors while efficiently teaching the robot complex tasks from a handful of demonstrations. Such an approach can be extended to offer task-level reactivity and transferability, and can be used to incrementally learn from new data and failures in a matter of seconds, even during physical interactions, just as humans do.
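To give a flavour of the DS idea, the following is a minimal sketch (not Prof. Figueroa's specific learning method) of why encoding motion as a stable dynamical system yields reactive behavior: instead of tracking a pre-computed trajectory, the robot re-evaluates a velocity field at its current state, so perturbations never require replanning. The gain matrix, goal and perturbation values below are hypothetical.

```python
# Minimal sketch of a dynamical-systems (DS) motion generator (illustrative only).
# A linear DS  x_dot = A (x - x_target)  with A negative definite is globally
# asymptotically stable at x_target: integrating the DS from any state drives the
# robot toward the goal, so a mid-motion perturbation is handled by simply
# re-evaluating the DS at the new state rather than replanning a trajectory.
import numpy as np

A = -1.5 * np.eye(2)             # negative-definite gain matrix => stability
x_target = np.array([0.5, 0.2])  # hypothetical goal position (m)

def ds_velocity(x):
    """Desired velocity at state x, as given by the DS."""
    return A @ (x - x_target)

x = np.array([-0.4, 0.6])        # arbitrary start state
dt = 0.01
for step in range(1000):
    if step == 300:
        x += np.array([0.1, -0.15])  # simulated external push: no replanning needed
    x = x + dt * ds_velocity(x)

print("final state:", x)         # converges to x_target despite the perturbation
```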
Furthermore, I will show that such a DS perspective on robot motion planning naturally allows for compliant and passive robot behaviors that inherently ensure human safety. While reactivity and compliance are favourable from the human perspective, it is often difficult to enforce any type of safety-critical constraint for the robot with classical reactive and impedance control techniques, leading roboticists to favour optimization-based techniques such as MPC.
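As a point of reference for the compliance argument, here is a generic textbook Cartesian impedance law (not the specific controller from the talk): it makes the robot yield softly to external pushes, but, as noted above, a law of this form does not by itself enforce hard safety constraints such as "never enter this region". The stiffness and damping values are hypothetical.

```python
# Illustrative Cartesian impedance control law (generic textbook form).
# The commanded force behaves like a spring-damper toward a desired point, so the
# robot yields compliantly to external pushes instead of rigidly rejecting them.
import numpy as np

K = np.diag([300.0, 300.0, 300.0])   # stiffness (N/m), hypothetical values
D = np.diag([40.0, 40.0, 40.0])      # damping (N*s/m), hypothetical values

def impedance_force(x, x_dot, x_des, x_des_dot=np.zeros(3)):
    """Desired end-effector force: spring-damper around the desired pose."""
    return K @ (x_des - x) + D @ (x_des_dot - x_dot)

# Example: the end-effector has been pushed 5 cm off the desired point; the
# command pulls it back gently rather than enforcing a hard position reset.
x = np.array([0.45, 0.05, 0.30])
x_dot = np.array([0.0, 0.1, 0.0])
x_des = np.array([0.50, 0.00, 0.30])
print("commanded force [N]:", impedance_force(x, x_dot, x_des))
```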
Hence, I will conclude the talk by showing recent work in which we offer the best of both worlds: real-time reactivity and compliance while enforcing safety-critical constraints, allowing the robot to be passive only when feasible and to perform constraint-aware physical interaction tasks such as the dynamic co-manipulation of large and heavy objects.
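One common way to combine a reactive nominal command with hard safety constraints is to filter it through a control-barrier-function (CBF) quadratic program; the sketch below illustrates that generic construction for single-integrator dynamics and a circular unsafe region, and is not necessarily the formulation used in the talk. The obstacle geometry and gains are hypothetical.

```python
# Illustrative CBF safety filter on top of a nominal DS command, for single-integrator
# dynamics x_dot = u. The filter minimally modifies the nominal velocity so that the
# safe set {x : h(x) >= 0} (here: outside a circular obstacle) stays forward invariant,
# via the constraint  grad_h(x) . u >= -alpha * h(x).
import numpy as np

obstacle_center = np.array([0.2, 0.0])  # hypothetical unsafe region centre (m)
obstacle_radius = 0.15
alpha = 5.0                             # class-K gain on the barrier

def h(x):                               # h(x) >= 0 defines the safe set
    return np.sum((x - obstacle_center) ** 2) - obstacle_radius ** 2

def grad_h(x):
    return 2.0 * (x - obstacle_center)

def safety_filter(x, u_nom):
    """Closed-form solution of the single-constraint CBF-QP:
    min ||u - u_nom||^2  s.t.  grad_h(x) . u >= -alpha * h(x)."""
    g = grad_h(x)
    slack = g @ u_nom + alpha * h(x)
    if slack >= 0.0:
        return u_nom                      # nominal DS command is already safe
    return u_nom - (slack / (g @ g)) * g  # minimal correction onto the constraint

# Example: filter a nominal DS command pointing straight at a goal behind the obstacle.
x = np.array([-0.4, 0.02])
u_nom = -1.5 * (x - np.array([0.6, 0.0]))
print("safe command:", safety_filter(x, u_nom))
```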
Bio:
Nadia Figueroa is the Shalini and Rajeev Misra Presidential Assistant Professor in the Mechanical Engineering and Applied Mechanics Department at the University of Pennsylvania. She holds secondary appointments in Computer and Information Science and in Electrical and Systems Engineering, and is a primary faculty member of the GRASP Laboratory.
She received a B.Sc. degree in Mechatronics from the Monterrey Institute of Technology, Mexico in 2007, an M.Sc. degree in Automation and Robotics from the Technical University of Dortmund, Germany in 2012 and a Ph.D. in Robotics, Control and Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne, Switzerland (EPFL) in 2019. Prior to joining Penn, she was a Postdoctoral Associate in the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology from 2020 to 2022.
Her research focuses on developing safety, control, estimation and learning methods for collaborative human-aware robotic systems: robots that can safely and efficiently interact with humans and other robots in the human-centric dynamic spaces we inhabit.
Her Ph.D. thesis was a finalist for the 2020 Georges Giralt Ph.D. Award (for the best European Ph.D. thesis in robotics), the ABB PhD Award and the EPFL Doctoral Distinction Award. Her work on multi-robot human collaboration was a finalist for the KUKA Innovation Award in 2017 and for the Best Systems Paper and Best Conference Paper Awards, and won the Best Student Paper Award at the 2016 Robotics: Science and Systems (RSS) Conference.
Interpretable AI
Interpretable AI is an emerging field focused on developing AI systems that are transparent and understandable to humans.