Speaker: Plamen Angelov, Professor, Lancaster University, UK.
Organised by: CHAIR theme Interpretable AI.
Overview
- Date: 22 January 2024, 13:45–15:00
- Seats available: 158
- Location: SB-H2, Campus Johanneberg
- Language: English
- Last sign-up date: 19 January 2024
Abstract
Machine Learning (ML) and AI justifiably attract the attention and interest not only of the wider scientific community and industry, but also of society and policy makers. However, even the most powerful (in terms of accuracy) algorithms, such as deep learning (DL), can give a wrong output, which may be fatal. Due to the hyper-parametric, cumbersome and opaque models used by DL, some authors have started to talk about a dystopian “black box” society.
Despite the success in this area, the way computers learn is still fundamentally different from the way people acquire new knowledge, recognise objects and make decisions. People do not need a huge amount of annotated data. They learn by example, using similarities to previously acquired prototypes, not parametric analytical models.
Current ML approaches focus primarily on accuracy and overlook explainability, the semantic meaning of the internal model representation, reasoning and its link with the problem domain. They also overlook the effort required to collect and label training data, and rely on assumptions about the data distribution that are often not satisfied. The ability to detect the unseen and unexpected and to start learning such new classes in real time, with no or very little supervision, is critically important and is something that no currently existing classifier can offer. The challenge is to bridge the gap between high accuracy and semantically meaningful solutions.
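To make the prototype-based, open-set idea above concrete, the sketch below is a minimal illustration in Python (not the speaker's specific algorithm): it classifies inputs by similarity to stored class prototypes, rejects inputs that are too far from every prototype as unseen, and adds a new prototype for them so learning can continue online. The feature vectors, distance threshold and class names are invented for illustration.

```python
# Minimal prototype-based classifier with open-set rejection and
# online addition of new classes (illustrative sketch only).
import numpy as np

class PrototypeClassifier:
    def __init__(self, threshold=0.5):
        self.prototypes = {}          # class label -> prototype vector
        self.threshold = threshold    # maximum distance still considered "known"

    def add_class(self, label, example):
        self.prototypes[label] = np.asarray(example, dtype=float)

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        if not self.prototypes:
            return None, np.inf
        # Nearest-prototype decision: smallest Euclidean distance wins.
        label, dist = min(
            ((lbl, np.linalg.norm(x - p)) for lbl, p in self.prototypes.items()),
            key=lambda pair: pair[1],
        )
        if dist > self.threshold:
            return None, dist         # too dissimilar: treat as an unseen class
        return label, dist

clf = PrototypeClassifier(threshold=0.5)
clf.add_class("cat", [1.0, 0.0])
clf.add_class("dog", [0.0, 1.0])
print(clf.predict([0.9, 0.1]))        # close to the "cat" prototype
label, _ = clf.predict([5.0, 5.0])    # far from every prototype -> unseen
if label is None:
    clf.add_class("new_class_1", [5.0, 5.0])   # learn the new class on the fly
```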
The most effective algorithms that have recently fuelled interest in ML and AI are also computationally very hungry: they require dedicated hardware accelerators such as GPUs, huge amounts of labelled data and long training times. They produce parameterised models with hundreds of millions of coefficients that are impossible for a human to interpret or manipulate. Once trained, such models are inflexible to new knowledge: they cannot dynamically evolve their internal structure to start recognising new classes, and they are good only for what they were originally trained for. They also lack robustness, formal guarantees about their behaviour, and explanatory and normative transparency.
This makes the use of such algorithms problematic in high-stakes, complex domains such as aviation, healthcare and bail decisions, where a clear rationale for a particular decision is very important and errors are very costly. All these challenges and identified gaps call for a dramatic paradigm shift and a radically new approach.
Recently, Transformers and Foundation Models have opened new possibilities: they can enable a new type of ML that brings the statistical nature of Deep Learning closer to logic-based Reasoning. This talk will explore this direction with some examples.
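One speculative way to picture this combination (not necessarily the approach the talk will present) is to pair a pretrained foundation-model encoder, standing in for the statistical side, with human-readable IF-THEN rules over prototype similarities, standing in for the logic-based side. In the sketch below, `encode`, the prototypes and the rule set are all placeholders.

```python
# Illustrative sketch: foundation-model embeddings + interpretable
# prototype rules (placeholders throughout, not an actual system).
import numpy as np

def encode(x):
    # Placeholder for a pretrained encoder (e.g. an image or text backbone)
    # that maps raw input to a fixed-length embedding.
    return np.asarray(x, dtype=float)

def similarity(a, b):
    # Cosine similarity between two embeddings.
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Each class is described by a small, inspectable rule over prototype similarities:
# "IF input is similar to prototype p1 OR prototype p2 THEN class c".
rules = {
    "vehicle": [encode([1.0, 0.2, 0.0]), encode([0.9, 0.1, 0.1])],
    "animal":  [encode([0.0, 1.0, 0.3])],
}

def classify(x, threshold=0.8):
    z = encode(x)
    scores = {c: max(similarity(z, p) for p in protos) for c, protos in rules.items()}
    best = max(scores, key=scores.get)
    # The decision is explainable: it reports which class rule fired and how strongly.
    return (best, scores[best]) if scores[best] >= threshold else ("unknown", scores[best])

print(classify([0.95, 0.15, 0.05]))
```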
Bio
Prof. Angelov (PhD 1993, DSc 2015) holds a Personal Chair in Intelligent Systems at Lancaster University and is a Fellow of the IEEE, IET, AAIA and ELLIS. He is a member-at-large of the Board of Governors (BoG) of the International Neural Networks Society (INNS) and of the Systems, Man and Cybernetics Society of the IEEE (SMC-S), as well as Program co-Director of the ELLIS Human-Centered Machine Learning program.
He has 400 publications in leading journals and peer-reviewed conference proceedings, 3 granted patents and 3 research monographs (published by Springer, 2002 and 2018, and Wiley, 2012), cited over 15,000 times (h-index 63). Prof. Angelov has an active research portfolio in explainable deep learning and its applications to autonomous driving and Earth Observation, and pioneering results in online learning from streaming data and evolving systems. His research has been recognised by multiple awards, including the 2020 Dennis Gabor Award "for outstanding contributions to engineering applications of neural networks".
He is the founding co-Editor-in-Chief of Springer’s journal Evolving Systems and Associate Editor of other leading scientific journals, including IEEE Transactions (IEEE-T) on Cybernetics, IEEE-T on Fuzzy Systems and IEEE-T on AI. He has given over 30 keynote talks and co-organised and co-chaired over 30 IEEE conferences (including several IJCNN) as well as workshops at NeurIPS, ICCV, PerCom and other leading conferences. Prof. Angelov chaired the Standards Committee of the Computational Intelligence Society of the IEEE, initiating the IEEE standard on explainable AI (XAI).
More details can be found at www.lancs.ac.uk/staff/angelov
Interpretable AI
Interpretable AI is an emerging field focused on developing AI systems that are transparent and understandable to humans.