UNIVERSITY OF HERTFORDSHIRE COMPUTER SCIENCE RESEARCH COLLOQUIUM

presents

"Are “Explanations” in Artificial Intelligence (AI) and Machine Learning (ML) Based Systems “Good” Enough?"

Dr. Epaminondas Kapetanios (University of Hertfordshire)

31 January 2024, 13:00–14:00
Room C154

Everyone is welcome to attend.

Abstract:

The rise of increasingly automated systems driven by AI/ML approaches has also fuelled the resurgence of Explainable AI (XAI) and Interpretable ML (IML), as researchers and practitioners respond to growing concerns about the lack of trust in, and transparency of, intelligent agents and the decisions they may make. To this end, several algorithmic approaches have been developed, e.g., Permutation Importance, LIME, and SHAP values, to address this challenge and provide explanations or interpretations of the decisions or predictions being made. Despite this recent resurgence of explanation and interpretability in AI, most research and practice in the area seems to rely on researchers' intuitions of what constitutes a “good” explanation. Explanation as a concept, however, has a long-standing tradition in the Philosophy of Science, where it is closely associated with Causation and Positivism in the Natural Sciences, as well as with other explanatory frameworks in the Social and Cognitive Sciences, where empirical studies and experiments are not always possible. The latter is significant in the context of Human–AI interaction, e.g., social robots, where humans generate and evaluate explanations in different ways. Explanation and interpretation also have a long-standing tradition in Computer Science, in the context of Semantic Computing, which does not seem to be taken into consideration either.

In this talk, we will briefly survey the fields of XAI and IML, showcasing the current state of the art to demonstrate what is currently perceived as a “good explanation”. We will then set out how we envision a “good explanation” for humans, inspired and informed by the following key ingredients:

a) the definition of a “good explanation” in the context of Constructive Realism, a sub-field of the Philosophy of Science particularly suitable for explanations in the Social Sciences;

b) the relationship among explanandum, explainer, and explainee as weighted variables in a common formula suitable for evaluating a “good explanation”;

c) “Explainability by Design” in the making of an intelligent system or agent as the only presupposition for providing “good explanations” to humans, where the “intelligent system” is knowledgeable of all its interacting parts and entities;

d) some first impressions and results from pilot studies conducted in the context of providing “good explanations” to British Sign Language users, where automated classifiers for dementia prediction have been applied (funded research in collaboration with UCL).

---------------------------------------------------
Hertfordshire Computer Science Research Colloquium
http://cs-colloq.cs.herts.ac.uk