Thursday, October 20, 2022

LHS Collaboratory

12:00 PM to 1:30 PM

Virtual via Zoom

Explainability - AI and Ethics


Alex John London, PhD

Clara L. West Professor of Ethics and Philosophy
Director of the Center for Ethics and Policy at Carnegie Mellon University

Explainability Is Not the Solution to Structural Challenges to AI in Medicine  

Explainability is often treated as a necessary condition for ethical applications of artificial intelligence (AI) in medicine. In this brief talk I survey structural challenges facing the development and deployment of effective AI systems in health care and illustrate the limits of explainability in addressing them. The talk builds on prior work (London 2019, 2022) to show how ambitions for AI in health care will likely require significant changes to key aspects of health systems.

Melissa McCradden, PhD, MHSc

John and Melinda Thompson Director of AI in Medicine (Integration lead), Bioethicist
The Hospital for Sick Children

On the Inextricability of Explainability from Ethics: Explainable AI does not Ethical AI Make

Explainability is embedded in a plethora of legal, professional, and regulatory guidelines, as it is often presumed that an ethical use of AI requires explainable algorithms. There is considerable controversy, however, over whether post hoc explanations are computationally reliable, what value they hold for decision-making, and what relational implications their use has in shared decision-making. This talk will explore the literature across these domains and argue that while post hoc explainability may be a reasonable technical goal, it should not be granted status as a moral standard by which AI use is judged to be 'ethical.'