An Ontology-Enabled Approach for User-Centered and Knowledge-Enabled Explanations of AI Systems

The evolution of explainability approaches in Artificial Intelligence (AI) mirrors the advancement of AI methods from expert systems to modern deep learning. This thesis advances user-centered explainability, a form of explanation tailored to end-users, by explaining the decisions of various machine learning (ML) methods along dimensions that matter to end-users, including domain knowledge and context. The thesis addresses three key challenges:

1) Formal Representation of Explanations: We develop an Explanation Ontology (EO), a semantic framework representing fifteen literature-derived, user-centered explanation types. The EO is designed to help system designers categorize and generate explanations across use cases. We demonstrate its utility in representing explanations for exemplar use cases and provide guidance on applying it to other settings.

2) Utility and Feasibility of User-Centered Explanations in Clinical Settings: We design a clinical question-answering (QA) system that uses large language models (LLMs) to support clinicians in interpreting risk prediction scores within clinical practice guidelines (CPGs). We evaluate the system's feasibility and performance with quantitative metrics and assess its qualitative value through feedback from an expert panel of clinicians.

3) General-Purpose Framework for Explanations: We create the MetaExplainer framework, which produces natural-language explanations in response to user questions by integrating outputs from multiple explainer methods. This three-stage framework (Decompose, Delegate, Synthesis) combines LLMs, the EO, and explainer methods to generate user-centered explanations. We demonstrate MetaExplainer on open-source tabular datasets, with plans to adapt it to other modalities.

Overall, the thesis aims to enhance user-centered explainability by integrating the strengths of symbolic and neural AI, making AI decisions more interpretable and understandable for end-users across various domains, not limited to the demonstrated field of healthcare.
Location: Winslow 1140
Speaker: Shruthi Chari
Committee: Prof. Deborah L. McGuinness (Chair), Prof. Oshani Seneviratne (Co-Chair), Prof. James A. Hendler, Dr. Prithwish Chakraborty, Dr. Pablo Meyer