Human-ML Collaboration and the Role of Explainable ML

Machine Learning (ML) systems that inform real-world decisions are typically parts of larger sociotechnical systems: they involve multiple human stakeholders and rely on human-ML collaboration at different stages of the development and deployment pipeline. As ML systems increasingly inform consequential decisions (e.g., loan approvals, criminal justice decisions), it is critical that human decision-makers interact effectively with the ML model. As such, “explainability” has become a highly desired property of ML models deployed in the real world. In this talk, we will give an overview of how the explainability of ML models fits into sociotechnical systems and discuss popular explainable ML methods, the limitations of existing work, and open research questions.

Bio. Kasun Amarasinghe is a Senior Research Scientist in the Machine Learning Department at Carnegie Mellon University (CMU), in the Data Science for Public Policy Lab. Kasun studies human-ML collaborative decision-making systems in the public sector and how to develop responsible ML systems for such contexts. Before this role, he was a Postdoctoral Research Associate at CMU. He received his Ph.D. in Computer Science from Virginia Commonwealth University (thesis: Explainable Deep Neural Networks for Cyber-Physical Systems) and his B.Sc. in Computer Science from the University of Peradeniya, Sri Lanka.
Date
Location: CII 3206
Speaker: Kasun Amarasinghe from Carnegie Mellon University