Case-based argumentation for explanation
Henry Prakken
Date & time: Thursday, 11.03.2021, 16:00 – 16:30
Location: MS Teams ICS Colloquium
Title: Case-based argumentation for explanation
Abstract: In this talk I will outline a formal model of explaining the outputs of machine-learning-based decision-making applications. The model draws on AI & law research on argumentation with cases, which models how lawyers draw analogies to past cases and discuss their relevant similarities and differences in terms of factors and dimensions in the problem domain. A case-based approach is natural since the input data of machine-learning applications can be seen as cases. While the approach is motivated by legal decision making, it also applies to other kinds of decision making, such as commercial decisions about loan applications or employee hiring, as long as the outcome is binary and the input conforms to the model’s factor or dimension format. The model is top-level in that it can be extended with more refined accounts of the similarities and differences between cases. I will first outline the formal model and then discuss experiments carried out by Rosa Ratsma, in which the model was applied to several machine-learning-based decision-making applications.
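For readers unfamiliar with factor-based case comparison, the sketch below illustrates the general flavour of the HYPO/CATO-style a fortiori test that this line of AI & law research builds on; it is not the talk's actual formalism, it ignores dimensions, and names such as `Case` and `forces_same_outcome` are hypothetical.

```python
# Minimal illustrative sketch (assumption, not the speaker's model):
# a fortiori comparison with Boolean factors only, dimensions omitted.
from dataclasses import dataclass

@dataclass(frozen=True)
class Case:
    pro: frozenset    # factors favouring this case's outcome
    con: frozenset    # factors favouring the opposite outcome
    outcome: str      # binary outcome label, e.g. "grant" / "deny"

def forces_same_outcome(precedent: Case,
                        focus_pro: frozenset,
                        focus_con: frozenset) -> bool:
    """The focus case is at least as strong for the precedent's outcome if it
    shares all of the precedent's pro factors and has no con factors that the
    precedent lacked."""
    return precedent.pro <= focus_pro and focus_con <= precedent.con

# Example: a loan-application precedent decided "grant" (hypothetical factors)
precedent = Case(pro=frozenset({"stable_income", "low_debt"}),
                 con=frozenset({"short_credit_history"}),
                 outcome="grant")

# New application: same strengths, fewer weaknesses -> the precedent applies a fortiori
print(forces_same_outcome(precedent,
                          focus_pro=frozenset({"stable_income", "low_debt", "collateral"}),
                          focus_con=frozenset()))  # True
```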