📄️ Introduction
Two user studies were run. Each study consisted of one-on-one interviews with around 12 participants and took around 20 minutes to complete. Both studies shared the same introduction to the concept of XAI and the same conclusion, which gauged participant trust in and preference for each model. The goals of the study were to:
📄️ Limitations
MOOC Survey Limitations
📄️ MOOC: Methodology
A user study was designed to assess these explainability methods in a variety of ways. Loosely modeled after the anchoring paper (Ribeiro 2016), the first component exposes respondents to a data point and one explanatory method, then assesses the respondents' accuracy at predicting the model's outcome on subsequent data points. If the explanation was effective at helping respondents understand how the model makes its predictions, it should increase their ability to accurately predict the model's outputs. After exposure to all the methods across a variety of samples, respondents provide feedback on each.
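To make that measurement concrete, here is a minimal sketch of how such forward-prediction ("simulatability") accuracy can be scored. This is illustrative only, not the study's actual instrument; the function name and sample data are hypothetical.

```python
from typing import Sequence

def simulatability_accuracy(respondent_guesses: Sequence[int],
                            model_predictions: Sequence[int]) -> float:
    """Fraction of subsequent data points on which the respondent's guess
    matches the model's actual prediction; higher values suggest the
    explanation conveyed more of the model's decision logic."""
    assert len(respondent_guesses) == len(model_predictions)
    matches = sum(g == p for g, p in zip(respondent_guesses, model_predictions))
    return matches / len(model_predictions)

# Hypothetical example: the respondent anticipates the model on 4 of 5 points.
print(simulatability_accuracy([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]))  # 0.8
```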
📄️ MOOC: Comparative Results
"It really breaks down the idea of the black box model."
📄️ ResNet: Methodology
We designed a user study to determine how explanatory – and convincing – our explanations of ResNet predictions were to users. Here is how we conducted this half of the study.
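For context on what such an explanation looks like, the sketch below shows one common way to produce a LIME explanation for a pretrained torchvision ResNet using the `lime` package. This is a generic illustration under assumed preprocessing choices, not the code used in the study.

```python
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from skimage.data import astronaut  # stand-in test image; the study's images differ
from lime import lime_image

# Pretrained ResNet-50 in inference mode.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# Standard ImageNet normalization (an assumption; match your own pipeline).
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict_fn(images: np.ndarray) -> np.ndarray:
    # LIME supplies a batch of HxWx3 arrays; return class probabilities.
    batch = torch.stack([preprocess(img.astype(np.uint8)) for img in images])
    with torch.no_grad():
        probs = F.softmax(model(batch), dim=1)
    return probs.numpy()

image = astronaut()  # 512x512x3 uint8 sample image
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, predict_fn,
                                         top_labels=1, num_samples=200)

# Highlight the superpixels that most support the top predicted class.
top_label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(top_label, positive_only=True,
                                           num_features=5, hide_rest=False)
```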
📄️ ResNet: Comparative Results
Summary
📄️ ResNet: Qualitative Takeaways
LIME and Anchoring more intuitive...