
Explainable AI: Breaking Down the Black Box

Adrian Boskovic, Sam Johnson-Lacoss, Chris Melville, Josh Moore, Thomas Pree, Lev Shuster, and Prof. Anna Rafferty

LIME

Introduced by Ribeiro et al., this method trains a local surrogate model using linear regression between perturbed inputs and the black box's predictions. Samples are weighted by their proximity to the original input.
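As a minimal sketch of the idea (not the LIME library itself): perturb the input, query the black box, weight samples by closeness to the original point, and fit a weighted linear model. The black-box function and kernel width below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical black box standing in for any opaque model.
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

x0 = np.array([1.0, 0.5])  # instance to explain

# 1. Sample a neighborhood around the instance.
X_pert = x0 + rng.normal(scale=0.3, size=(500, 2))
y_pert = black_box(X_pert)

# 2. Weight each sample by proximity to x0 (RBF kernel, assumed width).
dists = np.linalg.norm(X_pert - x0, axis=1)
weights = np.exp(-(dists ** 2) / (2 * 0.3 ** 2))

# 3. Fit a weighted linear surrogate; its coefficients are the explanation.
surrogate = Ridge(alpha=1.0).fit(X_pert, y_pert, sample_weight=weights)
print(surrogate.coef_)  # local feature importances near x0
```

The coefficients approximate the black box's local gradient, which is what makes the surrogate interpretable even when the global model is not.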

Shapley values

Shapley values leverage cooperative game theory to quantify each feature's contribution to the prediction's deviation from the expected result.
Prediction is but a game, and all feature values are merely players.
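For a small number of features, Shapley values can be computed exactly by averaging each feature's marginal contribution over all subsets of the other features. The toy model and baseline below are illustrative assumptions, not a specific implementation from the literature.

```python
import itertools
import math

baseline = [0.0, 0.0, 0.0]  # "absent" feature values (assumed baseline)
x = [1.0, 2.0, 3.0]         # instance to explain

def model(features):
    # Toy model with an interaction term between features 0 and 2.
    return features[0] + 2 * features[1] + features[0] * features[2]

def value(S):
    # Coalition value: prediction with only features in S "present".
    z = [x[i] if i in S else baseline[i] for i in range(3)]
    return model(z)

def shapley(i, n=3):
    # Weighted average of feature i's marginal contribution over
    # every coalition S of the remaining players.
    players = set(range(n)) - {i}
    total = 0.0
    for k in range(n):
        for S in itertools.combinations(players, k):
            S = set(S)
            w = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                 / math.factorial(n))
            total += w * (value(S | {i}) - value(S))
    return total

phis = [shapley(i) for i in range(3)]
print(phis)  # contributions sum to model(x) - model(baseline)
```

The efficiency property holds by construction: the values sum exactly to the prediction's deviation from the baseline. Real tools (e.g. SHAP) approximate this sum, since the exact computation is exponential in the number of features.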

Anchors

Developed by the same researchers as LIME, this method “anchors” the prediction for a specific data point by finding a decision rule such that, whenever the rule holds, changes to the other feature values do not affect the prediction.
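A minimal sketch of the core check (not the full anchor-search algorithm): hold the anchored features fixed, resample the rest, and measure how often the prediction stays the same. The classifier and candidate anchor below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box classifier: predicts 1 when feature 0 > 0.
def black_box(X):
    return (X[:, 0] > 0).astype(int)

x0 = np.array([0.8, -1.2, 3.0])
pred = black_box(x0[None, :])[0]

# Candidate anchor: the rule "feature 0 = 0.8". Hold it fixed,
# resample every other feature.
anchor = {0}
X_pert = rng.normal(size=(1000, 3))
for i in anchor:
    X_pert[:, i] = x0[i]

# Precision: fraction of perturbed samples keeping the same prediction.
precision = np.mean(black_box(X_pert) == pred)
print(precision)  # 1.0 here: fixing feature 0 alone determines the output
```

A precision near 1 means the rule truly anchors the prediction; the published algorithm searches over candidate rules to find a short one with high precision and broad coverage.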