Introduction
Shapley values offer post-hoc (retroactive) local explanations of an AI's decisions. As the name suggests, the technique is named after the Nobel Prize-winning mathematician and game theorist Lloyd Shapley, who developed it back in the 1950s in his original papers on cooperative game theory (Shapley, 1952). The technique derives an explanation for a machine learning model through a practical application of cooperative game theory: each feature of the model is treated as a "player" that contributes a value which either adds to or subtracts from the average prediction. This value, called the Shapley value, is calculated by considering all possible coalitions of features and averaging the marginal contribution of the given feature to each (i.e. the difference between the prediction with and without that feature). Compared to other XAI techniques, particularly LIME, Shapley's method guarantees that the prediction is fairly distributed among the features and rests on solid theory, but it is computationally expensive and, in some cases, may require access to the model's training data (Molnar, 2023).
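To make the coalition idea concrete, here is a minimal, purely illustrative brute-force computation on a hypothetical three-feature linear model, where "removing" a feature means replacing it with a baseline value. Practical libraries approximate this sum rather than enumerating every coalition.

```python
from itertools import combinations
from math import factorial

# Hypothetical stand-in model: a simple linear function of three features.
def model(x):
    return 2.0 * x[0] + 1.0 * x[1] - 3.0 * x[2]

def predict_with_coalition(coalition, instance, baseline):
    """Features outside the coalition are 'absent': replaced by baseline values."""
    x = [instance[i] if i in coalition else baseline[i] for i in range(len(instance))]
    return model(x)

def shapley_value(feature, instance, baseline):
    """Exact Shapley value of one feature, enumerating every coalition of the others."""
    n = len(instance)
    others = [f for f in range(n) if f != feature]
    value = 0.0
    for size in range(len(others) + 1):
        for coalition in combinations(others, size):
            # Shapley weight of this coalition: |S|! * (n - |S| - 1)! / n!
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            # Marginal contribution: prediction with vs. without the feature
            marginal = (predict_with_coalition(set(coalition) | {feature}, instance, baseline)
                        - predict_with_coalition(set(coalition), instance, baseline))
            value += weight * marginal
    return value

instance = [1.0, 2.0, 0.5]   # the single prediction being explained
baseline = [0.0, 0.0, 0.0]   # stands in for the "average" input
print([shapley_value(f, instance, baseline) for f in range(3)])
# The three values sum to model(instance) - model(baseline): the prediction's
# deviation from the baseline is fully distributed among the features.
```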
The EU's Right to Explainability
(Disclaimer: no one on this team is a certified lawyer. This is an exploration of Shapley values as a concept and is not legal advice.)
Shapley's Math
Thanks to their strong mathematical backing, Shapley values are very widely used in the field, which makes them almost obligatory to include in this project. But how do Shapley values work?
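For reference, the quantity this section unpacks is the Shapley value of a feature i: the coalition-weighted average of its marginal contributions, where F is the full set of features and v(S) denotes the model's prediction when only the features in S are "present".

```latex
\phi_i(v) = \sum_{S \subseteq F \setminus \{i\}}
            \frac{|S|!\,\bigl(|F| - |S| - 1\bigr)!}{|F|!}
            \Bigl( v\bigl(S \cup \{i\}\bigr) - v(S) \Bigr)
```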
Shapley and MOOC
This section discusses the application of the `shap` package to the Multi-Layer Perceptron that we built for the MOOC dataset. For image data, please refer to Shapley and ResNet.
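As a rough sketch of the workflow, the snippet below wires `shap`'s model-agnostic KernelExplainer to a tabular MLP. The dataset and model here are synthetic stand-ins (the real MOOC features and trained network are not reproduced); the same calls apply to any model that exposes a prediction function, regardless of framework.

```python
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the MOOC features; names are illustrative only.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(8)])
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0).fit(X, y)

def predict_positive(data):
    # Probability of the positive class only, so the SHAP output stays 2-D.
    return mlp.predict_proba(data)[:, 1]

# KernelExplainer is model-agnostic but slow, so use a small background sample.
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(predict_positive, background)

# Explain a handful of rows; cost grows quickly with more rows and features.
shap_values = explainer.shap_values(X.iloc[:10])

# Beeswarm-style summary: which features push the predicted probability up or down.
shap.summary_plot(shap_values, X.iloc[:10])
```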
Applying Shapley to the ResNet network
This section discusses the application of the `shap` package to the ResNet image-recognition architecture, covering both a standard version and a modified one that we trained on brain tumor images.
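The sketch below shows one way to obtain SHAP values for a ResNet with `shap.GradientExplainer`, assuming a torchvision model and (3, 224, 224) inputs. The random tensors stand in for batches from the brain-tumor data loader, and the exact return format of `shap_values` can differ slightly between `shap` versions.

```python
import shap
import torch
import torchvision

# Stand-in model and data: load pretrained or fine-tuned weights and real image
# batches in place of the placeholders below.
model = torchvision.models.resnet18(weights=None).eval()
background = torch.randn(16, 3, 224, 224)   # should be sampled from real training images
to_explain = torch.randn(2, 3, 224, 224)

# GradientExplainer approximates SHAP values with expected gradients, which scales
# to deep networks far better than the model-agnostic KernelExplainer.
explainer = shap.GradientExplainer(model, background)

# Restrict the explanation to the top-2 predicted classes to keep the output manageable.
shap_values, class_indexes = explainer.shap_values(to_explain, ranked_outputs=2, nsamples=100)

# shap.image_plot can then overlay the attributions on the input images
# (after converting both to channel-last numpy arrays).
```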