📄️ What is Anchoring
Ribeiro, Singh, and Guestrin, the original authors of LIME, also created anchors, an Explainable AI technique that shares some similarities with LIME but presents its explanations in a different form. Like LIME, anchoring involves perturbing the data point in question to see how the output of the black box changes. Ribeiro et al. define an anchor as follows: "An anchor explanation is a rule that sufficiently 'anchors' the prediction locally – such that changes to the rest of the feature values of the instance do not matter" (Ribeiro et al., 2018). For example, a picture of a dog in the ocean would still be a picture of a dog if the background were changed to grass; in this case, the dog is our anchor. The same principle applies to multiple types of data, not just images. For tabular data, the algorithm finds the feature values that mattered most in reaching a particular decision; for image classification, the anchor is a set of superpixels that most strongly determine the model's prediction. A clear advantage of anchoring is that its output is intuitive and easy to interpret, rather than a sea of coefficients.
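As a concrete illustration, below is a minimal sketch of finding an anchor for a tabular prediction using the open-source `alibi` library, which implements Anchors. The dataset, model, and threshold are illustrative placeholders, not taken from the pages that follow.

```python
# Minimal sketch: anchors on tabular data via the alibi library.
# The dataset, model, and threshold are illustrative placeholders.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

data = load_iris()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

# The explainer needs only a prediction function and the feature names.
explainer = AnchorTabular(model.predict, feature_names=data.feature_names)
explainer.fit(X)  # learn the feature distributions used for perturbation

# Explain one instance: the anchor is a rule that "locks in" its prediction.
explanation = explainer.explain(X[0], threshold=0.95)
print("Anchor:", " AND ".join(explanation.anchor))
print("Precision:", explanation.precision)  # how often the rule preserves the prediction
print("Coverage:", explanation.coverage)    # how much of the data the rule applies to
```

The resulting anchor is a small conjunction of feature conditions: as long as those conditions hold, perturbing the remaining features rarely changes the prediction, which is exactly the "changes to the rest of the feature values do not matter" property described above.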
📄️ Anchors on MOOC
Introduction
📄️ Anchors on ResNet
Anchoring Successes
📄️ Tumors
Here, XAI is used to highlight tumors in brain scans: if a model predicts that an image contains a tumor, then ideally our techniques would find and highlight the tumor itself as the reason for the model's prediction.
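To connect this back to anchors, the sketch below shows how an image anchor could be computed with the `alibi` library's `AnchorImage` explainer, which works over superpixels. The toy "classifier" and synthetic image are stand-ins for the real tumor model and brain scans, so this illustrates the general API rather than the actual pipeline used here.

```python
# Minimal, self-contained sketch of an image anchor with the alibi library.
# The "classifier" is a stand-in that thresholds mean brightness; in practice
# you would plug in the tumor model's predict function and a real scan.
import numpy as np
from alibi.explainers import AnchorImage

def predict_fn(images):
    # Dummy classifier: class 1 ("tumor") if the image is bright enough on average.
    return (images.mean(axis=(1, 2, 3)) > 0.2).astype(int)

image_shape = (128, 128, 3)
image = np.zeros(image_shape, dtype=np.float32)
image[32:96, 32:96, :] = 1.0  # bright square standing in for a "tumor"

# Superpixel segmentation splits the image into candidate anchor regions;
# "off" superpixels are filled in from a black background image.
explainer = AnchorImage(predict_fn, image_shape, segmentation_fn='slic',
                        segmentation_kwargs={'n_segments': 15, 'compactness': 20},
                        images_background=np.zeros((1,) + image_shape, dtype=np.float32))

explanation = explainer.explain(image, threshold=0.95, p_sample=0.5)

# explanation.anchor is the image masked to the superpixels that keep the
# prediction fixed; ideally these line up with the tumor region itself.
print(explanation.anchor.shape)
```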