Lecture 5 — Explainable AI in practice

XAI Course Notes

How do we properly incorporate explanations into machine learning projects, and what aspects should we keep in mind? Over the past few years, the need to explain the output of machine learning models has received growing attention. Explanations not only reveal the reasons behind a model's predictions and increase users' trust in the model; they can also serve several other purposes. To fully utilize explanations and incorporate them into machine learning projects, three aspects should be taken into consideration: the explanation goals, the explanation method, and the explanation's quality. In this talk, we will discuss how to select an appropriate explanation method based on the intended purpose of the explanation. Then, we will present two approaches for evaluating explanations, including practical examples of evaluation metrics, while highlighting the importance of assessing explanation quality. Next, we will examine the various purposes explanations can serve, along with the stage of the machine learning pipeline in which the explanation should be incorporated. Finally, we will present a real use case from Microsoft, classifying scripts as malware-related, and show how high-dimensional explanations can help in this context.

explainable AI
XAI
machine learning
ML
data science
counterfactuals
causal inference
CI
Author

Oren Bochman

Published

Tuesday, March 28, 2023

The XAI course provides a comprehensive overview of explainable AI, covering both theory and practice, and exploring various use cases for explainability.

Participants will learn not only how to generate explanations, but also how to evaluate and effectively communicate these explanations to diverse stakeholders.

overview link

Series Poster

series poster

Session Description

  • How do we properly incorporate explanations into machine learning projects, and what aspects should we keep in mind?

  • Over the past few years the need to explain the output of machine learning models has received growing attention.

  • Explanations not only reveal the reasons behind a model's predictions and increase users' trust in the model; they can also serve several other purposes.

  • To fully utilize explanations and incorporate them into machine learning projects, three aspects should be taken into consideration: the explanation goals, the explanation method, and the explanation's quality.

  • In this talk, we will discuss how to select the appropriate explanation method based on the intended purpose of the explanation.

  • Then, we will present two approaches for evaluating explanations, including practical examples of evaluation metrics, while highlighting the importance of assessing explanation quality.

  • Next, we will examine the various purposes explanations can serve, along with the stage of the machine learning pipeline in which the explanation should be incorporated.

  • Finally, we will present a real use case from Microsoft, classifying scripts as malware-related, and show how high-dimensional explanations can help in this context.

Session Video

slide
  • The task here was to identify bad insurance claims, e.g.

    • when the product was out of warranty

    • the item was not insured

    • the damage was not covered.

  • The model flagged many claims that the insurance assessors had not, and they were skeptical.

  • The data scientist was put on the spot and needed local explanations (see the sketch after this list).
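
As a minimal sketch of such a local explanation, assuming a tree-based claims classifier and the shap package: the code below explains a single claim. The data set, feature names, and "bad claim" rule are invented purely for illustration and are not from the talk.

```python
import numpy as np
import pandas as pd
import shap  # pip install shap
from sklearn.ensemble import GradientBoostingClassifier

# Toy stand-in for a claims table; features and labels are hypothetical.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "days_since_purchase": rng.integers(0, 1500, size=500),
    "claim_amount": rng.uniform(50, 5000, size=500),
    "item_insured": rng.integers(0, 2, size=500),
})
y = ((X["days_since_purchase"] > 730) | (X["item_insured"] == 0)).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Local explanation for one claim (here simply the first row): per-feature
# contributions, in log-odds, to this specific prediction.
explainer = shap.TreeExplainer(model)
claim = X.iloc[[0]]
contributions = np.ravel(explainer.shap_values(claim))
for name, value in zip(X.columns, contributions):
    print(f"{name:>22}: {value:+.3f}")
```

Because the attribution is per claim, the data scientist can walk a skeptical assessor through exactly which fields pushed that particular claim toward the "bad" class.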


slide

So we have some use cases; how do we actually do it?


Adding post-hoc XAI to an ML project

The idea is a process with five steps:

  1. Understand the stakeholders, i.e. the end users of the project.
  2. Identify the goals of using explanations.
  3. Choose an XAI method that fits the project's properties.
  4. Implication: take the goal one step further.
  5. Decide on evaluation metrics.

1. Who are the explanations for?

Different stakeholders have different needs from an ML model. Once we have captured these needs, we can decide on a suitable strategy for providing an appropriate explanation. Understanding these needs will also help us select the metrics used to determine an explanation's effectiveness.

Stakeholder         | Their Goal
Data scientists     | Debug and refine the model
Decision makers     | Assess model fit to a business strategy
Model risk analysts | Assess the model's robustness
Regulators          | Inspect the model's reliability and impact on customers
Consumers           | Transparency into decisions that affect them

Table 1: XAI Stakeholders and their Needs

2. What are the explanation goals?

  1. understanding predictions

  2. model debugging and verification

  3. improving performance

  4. increasing trust


3. XAI Method Selection

  • Data
    • What is the scope of the explanation?
    • What is the data type?
    • What output are we trying to explain?
  • Access level
    • Can the explainer access the model?
    • Can the explainer access the data?
  • Stage
    • At what stage of the ML pipeline is the explanation needed? (A toy selector covering these questions is sketched below.)
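
As a rough illustration of how these questions narrow down the options, here is a hypothetical helper; the function name, the rules, and the method lists inside it are my own simplifications rather than anything prescribed in the lecture.

```python
# Hypothetical first-pass shortlist based on scope, data type, and access level.
def suggest_xai_methods(scope: str, data_type: str, white_box: bool) -> list[str]:
    """scope: 'local' or 'global'; data_type: 'tabular', 'text', or 'image';
    white_box: True if the explainer can access the model's internals."""
    if scope == "local":
        methods = ["LIME", "SHAP (per prediction)", "counterfactuals", "anchors"]
    else:
        methods = ["permutation importance", "PDP/ICE",
                   "SHAP summary plots", "global surrogate trees (InTrees)"]
    if data_type in ("text", "image"):
        # Partial-dependence style plots are mainly meaningful for tabular features.
        methods = [m for m in methods if m != "PDP/ICE"]
        if white_box:
            methods.append("gradient-based saliency maps")
    return methods

print(suggest_xai_methods(scope="local", data_type="tabular", white_box=False))
```

In a real project this shortlist would then be filtered further by the stage of the pipeline and by the evaluation metrics chosen in step 5.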

Explanation methods map

How do we pick the correct approach to explaining the model?

The next figure can help us pick a suitable approach.

map of explainability approaches

from (Belle and Papantonis 2021)

Anchors and InTrees were not discussed earlier in the course.

Table 2: Comparing models on the kinds of transparency that are enabled

from (Belle and Papantonis 2021)

XAI methods 1

from (Belle and Papantonis 2021)

XAI methods 2

from (Belle and Papantonis 2021)

  • We covered SHAP, which attributes a prediction to each feature as its payoff in a cooperative game.

  • PDP / ICE: vary one feature over a grid while holding the others fixed, to see at what point the predicted class switches (a hand-rolled sketch follows this list).

  • Counterfactuals utilize the original model's decision boundary to identify the intervention that would switch the predicted class (a toy search sketch also appears below).

  • Anchors, introduced in (Ribeiro, Singh, and Guestrin 2018) and called Scoped Rules in (Molnar 2022, §9.4), are if-then rules that keep the prediction fixed whenever their conditions hold.

  • Deletion diagnostics: how would removing the highest-leverage points change the model's decision boundary?

  • InTrees: what rules approximate our decision?
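
To make the PDP / ICE idea concrete, here is a minimal hand-rolled sketch. It assumes a fitted binary classifier (model) with a predict_proba method and a pandas DataFrame X; in practice, sklearn.inspection.PartialDependenceDisplay does this (and the plotting) for you.

```python
import numpy as np

def ice_and_pdp(model, X, feature, grid_size=20):
    """ICE: one curve per row, obtained by varying `feature` over a grid while
    holding every other column fixed. PDP: the average of the ICE curves."""
    grid = np.linspace(X[feature].min(), X[feature].max(), grid_size)
    ice = np.empty((len(X), grid_size))
    for j, value in enumerate(grid):
        X_mod = X.copy()
        X_mod[feature] = value                        # intervene on one feature
        ice[:, j] = model.predict_proba(X_mod)[:, 1]  # P(class 1) for every row
    pdp = ice.mean(axis=0)
    return grid, ice, pdp

# e.g. grid, ice, pdp = ice_and_pdp(model, X, "claim_amount")
```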


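Similarly, a toy counterfactual search, illustrative only and not a library API: nudge one numeric feature of a single instance until the model's predicted class flips. The model, row, and feature name in the usage comment are assumed to come from whatever setup you already have.

```python
def one_feature_counterfactual(model, x_row, feature, step, max_steps=100):
    """x_row: a single-row DataFrame. Repeatedly shift `feature` by `step` and
    return the first modified copy whose predicted class differs from the
    original, or None if no flip is found within max_steps."""
    original_class = model.predict(x_row)[0]
    candidate = x_row.copy()
    for _ in range(max_steps):
        candidate[feature] = candidate[feature] + step
        if model.predict(candidate)[0] != original_class:
            return candidate
    return None

# e.g. one_feature_counterfactual(model, X.iloc[[0]], "days_since_purchase", step=-30)
```

Real counterfactual methods (e.g. DiCE) also search over several features and penalize large or implausible changes; this sketch only conveys the core idea of crossing the decision boundary with a minimal intervention.
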
4. XAI Implications


slide

from (DARPA, n.d.)

cf. https://www.darpa.mil/program/explainable-artificial-intelligence


slide

Model Verification

slide

Model Debugging

slide
  • Look at the model's errors: the false positives and false negatives.

  • Use SHAP to identify features that should be excluded (see the sketch below).1
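
A minimal sketch of that loop, assuming a fitted single-output tree model (model, e.g. gradient boosting), hypothetical held-out data X_val and y_val, and the shap package; nothing here is specific to the Microsoft use case.

```python
import numpy as np
import shap

# Which features drive the false positives? Rank them by mean |SHAP| computed
# only on the misclassified subset.
preds = model.predict(X_val)
false_pos = X_val[(preds == 1) & (np.asarray(y_val) == 0)]

explainer = shap.TreeExplainer(model)
abs_shap = np.abs(explainer.shap_values(false_pos))   # shape: (n_errors, n_features)
ranking = sorted(zip(X_val.columns, abs_shap.mean(axis=0)), key=lambda p: -p[1])
for name, score in ranking:
    print(f"{name}: {score:.3f}")  # high scores are candidates to inspect or drop
```

Features that dominate the errors but look spurious (leaky identifiers, data-collection artifacts) are the ones worth excluding or, better, investigating causally, as the footnote suggests.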



References

Belle, Vaishak, and Ioannis Papantonis. 2021. “Principles and Practice of Explainable Machine Learning.” Frontiers in Big Data 4 (July). https://doi.org/10.3389/fdata.2021.688969.
DARPA. n.d. “XAI Concept.” https://www.darpa.mil/program/explainable-artificial-intelligence.
Molnar, Christoph. 2022. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. 2nd ed. https://christophm.github.io/interpretable-ml-book.
Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2018. “Anchors: High-Precision Model-Agnostic Explanations.” Proceedings of the AAAI Conference on Artificial Intelligence 32 (1). https://doi.org/10.1609/aaai.v32i1.11491.

Footnotes

  1. But we should probably be using causal inference for this, since a fix based on a local explanation will affect the entire data set. If we have a causal DAG, we can use it to identify nodes that act as confounders on some causal path.↩︎

Reuse

CC SA BY-NC-ND

Citation

BibTeX citation:
@online{bochman2023,
  author = {Bochman, Oren},
  title = {Lecture 5 -\/-\/- {Explainable} {AI} in Practice},
  date = {2023-03-28},
  url = {https://orenbochman.github.io/notes/XAI/l05/},
  langid = {en}
}
For attribution, please cite this work as:
Bochman, Oren. 2023. “Lecture 5 --- Explainable AI in Practice.” March 28, 2023. https://orenbochman.github.io/notes/XAI/l05/.