3 Explaining Deep Neural Network Models

XAI Course Notes

Rigorously understanding how deep learning models function may allow us to steer and control their behavior and avoid unwanted consequences. In addition, today's highly capable AI systems could be useful for studying intelligence itself. Can we reverse engineer neural networks from their weights, much as one might reverse engineer a compiled binary? Can we gain insight into neural computation in the same way neuroscientists study the brain?

explainable AI
XAI
machine learning
ML
data science
counterfactuals
causal inference
CI
Author

Oren Bochman

Published

Sunday, March 19, 2023

Series Poster

series poster

Session Video

not available!

Reuse

CC SA BY-NC-ND

Citation

BibTeX citation:
@online{bochman2023,
  author = {Bochman, Oren},
  title = {3 {Explaining} {Deep} {Neural} {Network} {Models}},
  date = {2023-03-19},
  url = {https://orenbochman.github.io/notes/XAI/l03/},
  langid = {en}
}
For attribution, please cite this work as:
Bochman, Oren. 2023. “3 Explaining Deep Neural Network Models.” March 19, 2023. https://orenbochman.github.io/notes/XAI/l03/.