Lecture 1d: A simple example of learning
Visualization of neural networks is one of the few ways to gain insight into what is going on inside the black box.
- Consider a neural network with two layers of neurons.
- Neurons in the top layer represent known shapes.
- Neurons in the bottom layer represent pixel intensities.
- A pixel gets to vote if it has ink on it.
- Each inked pixel can vote for several different shapes.
- The shape that gets the most votes wins.
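The voting scheme above can be sketched in a few lines of numpy. The sizes and the random weights here are assumed purely for illustration: each inked pixel adds its per-class weights to the vote totals, and the class with the most votes wins.

```python
import numpy as np

# Hypothetical sizes: 9 pixels (a 3x3 image), 3 shape classes.
n_pixels, n_classes = 9, 3

rng = np.random.default_rng(0)
# weights[i, c]: how strongly pixel i votes for class c.
weights = rng.normal(size=(n_pixels, n_classes))

def classify(image, weights):
    """Each inked (active) pixel adds its weights to every class's
    vote total; the class with the most votes wins."""
    active = image.reshape(-1) > 0          # which pixels have ink
    votes = weights[active].sum(axis=0)     # sum the votes from inked pixels
    return int(np.argmax(votes))

image = rng.integers(0, 2, size=(3, 3))
print(classify(image, weights))
```

Note that pixels without ink contribute nothing, which is exactly the "a pixel gets to vote if it has ink on it" rule.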
How to display the weights
Give each output unit its own “map” of the input image and display the weight coming from each pixel in the location of that pixel in the map.
Use a black or white blob with the area representing the magnitude of the weight and the color representing the sign.
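As a rough text-mode stand-in for the blob display, one can lay each output unit's weights out in the geometry of the input image, with the number of `#` characters standing in for blob area and a `+`/`-` prefix for the sign. The 3x3 shape and the weight values are assumed for illustration.

```python
import numpy as np

# Hypothetical weights of one output unit over a 3x3 input (assumed values).
weights = np.array([ 0.9, -0.2, 0.1,
                    -0.8,  0.5, 0.0,
                     0.3, -0.1, 0.7])

def weight_map(w, shape=(3, 3)):
    """Render one output unit's weights in the layout of the input image:
    run length of '#' ~ magnitude, prefix shows the sign."""
    rows = []
    for row in w.reshape(shape):
        cells = []
        for v in row:
            blob = '#' * (1 + int(abs(v) * 3))   # "area" grows with |weight|
            cells.append(('+' if v >= 0 else '-') + blob)
        rows.append(' '.join(f'{c:>5}' for c in cells))
    return '\n'.join(rows)

print(weight_map(weights))
```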
How to learn the weights
Show the network an image and increment the weights from active pixels to the correct class.
Then decrement the weights from active pixels to whichever class the network actually guesses.
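This update rule can be sketched end to end on a toy problem. The 2x2 images and the two classes (a vertical bar and a horizontal bar) are assumptions made here for illustration; the increment/decrement rule itself is the one described above.

```python
import numpy as np

# Toy data (assumed): 2x2 images, two classes.
images = [np.array([[1, 0], [1, 0]]),   # vertical bar   -> class 0
          np.array([[1, 1], [0, 0]])]   # horizontal bar -> class 1
targets = [0, 1]

weights = np.zeros((4, 2))              # one weight per (pixel, class)

def classify(image, weights):
    active = image.reshape(-1) > 0
    return int(np.argmax(weights[active].sum(axis=0)))

def update(weights, image, target):
    """Increment weights from active pixels to the correct class,
    decrement weights from active pixels to the guessed class."""
    guess = classify(image, weights)
    if guess != target:
        active = image.reshape(-1) > 0
        weights[active, target] += 1.0
        weights[active, guess] -= 1.0

for _ in range(5):                      # a few passes over the data
    for img, t in zip(images, targets):
        update(weights, img, t)

print([classify(img, weights) for img in images])   # -> [0, 1]
```

When the guess is already correct nothing changes, so the weights settle once both images are classified correctly.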
The learned weights
The details of the learning algorithm will be explained in future lectures.
Why the simple learning algorithm is insufficient
- A two-layer network with a single winner in the top layer is equivalent to having a rigid template for each shape.
- The winner is the template that has the biggest overlap with the ink.
- The ways in which hand-written digits vary are much too complicated to be captured by simple template matches of whole shapes.
- To capture all the allowable variations of a digit we need to learn the features that it is composed of.
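The rigid-template equivalence, and its failure mode, can be seen in a small sketch. The 3x3 templates and the shifted input are assumptions made for illustration: the winner is simply the template with the biggest overlap with the ink, and a one-pixel shift is already enough to break it.

```python
import numpy as np

# Hypothetical rigid binary templates (assumed 3x3 shapes).
templates = {
    'vertical':   np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]]),
    'horizontal': np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]]),
}

def match(image):
    """Winner = the template with the biggest overlap with the ink."""
    overlaps = {name: int((image & t).sum()) for name, t in templates.items()}
    return max(overlaps, key=overlaps.get)

# A vertical bar shifted one pixel to the left: still a vertical bar to us,
# but its overlap with the vertical template drops to zero.
shifted = np.array([[1, 0, 0], [1, 0, 0], [1, 0, 0]])
print(match(shifted))   # -> 'horizontal'
```

The shifted bar is misclassified because whole-shape overlap ignores position, which is why the variations of handwritten digits need learned component features rather than rigid templates.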
Reuse
CC SA BY-NC-ND
Citation
BibTeX citation:
@online{bochman2017,
  author = {Bochman, Oren},
  title = {Deep {Neural} {Networks} - {Notes} for Lecture 1d},
  date = {2017-07-05},
  url = {https://orenbochman.github.io/notes/dnn/dnn-01/l01d.html},
  langid = {en}
}
For attribution, please cite this work as:
Bochman, Oren. 2017. “Deep Neural Networks - Notes for Lecture 1d.” July 5, 2017. https://orenbochman.github.io/notes/dnn/dnn-01/l01d.html.