Deep Neural Networks - Notes for Lecture 4b

For the course by Geoffrey Hinton on Coursera

Author: Oren Bochman

Published: Saturday, August 12, 2017

Lecture 4b: A brief diversion into cognitive science

This video is part of the course, i.e. it is not optional, despite what Geoff says at the beginning. It gives a high-level interpretation of what is going on in the family-trees network, and it contrasts two types of inference:

  • Conscious inference, based on relational knowledge.
  • Unconscious inference, based on distributed representations.

What the family trees example tells us about concepts

  • There has been a long debate in cognitive science between two rival theories of what it means to have a concept:

    • The feature theory: a concept is a set of semantic features.

      – This is good for explaining similarities between concepts.

      – It is convenient: a concept is a vector of feature activities.

    • The structuralist theory: the meaning of a concept lies in its relationships to other concepts.

      – So conceptual knowledge is best expressed as a relational graph.

      – Minsky used the limitations of perceptrons as evidence against feature vectors and in favor of relational graph representations.

Both sides are wrong

  • These two theories need not be rivals. A neural net can use vectors of semantic features to implement a relational graph.

    • In the neural network that learns family trees, no explicit inference is required to arrive at the intuitively obvious consequences of the facts that have been explicitly learned.

    • The net can “intuit” the answer in a single forward pass (see the sketch after this list).

  • We may use explicit rules for conscious, deliberate reasoning, but we do a lot of commonsense, analogical reasoning by just “seeing” the answer with no conscious intervening steps.

    • Even when we are using explicit rules, we need to just see which rules to apply.
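
To make this concrete, here is a minimal sketch of the family-trees network from the lectures. This is my own illustrative NumPy code, not Hinton's implementation: the sizes for people (24), relationships (12), and the 6-unit feature layers follow the lecture, but the 12-unit hidden layer and the random weights are assumptions, so the untrained output is meaningless. The point is only the shape of the computation: a kinship query is answered in one forward pass through learned distributed codes.

```python
# Minimal sketch (assumed details noted above) of the family-trees net:
# one-hot person + one-hot relationship in, one-hot person out, with
# small distributed feature layers in between.
import numpy as np

rng = np.random.default_rng(0)

N_PEOPLE, N_RELS = 24, 12   # one-hot ("localist") input and output codes
N_FEAT, N_HIDDEN = 6, 12    # small distributed feature layers

# Each person and relationship gets a learned feature vector; a central
# layer combines them to score every candidate output person.
W_person = rng.normal(0.0, 0.1, (N_PEOPLE, N_FEAT))
W_rel    = rng.normal(0.0, 0.1, (N_RELS, N_FEAT))
W_hidden = rng.normal(0.0, 0.1, (2 * N_FEAT, N_HIDDEN))
W_out    = rng.normal(0.0, 0.1, (N_HIDDEN, N_PEOPLE))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(person_id, rel_id):
    """Answer "person1 has relationship R to whom?" in one forward pass."""
    feats = np.concatenate([W_person[person_id], W_rel[rel_id]])
    hidden = sigmoid(feats @ W_hidden)
    return sigmoid(hidden @ W_out)   # one score per candidate person2

scores = forward(person_id=3, rel_id=7)   # e.g. "who is person 3's aunt?"
print("most likely person2:", int(scores.argmax()))
```

In the lecture, the net is trained on most of the kinship triples and then fills in held-out ones with this same single pass; that completion, with no chain of explicit rule applications, is the “intuiting” described above.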

Localist and distributed representations of concepts

  • The obvious way to implement a relational graph in a neural net is to treat a neuron as a node in the graph and a connection as a binary relationship. But this “localist” method will not work:

    • We need many different types of relationship and the connections in a neural net do not have discrete labels.

    • We need ternary relationships as well as binary ones, e.g. “A is between B and C”.

  • The right way to implement relational knowledge in a neural net is still an open issue.

    • But many neurons are probably used for each concept and each neuron is probably involved in many concepts. This is called a “distributed representation” (illustrated below).
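
The contrast between the two coding schemes can be shown with a toy example. This is my own illustration, not from the lecture, and the concepts and feature names are made up: a localist code makes every pair of concepts equally dissimilar, while a distributed code lets similar concepts share units and so overlap.

```python
# Toy contrast between localist and distributed codes (illustrative
# concepts and features, as noted above).
import numpy as np

concepts = ["dog", "cat", "wolf", "car"]

# Localist: one dedicated unit per concept (an identity matrix), so the
# code carries no similarity structure at all.
localist = np.eye(len(concepts))

# Distributed: each concept is a pattern over shared feature units
# (columns: furry, domestic, barks, has_wheels).
distributed = np.array([
    [1, 1, 1, 0],   # dog
    [1, 1, 0, 0],   # cat  (shares "furry" and "domestic" with dog)
    [1, 0, 1, 0],   # wolf (shares "furry" and "barks" with dog)
    [0, 0, 0, 1],   # car  (disjoint from the animals)
], dtype=float)

def cosine(codes, i, j):
    """Cosine similarity between the codes of concepts i and j."""
    a, b = codes[i], codes[j]
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print("localist    dog~cat:", cosine(localist, 0, 1))     # 0.0
print("distributed dog~cat:", cosine(distributed, 0, 1))  # ~0.82
print("distributed dog~car:", cosine(distributed, 0, 3))  # 0.0
```

Note also that with n units a localist code can represent only n concepts, while a distributed code can represent many more patterns, with each unit participating in many concepts; that many-to-many sharing is exactly what the slide means by a distributed representation.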
