Lecture 1b: What are neural networks?
Some tasks that are easy for humans, like vision, are hard for software, and vice versa (e.g. chess).
Reasons to study neural computation
- To understand how the brain actually works.
- It's very big, very complicated, and made of stuff that dies when you poke it, so we need to use computer simulations.
- To understand a style of parallel computation inspired by neurons and their adaptive connections.
- Very different style from sequential computation.
- Should be good for things that brains are good at (e.g. vision).
- Should be bad for things that brains are bad at (e.g. 23 × 71).
- To solve practical problems by using novel learning algorithms inspired by the brain (this course)
- Learning algorithms can be very useful even if they are not how the brain actually works.
A typical cortical neuron
- Gross physical structure:
- There is one axon that branches
- There is a dendritic tree that collects input from other neurons.
- Axons typically contact dendritic trees at synapses
- A spike of activity in the axon causes charge to be injected into the post-synaptic neuron.
- Spike generation:
- There is an axon hillock that generates outgoing spikes whenever enough charge has flowed in at synapses to depolarize the cell membrane. A toy model of this process is sketched below.
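One crude way to capture this picture, going beyond the lecture, is an integrate-and-fire style unit: each incoming spike injects charge weighted by its synapse, the membrane leaks, and the unit emits an outgoing spike once the accumulated charge crosses a threshold. The weights, leak, and threshold below are illustrative assumptions, not physiological values.

```python
import numpy as np

def run_neuron(spike_trains, weights, threshold=1.0, leak=0.9):
    """Toy integrate-and-fire unit: synapses inject weighted charge,
    the membrane leaks, and a spike fires when charge crosses the
    threshold. All constants are illustrative, not physiological."""
    charge = 0.0
    out_spikes = []
    for t, incoming in enumerate(spike_trains):        # incoming: 0/1 per synapse
        charge = leak * charge + np.dot(weights, incoming)  # charge injected at synapses
        if charge >= threshold:                        # enough depolarization at the hillock
            out_spikes.append(t)                       # emit an outgoing spike
            charge = 0.0                               # reset after firing
    return out_spikes

# three input synapses, five time steps
spikes = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 1], [0, 0, 0], [1, 0, 0]])
print(run_neuron(spikes, weights=np.array([0.4, 0.3, 0.5])))  # spikes at t=1 and t=2
```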
Synapses
- When a spike of activity travels along an axon and arrives at a synapse it causes vesicles of transmitter chemical to be released.
- There are several kinds of transmitter.
- The transmitter molecules diffuse across the synaptic cleft and bind to receptor molecules in the membrane of the post-synaptic neuron thus changing their shape.
- This opens up holes that allow specific ions in or out.
How synapses adapt
- The effectiveness of the synapse can be changed:
- by varying the number of vesicles of transmitter released.
- by varying the number of receptor molecules.
- Synapses are slow, but they have advantages over RAM
- They are very small and very low-power.
- They adapt using locally available signals
- But what rules do they use to decide how to change? One classic candidate is sketched below.
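The lecture leaves this question open. One classic candidate, not given in the lecture, is Hebb's rule: strengthen a synapse when pre- and post-synaptic activity coincide, using only signals available locally at that synapse. A minimal sketch, with an assumed learning rate and a toy post-synaptic response:

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01):
    """One Hebbian step: each weight changes using only signals available
    at its own synapse (pre- and post-synaptic activity). The learning
    rate lr is an assumed value, not one from the lecture."""
    return weights + lr * post * pre   # delta w_i = lr * post * pre_i

w = np.array([0.4, 0.3, 0.5])
pre = np.array([1.0, 0.0, 1.0])        # pre-synaptic activities
post = float(np.dot(w, pre) > 0.8)     # toy post-synaptic response (fires or not)
print(hebbian_update(w, pre, post))    # active synapses are strengthened
```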
How the brain works on one slide!
- Each neuron receives inputs from other neurons
- A few neurons also connect to receptors.
- Cortical neurons use spikes to communicate.
- The effect of each input line on the neuron is controlled by a synaptic weight
- The weights can be positive or negative.
- The synaptic weights adapt so that the whole network learns to perform useful computations
- Recognizing objects, understanding language, making plans, controlling the body.
- You have about $10^{11}$ neurons, each with about $10^4$ weights.
- A huge number of weights can affect the computation in a very short time: much better bandwidth than a workstation. The sketch below illustrates this parallelism in miniature.
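To make the parallelism concrete, here is a minimal sketch (mine, not the lecture's) of a layer of artificial neurons in which all ten million weights take part in a single step of computation via one matrix-vector product. The layer sizes and the rectifier nonlinearity are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A layer of 1000 neurons, each with 10000 incoming weights: ten million
# weights all participate in one step of computation. Sizes are
# illustrative, far below the brain's ~10^11 neurons x ~10^4 weights.
W = rng.normal(size=(1000, 10_000))    # synaptic weights (positive or negative)
x = rng.normal(size=10_000)            # activities on the 10000 input lines

y = np.maximum(W @ x, 0.0)             # weighted sums, then a simple nonlinearity
print(y.shape)                         # (1000,)
```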
Modularity and the brain
- Different bits of the cortex do different things.
- Local damage to the brain has specific effects.
- Specific tasks increase the blood flow to specific regions.
- But cortex looks pretty much the same all over.
- Early brain damage makes functions relocate.
- Cortex is made of general-purpose stuff that has the ability to turn into special-purpose hardware in response to experience.
- This gives rapid parallel computation plus flexibility.
- Conventional computers get flexibility by having stored sequential programs, but this requires very fast central processors to perform long sequential computations.