Lecture 15a: From PCA to autoencoders
Remember how, in assignment 4, we used unsupervised learning to obtain a different representation of each data case? PCA is another example of that, but with PCA there’s even greater emphasis on obtaining that different representation. Chapter 15 is about unsupervised learning using deterministic feedforward networks. By contrast, the first part of the course was about supervised learning using deterministic feedforward networks, and the second part was about unsupervised learning using very different types of networks.
0:26. A linear manifold is a hyperplane.
1:25. A curved manifold is no longer a hyperplane. One might say it’s a bent hyperplane, but really, “hyperplane” means that it’s not bent.
1:37. “N-dimensional data” means that the data has N components and is therefore handled in a neural network by N input units.
1:58. Here, “lower-dimensional subspace” is yet another synonym for “linear manifold” and “hyperplane”.
2:46 and 3:53. Geoffrey means the squared reconstruction error.
4:43. Here, for the first time, we have a deterministic feedforward network with lots of output units that are not a softmax group. An “autoencoder” is a neural network that learns to encode data in such a way that the original can be approximately reconstructed.
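Not from the lecture, but here is a minimal NumPy sketch of the point being made: PCA projects the data onto a k-dimensional linear manifold, and the objective it minimizes is exactly the squared reconstruction error (the same objective a linear autoencoder with a k-unit bottleneck ends up minimizing). The toy data and k = 3 are arbitrary.

```python
import numpy as np

# Toy data: 500 cases, 10 components each (so 10 input units).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))

# PCA via the SVD of the mean-centred data.
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 3                       # dimensionality of the linear manifold
W = Vt[:k]                  # the k principal directions

codes = Xc @ W.T            # "encoder": project onto the manifold
X_hat = codes @ W + mean    # "decoder": reconstruct from the codes

# The quantity Geoffrey refers to: the squared reconstruction error.
sq_err = np.sum((X - X_hat) ** 2)
print(f"squared reconstruction error with k={k}: {sq_err:.2f}")
```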
Lecture 15b: Deep autoencoders
2:51. “Gentle backprop” means training with a small learning rate for not too long, i.e. not changing the weights a lot.
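Here is a small PyTorch sketch (mine, not from the lecture) of a deep autoencoder in roughly the shape shown in this video (784-1000-500-250-30 with a mirrored decoder), fine-tuned with “gentle backprop”, i.e. plain SGD with a small learning rate. The layer sizes and learning rate are illustrative, and the greedy layer-wise pre-training stage is omitted.

```python
import torch
import torch.nn as nn

def mlp(sizes, final_act=None):
    """Stack of Linear layers with logistic hidden units."""
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.Sigmoid())
    if final_act is not None:
        layers.append(final_act)
    return nn.Sequential(*layers)

encoder = mlp([784, 1000, 500, 250, 30])                 # linear 30-d code units
decoder = mlp([30, 250, 500, 1000, 784], nn.Sigmoid())   # pixel reconstructions in [0, 1]

# "Gentle backprop": fine-tune the whole stack with a small learning rate.
opt = torch.optim.SGD(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
x = torch.rand(64, 784)                  # stand-in for a batch of MNIST images
recon = decoder(encoder(x))
loss = ((recon - x) ** 2).mean()         # squared reconstruction error
loss.backward()
opt.step()
```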
Lecture 15c: Deep autoencoders for document retrieval
“Latent semantic analysis” and “Deep Learning” sound pretty good as phrases… there’s definitely a marketing component in choosing such names :)
1:14. The application for the method in this video is this: “given one document (called the query document), find other documents similar to it in this giant collection of documents.”
2:04. Some of the text on this slide is still hidden, hence for example the count of 1 for “reduce”.
3:09. This slide is a bit of a technicality, not very central to the story. If you feel confused, postpone focusing on this one until you’ve understood the others well.
6:49. Remember t-SNE?
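To make the setup concrete, here is a rough sketch of retrieval in code space: each document becomes a word-count vector, an encoder maps it to a short code, and we return the documents whose codes are closest to the query document’s code. The random “encoder”, the 2000-word vocabulary, and the cosine-similarity measure are stand-ins of mine, not details from the lecture.

```python
import numpy as np

# Documents as word-count vectors over a fixed vocabulary (here 2000 words).
rng = np.random.default_rng(1)
doc_counts = rng.poisson(0.1, size=(1000, 2000)).astype(float)

# Stand-in for the trained encoder half of the autoencoder: here just a
# random linear projection to a 10-d code (the real thing is learned).
W_enc = rng.normal(size=(2000, 10))
codes = doc_counts @ W_enc

def most_similar(query_idx, codes, top_k=5):
    """Indices of the documents whose codes are closest (by cosine
    similarity) to the query document's code."""
    q = codes[query_idx]
    sims = codes @ q / (np.linalg.norm(codes, axis=1) * np.linalg.norm(q) + 1e-9)
    order = np.argsort(-sims)
    return [i for i in order if i != query_idx][:top_k]

print(most_similar(0, codes))
```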
Lecture 15d: Semantic Hashing
We’re continuing our attempts to find documents (or images) in some huge given pile that are similar to a single given document (or image). Last time, we focused on making the search return truly similar documents; this time, we focus on making the search fast (while still good). This video is one of the few times when machine learning goes hand in hand with intrinsically discrete computations (the use of bits, in this case). We’ll still use a deep autoencoder. This video is also an example of using noise as a regularizer (see video 9c). Crucial in this story is the notion that the units of the middle layer, the “bottleneck”, try to convey as much information as possible in their states, because the reconstruction is based on those states: clearly, the more information the states contain, the better the reconstruction can potentially be.
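A rough sketch of the two ingredients described in this video: during training, Gaussian noise is added to the inputs of the logistic code units, pushing them to be firmly on or off (the noise-as-regularizer idea); at retrieval time, the code is thresholded to bits and used as a memory address, and similar documents are found by also probing addresses a few bit-flips away. The code length, noise level, and helper names here are illustrative assumptions.

```python
import torch
import torch.nn as nn
from collections import defaultdict
from itertools import combinations

CODE_BITS = 16  # illustrative; the lecture uses somewhat longer codes

class NoisyBinaryBottleneck(nn.Module):
    """Logistic code layer whose inputs get Gaussian noise during training,
    which pushes the units to be firmly on or off."""
    def forward(self, logits):
        if self.training:
            logits = logits + torch.randn_like(logits) * 4.0  # noise level is an assumption
        return torch.sigmoid(logits)

def to_address(code_probs):
    """Threshold the code units to bits and use them as a memory address."""
    return tuple((code_probs > 0.5).int().tolist())

def neighbours(address, max_flips=1):
    """All addresses within a small Hamming ball of the query address."""
    yield address
    for r in range(1, max_flips + 1):
        for idx in combinations(range(len(address)), r):
            flipped = list(address)
            for i in idx:
                flipped[i] ^= 1
            yield tuple(flipped)

# Usage sketch: store a document under its address, then probe nearby addresses.
table = defaultdict(list)
addr = to_address(torch.rand(CODE_BITS))
table[addr].append("doc-0")
candidates = [doc for a in neighbours(addr, max_flips=1) for doc in table[a]]
print(candidates)
```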
Lecture 15e: Learning binary codes for image retrieval
It is essential that you understand video 15d before you try 15e. 7:13. Don’t worry if you don’t understand that last comment.
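The reason the binary codes of 15d/15e make retrieval fast: comparing two codes is just an XOR followed by a bit count. A tiny sketch (the 256-bit code length follows the lecture’s example; the rest is illustrative):

```python
# Comparing images by the Hamming distance between their binary codes:
# one XOR plus a popcount per pair, which is what makes retrieval fast.
def hamming(code_a: int, code_b: int) -> int:
    return (code_a ^ code_b).bit_count()   # int.bit_count() needs Python >= 3.10

# Two illustrative 256-bit codes packed into integers.
a = int("1011" * 64, 2)
b = int("1001" * 64, 2)
print(hamming(a, b))   # 64 differing bits
```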
Lecture 15f: Shallow autoencoders for pre-training
This video is quite separate from the others of chapter 15.
CNN Architecture & Hyperparameters
Convolutional Neural Network example (a 32x32 RGB image):
INPUT [32, 32, 3]
CONV [32, 32, K] – a bank of K learned filters (feature detectors)
RELU [32, 32, K] – elementwise activation
POOL [16, 16, K] – downsampling: spatial size halved, depth unchanged
FC – converts the final volume to class probabilities

Hyperparameters:
K – the depth: the number of filters/kernels to use, say 12
F – the RECEPTIVE FIELD: the spatial extent (width and height, in pixels) of each filter, i.e. the patch of the input that a neuron sees
S – the STRIDE: the step size used when sliding the filters, so that neighbouring neurons see overlapping patches, say 1
P – the amount of PADDING: zeros added around the input, used because the output and input might otherwise have different spatial sizes
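A standard consequence of these hyperparameters (not spelled out above, but it is why padding matters): a CONV layer maps an input of spatial size W to an output of size (W - F + 2P)/S + 1. A small sketch:

```python
def conv_output_size(w: int, f: int, s: int, p: int) -> int:
    """Spatial output size of a CONV (or POOL) layer:
    w = input width/height, f = receptive field, s = stride, p = padding."""
    assert (w - f + 2 * p) % s == 0, "hyperparameters don't tile the input evenly"
    return (w - f + 2 * p) // s + 1

# 32x32 input, 3x3 filters, stride 1, padding 1 -> output stays 32x32.
print(conv_output_size(32, 3, 1, 1))   # 32
# POOL with a 2x2 window and stride 2 halves the spatial size: 32 -> 16.
print(conv_output_size(32, 2, 2, 0))   # 16
```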
As of 2015, per STRIVING FOR SIMPLICITY: THE ALL CONVOLUTIONAL NET, the recommendation is to remove the pooling layers (using strided convolutions instead); removing the normalization layers is also recommended.
INPUT -> [[CONV -> RELU]*N -> POOL?]*M -> [FC -> RELU]*K -> FC
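Here is a minimal PyTorch sketch of that pattern with N = 2, M = 2, K = 1; the channel counts, 3x3 filters, and 32x32x3 input are illustrative choices of mine:

```python
import torch
import torch.nn as nn

# INPUT -> [[CONV -> RELU]*N -> POOL?]*M -> [FC -> RELU]*K -> FC
net = nn.Sequential(
    # block 1: [CONV -> RELU]*2 -> POOL
    nn.Conv2d(3, 12, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(12, 12, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                        # 32x32 -> 16x16
    # block 2: [CONV -> RELU]*2 -> POOL
    nn.Conv2d(12, 24, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(24, 24, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                        # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(24 * 8 * 8, 128), nn.ReLU(),  # [FC -> RELU]*1
    nn.Linear(128, 10),                     # final FC: class scores
)

x = torch.randn(4, 3, 32, 32)               # a batch of 32x32 RGB images
print(net(x).shape)                         # torch.Size([4, 10])
```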
It seems FC and CONV layers are functionally equivalent (either can be converted into the other) and can be interchanged. Some other techniques/layer types:
1x1 convolution
Dilated convolutions (acting on spaced-out pixels)
Replacing max pooling with ROI (region of interest) pooling
Loss layer – represents the overall error
Dropout layer – regularization by dropping a unit with probability p
DropConnect – regularization by dropping connections instead of units
Stochastic pooling
Other training details: weight decay = 0.001; image whitening and contrast normalization in preprocessing.
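A sketch of how those two details might appear in practice: weight decay handed to the optimizer, and a simple per-image contrast normalization in preprocessing. The optimizer choice and the normalization formula are assumptions; full whitening (e.g. ZCA) would additionally decorrelate the pixels.

```python
import torch
import torch.nn as nn

model = nn.Linear(32 * 32 * 3, 10)   # stand-in model

# Weight decay = 0.001, applied through the optimizer.
opt = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=0.001)

def contrast_normalize(img: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Per-image contrast normalization: subtract the mean and divide by
    the standard deviation."""
    return (img - img.mean()) / (img.std() + eps)

img = torch.rand(3, 32, 32)
print(contrast_normalize(img).mean().item())   # approximately 0
```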