TensorFlow Study Notes (2)
TF Convolution Layer
A convolution layer is built with tf.nn.conv2d(), which applies the filters, and tf.nn.bias_add(), which adds a per-channel bias to the result.
The padding argument can be 'SAME' (zero-pad the input so that, at stride 1, the output keeps the input's spatial size) or 'VALID' (no padding, so the output shrinks by filter_size - 1 in each spatial dimension).
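A minimal sketch of the two calls above, run in eager mode; the input and filter shapes here are assumptions chosen to show how 'SAME' and 'VALID' padding change the output size:

```python
import tensorflow as tf

# Hypothetical shapes: one 28x28 grayscale image, 32 filters of size 5x5.
x = tf.random.normal([1, 28, 28, 1])   # [batch, height, width, channels]
W = tf.random.normal([5, 5, 1, 32])    # [filter_h, filter_w, in_ch, out_ch]
b = tf.zeros([32])                     # one bias per output channel

same = tf.nn.bias_add(
    tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME'), b)
valid = tf.nn.bias_add(
    tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='VALID'), b)

print(same.shape)   # (1, 28, 28, 32): zero-padded, spatial size preserved
print(valid.shape)  # (1, 24, 24, 32): 28 - 5 + 1 = 24, no padding
```

Note that the strides list, like ksize below, is ordered [batch, height, width, channels].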
Pooling
Conceptually, the benefit of the max pooling operation is to reduce the size of the input (which can help prevent overfitting) and to let the network focus on only the most important elements. Max pooling does this by retaining only the maximum value in each filtered area and discarding the remaining values.
The 4-element lists ksize and strides correspond to the dimensions of the input tensor ([batch, height, width, channels]); the batch and channel entries are typically set to 1.
Recently, pooling layers have fallen out of favor. Some reasons are:
- Recent datasets are so big and complex we’re more concerned about underfitting.
- Dropout is a much better regularizer.
- Pooling results in a loss of information. Think about the max pooling operation as an example. We only keep the largest of n numbers, thereby disregarding n-1 numbers completely.
Walk Through MNIST Again with a CNN
Define the model:
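Since the original model definition did not survive, here is a sketch of a typical two-convolution-layer MNIST model built from the ops above (tf.nn.conv2d, tf.nn.bias_add, tf.nn.max_pool2d). The filter counts (32, 64), filter size (5x5), and fully-connected width (1024) are assumptions, not the author's exact values:

```python
import tensorflow as tf

# Hypothetical parameters; shapes follow [filter_h, filter_w, in_ch, out_ch].
weights = {
    'wc1': tf.random.normal([5, 5, 1, 32]),
    'wc2': tf.random.normal([5, 5, 32, 64]),
    'wd1': tf.random.normal([7 * 7 * 64, 1024]),
    'out': tf.random.normal([1024, 10]),
}
biases = {
    'bc1': tf.zeros([32]),
    'bc2': tf.zeros([64]),
    'bd1': tf.zeros([1024]),
    'out': tf.zeros([10]),
}

def conv2d(x, W, b, strides=1):
    # Convolution, bias, then ReLU non-linearity.
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    return tf.nn.relu(tf.nn.bias_add(x, b))

def maxpool2d(x, k=2):
    # k x k max pooling with stride k halves the spatial size.
    return tf.nn.max_pool2d(x, ksize=[1, k, k, 1],
                            strides=[1, k, k, 1], padding='SAME')

def conv_net(x):
    conv1 = conv2d(x, weights['wc1'], biases['bc1'])      # 28x28x32
    conv1 = maxpool2d(conv1)                              # 14x14x32
    conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])  # 14x14x64
    conv2 = maxpool2d(conv2)                              # 7x7x64
    fc1 = tf.reshape(conv2, [-1, 7 * 7 * 64])             # flatten
    fc1 = tf.nn.relu(tf.matmul(fc1, weights['wd1']) + biases['bd1'])
    return tf.matmul(fc1, weights['out']) + biases['out']  # 10 logits

logits = conv_net(tf.random.normal([4, 28, 28, 1]))
print(logits.shape)  # (4, 10): one logit per digit class
```

Each pooling step halves the 28x28 input (to 14x14, then 7x7), which is why the first fully-connected weight matrix has 7 * 7 * 64 input rows.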