Question
因为层"一词在应用于卷积层时通常具有不同的含义(有些将通过池化将所有内容视为一个单独的层,其他将卷积,非线性和池化视为独立的层"; 请参见图9.7 ),目前尚不清楚我在卷积层中的何处应用dropout.
Because the word "layer" often means different things when applied to a convolutional layer (some treat everything up through pooling as a single layer, others treat convolution, nonlinearity, and pooling as separate "layers"; see fig. 9.7), it's not clear to me where dropout should be applied in a convolutional layer.
Does dropout happen between nonlinearity and pooling?
E.g., in TensorFlow would it be something like:
kernel_logits = tf.nn.conv2d(input_tensor, ...) + biases
activations = tf.nn.relu(kernel_logits)
kept_activations = tf.nn.dropout(activations, keep_prob)
output = pool_fn(kept_activations, ...)
Answer
You could probably try applying dropout in different places, but in terms of preventing overfitting I'm not sure you're going to see much of a problem before pooling. What I've seen for CNNs is that tensorflow.nn.dropout gets applied AFTER the non-linearity and pooling:
# Create a convolution + maxpool layer for each filter size
pooled_outputs = []
for i, filter_size in enumerate(filters):
    with tf.name_scope("conv-maxpool-%s" % filter_size):
        # Convolution Layer
        filter_shape = [filter_size, embedding_size, 1, num_filters]
        W = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.1), name="W")
        b = tf.Variable(tf.constant(0.1, shape=[num_filters]), name="b")
        conv = tf.nn.conv2d(
            self.embedded_chars_expanded,
            W,
            strides=[1, 1, 1, 1],
            padding="VALID",
            name="conv")
        # Apply nonlinearity
        h = tf.nn.relu(tf.nn.bias_add(conv, b), name="relu")
        # Maxpooling over the outputs
        pooled = tf.nn.max_pool(
            h,
            ksize=[1, sequence_length - filter_size + 1, 1, 1],
            strides=[1, 1, 1, 1],
            padding='VALID',
            name="pool")
        pooled_outputs.append(pooled)

# Combine all the pooled features
num_filters_total = num_filters * len(filters)
self.h_pool = tf.concat(3, pooled_outputs)
self.h_pool_flat = tf.reshape(self.h_pool, [-1, num_filters_total])

# Add dropout
with tf.name_scope("dropout"):
    self.h_drop = tf.nn.dropout(self.h_pool_flat, self.dropout_keep_prob)
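For reference, here is a minimal sketch of the same placement written with the newer tf.keras API (my addition, not part of the original answer; the layer sizes, the 0.5 drop rate, and the 28x28x1 input shape are illustrative assumptions). Note that tf.keras.layers.Dropout takes the fraction of units to drop, whereas the tf.nn.dropout call above takes a keep probability:

import tensorflow as tf

model = tf.keras.Sequential([
    # Convolution + ReLU nonlinearity
    tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu",
                           input_shape=(28, 28, 1)),
    # Pooling
    tf.keras.layers.MaxPooling2D(pool_size=2),
    # Dropout AFTER pooling, mirroring the answer above
    # (the question's alternative would place it between the ReLU and the pooling)
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()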