Multilayer Perceptron
Input -> linear transform -> ReLU activation -> linear transform -> Softmax classification
The multilayer perceptron lifts the MNIST result to roughly the 98% level.
Key points
Overfitting: addressed with dropout, which is essentially a bagging-style method, comparable to ensemble learning. Note that during training keep_prob is set to a fraction between 0 and 1, while at test time it is set to 1 so that no units are dropped (see the dropout sketch after this list).
Hard-to-set learning rates: use adaptive learning-rate methods such as Adagrad (see the Adagrad sketch after this list).
Vanishing gradients in deep networks: replace sigmoid with ReLU as the hidden-layer activation; the output layer still uses a softmax (sigmoid-like) activation.
For the ReLU activation, a truncated normal initialization is commonly used to avoid zero gradients and complete symmetry.
For the softmax classifier (a sigmoid-like activation), which is most sensitive near 0, all-zero initial weights are used.
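A minimal sketch of the dropout behavior described above, assuming TensorFlow 1.x: tf.nn.dropout keeps each unit with probability keep_prob and scales the survivors by 1/keep_prob during training, while keep_prob = 1.0 makes it an identity at test time (the toy tensor and values here are illustrative, not part of the original code):

    import tensorflow as tf

    x = tf.ones([1, 4])                      # a toy activation vector
    keep_prob = tf.placeholder(tf.float32)
    dropped = tf.nn.dropout(x, keep_prob)    # kept units are scaled by 1/keep_prob

    with tf.Session() as sess:
        # random mask, so the zero pattern varies; survivors become 1/0.5 = 2.0
        print(sess.run(dropped, {keep_prob: 0.5}))   # e.g. [[2. 0. 2. 0.]]
        # keep_prob = 1.0: nothing is dropped, identity mapping for testing
        print(sess.run(dropped, {keep_prob: 1.0}))   # [[1. 1. 1. 1.]]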
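And a toy NumPy sketch of the Adagrad update rule (not TensorFlow's internal implementation): the squared gradients are accumulated per parameter, so each parameter's effective learning rate shrinks as training proceeds. The objective f(w) = w^2 is a hypothetical example for illustration:

    import numpy as np

    lr, eps = 0.3, 1e-8
    w = 5.0
    cache = 0.0                                # running sum of squared gradients
    for step in range(100):
        g = 2.0 * w                            # gradient of f(w) = w^2
        cache += g ** 2                        # accumulate squared gradient
        w -= lr * g / (np.sqrt(cache) + eps)   # effective step size shrinks over time
    print(w)                                   # w approaches 0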
The code is as follows:
# Author : Hellcat
# Time : 2017/12/7

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('../../../Mnist_data', one_hot=True)
sess = tf.InteractiveSession()

in_units = 784
h1_units = 300

# For the ReLU layer, a truncated normal initialization avoids zero gradients and complete symmetry
# For the softmax output layer (sigmoid-like, most sensitive near 0), all-zero initial weights are used
W1 = tf.Variable(tf.truncated_normal([in_units, h1_units], stddev=0.1))
b1 = tf.Variable(tf.zeros([h1_units], dtype=tf.float32))
W2 = tf.Variable(tf.zeros([h1_units, 10], dtype=tf.float32))
b2 = tf.Variable(tf.zeros([10], dtype=tf.float32))

x = tf.placeholder(tf.float32, [None, in_units])
y_ = tf.placeholder(tf.float32, [None, 10])
keep_prob = tf.placeholder(tf.float32)

# hidden layer with ReLU activation and dropout, softmax output layer
hidden1 = tf.nn.relu(tf.add(tf.matmul(x, W1), b1))
hidden1_drop = tf.nn.dropout(hidden1, keep_prob)
y = tf.nn.softmax(tf.add(tf.matmul(hidden1_drop, W2), b2))

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), axis=1))
# train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
train_step = tf.train.AdagradOptimizer(0.3).minimize(cross_entropy)

correct_prediction = tf.equal(tf.argmax(y, axis=1), tf.argmax(y_, axis=1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

tf.global_variables_initializer().run()
for i in range(3000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    train_step.run({x: batch_xs, y_: batch_ys, keep_prob: 0.5})
    if i % 100 == 0:
        print('Step {0}, training accuracy {1:.3f}'.
              format(i, accuracy.eval({x: batch_xs, y_: batch_ys, keep_prob: 1.0})))
print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
The output (omitted here) prints the training accuracy every 100 steps and ends with a final test accuracy of about 98%.
Interestingly, the tf.train.AdagradOptimizer() optimizer occasionally throws an error; switching to the gradient descent optimizer and then back again makes the problem go away. It may just be an issue with my interpreter.
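One possible cause (an assumption, not a confirmed diagnosis) is numerical instability: the hand-written loss calls tf.log(y), which produces -inf/NaN once the softmax output saturates to exactly 0, and Adagrad's accumulated squared gradients then stay broken. A common workaround is to compute the loss directly from the pre-softmax logits; a minimal sketch, reusing the variables defined in the code above:

    # compute the loss from logits for numerical stability
    logits = tf.add(tf.matmul(hidden1_drop, W2), b2)
    y = tf.nn.softmax(logits)   # still available for prediction / accuracy
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))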