This experiment uses the mnist.npz dataset. It can be imported online, but my downloads kept getting interrupted by network problems, so I import it offline instead. The offline package has been uploaded to GitHub so everyone can download it easily:

https://github.com/guangfuhao/Deeplearning/blob/master/mnist.npz (mnist.npz download)

The full code is listed below:

#1.Import the necessary libraries
import numpy as np
import tensorflow as tf
import matplotlib
from matplotlib import pyplot as plt
########################################################################
#2.Set default parameters for plots
matplotlib.rcParams['font.size'] = 20
matplotlib.rcParams['figure.titlesize'] = 20
matplotlib.rcParams['figure.figsize'] = [9, 7]
matplotlib.rcParams['font.family'] = ['STKaiTi']
matplotlib.rcParams['axes.unicode_minus']=False
########################################################################
#3.Initialize Parameters
#Initialize learning rate
lr = 1e-3
#Initialize loss array
losses = []
#Initialize the weight layers and the bias layers
w1=tf.Variable(tf.random.truncated_normal([784,256],stddev=0.1))
b1=tf.Variable(tf.zeros([256]))
w2=tf.Variable(tf.random.truncated_normal([256,128],stddev=0.1))
b2=tf.Variable(tf.zeros([128]))
w3=tf.Variable(tf.random.truncated_normal([128,10],stddev=0.1))
b3=tf.Variable(tf.zeros([10]))
########################################################################
#4.Import the mnist dataset with numpy (offline)
def load_mnist():
    #Define the directory where mnist.npz is (please watch the '\'!)
    path = r'F:\learning\machineLearning\forward_progression\mnist.npz'
    f = np.load(path)
    x_train, y_train = f['x_train'],f['y_train']
    x_test, y_test = f['x_test'],f['y_test']
    f.close()
    return (x_train, y_train), (x_test, y_test)

(train_image,train_label),_ = load_mnist()
x = tf.convert_to_tensor(train_image, dtype=tf.float32) / 255.
y = tf.convert_to_tensor(train_label, dtype=tf.int32)
#Reshape x from [60k, 28, 28] to [60k, 28*28]
x=tf.reshape(x,[-1,28*28])
########################################################################
#5.Combine x and y as a tuple and batch them
train_db = tf.data.Dataset.from_tensor_slices((x,y)).batch(128)
'''
#Encapsulate train_db as an iterator object
train_iter = iter(train_db)
sample = next(train_iter)
'''
########################################################################
#6.Iterate over the dataset for 20 epochs
for epoch in range(20):
    #For every batch: x: [128, 28*28], y: [128]
    for step, (x, y) in enumerate(train_db):
        with tf.GradientTape() as tape:  # the tape watches tf.Variable objects automatically
            # x: [b, 28*28]
            # h1 = x@w1 + b1
            # [b, 784]@[784, 256] + [256] => [b, 256] + [256] => [b, 256] + [b, 256]
            h1 = x@w1 + tf.broadcast_to(b1, [x.shape[0], 256])
            h1 = tf.nn.relu(h1)
            # [b, 256] => [b, 128]
            h2 = h1@w2 + b2
            h2 = tf.nn.relu(h2)
            # [b, 128] => [b, 10]
            out = h2@w3 + b3
            # y: [b] => [b, 10]
            y_onehot = tf.one_hot(y, depth=10)
            # compute loss: mse = mean((y-out)^2)
            # [b, 10]
            loss = tf.square(y_onehot - out)
            # mean: scalar
            loss = tf.reduce_mean(loss)
        # compute gradients
        grads = tape.gradient(loss, [w1, b1, w2, b2, w3, b3])
        #Update the weights and the biases
        w1.assign_sub(lr * grads[0])
        b1.assign_sub(lr * grads[1])
        w2.assign_sub(lr * grads[2])
        b2.assign_sub(lr * grads[3])
        w3.assign_sub(lr * grads[4])
        b3.assign_sub(lr * grads[5])
        if step % 100 == 0:
            print(epoch, step, 'loss:', float(loss))
    #Record the loss at the end of each epoch
    losses.append(float(loss))
########################################################################
#7.Show the change of losses via matplotlib
plt.figure()
plt.plot(losses, color='C0', marker='s', label='训练')
plt.xlabel('Epoch')
plt.legend()
plt.ylabel('MSE')
#Save figure as '.svg' file
#plt.savefig('forward.svg')
plt.show()

Part 1 needs little explanation: it imports the numpy, tensorflow, matplotlib, and pyplot libraries.

import numpy as np
import tensorflow as tf
import matplotlib
from matplotlib import pyplot as plt

Part 2 sets some default matplotlib plotting parameters.

pyplot uses rc configuration files to customize all kinds of default figure properties, known as rc configuration or rc parameters. Through rc parameters you can change the defaults, including window size, dots per inch, line width, color and style, axes, tick and grid properties, text, fonts, and so on.

font.size is the font size, figure.titlesize is the figure title size, figure.figsize is the displayed figure size, font.family is set to STKaiTi so that Chinese text renders correctly, and axes.unicode_minus=False makes the minus sign display properly with this font.

matplotlib.rcParams['font.size'] = 20
matplotlib.rcParams['figure.titlesize'] = 20
matplotlib.rcParams['figure.figsize'] = [9, 7]
matplotlib.rcParams['font.family'] = ['STKaiTi']
matplotlib.rcParams['axes.unicode_minus']=False

Part 3 initializes some parameters. lr is the learning rate (when I changed lr to 1e-2 the final losses became smaller, but for now I do not know how this value affects the network's final performance); it controls how fast the parameters move in each gradient-descent step. losses stores the loss recorded at the end of each epoch. The three weight layers are initialized from a truncated normal distribution (in tf.random.truncated_normal, any sample that falls outside the interval (μ-2σ, μ+2σ) is redrawn, which keeps the generated values near the mean; a quick check of this is sketched after the code below), and the bias layers are initialized to zero.

#Initialize learning rate
lr = 1e-3
#Initialize loss array
losses = []
#Initialize the weights layers and the bias layers
w1=tf.Variable(tf.random.truncated_normal([784,256],stddev=0.1))
b1=tf.Variable(tf.zeros([256]))
w2=tf.Variable(tf.random.truncated_normal([256,128],stddev=0.1))
b2=tf.Variable(tf.zeros([128]))
w3=tf.Variable(tf.random.truncated_normal([128,10],stddev=0.1))
b3=tf.Variable(tf.zeros([10]))
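
As a quick sanity check of that truncation property (my own minimal sketch, not part of the original script), the sampled values should all fall within two standard deviations of the mean, i.e. inside (-0.2, 0.2) for stddev=0.1:

import tensorflow as tf

# Minimal sketch: truncated_normal redraws any sample outside mean ± 2*stddev
w = tf.random.truncated_normal([784, 256], stddev=0.1)
print(float(tf.reduce_min(w)), float(tf.reduce_max(w)))  # both values should lie within (-0.2, 0.2)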

Part 4 imports the mnist dataset and preprocesses the shape of x. Here path is the location of your locally downloaded mnist.npz; note the backslashes ('\') in the path!

def load_mnist():
    #Define the directory where mnist.npz is (please watch the '\'!)
    path = r'F:\learning\machineLearning\forward_progression\mnist.npz'
    f = np.load(path)
    x_train, y_train = f['x_train'],f['y_train']
    x_test, y_test = f['x_test'],f['y_test']
    f.close()
    return (x_train, y_train), (x_test, y_test)

(train_image,train_label),_ = load_mnist()
x = tf.convert_to_tensor(train_image, dtype=tf.float32) / 255.
y = tf.convert_to_tensor(train_label, dtype=tf.int32)
#Reshape x from [60k, 28, 28] to [60k, 28*28]
x=tf.reshape(x,[-1,28*28])
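
If you want to confirm what load_mnist() returns before training, a small sketch (assuming the standard 60k-train / 10k-test split inside mnist.npz; the test_image/test_label names are just for illustration) is to print the shapes and pixel range:

(train_image, train_label), (test_image, test_label) = load_mnist()
print(train_image.shape, train_label.shape)   # expected: (60000, 28, 28) (60000,)
print(test_image.shape, test_label.shape)     # expected: (10000, 28, 28) (10000,)
print(train_image.min(), train_image.max())   # raw pixels are 0..255, hence the division by 255.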

Part 5 splits the dataset into batches of 128 samples each (why the batch size should be 128 is an open question for me; I tried 200 and 100 and saw no noticeable difference). For what Batch and Epoch mean, skip down to the end of this post.

train_db = tf.data.Dataset.from_tensor_slices((x,y)).batch(128)
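
To see what this batching produces, the iterator lines that are commented out in the full code can be used like this (a small sketch; the shapes assume the 128-sample batches above):

train_iter = iter(train_db)             # wrap the dataset in an iterator
sample_x, sample_y = next(train_iter)   # fetch one batch
print(sample_x.shape, sample_y.shape)   # (128, 784) (128,); the last batch of an epoch may be smaller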

Part 6 iterates over the dataset for 20 epochs and computes the loss with MSE. Below is an explanation of MSE:

[Figure: definition of mean squared error (MSE)]
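
For concreteness, here is a tiny hypothetical example (my own, not from the original post) of the same MSE computation used in the training loop: the mean over all elements of the squared difference between the one-hot label and the network output.

import tensorflow as tf

# Hypothetical toy values: 2 samples, 3 classes
y_onehot = tf.constant([[1., 0., 0.],
                        [0., 1., 0.]])
out = tf.constant([[0.8, 0.1, 0.1],
                   [0.2, 0.6, 0.2]])

loss = tf.square(y_onehot - out)   # element-wise squared error, shape [2, 3]
loss = tf.reduce_mean(loss)        # average over all 6 elements -> scalar
print(float(loss))                 # ≈ 0.05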

tf.GradientTape (gradient tape)

__init__(persistent=False,watch_accessed_variables=True)
Purpose: creates a new GradientTape.
Parameters:

persistent: a boolean specifying whether the newly created gradient tape is persistent. The default is False, which means gradient() can be called only once on it.

watch_accessed_variables: a boolean indicating whether the gradient tape automatically watches (tracks) any trainable variables it accesses. The default is True. If it is False, you have to manually specify the variables you want to watch.

All of the forward computation below has to be wrapped in the with tf.GradientTape() as tape context, so that the computation-graph information is recorded during the forward pass and can be used for the backward (gradient) computation. assign_sub() subtracts the given value in place, implementing the parameters' self-update.
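
Before the full training loop, here is a minimal standalone sketch (my own illustration, not from the book) of how GradientTape and assign_sub work together on a single variable:

import tensorflow as tf

w = tf.Variable(3.0)               # a trainable variable, watched by the tape automatically
with tf.GradientTape() as tape:
    loss = w * w                   # forward computation recorded on the tape
grad = tape.gradient(loss, w)      # d(w^2)/dw = 2w = 6.0
w.assign_sub(0.1 * grad)           # in-place update: w <- w - lr * grad = 3.0 - 0.6 = 2.4
print(float(grad), float(w))       # 6.0 2.4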

for epoch in range(20):
    #For every batch: x: [128, 28*28], y: [128]
    for step, (x, y) in enumerate(train_db):
        with tf.GradientTape() as tape:  # the tape watches tf.Variable objects automatically
            # x: [b, 28*28]
            # h1 = x@w1 + b1
            # [b, 784]@[784, 256] + [256] => [b, 256] + [256] => [b, 256] + [b, 256]
            h1 = x@w1 + tf.broadcast_to(b1, [x.shape[0], 256])
            h1 = tf.nn.relu(h1)
            # [b, 256] => [b, 128]
            h2 = h1@w2 + b2
            h2 = tf.nn.relu(h2)
            # [b, 128] => [b, 10]
            out = h2@w3 + b3
            # y: [b] => [b, 10]
            y_onehot = tf.one_hot(y, depth=10)
            # compute loss: mse = mean((y-out)^2)
            # [b, 10]
            loss = tf.square(y_onehot - out)
            # mean: scalar
            loss = tf.reduce_mean(loss)
        # compute gradients
        grads = tape.gradient(loss, [w1, b1, w2, b2, w3, b3])
        #Update the weights and the biases
        w1.assign_sub(lr * grads[0])
        b1.assign_sub(lr * grads[1])
        w2.assign_sub(lr * grads[2])
        b2.assign_sub(lr * grads[3])
        w3.assign_sub(lr * grads[4])
        b3.assign_sub(lr * grads[5])
        if step % 100 == 0:
            print(epoch, step, 'loss:', float(loss))
    #Record the loss at the end of each epoch
    losses.append(float(loss))

Part 7 plots how losses change as training goes on.

plt.figure()
plt.plot(losses, color='C0', marker='s', label='训练')
plt.xlabel('Epoch')
plt.legend()
plt.ylabel('MSE')
#Save figure as '.svg' file
#plt.savefig('forward.svg')
plt.show()

The figure below shows the final loss curve:

[Figure: the final MSE loss curve]

A plain-language explanation of Batch and Epoch (adapted from https://blog.csdn.net/weixin_42137700/article/details/84302045):

Suppose you have a dataset with 200 samples (rows of data), and you choose a batch size of 5 and 1,000 epochs.

This means the dataset will be divided into 40 batches, each with 5 samples. The model weights are updated after every batch of 5 samples.

It also means that one epoch involves 40 batches, i.e. 40 model updates.

With 1,000 epochs, the model is exposed to (passes over) the entire dataset 1,000 times. That is a total of 40,000 batches over the whole training process.
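
The same arithmetic written out in Python (using the numbers from the example above):

samples, batch_size, epochs = 200, 5, 1000
batches_per_epoch = samples // batch_size     # 200 / 5 = 40 batches (40 weight updates) per epoch
total_batches = batches_per_epoch * epochs    # 40 * 1000 = 40,000 batches over the whole run
print(batches_per_epoch, total_batches)       # 40 40000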
