Problem Description
I'm trying to approximate noisy data from the sin(2x) function using a multilayer perceptron:
import tensorflow as tf
import matplotlib.pyplot as plt

# Get data (gen_datasets() and add_noise() are helper functions defined elsewhere in my code)
datasets = gen_datasets()
# Add noise
datasets["ysin_train"] = add_noise(datasets["ysin_train"])
datasets["ysin_test"] = add_noise(datasets["ysin_test"])
# Extract wanted data
patterns_train = datasets["x_train"]
targets_train = datasets["ysin_train"]
patterns_test = datasets["x_test"]
targets_test = datasets["ysin_test"]
# Reshape to fit model
patterns_train = patterns_train.reshape(62, 1)
targets_train = targets_train.reshape(62, 1)
patterns_test = patterns_test.reshape(62, 1)
targets_test = targets_test.reshape(62, 1)
# Parameters
learning_rate = 0.001
training_epochs = 10000
batch_size = patterns_train.shape[0]
display_step = 1
# Network Parameters
n_hidden_1 = 2
n_hidden_2 = 2
n_input = 1
n_classes = 1
# tf Graph input
X = tf.placeholder("float", [None, n_input])
Y = tf.placeholder("float", [None, n_classes])
# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}
# Create model
def multilayer_perceptron(x):
    # Hidden fully connected layer with 2 neurons
    layer_1 = tf.sigmoid(tf.add(tf.matmul(x, weights['h1']), biases['b1']))
    # Hidden fully connected layer with 2 neurons
    layer_2 = tf.sigmoid(tf.add(tf.matmul(layer_1, weights['h2']), biases['b2']))
    # Output fully connected layer
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer
# Construct model
logits = multilayer_perceptron(X)
# Define loss and optimizer
loss_op = tf.reduce_mean(tf.losses.absolute_difference(labels = Y, predictions = logits, reduction=tf.losses.Reduction.NONE))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# Initializing the variables
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    # Training Cycle
    for epoch in range(training_epochs):
        _ = sess.run(train_op, feed_dict={X: patterns_train,
                                          Y: targets_train})
        c = sess.run(loss_op, feed_dict={X: patterns_test,
                                         Y: targets_test})
        if epoch % display_step == 0:
            print("Epoch: {0: 4} cost={1:9}".format(epoch+1, c))
    print("Optimization finished!")
    outputs = sess.run(logits, feed_dict={X: patterns_test})
    print("outputs: {0}".format(outputs.T))
plt.plot(patterns_test, outputs, "r.", label="outputs")
plt.plot(patterns_test, targets_test, "b.", label="targets")
plt.legend()
plt.show()
When I plot this at the end, I get a straight line, as if I have a linear network. Take a look at the plot:
This is a correct minimization of the error for a linear network. But I shouldn't have a linear network, because I'm using the sigmoid function in my multilayer_perceptron() function! Why is my network behaving like this?
Recommended Answer
The default value of stddev=1.0 in tf.random_normal, which you use for the weight & bias initialization, is huge. Try an explicit value of stddev=0.01 for the weights; as for the biases, common practice is to initialize them to zero.
As an initial approach, I would also try a higher learning_rate of 0.01 (or maybe not - see the answer in a related question here).
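For reference, that is a one-line change to the hyperparameters near the top of the script (whether it actually helps on this data is not guaranteed):

learning_rate = 0.01  # instead of 0.001; combine with the smaller weight stddev above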