I am trying to use a multilayer perceptron to estimate noisy data drawn from the sin(2x) function:
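(gen_datasets() and add_noise() are not shown; a minimal sketch of such helpers, purely as an assumption about the 62-point grids and the noise scale, could look like this:)

import numpy as np

def gen_datasets():
    # Assumed reconstruction: 62 samples each for train and test
    x_train = np.arange(0.0, 6.2, 0.1)    # 62 points on [0, 6.2)
    x_test = np.arange(0.05, 6.25, 0.1)   # 62 points on an offset grid
    return {
        "x_train": x_train,
        "ysin_train": np.sin(2 * x_train),
        "x_test": x_test,
        "ysin_test": np.sin(2 * x_test),
    }

def add_noise(y, scale=0.1):
    # Assumed: additive zero-mean Gaussian noise
    return y + np.random.normal(0.0, scale, y.shape)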

import tensorflow as tf
import matplotlib.pyplot as plt

# Get data
datasets = gen_datasets()
# Add noise
datasets["ysin_train"] = add_noise(datasets["ysin_train"])
datasets["ysin_test"] = add_noise(datasets["ysin_test"])
# Extract wanted data
patterns_train = datasets["x_train"]
targets_train = datasets["ysin_train"]
patterns_test = datasets["x_test"]
targets_test = datasets["ysin_test"]
# Reshape to fit model
patterns_train = patterns_train.reshape(62, 1)
targets_train = targets_train.reshape(62, 1)
patterns_test = patterns_test.reshape(62, 1)
targets_test = targets_test.reshape(62, 1)

# Parameters
learning_rate = 0.001
training_epochs = 10000
batch_size = patterns_train.shape[0]
display_step = 1

# Network Parameters
n_hidden_1 = 2
n_hidden_2 = 2
n_input = 1
n_classes = 1

# tf Graph input
X = tf.placeholder("float", [None, n_input])
Y = tf.placeholder("float", [None, n_classes])

# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# Create model
def multilayer_perceptron(x):
    # Hidden fully connected layer with 2 neurons
    layer_1 = tf.sigmoid(tf.add(tf.matmul(x, weights['h1']), biases['b1']))
    # Hidden fully connected layer with 2 neurons
    layer_2 = tf.sigmoid(tf.add(tf.matmul(layer_1, weights['h2']), biases['b2']))
    # Output fully connected layer
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer

# Construct model
logits = multilayer_perceptron(X)

# Define loss (mean absolute error) and optimizer
loss_op = tf.reduce_mean(tf.losses.absolute_difference(labels=Y, predictions=logits, reduction=tf.losses.Reduction.NONE))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)

# Initializing the variables
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)

    # Training Cycle
    for epoch in range(training_epochs):

        _ = sess.run(train_op, feed_dict={X: patterns_train,
                                          Y: targets_train})
        c = sess.run(loss_op, feed_dict={X: patterns_test,
                                         Y: targets_test})
        if epoch % display_step == 0:
            print("Epoch: {0: 4} cost={1:9}".format(epoch+1, c))
    print("Optimization finished!")
    outputs = sess.run(logits, feed_dict={X: patterns_test})
    print("outputs: {0}".format(outputs.T))
    plt.plot(patterns_test, outputs, "r.", label="outputs")
    plt.plot(patterns_test, targets_test, "b.", label="targets")
    plt.legend()
    plt.show()


When I plot the results at the end, I get a straight line, as if the network were linear. Here is the plot:

[Plot: multilayer perceptron with sigmoid activations produces a straight line on the sin(2x) regression]

This is the correct error minimization for a linear network. But I shouldn't be getting a linear fit, since I use sigmoid activations in the multilayer_perceptron() function! Why does my network behave like this?

Best Answer

The default stddev=1.0 of tf.random_normal, which you use to initialize both the weights and the biases, is large. Try an explicit stddev=0.01 for the weights; as for the biases, common practice is to initialize them to zero.
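As a minimal sketch, the weight and bias definitions above would become:

weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1], stddev=0.01)),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], stddev=0.01)),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes], stddev=0.01))
}
biases = {
    'b1': tf.Variable(tf.zeros([n_hidden_1])),
    'b2': tf.Variable(tf.zeros([n_hidden_2])),
    'out': tf.Variable(tf.zeros([n_classes]))
}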

As a first pass, I would also try a higher learning_rate of 0.01 (or perhaps not; see the answer in the related question here).
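If you do try the larger step size, the only change is:

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)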
