This article walks through approximating a sine function with a neural network; it may be a useful reference for anyone debugging a similar setup.

Problem description

After spending days failing to use a neural network for Q-learning, I decided to go back to the basics and do a simple function approximation, to see if everything was working correctly and how some parameters affect the learning process. Here is the code that I came up with:

    from keras.models import Sequential
    from keras.layers import Dense
    import matplotlib.pyplot as plt
    import random
    import numpy
    from sklearn.preprocessing import MinMaxScaler
    
    regressor = Sequential()
    regressor.add(Dense(units=20, activation='sigmoid', kernel_initializer='uniform', input_dim=1))
    regressor.add(Dense(units=20, activation='sigmoid', kernel_initializer='uniform'))
    regressor.add(Dense(units=20, activation='sigmoid', kernel_initializer='uniform'))
    regressor.add(Dense(units=1))
    regressor.compile(loss='mean_squared_error', optimizer='sgd')
    #regressor = ExtraTreesRegressor()
    
    N = 5000
    X = numpy.empty((N,))
    Y = numpy.empty((N,))
    
    for i in range(N):
        X[i] = random.uniform(-10, 10)
    X = numpy.sort(X).reshape(-1, 1)
    
    for i in range(N):
        Y[i] = numpy.sin(X[i])
    Y = Y.reshape(-1, 1)
    
    X_scaler = MinMaxScaler()
    Y_scaler = MinMaxScaler()
    X = X_scaler.fit_transform(X)
    Y = Y_scaler.fit_transform(Y)
    
    regressor.fit(X, Y, epochs=2, verbose=1, batch_size=32)
    #regressor.fit(X, Y.reshape(5000,))
    
    x = numpy.mgrid[-10:10:100*1j]
    x = x.reshape(-1, 1)
    y = numpy.mgrid[-10:10:100*1j]
    y = y.reshape(-1, 1)
    x = X_scaler.fit_transform(x)
    
    for i in range(len(x)):
        y[i] = regressor.predict(numpy.array([x[i]]))
    
    plt.figure()
    plt.plot(X_scaler.inverse_transform(x), Y_scaler.inverse_transform(y))
    plt.plot(X_scaler.inverse_transform(X), Y_scaler.inverse_transform(Y))
    

The problem is that all my predictions are around 0 in value. As you can see, I used an ExtraTreesRegressor from sklearn (commented lines) to check that the protocol is actually correct. So what is wrong with my neural network? Why is it not working?
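The sanity check mentioned here can be sketched as a standalone snippet. This is a reconstruction, not the asker's exact code — the original only shows the commented-out `ExtraTreesRegressor()` line, so the `sklearn.ensemble` import path and the default hyperparameters are assumptions:

```python
import numpy
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.preprocessing import MinMaxScaler

# Rebuild the same protocol as above: sorted uniform samples of sin(x),
# inputs and targets squashed into [0, 1] with MinMaxScaler.
rng = numpy.random.default_rng(0)
X = numpy.sort(rng.uniform(-10, 10, 5000)).reshape(-1, 1)
Y = numpy.sin(X)

X_scaler, Y_scaler = MinMaxScaler(), MinMaxScaler()
Xs = X_scaler.fit_transform(X)
Ys = Y_scaler.fit_transform(Y)

# A fully grown tree ensemble interpolates the training points almost
# exactly, so if this fits well, the data pipeline is fine and the
# problem must lie in the network or the training setup.
tree = ExtraTreesRegressor()
tree.fit(Xs, Ys.ravel())
pred = tree.predict(Xs)
mse = numpy.mean((pred - Ys.ravel()) ** 2)
print(mse)  # training MSE is essentially 0
```

If this regressor tracks the sine closely while the network outputs a near-constant value, the fault is isolated to the network itself.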

(The actual problem that I'm trying to solve is to compute the Q function for the mountain car problem using a neural network. How is it different from this function approximator?)

Solution

With these changes:

• Activations to relu
• Remove kernel_initializer (i.e. leave the default 'glorot_uniform')
• Adam optimizer
• 100 epochs

i.e.

    regressor = Sequential()
    regressor.add(Dense(units=20, activation='relu', input_dim=1))
    regressor.add(Dense(units=20, activation='relu'))
    regressor.add(Dense(units=20, activation='relu'))
    regressor.add(Dense(units=1))
    regressor.compile(loss='mean_squared_error', optimizer='adam')
    
    regressor.fit(X, Y, epochs=100, verbose=1, batch_size=32)
    

and the rest of your code unchanged, here is the result:

Tinker, again and again...
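A plausible reading of why these changes matter (my explanation, not part of the original answer): sigmoid saturates, and its derivative never exceeds 0.25, so a gradient passing backward through three sigmoid layers can be attenuated by a factor of up to 0.25³ before reaching the first layer; combined with plain SGD and only 2 epochs, the network barely moves from its initial near-constant output. ReLU's derivative is 1 on its active half, and Adam adapts per-parameter step sizes, which together let the 100 epochs count. A quick numpy check of the derivative bound:

```python
import numpy

def sigmoid(z):
    return 1.0 / (1.0 + numpy.exp(-z))

def dsigmoid(z):
    s = sigmoid(z)
    return s * (1.0 - s)  # peaks at z = 0

z = numpy.linspace(-10.0, 10.0, 10001)
peak = dsigmoid(z).max()
print(peak)        # 0.25

# Worst-case attenuation from three stacked sigmoid layers
# (ignoring the weight matrices themselves):
print(peak ** 3)   # 0.015625

# ReLU's derivative is exactly 1 wherever the unit is active:
relu_grad = numpy.where(z > 0, 1.0, 0.0)
print(relu_grad.max())  # 1.0
```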
