I am trying to estimate systolic blood pressure. I fed 27 PPG features into an ANN and got the results below. Is the learning rate okay? If not, is it too high or too low?
I set the learning rate to 0.000001, but I think it is still too high: the loss seems to drop too quickly.
loss: 5.1285 - mse: 57.7257 - val_loss: 6.0154 - val_mse: 73.9671
# imports
import numpy
import pandas
import sklearn.model_selection
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import EarlyStopping

# import data
data = pandas.read_csv("data.csv", sep=",")
data = data[["cp", "st", "dt", "sw10", "dw10", "sw10+dw10", "dw10/sw10", "sw25", "dw25",
"sw25+dw25", "dw25/sw25", "sw33", "dw33", "sw33+dw33", "dw33/sw33", "sw50",
"dw50", "sw50+dw50", "dw50/sw50", "sw66", "dw66", "sw66+dw66", "dw66/sw66",
"sw75", "dw75", "sw75+dw75", "dw75/sw75", "sys"]]
# data description
described_data = data.describe()
print(described_data)
print(len(data))
# # histograms of input data (features)
# data.hist(figsize=(12, 10))
# plt.show()
# index and shuffle data
data.reset_index(inplace=True, drop=True)
data = data.reindex(numpy.random.permutation(data.index))
# x (parameters) and y (blood pressure) data
predict = "sys"
X = numpy.array(data.drop(columns=[predict]))
y = numpy.array(data[predict])
# Splitting the total data into subsets: 90% - training, 10% - testing
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, test_size=0.1, random_state=0)
def feature_normalize(X):  # standardization function
    mean = numpy.mean(X, axis=0)
    std = numpy.std(X, axis=0)
    return (X - mean) / std
# Features scaling
X_train_standardized = feature_normalize(X_train)
X_test_standardized = feature_normalize(X_test)
# Build the ANN model
model = Sequential()
# Adding the input layer and the first hidden layer
model.add(Dense(25, activation='sigmoid', input_dim=27))
# Adding the second hidden layer
model.add(Dense(units=15, activation='sigmoid'))
# Adding the output layer
model.add(Dense(units=1, activation='linear', kernel_initializer='normal'))
model.summary()
optimizer = keras.optimizers.Adam(learning_rate=0.000001)
# Compiling the model
model.compile(loss='mae', optimizer='adam', metrics=['mse'])
#Early stopping to prevent overfitting
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=10, verbose=1,
                        mode='auto', restore_best_weights=True)
# Fitting the ANN to the Training set
history = model.fit(X_train_standardized, y_train, validation_split=0.2, verbose=2, epochs=1000, batch_size=5)
(Screenshots of the data, the loss curve, and the predictions were attached to the original post.)
Best answer
The learning rate is not being used, because you did not compile the model with your optimizer instance.
# Compiling the model
model.compile(loss='mae', optimizer='adam', metrics=['mse'])
It should be:
# Compiling the model
model.compile(loss='mae', optimizer=optimizer, metrics=['mse'])
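When compiled with the string 'adam', Keras creates a fresh Adam optimizer with its default learning rate of 0.001, so your 1e-6 setting never took effect. A minimal sketch to confirm which rate the compiled model actually trains with (assuming TensorFlow 2.x Keras):

from tensorflow import keras

# compile with the optimizer instance defined earlier
model.compile(loss='mae', optimizer=optimizer, metrics=['mse'])
# inspect the learning rate the model will actually use
print(float(keras.backend.get_value(model.optimizer.learning_rate)))  # 1e-06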
On the question itself: as Half-Blood Prince said, it is hard to tell without knowing your dataset, and the condition of the data itself matters a great deal. I would suggest the following:
Consider scaling the features into the (0, 1) range, which can be done with sklearn.preprocessing.MinMaxScaler (see the first sketch after these suggestions).
Instead of settling on hyperparameters step by step by hand, optimize them against the validation data and then evaluate the final model once on the held-out test set. Hyper-parameter optimization is easy with skopt (see the second sketch below).
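For the first suggestion, a minimal sketch of (0, 1) scaling with MinMaxScaler, using the X_train/X_test splits from the question; the scaler is fitted on the training split only and then reused on the test split, so no test statistics leak into training:

from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler(feature_range=(0, 1))
X_train_scaled = scaler.fit_transform(X_train)  # learn per-feature min/max from the training data
X_test_scaled = scaler.transform(X_test)        # apply the same min/max to the test data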
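For the second suggestion, a minimal sketch of tuning the learning rate with skopt's gp_minimize. The search range, the number of calls, and the build_model helper (standing in for the Sequential model defined in the question) are illustrative assumptions, not part of the original code:

from tensorflow import keras
from skopt import gp_minimize
from skopt.space import Real
from skopt.utils import use_named_args

space = [Real(1e-6, 1e-2, prior='log-uniform', name='learning_rate')]

@use_named_args(space)
def objective(learning_rate):
    model = build_model()  # hypothetical helper that rebuilds the ANN from the question
    model.compile(loss='mae',
                  optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
                  metrics=['mse'])
    history = model.fit(X_train_standardized, y_train, validation_split=0.2,
                        epochs=100, batch_size=5, verbose=0)
    return min(history.history['val_loss'])  # best validation MAE for this rate

result = gp_minimize(objective, space, n_calls=20, random_state=0)
print(result.x)  # learning rate with the lowest validation loss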
Regarding python - Is the learning rate of the Adam method too high?, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/58993754/