I am trying to build an RNN for character recognition and prediction, using a book as input. Each epoch takes several minutes on my local machine, so I tried running it on GCP.
When I execute the code on Google Cloud Platform, I get the error below. The same code runs fine on my local machine under Spyder 3.
# Character Prediction using RNN
# Small LSTM Network to Generate Text for Alice in Wonderland
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.callbacks import ModelCheckpoint
from keras.utils import np_utils
# load ascii text and convert to lowercase
filename = "Alice in Wonderland.txt"
raw_text = open(filename).read()
raw_text = raw_text.lower()
# create mapping of unique chars to integers
chars = sorted(list(set(raw_text)))
char_to_int = dict((c, i) for i, c in enumerate(chars))
# summarize the loaded data
n_chars = len(raw_text)
n_vocab = len(chars)
print ("Total Characters: ", n_chars)
print ("Total Vocab: ", n_vocab)
# prepare the dataset of input to output pairs encoded as integers
seq_length = 100
X_train = []
y_train = []
for i in range(0, n_chars - seq_length, 1):
    seq_in = raw_text[i:i + seq_length]
    seq_out = raw_text[i + seq_length]
    X_train.append([char_to_int[char] for char in seq_in])
    y_train.append(char_to_int[seq_out])
n_patterns = len(X_train)
print ("Total Patterns: ", n_patterns)
# reshape X to be [samples, time steps, features]
X = numpy.reshape(X_train, (len(X_train), seq_length, 1))
# normalize
X = X / float(n_vocab)
# one hot encode the output variable
y = np_utils.to_categorical(y_train)
# define the LSTM model
model = Sequential()
model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2])))
model.add(Dropout(0.2))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
# define the checkpoint
filepath="weights-improvement-{epoch:02d}-{loss:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
# fit the model
model.fit(X, y, epochs=20, batch_size=128, callbacks=callbacks_list)
The error occurs when the LSTM layer is created, on the following line:
model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2])))
This is the error:
File "/root/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2957, in rnn
    maximum_iterations=input_length)
TypeError: while_loop() got an unexpected keyword argument 'maximum_iterations'
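The likely root cause (my reading of the version requirements, not stated in the question): Keras 2.1.3 and later pass a maximum_iterations keyword to tf.while_loop, but TensorFlow only added that argument in 1.5, so a newer Keras paired with an older TensorFlow raises exactly this TypeError. A minimal sketch for checking which versions the GCP instance actually has:

import tensorflow as tf
import keras
# Keras >= 2.1.3 passes maximum_iterations to tf.while_loop;
# TensorFlow only accepts that keyword from 1.5 on, so the pair must match.
print("TensorFlow:", tf.__version__)
print("Keras:", keras.__version__)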
Best Answer
I ran into a similar problem when running on my local machine. These are the steps I followed.
My conda environment is named TESTENV.
Log in to (or switch into) your conda environment with:
source activate TESTENV
Check whether pip is already installed in the conda environment; if not, install it:
conda install pip
Install TensorFlow version 1.4.1:
pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.4.1-py2-none-any.whl
(Note that this wheel is the macOS, Python 2 build; on a Linux GCP instance running Python 3.6, as the traceback above suggests, the matching Linux wheel, or simply pip install tensorflow==1.4.1, would presumably be needed instead.)
Install Keras version 2.1.2:
conda install keras=2.1.2
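As a quick smoke test that the pinned pair works together (a sketch of my own, not part of the original answer), building a small LSTM should now succeed instead of raising the TypeError, since Keras calls into tf.while_loop as soon as the layer is added:

from keras.models import Sequential
from keras.layers import LSTM
model = Sequential()
# this is the kind of call that previously failed with the maximum_iterations TypeError
model.add(LSTM(8, input_shape=(10, 1)))
print("LSTM layer built without the maximum_iterations error")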
About "python - Error when running on GCP: unexpected keyword argument 'maximum_iterations'", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/51015928/