I am trying to train 1000 Sequential models in a loop. On each iteration my program leaks memory until it runs out and I get an OOM exception.
I have asked a similar question before (Training multiple Sequential models in a row slows down) and have seen others run into the same problem (Keras: Out of memory when doing hyper parameter grid search).
The suggested solution is always to add K.clear_session() to your code once you are done with the model. I did that after my previous question, but I am still leaking memory.
Here is code that reproduces the issue:
import random
import time
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras import backend as K
import tracemalloc


def run():
    tracemalloc.start()
    num_input_nodes = 12
    num_hidden_nodes = 8
    num_output_nodes = 1

    random_numbers = random.sample(range(1000), 50)
    train_x, train_y = create_training_dataset(random_numbers, num_input_nodes)

    for i in range(100):
        snapshot = tracemalloc.take_snapshot()
        for j in range(10):
            start_time = time.time()

            nn = Sequential()
            nn.add(Dense(num_hidden_nodes, input_dim=num_input_nodes, activation='relu'))
            nn.add(Dense(num_output_nodes))
            nn.compile(loss='mean_squared_error', optimizer='adam')
            nn.fit(train_x, train_y, nb_epoch=300, batch_size=2, verbose=0)

            K.clear_session()

            print("Iteration {iter}. Current time {t}. Took {elapsed} seconds".
                  format(iter=i*10 + j + 1, t=time.strftime('%H:%M:%S'), elapsed=int(time.time() - start_time)))

        top_stats = tracemalloc.take_snapshot().compare_to(snapshot, 'lineno')

        print("[ Top 5 differences ]")
        for stat in top_stats[:5]:
            print(stat)


def create_training_dataset(dataset, input_nodes):
    """
    Outputs a training dataset (train_x, train_y) as numpy arrays.
    Each item in train_x has 'input_nodes' number of items while train_y items are of size 1
    :param dataset: list of ints
    :param input_nodes:
    :return: (numpy array, numpy array), train_x, train_y
    """
    data_x, data_y = [], []
    for i in range(len(dataset) - input_nodes - 1):
        a = dataset[i:(i + input_nodes)]
        data_x.append(a)
        data_y.append(dataset[i + input_nodes])
    return numpy.array(data_x), numpy.array(data_y)


run()
Here is the output I get from the first memory-debugging print:
/tensorflow/python/framework/ops.py:121: size=3485 KiB (+3485 KiB), count=42343 (+42343)
/tensorflow/python/framework/ops.py:1400: size=998 KiB (+998 KiB), count=8413 (+8413)
/tensorflow/python/framework/ops.py:116: size=888 KiB (+888 KiB), count=32468 (+32468)
/tensorflow/python/framework/ops.py:1185: size=795 KiB (+795 KiB), count=3179 (+3179)
/tensorflow/python/framework/ops.py:2354: size=599 KiB (+599 KiB), count=5886 (+5886)
System information:
Best answer
The memory leak stems from the fact that Keras and TensorFlow use a single "default graph" to store the network structure, and that graph grows with every iteration of the inner for loop.
Calling K.clear_session() frees some of the (backend) state associated with the default graph between iterations, but an additional call to tf.reset_default_graph() is needed to clear the Python-side state as well.
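Applied to the reproduction code, that means pairing the two calls at the end of each inner iteration. A minimal sketch, assuming the same Keras 1.x / TensorFlow 1.x APIs used in the question (nb_epoch, tf.reset_default_graph); the random train_x/train_y arrays are placeholder data standing in for the question's dataset:

import numpy
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense
from keras import backend as K

# Placeholder data with the same shape as the question's dataset (37 samples, 12 inputs).
train_x = numpy.random.rand(37, 12)
train_y = numpy.random.rand(37)

for i in range(10):
    nn = Sequential()
    nn.add(Dense(8, input_dim=12, activation='relu'))
    nn.add(Dense(1))
    nn.compile(loss='mean_squared_error', optimizer='adam')
    nn.fit(train_x, train_y, nb_epoch=300, batch_size=2, verbose=0)

    K.clear_session()          # frees the backend (session) state
    tf.reset_default_graph()   # also resets the Python-side default graph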
Note that there may be a more efficient solution: since nn does not depend on either loop variable, you can define it outside the loop and reuse the same instance inside it. If you do that, there is no need to clear the session or reset the default graph, and performance should improve because you benefit from caching between iterations.
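A minimal sketch of that variant, using the same placeholder data as above: the model is built and compiled once, and only fit() runs inside the loop, so the default graph never grows.

import numpy
from keras.models import Sequential
from keras.layers import Dense

# Placeholder data standing in for the question's dataset.
train_x = numpy.random.rand(37, 12)
train_y = numpy.random.rand(37)

# Build and compile the model once, outside the loop.
nn = Sequential()
nn.add(Dense(8, input_dim=12, activation='relu'))
nn.add(Dense(1))
nn.compile(loss='mean_squared_error', optimizer='adam')

for i in range(1000):
    # Re-fitting the same instance reuses the existing graph; no clear_session()
    # or reset_default_graph() call is needed.
    nn.fit(train_x, train_y, nb_epoch=300, batch_size=2, verbose=0)

Keep in mind that with this pattern the weights carry over from one iteration to the next rather than being re-initialized, so it only fits use cases where continuing to train the same model is acceptable.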
For python-3.x - Keras (TensorFlow, CPU): Training Sequential models in loop eats memory, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42886049/