Problem description
I have the U-Net model from Retina Unet, but I have augmented the images as well as the masks. Now it gives me this error: ValueError: output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None
I want to train on the augmented images and masks and validate on augmented images and masks.
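For reference, Keras's fit_generator expects the generator to keep yielding batch tuples of the form (x, y) or (x, y, sample_weight) indefinitely. A minimal sketch of a conforming generator (the array names here are illustrative, not taken from my code):

import numpy as np

def toy_batch_generator(images, masks, batch_size=30):
    # images and masks are NumPy arrays whose first dimensions match.
    while True:  # fit_generator never expects the generator to stop
        idx = np.random.randint(0, len(images), batch_size)
        yield (images[idx], masks[idx])  # must be (x, y) or (x, y, sample_weight)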
Batch generator function:
def batch_generator(X_gen,Y_gen):
    yield(X_batch,Y_batch)
model = get_unet(1,img_width,img_hight) #the U-net model
print("Model Summary")
print(model.summary())
print "Check: final output of the network:"
print model.output_shape
#============ Training ==================================
checkpointer = ModelCheckpoint(filepath='./'+'SAEED'+'_best_weights.h5', verbose=2, monitor='val_acc', mode='auto', save_best_only=True) #save the weights whenever the validation accuracy improves
print("Now augumenting training")
datagen = ImageDataGenerator(rotation_range=120)
#training augmentation
train_images_generator = datagen.flow_from_directory(train_images_dir,target_size=(img_width,img_hight),batch_size=30,class_mode=None)
train_mask_generator = datagen.flow_from_directory(train_masks_dir,target_size=(img_width,img_hight),batch_size=30,class_mode=None)
print("Now augumenting val")
#val augumentation.
val_images_generator = datagen.flow_from_directory(val_images_dir,target_size=(img_width,img_hight),batch_size=30,class_mode=None)
val_masks_generator = datagen.flow_from_directory(val_masks_dir,target_size=(img_width,img_hight),batch_size=30,class_mode=None)
print("Now augumenting test")
#test augumentation
test_images_generator = datagen.flow_from_directory(test_images_dir,target_size=(img_width,img_hight),batch_size=25,class_mode=None)
test_masks_generator = datagen.flow_from_directory(test_masks_dir,target_size=(img_width,img_hight),batch_size=25,class_mode=None)
#fitting model.
print("Now fitting the model ")
#model.fit_generator(train_generator,samples_per_epoch = nb_train_samples*2,nb_epoch=nb_epoch,validation_data=val_generator,nb_val_samples=nb_val_samples,callbacks=[checkpointer])
print("train_images_generator size {} and type is {}".format(next(train_images_generator).shape,type(next(train_images_generator))))
print("train_masks_generator size {} and type is {}".format(next(train_mask_generator).shape,type(next(train_mask_generator))))
model.fit_generator(batch_generator(train_images_generator,train_mask_generator),samples_per_epoch = nb_train_samples,nb_epoch=nb_epoch,validation_data=batch_generator(val_images_generator,val_masks_generator),nb_val_samples=nb_val_samples,callbacks=[checkpointer])
print("Finished fitting the model")
Model summary:
Model Summary
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 1, 160, 160) 0
____________________________________________________________________________________________________
convolution2d_1 (Convolution2D) (None, 32, 160, 160) 320 input_1[0][0]
____________________________________________________________________________________________________
dropout_1 (Dropout) (None, 32, 160, 160) 0 convolution2d_1[0][0]
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D) (None, 32, 160, 160) 9248 dropout_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D) (None, 32, 80, 80) 0 convolution2d_2[0][0]
____________________________________________________________________________________________________
convolution2d_3 (Convolution2D) (None, 64, 80, 80) 18496 maxpooling2d_1[0][0]
____________________________________________________________________________________________________
dropout_2 (Dropout) (None, 64, 80, 80) 0 convolution2d_3[0][0]
____________________________________________________________________________________________________
convolution2d_4 (Convolution2D) (None, 64, 80, 80) 36928 dropout_2[0][0]
____________________________________________________________________________________________________
maxpooling2d_2 (MaxPooling2D) (None, 64, 40, 40) 0 convolution2d_4[0][0]
____________________________________________________________________________________________________
convolution2d_5 (Convolution2D) (None, 128, 40, 40) 73856 maxpooling2d_2[0][0]
____________________________________________________________________________________________________
dropout_3 (Dropout) (None, 128, 40, 40) 0 convolution2d_5[0][0]
____________________________________________________________________________________________________
convolution2d_6 (Convolution2D) (None, 128, 40, 40) 147584 dropout_3[0][0]
____________________________________________________________________________________________________
upsampling2d_1 (UpSampling2D) (None, 128, 80, 80) 0 convolution2d_6[0][0]
____________________________________________________________________________________________________
merge_1 (Merge) (None, 192, 80, 80) 0 upsampling2d_1[0][0]
convolution2d_4[0][0]
____________________________________________________________________________________________________
convolution2d_7 (Convolution2D) (None, 64, 80, 80) 110656 merge_1[0][0]
____________________________________________________________________________________________________
dropout_4 (Dropout) (None, 64, 80, 80) 0 convolution2d_7[0][0]
____________________________________________________________________________________________________
convolution2d_8 (Convolution2D) (None, 64, 80, 80) 36928 dropout_4[0][0]
____________________________________________________________________________________________________
upsampling2d_2 (UpSampling2D) (None, 64, 160, 160) 0 convolution2d_8[0][0]
____________________________________________________________________________________________________
merge_2 (Merge) (None, 96, 160, 160) 0 upsampling2d_2[0][0]
convolution2d_2[0][0]
____________________________________________________________________________________________________
convolution2d_9 (Convolution2D) (None, 32, 160, 160) 27680 merge_2[0][0]
____________________________________________________________________________________________________
dropout_5 (Dropout) (None, 32, 160, 160) 0 convolution2d_9[0][0]
____________________________________________________________________________________________________
convolution2d_10 (Convolution2D) (None, 32, 160, 160) 9248 dropout_5[0][0]
____________________________________________________________________________________________________
convolution2d_11 (Convolution2D) (None, 2, 160, 160) 66 convolution2d_10[0][0]
____________________________________________________________________________________________________
reshape_1 (Reshape) (None, 2, 25600) 0 convolution2d_11[0][0]
____________________________________________________________________________________________________
permute_1 (Permute) (None, 25600, 2) 0 reshape_1[0][0]
____________________________________________________________________________________________________
activation_1 (Activation) (None, 25600, 2) 0 permute_1[0][0]
====================================================================================================
Total params: 471,010
Trainable params: 471,010
Non-trainable params: 0
Any ideas? Thanks.
Recommended answer
In case someone runs into the same issue later:
The problem was with the generator. The fix is below:
def batch_generator(X_gen, Y_gen):
    while True:
        yield (X_gen.next(), Y_gen.next())
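Note that .next() is the Python 2 iterator method; on Python 3 the built-in next() should be used instead. A sketch of an equivalent generator, assuming both flows yield their batches in matching order:

def batch_generator(X_gen, Y_gen):
    # Pull one batch of images and one batch of masks per step, forever.
    while True:
        yield (next(X_gen), next(Y_gen))

For the image and mask augmentations to actually line up, the two flow_from_directory calls generally also need the same seed (and the same shuffle setting); otherwise the random rotations applied to the images and to the masks will differ.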