Problem description
I am trying to build a CNN using Keras for an image segmentation task, based on this article. Because my dataset is small, I wanted to use Keras ImageDataGenerator and feed it to fit_generator(). So I followed the example on the Keras website. But since zipping the image and mask generators didn't work, I followed this answer and created my own generator.
My input data is of size (701,256,1) and my problem is binary (foreground, background). For each image I have a label of the same shape.
Now, I am facing a dimensionality problem. This was also mentioned in the answer, but I am unsure of how to solve it.
The error:
ValueError: Error when checking target: expected dense_3 to have 2 dimensions, but got array with shape (2, 704, 256, 1)
Here is the entire code I have:
import numpy
import pygpu
import theano
import keras
from keras.models import Model, Sequential
from keras.layers import Input, Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D, Reshape
from keras.layers import BatchNormalization
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import np_utils
from keras import backend as K
def superGenerator(image_gen, label_gen):
    # Pair each image batch with its mask batch; index [0] keeps the image
    # arrays and drops the class labels returned by flow_from_directory.
    while True:
        x = image_gen.next()
        y = label_gen.next()
        yield x[0], y[0]
img_height = 704
img_width = 256
train_data_dir = 'Dataset/Train/Images'
train_label_dir = 'Dataset/Train/Labels'
validation_data_dir = 'Dataset/Validation/Images'
validation_label_dir = 'Dataset/Validation/Labels'
n_train_samples = 1000
n_validation_samples = 500
epochs = 50
batch_size = 2
input_shape = (img_height, img_width,1)
target_shape = (img_height, img_width)
model = Sequential()
model.add(Conv2D(80,(28,28), input_shape=input_shape))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model.add(Conv2D(96,(18,18)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model.add(Conv2D(128,(13,13)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(160,(8,8)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(2, activation='softmax'))
model.summary()
model.compile(loss='binary_crossentropy', optimizer='nadam', metrics=['accuracy'])
data_gen_args = dict(
    rescale=1./255,
    horizontal_flip=True,
    vertical_flip=True
)
train_datagen = ImageDataGenerator(**data_gen_args)
train_label_datagen = ImageDataGenerator(**data_gen_args)
test_datagen = ImageDataGenerator(**data_gen_args)
test_label_datagen = ImageDataGenerator(**data_gen_args)
seed = 1
train_image_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=target_shape,
    color_mode='grayscale',
    batch_size=batch_size,
    class_mode='binary',
    seed=seed)
train_label_generator = train_label_datagen.flow_from_directory(
    train_label_dir,
    target_size=target_shape,
    color_mode='grayscale',
    batch_size=batch_size,
    class_mode='binary',
    seed=seed)
validation_image_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=target_shape,
    color_mode='grayscale',
    batch_size=batch_size,
    class_mode='binary',
    seed=seed)
validation_label_generator = test_label_datagen.flow_from_directory(
    validation_label_dir,
    target_size=target_shape,
    color_mode='grayscale',
    batch_size=batch_size,
    class_mode='binary',
    seed=seed)
train_generator = superGenerator(train_image_generator, train_label_generator)
test_generator = superGenerator(validation_image_generator, validation_label_generator)
model.fit_generator(
    train_generator,
    steps_per_epoch=n_train_samples // batch_size,
    epochs=50,
    validation_data=test_generator,
    validation_steps=n_validation_samples // batch_size)
model.save_weights('first_try.h5')
I am new to Keras (and CNNs), so any help would be very much appreciated.
Recommended answer
Ok. I did some rubber-duck debugging and read a few more articles. Of course the dimensionality was the problem. This simple answer did it for me. My labels have the same shape as the input image, so the output of the model should have that shape as well. I used Conv2DTranspose to solve this issue.
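For reference, a minimal fully-convolutional sketch of that idea is below. The filter counts and kernel sizes are illustrative assumptions, not the architecture from the linked answer: strided Conv2D layers downsample the input and Conv2DTranspose layers upsample it back, so the output has the same (704, 256, 1) shape as the masks and can be trained with a per-pixel sigmoid and binary_crossentropy.
from keras.models import Sequential
from keras.layers import Conv2D, Conv2DTranspose, BatchNormalization, Activation
# Illustrative encoder-decoder: the spatial dimensions are halved twice on
# the way down and doubled twice on the way up, so the output matches the
# (704, 256, 1) label masks.
seg_model = Sequential()
seg_model.add(Conv2D(32, (3, 3), strides=(2, 2), padding='same', input_shape=(704, 256, 1)))
seg_model.add(BatchNormalization())
seg_model.add(Activation('relu'))
seg_model.add(Conv2D(64, (3, 3), strides=(2, 2), padding='same'))
seg_model.add(BatchNormalization())
seg_model.add(Activation('relu'))
seg_model.add(Conv2DTranspose(32, (3, 3), strides=(2, 2), padding='same', activation='relu'))
seg_model.add(Conv2DTranspose(16, (3, 3), strides=(2, 2), padding='same', activation='relu'))
seg_model.add(Conv2D(1, (1, 1), activation='sigmoid'))  # per-pixel foreground probability
seg_model.compile(loss='binary_crossentropy', optimizer='nadam', metrics=['accuracy'])
seg_model.summary()  # final output shape should be (None, 704, 256, 1)
With an output of this shape, the superGenerator above can be reused as-is, since the model output and the mask batches now have matching dimensions.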