This article looks at a TensorFlow beginner problem: splitting an image into sub-images. The question and the recommended answer are reproduced below and may be a useful reference for anyone facing the same task.

Problem description
This is my very first time using a convolutional neural network and TensorFlow. I am trying to implement a convolutional neural network that is able to extract vessels from digital retinal images. I am working with the publicly available DRIVE database (images are in .tif format).

Since my images are very large, my idea is to split them into sub-images of size 28x28x1 (the "1" is the green channel, the only one I need). To create the training set I iteratively crop a random 28x28 batch from each image, and train the network on this set.

Now, I would like to test my trained network on one of the large images in the database (that is, I want to apply the network to a complete eye). Since my network is trained on sub-images of size 28x28, the idea is to split the eye into 'n' sub-images, pass them through the network, reassemble them, and show the result, as shown in Fig. 1.

[Fig. 1: the full eye image is split into sub-images, each sub-image is passed through the network, and the outputs are reassembled into a full vessel map.]

I tried using some functions like tf.extract_image_patches or tf.train.batch, but I would like to know what the right method for this is. Below is a snippet of my code; the function where I am stuck is split_image(image).

```python
import numpy
import os
import random
from PIL import Image
import tensorflow as tf

BATCH_WIDTH = 28
BATCH_HEIGHT = 28
NUM_TRIALS = 10

class Drive:
    def __init__(self, train):
        self.train = train

class Dataset:
    def __init__(self, inputs, labels):
        self.inputs = inputs
        self.labels = labels
        self.current_batch = 0

    def next_batch(self):
        batch = self.inputs[self.current_batch], self.labels[self.current_batch]
        self.current_batch = (self.current_batch + 1) % len(self.inputs)
        return batch

# counts the number of black pixels in the batch
def mostlyBlack(image):
    pixels = image.getdata()
    black_thresh = 50
    nblack = 0
    for pixel in pixels:
        if pixel < black_thresh:
            nblack += 1
    return nblack / float(len(pixels)) > 0.5

# crop the image starting from a random point
def cropImage(image, label):
    width = image.size[0]
    height = image.size[1]
    x = random.randrange(0, width - BATCH_WIDTH)
    y = random.randrange(0, height - BATCH_HEIGHT)
    image = image.crop((x, y, x + BATCH_WIDTH, y + BATCH_HEIGHT)).split()[1]
    label = label.crop((x, y, x + BATCH_WIDTH, y + BATCH_HEIGHT)).split()[0]
    return image, label

def split_image(image):
    ksizes_ = [1, BATCH_WIDTH, BATCH_HEIGHT, 1]
    strides_ = [1, BATCH_WIDTH, BATCH_HEIGHT, 1]

    input = numpy.array(image.split()[1])
    #input = tf.reshape((input), [image.size[0], image.size[1]])
    #input = tf.train.batch([input], batch_size=1)
    # This call is where I am stuck; the result is never used or returned.
    split = tf.extract_image_patches(input, padding='VALID', ksizes=ksizes_,
                                     strides=strides_, rates=[1, 28, 28, 1], name="asdk")

# creates NUM_TRIALS images from a dataset
def create_dataset(images_path, label_path):
    files = os.listdir(images_path)
    label_files = os.listdir(label_path)

    images = []
    labels = []
    t = 0
    while t < NUM_TRIALS:
        index = random.randrange(0, len(files))
        if files[index].endswith(".tif"):
            image_filename = images_path + files[index]
            label_filename = label_path + label_files[index]

            image = Image.open(image_filename)
            label = Image.open(label_filename)

            image, label = cropImage(image, label)
            if not mostlyBlack(image):
                #images.append(tf.convert_to_tensor(numpy.array(image)))
                #labels.append(tf.convert_to_tensor(numpy.array(label)))
                images.append(numpy.array(image))
                labels.append(numpy.array(label))
                t += 1
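As an editorial aside before the solution: tf.extract_image_patches can in fact produce these tiles, but it expects a 4-D [batch, height, width, channels] input, and dense (non-dilated) patches use rates=[1, 1, 1, 1]; the rates=[1, 28, 28, 1] in the snippet above dilates each patch instead of tiling the image. A minimal sketch of a corrected call, assuming TensorFlow 1.x (the function name split_image_with_patches is ours, for illustration):

```python
import numpy
import tensorflow as tf

def split_image_with_patches(image, tile_h=28, tile_w=28):
    # tf.extract_image_patches expects a 4-D [batch, height, width, channels]
    # tensor, so add batch and channel dimensions to the green channel first.
    green = numpy.array(image.split()[1])
    input4 = tf.reshape(tf.constant(green, dtype=tf.float32),
                        [1, green.shape[0], green.shape[1], 1])
    # rates=[1, 1, 1, 1] samples every pixel inside each patch (no dilation);
    # strides equal to the patch size give non-overlapping tiles.
    patches = tf.extract_image_patches(input4,
                                       ksizes=[1, tile_h, tile_w, 1],
                                       strides=[1, tile_h, tile_w, 1],
                                       rates=[1, 1, 1, 1],
                                       padding='VALID')
    # Each row of `patches` holds one flattened tile; restore [B, 28, 28, 1].
    return tf.reshape(patches, [-1, tile_h, tile_w, 1])
```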
    image = Image.open(images_path + files[1])
    split_image(image)

    train = Dataset(images, labels)
    return Drive(train)
```

Solution

You can use a combination of reshape and transpose calls to cut an image into tiles:

```python
def split_image(image3, tile_size):
    image_shape = tf.shape(image3)
    tile_rows = tf.reshape(image3, [image_shape[0], -1, tile_size[1], image_shape[2]])
    serial_tiles = tf.transpose(tile_rows, [1, 0, 2, 3])
    return tf.reshape(serial_tiles, [-1, tile_size[1], tile_size[0], image_shape[2]])
```

where image3 is a 3-dimensional tensor (e.g. an image), and tile_size is a pair of values [H, W] specifying the size of a tile. The output is a tensor with shape [B, H, W, C]. In your case the call would be:

```python
tiles = split_image(image, [28, 28])
```

resulting in a tensor with shape [B, 28, 28, 1]. You can also reassemble the original image from the tiles by performing these operations in reverse:

```python
def unsplit_image(tiles4, image_shape):
    tile_width = tf.shape(tiles4)[1]
    serialized_tiles = tf.reshape(tiles4, [-1, image_shape[0], tile_width, image_shape[2]])
    rowwise_tiles = tf.transpose(serialized_tiles, [1, 0, 2, 3])
    return tf.reshape(rowwise_tiles, [image_shape[0], image_shape[1], image_shape[2]])
```

where tiles4 is a 4-D tensor of shape [B, H, W, C], and image_shape is the shape of the original image. In your case the call could be:

```python
image = unsplit_image(tiles, tf.shape(image))
```

Note that this only works if the image size is divisible by the tile size. If that's not the case you need to pad your image to the nearest multiple of the tile size:

```python
def pad_image_to_tile_multiple(image3, tile_size, padding="CONSTANT"):
    imagesize = tf.shape(image3)[0:2]
    padding_ = tf.to_int32(tf.ceil(imagesize / tile_size)) * tile_size - imagesize
    return tf.pad(image3, [[0, padding_[0]], [0, padding_[1]], [0, 0]], padding)
```

which you would call like this:

```python
image = pad_image_to_tile_multiple(image, [28, 28])
```

Then remove the padding by slicing after you have reassembled the image from the tiles:

```python
image = image[0:original_size[0], 0:original_size[1], :]
```
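For completeness, here is a rough sketch of how the three helpers from the answer could be chained into the full inference pipeline the question describes. This is an assumption-laden illustration, not part of the original answer: segment_full_image is our name, and network stands in for the trained 28x28 vessel-segmentation model, which we assume maps a [B, 28, 28, 1] tensor to a tensor of the same shape.

```python
import tensorflow as tf

# Hypothetical end-to-end sketch using pad_image_to_tile_multiple, split_image,
# and unsplit_image from the answer above (TensorFlow 1.x style).
# `image3` is a [height, width, 1] float tensor holding the green channel.
def segment_full_image(image3, network):
    original_size = tf.shape(image3)
    # Pad so both spatial dimensions are multiples of 28, then cut into tiles.
    padded = pad_image_to_tile_multiple(image3, [28, 28])
    tiles = split_image(padded, [28, 28])  # shape [B, 28, 28, 1]
    # Run every tile through the network (assumed shape-preserving).
    predicted_tiles = network(tiles)
    # Reassemble the padded image, then slice the padding back off.
    reassembled = unsplit_image(predicted_tiles, tf.shape(padded))
    return reassembled[0:original_size[0], 0:original_size[1], :]
```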
That concludes this article on getting started with TensorFlow by splitting an image into sub-images. We hope the recommended answer above is helpful.