I am following along with an Analytics Vidhya tutorial.
I am having trouble visualizing the connection between the Flatten layer and the Dense layer, which has 2 nodes and an input dimension of 50. This is a binary classification problem, so I understand the 2 nodes. However, what determines the input dimension? Could we also omit this parameter, in which case there would simply be fewer weights to train in this dense layer?

import os
import numpy as np
import pandas as pd
import scipy
import sklearn
import keras
from keras.models import Sequential
import cv2
from skimage import io
%matplotlib inline

#Defining the File Path

cat=os.listdir("/mnt/hdd/datasets/dogs_cats/train/cat")
dog=os.listdir("/mnt/hdd/datasets/dogs_cats/train/dog")
filepath="/mnt/hdd/datasets/dogs_cats/train/cat/"
filepath2="/mnt/hdd/datasets/dogs_cats/train/dog/"

#Loading the Images

images=[]
label = []
for i in cat:
    image = io.imread(filepath+i)   # scipy.misc.imread was removed in SciPy 1.2+; use skimage.io instead
    images.append(image)
    label.append(0) #for cat images

for i in dog:
    image = io.imread(filepath2+i)
    images.append(image)
    label.append(1) #for dog images

#resizing all the images

for i in range(len(images)):
    images[i]=cv2.resize(images[i],(300,300))

#converting images to arrays

images=np.array(images)
label=np.array(label)

# Defining the hyperparameters

filters=10
filtersize=(5,5)

epochs =5
batchsize=128

input_shape=(300,300,3)

#Converting the target variable to the required size

from keras.utils.np_utils import to_categorical
label = to_categorical(label)

#Defining the model

model = Sequential()

model.add(keras.layers.InputLayer(input_shape=input_shape))

model.add(keras.layers.convolutional.Conv2D(filters, filtersize, strides=(1, 1), padding='valid', data_format="channels_last", activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(keras.layers.Flatten())

model.add(keras.layers.Dense(units=2, input_dim=50,activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(images, label, epochs=epochs, batch_size=batchsize,validation_split=0.3)

model.summary()

Best Answer

However, what determines the input dimension? Could we also omit this
parameter, in which case there would simply be fewer weights to train
in this dense layer?

It is determined by the output shape of the previous layer. As model.summary() shows, the output shape of the Flatten layer is (None, 219040), so the input dimension of the Dense layer is 219040. In this case, then, there are far more weights to train (>50).

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 296, 296, 10)      760
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 148, 148, 10)      0
_________________________________________________________________
flatten_1 (Flatten)          (None, 219040)            0
_________________________________________________________________
dense_1 (Dense)              (None, 2)                 438082
=================================================================
Total params: 438,842
Trainable params: 438,842
Non-trainable params: 0
_________________________________________________________________
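The numbers in the summary above can be reproduced with simple shape arithmetic (a sketch assuming the standard formulas for valid padding, stride 1, and 2x2 pooling):

```python
# Reproduce the shapes and parameter counts from model.summary() by hand.
conv_out = 300 - 5 + 1              # 5x5 kernel, valid padding, stride 1 -> 296
pool_out = conv_out // 2            # 2x2 max pooling -> 148
flat = pool_out * pool_out * 10     # 10 filters flattened -> 219040
conv_params = (5 * 5 * 3 + 1) * 10  # 5x5x3 kernel + bias, per filter -> 760
dense_params = flat * 2 + 2         # 219040 inputs x 2 units + 2 biases -> 438082

print(conv_out, pool_out, flat)     # 296 148 219040
print(conv_params + dense_params)   # 438842, matching "Total params"
```

This is why the Dense layer reports 438,082 parameters: its input dimension is dictated entirely by the 219,040-element Flatten output, regardless of the input_dim=50 the user passed.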

As the following snippet from the Keras source shows, the Dense layer's weights are created based on the input_shape argument (the output shape of the previous layer). The input_dim the user passes when constructing the Dense layer is ignored.
input_dim = input_shape[-1]
self.kernel = self.add_weight(shape=(input_dim, self.units),

https://github.com/keras-team/keras/blob/3bda5520b787f84f687bb116c460f3aedada039b/keras/layers/core.py#L891
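The effect of those two source lines can be sketched in plain NumPy (the function name and arguments here are illustrative, not Keras internals):

```python
import numpy as np

# Minimal sketch of the build logic quoted above: the kernel shape is taken
# from the last dimension of the incoming tensor's shape; a declared
# input_dim from the constructor is never consulted.
def build_dense_kernel(incoming_shape, units, declared_input_dim=None):
    input_dim = incoming_shape[-1]        # declared_input_dim is ignored
    return np.zeros((input_dim, units))

# The Flatten layer feeds (None, 219040) into Dense(units=2, input_dim=50):
kernel = build_dense_kernel((None, 219040), units=2, declared_input_dim=50)
print(kernel.shape)  # (219040, 2), not (50, 2)
```

So omitting input_dim would change nothing here: the layer would still build a (219040, 2) kernel from the shape it receives.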

On python - what determines the input dimension of the dense layer at the end of a CNN, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/56959986/
