This article explains how to resolve the CIFAR-10 TensorFlow error "InvalidArgumentError (see above for traceback): logits and labels must be broadcastable".

Problem description

I am implementing the CNN as below, but I got this error: InvalidArgumentError (see above for traceback): logits and labels must be broadcastable.

I have attached my partial code below. I suspect the error is coming from the shapes and dimensions of my weights and biases.

What I'm trying to implement - I want to reduce the CNN layers from two fully connected layers to just one fully connected layer, meaning, out=tf.add(tf.add(fc1....) and stop it there.

import tensorflow as tf

nInput = 32
nChannels = 3
nClasses = 10

# Placeholder and drop-out
X = tf.placeholder(tf.float32, [None, nInput, nInput, nChannels])
Y = tf.placeholder(tf.float32, [None, nClasses])
keep_prob = tf.placeholder(tf.float32)

def conv2d(x, W, b, strides=1):
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)


def maxpool2d(x, k=2):
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding='SAME')


def normalize_layer(pooling):
    #norm = tf.nn.lrn(pooling, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='norm1')
    norm = tf.contrib.layers.batch_norm(pooling, center=True, scale=True)
    return norm


def drop_out(fc, keep_prob=0.4):
    # Note: tf.layers.dropout's `rate` is the fraction of units to *drop*,
    # not a keep probability
    drop_out = tf.layers.dropout(fc, rate=keep_prob)
    return drop_out


weights = {
    'WC1': tf.Variable(tf.random_normal([5, 5, 3, 32]), name='W0'),
    'WC2': tf.Variable(tf.random_normal([5*5*32, 64]), name='W1'),
    #'WD1': tf.Variable(tf.random_normal([8 * 8 * 64, 64]), name='W2'),
    #'WD2': tf.Variable(tf.random_normal([64, 128]), name='W3'),
    'out': tf.Variable(tf.random_normal([64, nClasses]), name='W5')
}

biases = {
    'BC1': tf.Variable(tf.random_normal([32]), name='B0'),
    'BC2': tf.Variable(tf.random_normal([64]), name='B1'),
    #'BD1': tf.Variable(tf.random_normal([64]), name='B2'),
    #'BD2': tf.Variable(tf.random_normal([128]), name='B3'),
    'out': tf.Variable(tf.random_normal([nClasses]), name='B5')
}

def conv_net(x, weights, biases):
    conv1 = conv2d(x, weights['WC1'], biases['BC1'])
    conv1 = maxpool2d(conv1)
    conv1 = normalize_layer(conv1)

    #conv2 = conv2d(conv1, weights['WC2'], biases['BC2'])
    #conv2 = maxpool2d(conv2)
    #conv2 = normalize_layer(conv2)

    fc1 = tf.reshape(conv1, [-1, weights['WC2'].get_shape().as_list()[0]])
    fc1 = tf.add(tf.matmul(fc1, weights['WC2']), biases['BC2'])
    fc1 = tf.nn.relu(fc1)  # ReLU activation
    fc1 = drop_out(fc1)

    #fc2 = tf.add(tf.matmul(fc1, weights['WD2']), biases['BD2'])
    #fc2 = tf.nn.selu(fc2)  # Using self-normalization activation
    #fc2 = drop_out(fc2)

    out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
    out = tf.nn.softmax(out)

    return out
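
To see where the broadcast error comes from, it can help to trace the shape arithmetic by hand (a pure-Python sketch; the layer sizes are taken from the code above). With 'SAME' padding the stride-1 convolution keeps the 32 x 32 spatial size and the 2 x 2 max-pool halves it, so each example flattens to 16*16*32 = 8192 values, while the reshape asks for rows of 5*5*32 = 800. The -1 in tf.reshape then silently absorbs the difference into the batch dimension, so the logits end up with a different batch size than the labels:

```python
import math

def same_conv_out(size, stride=1):
    # 'SAME' padding: output spatial size = ceil(input / stride)
    return math.ceil(size / stride)

def same_pool_out(size, k=2):
    # max-pool with ksize=k, stride=k and 'SAME' padding
    return math.ceil(size / k)

h = same_conv_out(32)  # conv1, stride 1 -> 32
h = same_pool_out(h)   # maxpool2d, k=2  -> 16

flat_per_example = h * h * 32  # 16 * 16 * 32 = 8192 values per image
wc2_rows = 5 * 5 * 32          # 800, what the reshape asks for

# For a batch of 100 images, reshape([-1, 800]) yields 819200 / 800 = 1024
# rows of "logits" instead of 100, so logits and labels no longer match.
print(flat_per_example, wc2_rows, 100 * flat_per_example // wc2_rows)
```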


Recommended answer

I think there is something wrong with 'WC2' parameter of weights dictionary. It should be 'WC2': tf.Variable(tf.random_normal([16*16*32, 64]), name='W1')

After applying one convolution and one max-pooling operation, you have downsampled the input from 32 x 32 x 3 to 16 x 16 x 32 (the convolution produces 32 channels, and the 2 x 2 max-pool halves each spatial dimension). You now need to flatten this downsampled output to feed it as input to the fully connected layer. That's why you need to pass 16*16*32.
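
One way to avoid hard-coding the flattened size (a sketch of my own, not part of the original answer) is to derive it from the pooling arithmetic, so the weight shape stays correct if you later add or remove pooling layers:

```python
def flat_dim(in_size, n_pools, channels, k=2):
    """Flattened size after n_pools 'SAME'-padded max-pools of stride k."""
    size = in_size
    for _ in range(n_pools):
        size = -(-size // k)  # ceiling division, matches 'SAME' padding
    return size * size * channels

# One conv + one 2x2 max-pool on a 32x32 input with 32 output channels:
fc_in = flat_dim(32, n_pools=1, channels=32)
print(fc_in)  # 16 * 16 * 32 = 8192
# 'WC2': tf.Variable(tf.random_normal([fc_in, 64]), name='W1')
```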
