I am trying to train an image classifier model on the CIFAR-100 dataset in TensorFlow, but the accuracy never rises above 1.2%. I searched for this problem and found several proposed solutions, but my model still performs poorly.

I have already tried a number of steps, such as:


Adding more CNN layers together with pooling and dropout
Normalizing the inputs (see the sketch after this list)
Changing the number of dense layers
Changing the batch size and the number of epochs
Changing the optimizer
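For reference, the input normalization step from the list above usually just means scaling the raw pixel values and one-hot encoding the labels. A minimal sketch, assuming the data is loaded through tf.keras.datasets (the question does not show its loading code, so that part is an assumption):

import numpy as np
import tensorflow as tf

# Hypothetical loading step; the question does not show how the data is loaded.
(train_features, train_labels), (test_features, test_labels) = \
    tf.keras.datasets.cifar100.load_data()

# Scale pixel values from [0, 255] to [0, 1].
train_features = train_features.astype(np.float32) / 255.0
test_features = test_features.astype(np.float32) / 255.0

# One-hot encode the 100 fine labels to match a one-hot y placeholder.
train_labels = np.eye(100)[train_labels.reshape(-1)]
test_labels = np.eye(100)[test_labels.reshape(-1)]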


One thing I noticed is that the training loss and accuracy evolve in essentially the same way whether I train with epochs = 10 and batch size = 256 or with epochs = 500 and batch size = 512.

To prevent overfitting I also tried dropout regularization, which produced some variation (training accuracy moved between 0.5% and 1.2%), but increasing the number of epochs left the same metrics (training and model accuracy) unchanged.
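One detail worth checking with dropout in the TF 1.x API: if keep_prob is hard-coded, as in the model below, dropout also stays active while measuring accuracy. A common pattern, shown here only as a sketch since the question's code does not do this, is to drive keep_prob through a placeholder:

import tensorflow as tf

keep_prob = tf.placeholder(tf.float32)  # < 1.0 during training, 1.0 during evaluation

def apply_dropout(layer):
    # Hypothetical helper: dropout controlled by the placeholder rather than
    # a hard-coded constant, so it can be switched off at evaluation time.
    return tf.nn.dropout(layer, keep_prob=keep_prob)

# Training:   feed_dict={x: batch_features, y: batch_labels, keep_prob: 0.7}
# Evaluation: feed_dict={x: test_features, y: test_labels, keep_prob: 1.0}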

I would like to know whether this is a problem with the dataset or with the model definition.

Classifier model:

def classifierModel(inp):
    layer1=tf.nn.relu(tf.nn.conv2d(inp, filter=tf.Variable(tf.truncated_normal([5,5,3,16])),
                                   strides=[1,2,2,1], padding='SAME'))
    layer1=tf.nn.bias_add(layer1, tf.Variable(tf.truncated_normal([16])))
    layer1=tf.nn.relu(tf.nn.max_pool(layer1, ksize=[1,1,1,1], strides=[1,2,2,1], padding='SAME'))

    layer2=tf.nn.relu(tf.nn.conv2d(layer1, filter=tf.Variable(tf.truncated_normal([5,5,16,32])),
                                   strides=[1,2,2,1], padding='SAME'))
    layer2=tf.nn.bias_add(layer2, tf.Variable(tf.truncated_normal([32])))
    layer2=tf.nn.relu(tf.nn.max_pool(layer2, ksize=[1,1,1,1], strides=[1,2,2,1], padding='SAME'))

    layer3=tf.nn.relu(tf.nn.conv2d(layer2, filter=tf.Variable(tf.truncated_normal([5,5,32, 64])),
                                   strides=[1,2,2,1], padding='SAME'))
    layer3=tf.nn.bias_add(layer3, tf.Variable(tf.truncated_normal([64])))

    layer3=tf.nn.relu(tf.nn.max_pool(layer3, ksize=[1,1,1,1], strides=[1,2,2,1], padding='SAME'))
    layer3=tf.nn.dropout(layer3, keep_prob=0.7)
    print(layer3.shape)


    fclayer1=tf.reshape(layer3, [-1, weights['fc1'].get_shape().as_list()[0]])
    fclayer1=tf.add(tf.matmul(fclayer1, weights['fc1']), biases['fc1'])
    fclayer1= tf.nn.dropout(fclayer1, keep_prob=0.5)
    fclayer2=tf.add(tf.matmul(fclayer1, weights['fc2']), biases['fc2'])
    fclayer2=tf.nn.dropout(fclayer2, keep_prob=0.5)
    fclayer3=tf.add(tf.matmul(fclayer2, weights['fc3']), biases['fc3'])
    fclayer3=tf.nn.dropout(fclayer3, keep_prob=0.7)
    outLayer=tf.nn.softmax(tf.add(tf.matmul(fclayer3, weights['out']), biases['out']))
    return outLayer


Optimizer, cost, accuracy:

cost=tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=model, labels=y))
optimizer=tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
correct_pred=tf.equal(tf.argmax(model, 1), tf.argmax(y, 1))
accuracy=tf.reduce_mean(tf.cast(correct_pred, tf.float32))


Training:

with tf.Session() as sess:
    sess.run(init)
    for i in range(epochs):
        #shuffle(idx)
        #train_features=train_features[idx, :, :, :]
        #train_labels=train_labels[idx, ]
        for batch_features, batch_labels in get_batches(batch_size, train_features, train_labels):
            sess.run(optimizer, feed_dict={x:batch_features, y:batch_labels})
        if (i%display_step==0):
            epoch_stats(sess, i, batch_features, batch_labels)

    model_acc=sess.run(accuracy, feed_dict={x:test_features, y:test_labels})
    saver.save(sess, save_file)

    writer.add_graph(sess.graph)
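The question does not include get_batches (or epoch_stats); judging from the call above, the batching helper presumably looks roughly like this sketch (name and signature taken from the call, body assumed):

def get_batches(batch_size, features, labels):
    # Yield successive (features, labels) mini-batches;
    # the final batch may be smaller than batch_size.
    for start in range(0, len(features), batch_size):
        yield features[start:start + batch_size], labels[start:start + batch_size]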


Results:


Epoch: 0 - cost: 4.62 - acc: 0.01
Epoch: 1 - cost: 4.62 - acc: 0.01
Epoch: 2 - cost: 4.62 - acc: 0.008
Epoch: 3 - cost: 4.61 - acc: 0.012
Epoch: 4 - cost: 4.61 - acc: 0.005
Epoch: 5 - cost: 4.62 - acc: 0.006
Epoch: 6 - cost: 4.62 - acc: 0.016
Epoch: 7 - cost: 4.62 - acc: 0.012
Epoch: 8 - cost: 4.61 - acc: 0.014
Epoch: 9 - cost: 4.62 - acc: 0.009
Model accuracy - 0.010499999858438969
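These numbers already hint at the diagnosis: a cost stuck at 4.61-4.62 is essentially ln(100) ≈ 4.605, the cross-entropy of a prediction that is uniform over the 100 classes, and 1% accuracy is exactly chance level on CIFAR-100. A quick check:

import math
print(math.log(100))  # 4.6051..., the loss of a classifier predicting all 100 classes uniformly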

Best Answer

The first argument you pass to softmax_cross_entropy_with_logits_v2 is incorrect.
You have to pass the values from before the softmax is applied, i.e. the raw logits. That is because softmax_cross_entropy_with_logits_v2 actually computes cross_entropy(softmax(x)) itself; the two steps are fused because the combined derivative can be simplified.
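To see the effect, here is a minimal sketch (constants chosen arbitrarily) comparing the loss computed on raw logits against the loss computed on values that have already been through a softmax; the double softmax flattens the distribution, which is why the cost barely moves during training:

import tensorflow as tf  # TF 1.x API, matching the question

logits = tf.constant([[4.0, 1.0, 0.1]])
labels = tf.constant([[1.0, 0.0, 0.0]])

# Correct usage: the op applies softmax internally.
loss_correct = tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=labels)

# The question's bug: softmax ends up applied twice, squashing the gradients.
loss_double = tf.nn.softmax_cross_entropy_with_logits_v2(
    logits=tf.nn.softmax(logits), labels=labels)

with tf.Session() as sess:
    print(sess.run([loss_correct, loss_double]))  # loss_double is larger and far less responsive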

In the model, you should do something like this:

def classifierModel(inp):
    layer1=tf.nn.relu(tf.nn.conv2d(inp, filter=tf.Variable(tf.truncated_normal([5,5,3,16])),
                                   strides=[1,2,2,1], padding='SAME'))
    layer1=tf.nn.bias_add(layer1, tf.Variable(tf.truncated_normal([16])))
    layer1=tf.nn.relu(tf.nn.max_pool(layer1, ksize=[1,1,1,1], strides=[1,2,2,1], padding='SAME'))

    layer2=tf.nn.relu(tf.nn.conv2d(layer1, filter=tf.Variable(tf.truncated_normal([5,5,16,32])),
                                   strides=[1,2,2,1], padding='SAME'))
    layer2=tf.nn.bias_add(layer2, tf.Variable(tf.truncated_normal([32])))
    layer2=tf.nn.relu(tf.nn.max_pool(layer2, ksize=[1,1,1,1], strides=[1,2,2,1], padding='SAME'))

    layer3=tf.nn.relu(tf.nn.conv2d(layer2, filter=tf.Variable(tf.truncated_normal([5,5,32, 64])),
                                   strides=[1,2,2,1], padding='SAME'))
    layer3=tf.nn.bias_add(layer3, tf.Variable(tf.truncated_normal([64])))

    layer3=tf.nn.relu(tf.nn.max_pool(layer3, ksize=[1,1,1,1], strides=[1,2,2,1], padding='SAME'))
    layer3=tf.nn.dropout(layer3, keep_prob=0.7)
    print(layer3.shape)


    fclayer1=tf.reshape(layer3, [-1, weights['fc1'].get_shape().as_list()[0]])
    fclayer1=tf.add(tf.matmul(fclayer1, weights['fc1']), biases['fc1'])
    fclayer1= tf.nn.dropout(fclayer1, keep_prob=0.5)
    fclayer2=tf.add(tf.matmul(fclayer1, weights['fc2']), biases['fc2'])
    fclayer2=tf.nn.dropout(fclayer2, keep_prob=0.5)
    fclayer3=tf.add(tf.matmul(fclayer2, weights['fc3']), biases['fc3'])
    fclayer3=tf.nn.dropout(fclayer3, keep_prob=0.7)
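    # Key change: compute the raw, pre-softmax logits and return them as well.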
    logits = tf.add(tf.matmul(fclayer3, weights['out']), biases['out'])
    outLayer=tf.nn.softmax(logits)
    return outLayer, logits


And in the loss function:

model, logits = classifierModel(inp)
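# Key change: pass the raw logits to the loss; the op applies softmax internally.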
cost=tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=y))
optimizer=tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
correct_pred=tf.equal(tf.argmax(model, 1), tf.argmax(y, 1))
accuracy=tf.reduce_mean(tf.cast(correct_pred, tf.float32))

Regarding "python - Image classifier using CIFAR-100, training accuracy not increasing", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/54259807/
