This article explains how to fix the TensorFlow error where the labels and logits shapes are incompatible; it should be a useful reference for anyone hitting the same problem.

Problem description

I have the following code in TensorFlow, with a custom estimator built with Keras. It errors out on the loss function loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=labels, logits=logits).

The error message makes it clear that my labels and logits are not the same shape; I'm just not sure how to fix that. I have also attached my input function.

Any help is greatly appreciated.

Thanks, John

Here is my code:

def read_dataset(filename, mode, batch_size = 512):
  def _input_fn():
    def decode_csv(value_column):
      columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
      features = dict(zip(CSV_COLUMNS, columns))
      label = features.pop(LABEL_COLUMS)
      return features, label

    # Create list of file names that match "glob" pattern (i.e. data_file_*.csv)
    filenames_dataset = tf.data.Dataset.list_files(filename)
    # Read lines from text files
    textlines_dataset = filenames_dataset.flat_map(
                                lambda filename: (
                                   tf.data.TextLineDataset(filename)
                                   .skip(1)
                                ))

    # Parse text lines as comma-separated values (CSV)
    dataset = textlines_dataset.map(decode_csv)

    # Note:
    # use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
    # use tf.data.Dataset.map      to apply one to one  transformations (here: text line -> feature list)

    if mode == tf.estimator.ModeKeys.TRAIN:
        num_epochs = None # indefinitely
        dataset = dataset.shuffle(buffer_size = 10 * batch_size)
    else:
        num_epochs = 1 # end-of-input after this

    dataset = dataset.repeat(num_epochs).batch(batch_size)

    batch_features, batch_labels = dataset.make_one_shot_iterator().get_next()

    return batch_features, batch_labels
  return _input_fn
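For reference, decode_csv above pops a scalar label per CSV row, so after batching the labels tensor has shape (batch_size,), while the model's output layer produces logits of shape (batch_size, 1); that rank mismatch is the root of the error. An alternative fix (a sketch, not from the original post) is to give the label an explicit last dimension inside decode_csv:

def decode_csv(value_column):
  columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
  features = dict(zip(CSV_COLUMNS, columns))
  label = features.pop(LABEL_COLUMS)
  # scalar -> shape (1,), so the batched labels become (batch_size, 1)
  label = tf.expand_dims(label, -1)
  return features, label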


he_init = tf.keras.initializers.he_normal()

def build_fully_connected(X, n_units=100, activation=tf.keras.activations.relu, initialization=he_init,
                          batch_normalization=False, training=False, name=None):
    # use the initializer passed in as an argument (the original hard-coded he_init,
    # silently ignoring the initialization parameter)
    layer = tf.keras.layers.Dense(n_units,
                                  activation=None,
                                  kernel_initializer=initialization,
                                  name=name)(X)
    if batch_normalization:
        bn = tf.keras.layers.BatchNormalization(momentum=0.90)
        layer = bn(layer, training=training)
    return activation(layer)

def output_layer(h, n_units, initialization=he_init,
                 batch_normalization=False, training=False):
    logits = tf.keras.layers.Dense(n_units, activation=None)(h)
    if batch_normalization:
        bn = tf.keras.layers.BatchNormalization(momentum=0.90)
        logits = bn(logits, training=training)
    return logits

# build model

ACTIVATION = tf.keras.activations.relu
BATCH_SIZE = 550
HIDDEN_UNITS = [256, 128, 16, 1]
LEARNING_RATE = 0.01
NUM_STEPS = 10
USE_BATCH_NORMALIZATION = False

def dnn_custom_estimator(features, labels, mode, params):
    in_training = mode == tf.estimator.ModeKeys.TRAIN
    use_batch_norm = params['batch_norm']

    net = tf.feature_column.input_layer(features, params['features'])
    for i, n_units in enumerate(params['hidden_units']):
        net = build_fully_connected(net, n_units=n_units, training=in_training,
                                    batch_normalization=use_batch_norm,
                                    activation=params['activation'],
                                    name='hidden_layer'+str(i))

    logits = output_layer(net, 1, batch_normalization=use_batch_norm,
                          training=in_training)
    print(logits.get_shape())
    print(labels.get_shape())

    predicted_classes = tf.argmax(logits, 1)

    loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=labels, logits=logits)
    accuracy = tf.metrics.accuracy(labels=tf.argmax(labels, 1),
                                   predictions=predicted_classes,
                                   name='acc_op')
    tf.summary.scalar('accuracy', accuracy[1])  # for visualizing in TensorBoard
    if mode == tf.estimator.ModeKeys.EVAL:
        return tf.estimator.EstimatorSpec(mode, loss=loss,
                                          eval_metric_ops={'accuracy': accuracy})

    # Create training op.
    assert mode == tf.estimator.ModeKeys.TRAIN

    extra_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    optimizer = tf.train.AdamOptimizer(learning_rate=params['learning_rate'])
    with tf.control_dependencies(extra_ops):
        train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())

    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
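For completeness, here is a sketch of how the pieces above would typically be wired together (an assumption, since the original post does not show this part; FEATURE_COLUMNS and the 'train*.csv' pattern are hypothetical placeholders):

estimator = tf.estimator.Estimator(
    model_fn=dnn_custom_estimator,
    params={'features': FEATURE_COLUMNS,        # hypothetical list of feature columns
            'hidden_units': HIDDEN_UNITS,
            'activation': ACTIVATION,
            'batch_norm': USE_BATCH_NORMALIZATION,
            'learning_rate': LEARNING_RATE})
estimator.train(input_fn=read_dataset('train*.csv', tf.estimator.ModeKeys.TRAIN,
                                      batch_size=BATCH_SIZE),
                steps=NUM_STEPS)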

And here is the stack trace I am getting:

<ipython-input-1-070ea24b3267> in dnn_custom_estimator(features, labels, mode, params)
    198
    199
--> 200     loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=labels, logits=logits)
    201     accuracy = tf.metrics.accuracy(labels=tf.argmax(labels, 1),
    202                                    predictions=predicted_classes,


ValueError: Shapes (?, 1) and (?,) are incompatible
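The ValueError is raised at graph-construction time: tf.losses.sigmoid_cross_entropy checks that labels and logits have compatible shapes, and rank-1 labels (?,) cannot match rank-2 logits (?, 1). A minimal reproduction (not from the original post, assuming the TF 1.x API used in the question):

import tensorflow as tf

labels = tf.placeholder(tf.float32, shape=[None])     # rank 1: (?,)
logits = tf.placeholder(tf.float32, shape=[None, 1])  # rank 2: (?, 1)
# Raises ValueError: Shapes (?, 1) and (?,) are incompatible
loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=labels, logits=logits)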

Recommended answer

Okay. If labels is a Tensor, use the tf.reshape() method and run:

labels = tf.reshape(labels, [-1, 1])

Otherwise, if it is a NumPy array, do:

labels = np.reshape(labels, (-1, 1))
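In this question's model_fn, the reshape goes just before the loss is computed; a sketch based on the code above:

# inside dnn_custom_estimator, right before the loss:
labels = tf.reshape(labels, [-1, 1])  # (batch_size,) -> (batch_size, 1), matching logits
loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=labels, logits=logits)

(The np.reshape variant only applies if you prepare labels as a NumPy array outside the graph; with the Dataset input function shown here, the tf.reshape version is the one you need.)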

That wraps up this article on the TensorFlow "labels and logits shapes are incompatible" error; hopefully the recommended answer above is helpful.
