Keras TensorBoard: plot training and validation scalars in the same figure

Problem Description

So I am using TensorBoard within Keras. In TensorFlow one could use two different summary writers for training and validation scalars, so that TensorBoard can plot them in the same figure. Something like the figure in

TensorBoard - Plot training and validation losses on the same graph?

Is there a way to do this in keras?

Thanks.

Solution

To handle the validation logs with a separate writer, you can write a custom callback that wraps around the original TensorBoard methods.

import os
import tensorflow as tf
from keras.callbacks import TensorBoard

class TrainValTensorBoard(TensorBoard):
    def __init__(self, log_dir='./logs', **kwargs):
        # Make the original `TensorBoard` log to a subdirectory 'training'
        training_log_dir = os.path.join(log_dir, 'training')
        super(TrainValTensorBoard, self).__init__(training_log_dir, **kwargs)

        # Log the validation metrics to a separate subdirectory
        self.val_log_dir = os.path.join(log_dir, 'validation')

    def set_model(self, model):
        # Setup writer for validation metrics
        self.val_writer = tf.summary.FileWriter(self.val_log_dir)
        super(TrainValTensorBoard, self).set_model(model)

    def on_epoch_end(self, epoch, logs=None):
        # Pop the validation logs and handle them separately with
        # `self.val_writer`. Also rename the keys so that they can
        # be plotted on the same figure with the training metrics
        logs = logs or {}
        val_logs = {k.replace('val_', ''): v for k, v in logs.items() if k.startswith('val_')}
        for name, value in val_logs.items():
            summary = tf.Summary()
            summary_value = summary.value.add()
            summary_value.simple_value = value.item()
            summary_value.tag = name
            self.val_writer.add_summary(summary, epoch)
        self.val_writer.flush()

        # Pass the remaining logs to `TensorBoard.on_epoch_end`
        logs = {k: v for k, v in logs.items() if not k.startswith('val_')}
        super(TrainValTensorBoard, self).on_epoch_end(epoch, logs)

    def on_train_end(self, logs=None):
        super(TrainValTensorBoard, self).on_train_end(logs)
        self.val_writer.close()

  • In __init__, two subdirectories are set up for training and validation logs
  • In set_model, a writer self.val_writer is created for the validation logs
  • In on_epoch_end, the validation logs are separated from the training logs and written to file with self.val_writer (the key renaming is illustrated below)
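
For illustration, here is how the key renaming behaves on a typical logs dict (the metric names and values below are just examples):

logs = {'loss': 0.52, 'acc': 0.91, 'val_loss': 0.60, 'val_acc': 0.88}
val_logs = {k.replace('val_', ''): v for k, v in logs.items() if k.startswith('val_')}
# val_logs == {'loss': 0.60, 'acc': 0.88}

Because the validation values are written under the same tags ('loss', 'acc') as the training metrics, but to a different log directory, TensorBoard overlays the two curves on one chart.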

Using the MNIST dataset as an example:

from keras.models import Sequential
from keras.layers import Dense
from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

model.fit(x_train, y_train, epochs=10,
          validation_data=(x_test, y_test),
          callbacks=[TrainValTensorBoard(write_graph=False)])

You can then visualize the two curves on the same figure in TensorBoard.
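
To view the result, point TensorBoard at the parent log directory (this assumes the default log_dir='./logs'); the 'training' and 'validation' subdirectories then appear as two runs plotted on the same figure:

tensorboard --logdir ./logs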


EDIT: I've modified the class a bit so that it can be used with eager execution.

The biggest change is that I use tf.keras in the following code. It seems that the TensorBoard callback in standalone Keras does not support eager mode yet.

import os
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.python.eager import context

class TrainValTensorBoard(TensorBoard):
    def __init__(self, log_dir='./logs', **kwargs):
        self.val_log_dir = os.path.join(log_dir, 'validation')
        training_log_dir = os.path.join(log_dir, 'training')
        super(TrainValTensorBoard, self).__init__(training_log_dir, **kwargs)

    def set_model(self, model):
        # Pick the writer that matches the execution mode: the contrib
        # summary writer in eager mode, the graph-mode FileWriter otherwise
        if context.executing_eagerly():
            self.val_writer = tf.contrib.summary.create_file_writer(self.val_log_dir)
        else:
            self.val_writer = tf.summary.FileWriter(self.val_log_dir)
        super(TrainValTensorBoard, self).set_model(model)

    def _write_custom_summaries(self, step, logs=None):
        # tf.keras routes its scalar summaries through this method, so
        # override it here instead of `on_epoch_end`
        logs = logs or {}
        val_logs = {k.replace('val_', ''): v for k, v in logs.items() if k.startswith('val_')}
        if context.executing_eagerly():
            with self.val_writer.as_default(), tf.contrib.summary.always_record_summaries():
                for name, value in val_logs.items():
                    tf.contrib.summary.scalar(name, value.item(), step=step)
        else:
            for name, value in val_logs.items():
                summary = tf.Summary()
                summary_value = summary.value.add()
                summary_value.simple_value = value.item()
                summary_value.tag = name
                self.val_writer.add_summary(summary, step)
        self.val_writer.flush()

        # Forward the remaining training logs to the original callback
        logs = {k: v for k, v in logs.items() if not k.startswith('val_')}
        super(TrainValTensorBoard, self)._write_custom_summaries(step, logs)

    def on_train_end(self, logs=None):
        super(TrainValTensorBoard, self).on_train_end(logs)
        self.val_writer.close()

The idea is the same --

  • Check the source code of the TensorBoard callback (see the snippet after this list for a quick way to do this)
  • See what it does to set up the writer
  • Do the same thing in this custom callback
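
If you want to check what the callback does on your installed version, a quick way is to print its source with the standard inspect module (this simply dumps the class body; the private hook names may differ between versions):

import inspect
from tensorflow.keras.callbacks import TensorBoard

# Dump the callback's source to see how it sets up its writer and
# which method emits the scalar summaries
print(inspect.getsource(TensorBoard))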

Again, you can use the MNIST data to test it:

import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.train import AdamOptimizer

tf.enable_eager_execution()

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
y_train = y_train.astype(int)
y_test = y_test.astype(int)

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer=AdamOptimizer(), metrics=['accuracy'])

model.fit(x_train, y_train, epochs=10,
          validation_data=(x_test, y_test),
          callbacks=[TrainValTensorBoard(write_graph=False)])
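
In both versions the resulting log directory has roughly the following shape (the event file names are generated by TensorFlow and will differ):

logs/
├── training/
│   └── events.out.tfevents...
└── validation/
    └── events.out.tfevents...

Since 'training' and 'validation' are separate runs with matching scalar tags, TensorBoard draws them as two curves on the same plot.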
