How do you edit an existing TensorBoard Training Loss summary?

Problem description

I've trained my network and generated some training/validation losses, which I saved via the following code example (training loss only; validation is perfectly equivalent):

train_summary_writer = tf.summary.create_file_writer("/path/to/logs/")
with train_summary_writer.as_default():
    tf.summary.scalar('Training Loss', data=epoch_loss, step=current_step)

After training I would then like to view the loss curves using TensorBoard. However, because I saved the curves under the names 'Training Loss' and 'Validation Loss', they are plotted on separate graphs. I know that I should change the name to simply 'loss' to solve this problem for future writes to the log directory. But how do I edit my existing log files for the training/validation losses to account for this?
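For future runs, one common TF 2.x pattern (a sketch; the temp directory and loss values below are placeholders standing in for a real log path and real epoch losses) is to keep one writer per run subdirectory and write the same tag 'loss' from both, so TensorBoard overlays the two curves on a single graph:

```python
import os
import tempfile

import tensorflow as tf

# Stand-in for your real log directory.
logdir = tempfile.mkdtemp()

# One writer per run subdirectory.
train_writer = tf.summary.create_file_writer(os.path.join(logdir, 'train'))
valid_writer = tf.summary.create_file_writer(os.path.join(logdir, 'valid'))

# Placeholder loss values standing in for real epoch losses.
for step, (train_loss, valid_loss) in enumerate([(0.9, 1.0), (0.5, 0.7)]):
    # Same tag in both writers -> TensorBoard overlays the two runs.
    with train_writer.as_default():
        tf.summary.scalar('loss', data=train_loss, step=step)
    with valid_writer.as_default():
        tf.summary.scalar('loss', data=valid_loss, step=step)

train_writer.flush()
valid_writer.flush()
```

TensorBoard groups by run directory ('train' vs 'valid') and by tag, so a shared tag across two run directories lands on one plot.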

I attempted to modify the solution from this post: https://stackoverflow.com/a/55061404, which edits the steps of a log file and re-writes the file; my version involves changing the tags in the file instead. But I had no success in this area. It also requires importing older TensorFlow code through 'tf.compat.v1'. Is there a way to achieve this (maybe in TF 2.x)?

I had thought to simply acquire the loss and step values from each log directory containing the losses and write them to new log files via my previous working method, but I only managed to obtain the step, not the loss value itself. Has anyone had any success here?
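For what it's worth, the loss value can be recovered too: with TF 2.x writers, tf.summary.scalar stores the number in the value's 'tensor' field rather than in 'simple_value', which is likely why only the step was visible. A sketch of reading (step, value) pairs back out (the file path in the usage comment is a placeholder):

```python
import tensorflow as tf
from tensorflow.core.util.event_pb2 import Event

def read_scalars(event_file, tag):
    """Yield (step, value) pairs for a given tag from one event file."""
    for rec in tf.data.TFRecordDataset([str(event_file)]):
        ev = Event()
        ev.MergeFromString(rec.numpy())
        for v in ev.summary.value:
            if v.tag != tag:
                continue
            if v.HasField('tensor'):
                # TF 2.x stores scalar summaries as rank-0 tensors.
                yield ev.step, float(tf.make_ndarray(v.tensor))
            elif v.HasField('simple_value'):
                # Older TF 1.x-style summaries use simple_value instead.
                yield ev.step, v.simple_value

# Usage (placeholder path):
# for step, loss in read_scalars('/path/to/logs/events.out.tfevents.123', 'Training Loss'):
#     print(step, loss)
```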

---=== EDIT ===---

I managed to fix the problem using the code from @jhedesa.

I had to slightly alter the way that the function "rename_events_dir" was called, as I am using TensorFlow inside of a Google Colab notebook. To do this I changed the final part of the code, which read:

if __name__ == '__main__':
    if len(sys.argv) != 5:
        print(f'{sys.argv[0]} <input dir> <output dir> <old tags> <new tag>',
              file=sys.stderr)
        sys.exit(1)
    input_dir, output_dir, old_tags, new_tag = sys.argv[1:]
    old_tags = old_tags.split(';')
    rename_events_dir(input_dir, output_dir, old_tags, new_tag)
    print('Done')

to instead read:

import os

rootpath = '/path/to/model/'
# Skip the output directories themselves when collecting run folders.
dirlist = [dirname for dirname in os.listdir(rootpath) if dirname not in ['train', 'valid']]
for dirname in dirlist:
    rename_events_dir(rootpath + dirname + '/train', rootpath + 'train', 'Training Loss', 'loss')
    rename_events_dir(rootpath + dirname + '/valid', rootpath + 'valid', 'Validation Loss', 'loss')

Notice that I called "rename_events_dir" twice: once to edit the training-loss tags, and once for the validation-loss tags. I could have used the previous method of calling the code by setting "old_tags = 'Training Loss;Validation Loss'" and using "old_tags = old_tags.split(';')" to split the tags. I used my method simply to understand the code and how it processed the data.
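One subtlety in those calls: 'Training Loss' is passed as a plain string where the script expects a list of tags, so the script's check "v.tag in old_tags" becomes a substring test rather than a list-membership test. It happens to work here, but it would also rename any tag that is a substring of 'Training Loss'. A small sketch of the difference in Python's 'in' operator:

```python
tag = 'Loss'

# List membership: the whole string must match an element.
print(tag in ['Training Loss'])  # False

# String containment: any substring matches.
print(tag in 'Training Loss')    # True
```

Passing the tags as a list ('Training Loss'.split(';') gives ['Training Loss']) keeps the exact-match semantics.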

Answer

As mentioned in "How to load selected range of samples in Tensorboard", TensorBoard events are actually stored as record files, so you can read and process them as such. Here is a script similar to the one posted there, but for the purpose of renaming events, and updated to work in TF 2.x.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

# rename_events.py

import sys
from pathlib import Path
import os
# Use this if you want to avoid using the GPU
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
import tensorflow as tf
from tensorflow.core.util.event_pb2 import Event

def rename_events(input_path, output_path, old_tags, new_tag):
    # Make a record writer
    with tf.io.TFRecordWriter(str(output_path)) as writer:
        # Iterate event records
        for rec in tf.data.TFRecordDataset([str(input_path)]):
            # Read event
            ev = Event()
            ev.MergeFromString(rec.numpy())
            # Check if it is a summary
            if ev.summary:
                # Iterate summary values
                for v in ev.summary.value:
                    # Check if the tag should be renamed
                    if v.tag in old_tags:
                        # Rename with new tag name
                        v.tag = new_tag
            writer.write(ev.SerializeToString())

def rename_events_dir(input_dir, output_dir, old_tags, new_tag):
    input_dir = Path(input_dir)
    output_dir = Path(output_dir)
    # Make output directory
    output_dir.mkdir(parents=True, exist_ok=True)
    # Iterate event files
    for ev_file in input_dir.glob('**/*.tfevents*'):
        # Make directory for output event file
        out_file = Path(output_dir, ev_file.relative_to(input_dir))
        out_file.parent.mkdir(parents=True, exist_ok=True)
        # Write renamed events
        rename_events(ev_file, out_file, old_tags, new_tag)

if __name__ == '__main__':
    if len(sys.argv) != 5:
        print(f'{sys.argv[0]} <input dir> <output dir> <old tags> <new tag>',
              file=sys.stderr)
        sys.exit(1)
    input_dir, output_dir, old_tags, new_tag = sys.argv[1:]
    old_tags = old_tags.split(';')
    rename_events_dir(input_dir, output_dir, old_tags, new_tag)
    print('Done')

You would use it like this:

> python rename_events.py my_log_dir renamed_log_dir "Training Loss;Validation Loss" loss
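After running the script, one quick way to sanity-check the result (a sketch; 'renamed_log_dir' in the usage comment is whatever output directory you passed) is to read the rewritten files back and confirm that only the new tag remains:

```python
from pathlib import Path

import tensorflow as tf
from tensorflow.core.util.event_pb2 import Event

def collect_tags(log_dir):
    """Return the set of summary tags found in all event files under log_dir."""
    tags = set()
    for ev_file in Path(log_dir).glob('**/*.tfevents*'):
        for rec in tf.data.TFRecordDataset([str(ev_file)]):
            ev = Event()
            ev.MergeFromString(rec.numpy())
            tags.update(v.tag for v in ev.summary.value)
    return tags

# Usage (placeholder directory):
# assert 'Training Loss' not in collect_tags('renamed_log_dir')
# assert 'loss' in collect_tags('renamed_log_dir')
```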

