Problem Description
I'm extracting features from images using a convolutional neural network. The network in question has three outputs (three output tensors), which differ in size. I want to store the extracted features in TFRecords, one Example for each image:
Example:
image_id: 1
features/fc8: [output1.1, output1.2, output1.3]
Example:
image_id: 2
features/fc8: [output2.1, output2.2, output2.3]
....
How can I achieve this structure using TFRecords?
Recommended Answer
An elegant way is to use tf.SequenceExample.
Convert the data into the tf.SequenceExample() format
import tensorflow as tf

def make_example(features, image_id):
    ex = tf.train.SequenceExample()
    # the image id is a single value per example, so it goes into the context
    ex.context.feature['image_id'].int64_list.value.append(image_id)
    # each output tensor becomes one entry in the 'features/fc8' feature list
    fl_features = ex.feature_lists.feature_list['features/fc8']
    for feature in features:
        fl_features.feature.add().bytes_list.value.append(feature.tostring())
    return ex
Write the TFRecord
def _convert_to_tfrecord(output_file, feature_batch, ids_batch):
    writer = tf.python_io.TFRecordWriter(output_file)
    for features, image_id in zip(feature_batch, ids_batch):
        ex = make_example(features, image_id)
        writer.write(ex.SerializeToString())
    writer.close()
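For example, the writer could be driven like this. This is a minimal sketch, not part of the original answer; the NumPy arrays, output sizes, and file name are hypothetical placeholders:

import numpy as np

# hypothetical data: two images, each with three fc8 outputs of different sizes
feature_batch = [
    [np.random.rand(4096).astype(np.float32),
     np.random.rand(2048).astype(np.float32),
     np.random.rand(1024).astype(np.float32)]
    for _ in range(2)
]
ids_batch = [1, 2]

_convert_to_tfrecord('features.tfrecord', feature_batch, ids_batch)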
Parse the example
def parse_example_proto(example_serialized):
    context_features = {
        'image_id': tf.FixedLenFeature([], dtype=tf.int64)}
    sequence_features = {
        'features/fc8': tf.FixedLenSequenceFeature([], dtype=tf.string)}
    context_parsed, sequence_parsed = tf.parse_single_sequence_example(
        serialized=example_serialized,
        context_features=context_features,
        sequence_features=sequence_features)
    return context_parsed['image_id'], sequence_parsed['features/fc8']
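To turn the parsed records back into tensors, one option (not from the original answer) is to feed them through a tf.data pipeline and decode the raw bytes with tf.decode_raw. The float32 dtype and the count of three outputs are assumptions here and must match whatever was serialized with tostring() above:

def input_fn(tfrecord_file):
    dataset = tf.data.TFRecordDataset(tfrecord_file)

    def _parse(serialized):
        image_id, raw_fc8 = parse_example_proto(serialized)
        # raw_fc8 is a 1-D string tensor with one element per output tensor;
        # decode each element separately because the outputs differ in size
        outputs = [tf.decode_raw(raw_fc8[i], tf.float32) for i in range(3)]
        return image_id, outputs

    return dataset.map(_parse)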
Note: The features are serialized here as raw bytes (bytes_list); you could also store them directly as floats in a float_list.
Another way is to use tf.parse_single_example() and store each example flat, as in the sketch after the listing below:
image_id: 1
features/fc8_1: output1.1
features/fc8_2: output1.2
features/fc8_3: output1.3
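A minimal sketch of that alternative, not part of the original answer: it assumes the three outputs are 1-D float vectors with known lengths dim1, dim2, dim3, since tf.FixedLenFeature needs fixed shapes:

def make_flat_example(image_id, output1, output2, output3):
    # one tf.train.Example with a separate float_list feature per output tensor
    return tf.train.Example(features=tf.train.Features(feature={
        'image_id': tf.train.Feature(
            int64_list=tf.train.Int64List(value=[image_id])),
        'features/fc8_1': tf.train.Feature(
            float_list=tf.train.FloatList(value=output1)),
        'features/fc8_2': tf.train.Feature(
            float_list=tf.train.FloatList(value=output2)),
        'features/fc8_3': tf.train.Feature(
            float_list=tf.train.FloatList(value=output3)),
    }))

def parse_flat_example(serialized, dim1, dim2, dim3):
    # dim1..dim3 are the (assumed known) lengths of the three output vectors
    return tf.parse_single_example(serialized, features={
        'image_id': tf.FixedLenFeature([], tf.int64),
        'features/fc8_1': tf.FixedLenFeature([dim1], tf.float32),
        'features/fc8_2': tf.FixedLenFeature([dim2], tf.float32),
        'features/fc8_3': tf.FixedLenFeature([dim3], tf.float32),
    })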