Splitting a tensor into training and test sets

This article describes how to split a tensor into training and test sets in TensorFlow; it should be a useful reference for anyone facing the same problem.

Problem description

Let's say I've read in a text file using a TextLineReader. Is there some way to split this into train and test sets in TensorFlow? Something like:

def read_my_file_format(filename_queue):
  reader = tf.TextLineReader()
  key, record_string = reader.read(filename_queue)
  raw_features, label = tf.decode_csv(record_string)
  features = some_processing(raw_features)
  features_train, labels_train, features_test, labels_test = tf.train_split(features,
                                                                            labels,
                                                                            frac=.1)
  return features_train, labels_train, features_test, labels_test


Solution

Something like the following should work:
tf.split_v(tf.random_shuffle(...


Edit: For tensorflow > 0.12 this should now be called as tf.split(tf.random_shuffle(...



See the docs for tf.split and tf.random_shuffle for examples.
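
As a rough illustration of that idea (not part of the original answer), here is a minimal sketch assuming TensorFlow 1.x, a toy batch of 100 rows with 4 features, and a 10% test fraction mirroring the frac=.1 in the question; in practice the feature and label tensors would come from the decode_csv pipeline above.

import tensorflow as tf

# Toy stand-ins for the parsed CSV rows: 100 examples, 4 features, 1 label each.
features = tf.random_normal([100, 4])
labels = tf.random_uniform([100, 1])

# Shuffle features and labels together (shuffling them separately would
# break the row/label pairing), then cut off 10 rows for the test set.
data = tf.concat([features, labels], axis=1)
shuffled = tf.random_shuffle(data)
test, train = tf.split(shuffled, [10, 90], axis=0)   # tf.split signature for tensorflow > 0.12

test_features, test_labels = test[:, :4], test[:, 4:]
train_features, train_labels = train[:, :4], train[:, 4:]

with tf.Session() as sess:
    train_x, test_x = sess.run([train_features, test_features])
    print(train_x.shape, test_x.shape)   # (90, 4) (10, 4)

Packing the labels alongside the features before shuffling is just one way to keep rows and labels aligned; another option is to shuffle an index tensor with tf.random_shuffle(tf.range(num_rows)) and apply tf.gather to both tensors.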


This concludes the article on splitting a tensor into training and test sets. We hope the suggested answer is helpful; thanks for your support!
