This article discusses how TensorFlow's flatten differs from NumPy's flatten and how the choice affects machine-learning training.

Problem Description

I am getting started with deep learning using Keras and TensorFlow, and at this early stage I am stuck on a question: when I use tf.contrib.layers.flatten (API 1.8) to flatten an image (which could be multi-channel as well), how is this different from using NumPy's flatten function?

How does this affect training? I can see that tf.contrib.layers.flatten takes longer than NumPy's flatten. Is it doing something more?

This is a closely related question, but the accepted answer there involves Theano and does not fully resolve my doubt.

Example: say I have training data of shape (10000, 2, 96, 96), and I need the output to be of shape (10000, 18432). I can do this with TensorFlow's flatten, or with NumPy's flatten, like so:

X_reshaped = X_train.reshape(*X_train.shape[:1], -1)

What difference does it make in training, and which is the best practice?
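For reference, the NumPy side of the reshape above can be sketched as follows. This is a minimal illustration that uses a small stand-in array instead of the full (10000, 2, 96, 96) training set; the shapes are scaled down but the indexing logic is the same:

```python
import numpy as np

# Small stand-in for the (10000, 2, 96, 96) training data:
# 4 samples, 2 channels, 3x3 images.
X_train = np.arange(4 * 2 * 3 * 3, dtype=np.float32).reshape(4, 2, 3, 3)

# Flatten everything except the batch dimension; -1 tells NumPy
# to infer the remaining size (2 * 3 * 3 = 18).
X_reshaped = X_train.reshape(X_train.shape[0], -1)

print(X_reshaped.shape)  # (4, 18)
```

With the real data, the same call would produce the desired (10000, 18432) shape, since 2 * 96 * 96 = 18432.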

Recommended Answer

The biggest difference between NumPy's ndarray.flatten and tf.layers.flatten (or tf.contrib.layers.flatten) is that NumPy operations apply only to static nd-arrays, while TensorFlow operations can work with dynamic tensors. Dynamic here means that the exact shape is known only at runtime (during training or testing).

So my advice is simple:

  • If the input data is a static NumPy array, e.g. during pre-processing, use ndarray.flatten. This avoids unnecessary overhead and also returns a NumPy array.
  • If the data is already a tensor, use any of the flatten ops provided by TensorFlow. Of those, tf.layers.flatten is the better choice, since the tf.layers API is more stable than tf.contrib.*.
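A small NumPy-only sketch of the overhead point in the first bullet: ndarray.flatten() always copies the data, while reshape returns a view of the same buffer whenever the memory layout allows, which is one reason the pure-NumPy route is cheap for static pre-processing (the TensorFlow side of the comparison is omitted here):

```python
import numpy as np

x = np.ones((4, 2, 3, 3), dtype=np.float32)

flat_copy = x.flatten()                # always returns an independent copy
flat_view = x.reshape(x.shape[0], -1)  # returns a view sharing x's buffer
                                       # (the array is C-contiguous here)

print(flat_copy.base is None)  # True: owns its own data
print(flat_view.base is x)     # True: shares memory with x
```

If you need the batch dimension preserved, note that flatten() collapses the array to 1-D, so reshape(n, -1) is usually what you want for training data anyway.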

