Change the scale of a tensor in TensorFlow


Question


Sorry if I messed up the title, I didn't know how to phrase this. Anyways, I have a tensor of a set of values, but I want to make sure that every element in the tensor has a range from 0 - 255, (or 0 - 1 works too). However, I don't want to make all the values add up to 1 or 255 like softmax, I just want to down scale the values.

Is there any way to do this?

Thanks!

Answer


You are trying to normalize the data. A classic normalization formula is this one:

normalize_value = (value − min_value) / (max_value − min_value)
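To make the formula concrete, here is a minimal plain-Python sketch of the same min-max normalization (the helper name `min_max_normalize` is illustrative, not from the original answer):

```python
def min_max_normalize(values):
    """Min-max normalization: map values linearly onto [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

min_max_normalize([10.0, 20.0, 30.0])  # → [0.0, 0.5, 1.0]
```

The smallest input maps to 0, the largest to 1, and everything else falls proportionally in between.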


The implementation on tensorflow will look like this:

tensor = tf.div(
    tf.subtract(tensor, tf.reduce_min(tensor)),                 # x - min
    tf.subtract(tf.reduce_max(tensor), tf.reduce_min(tensor))   # max - min
)


All the values of the tensor will be between 0 and 1.
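Since the question also asked for a 0-255 range, the normalized result can simply be multiplied by 255. A plain-Python sketch (the helper name `min_max_scale` is illustrative):

```python
def min_max_scale(values, new_max=255.0):
    # Normalize to [0, 1], then stretch to [0, new_max]
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * new_max for v in values]

min_max_scale([10.0, 20.0, 30.0])  # → [0.0, 127.5, 255.0]
```

In TensorFlow this would just be the normalized tensor multiplied by a scalar 255.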


IMPORTANT: make sure the tensor has float/double values, or the output tensor will have just zeros and ones. If you have an integer tensor, call this first:

tensor = tf.to_float(tensor)
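The zeros-and-ones pitfall can be reproduced with plain Python floor division, which mimics what division on integer values does here:

```python
vals = [3, 7, 12]
lo, hi = min(vals), max(vals)
# With integers, each quotient is truncated: every element except the
# maximum collapses to 0, and the maximum itself becomes 1.
[(v - lo) // (hi - lo) for v in vals]  # → [0, 0, 1]
```

Casting to float first preserves the fractional intermediate values.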


Update: as of TensorFlow 2, tf.to_float() is deprecated; use tf.cast() instead (tf.div is likewise replaced by tf.divide):

tensor = tf.cast(tensor, dtype=tf.float32)  # or any other sufficiently precise float tf.dtype

