
Question

While doing logistic regression, it is common practice to use one-hot vectors as the desired result, so the number of classes equals the number of nodes in the output layer. We don't use the index of a word in the vocabulary (or a class number in general), because that may falsely indicate closeness between two classes. But why can't we use binary numbers instead of one-hot vectors?

That is, if there are 4 classes, we can represent each class as 00, 01, 10, 11, resulting in log2(number of classes) nodes in the output layer.

Answer

It is fine to encode with binary, but you will probably need to add another layer (or a filter), depending on your task and model, because the binary representation introduces spurious shared features between unrelated classes.

For example, consider a binary encoding for the input (x = [x1, x2]):

'apple' = [0, 0]
'orange' = [0, 1]
'table' = [1, 0]
'chair' = [1, 1]
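Laid out side by side, the shared bit is easy to see (a small Python sketch; the dictionaries simply restate the codes above, and the one-hot codes are the standard alternative):

```python
# Binary codes pack 4 classes into 2 bits; one-hot codes use 4 bits,
# one per class, so no two classes share a feature.
binary = {
    'apple':  [0, 0],
    'orange': [0, 1],
    'table':  [1, 0],
    'chair':  [1, 1],
}

one_hot = {
    'apple':  [1, 0, 0, 0],
    'orange': [0, 1, 0, 0],
    'table':  [0, 0, 1, 0],
    'chair':  [0, 0, 0, 1],
}

# With binary codes, 'orange' and 'chair' share x2 = 1 even though they
# belong to different classes (fruit vs. furniture):
print(binary['orange'][1] == binary['chair'][1])  # True: both have x2 = 1
```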

This means that orange and chair share the same feature x2. Now consider predictions y for two classes:

'fruit' = 0
'furniture' = 1

and a linear model (weights W = [w1, w2] and bias b) optimized on a labeled data sample:

(argmin W) Loss = y - (w1 * x1 + w2 * x2 + b)

Whenever you update the weight w2 to push chair toward furniture, you get an undesirable side effect: the prediction for orange is pushed toward furniture as well.
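The coupling can be demonstrated with a minimal sketch (plain Python; the learning rate and the single gradient-style step are hypothetical, not the author's code):

```python
# Linear model over the 2-bit binary codes; labels: fruit = 0, furniture = 1.
w1, w2, b = 0.0, 0.0, 0.0

def predict(x):
    return w1 * x[0] + w2 * x[1] + b

orange, chair = [0, 1], [1, 1]

# One gradient-style step pushing the 'chair' prediction toward 1
# (furniture) necessarily increases w2, because chair has x2 = 1 ...
lr = 0.5
error = 1 - predict(chair)          # want chair -> 1
w1 += lr * error * chair[0]
w2 += lr * error * chair[1]
b  += lr * error

# ... but orange also has x2 = 1, so its prediction is dragged toward
# the furniture label even though it is a fruit:
print(predict(orange))  # 1.0: orange is now scored as furniture
```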

In this particular case, adding another layer U = [u1, u2] can probably solve the issue:

(argmin U,W) Loss = y - (u1 * (w1 * x1 + w2 * x2 + b) +
                         u2 * (w1 * x1 + w2 * x2 + b) +
                         b2)
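Note that, as written, both hidden units reuse the same W, so the sum is still a linear function of x; in a standard two-layer network each hidden unit gets its own weights and a nonlinearity, which lets the model disentangle the shared bit. A minimal sketch with hand-picked, purely hypothetical weights:

```python
# Two-layer sketch: each hidden unit has its own weights plus a ReLU,
# a standard variant of the U = [u1, u2] layer suggested above.
def hidden(x, w, b):
    h = w[0] * x[0] + w[1] * x[1] + b
    return max(0.0, h)  # ReLU, so the two units can differ nonlinearly

def model(x, W, b, U, b2):
    h1 = hidden(x, W[0], b[0])
    h2 = hidden(x, W[1], b[1])
    return U[0] * h1 + U[1] * h2 + b2

# With hand-picked weights the model ignores the shared x2 bit and reads
# the class off x1 alone (fruit: x1 = 0, furniture: x1 = 1):
W, b, U, b2 = [[1.0, 0.0], [0.0, 0.0]], [0.0, 0.0], [1.0, 0.0], 0.0
for word, x in {'apple': [0, 0], 'orange': [0, 1],
                'table': [1, 0], 'chair': [1, 1]}.items():
    print(word, model(x, W, b, U, b2))  # fruits -> 0.0, furniture -> 1.0
```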

OK, so why not avoid this misrepresentation altogether by using one-hot encoding? :)
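With one-hot inputs no two classes share a feature, so the same kind of update stays isolated (again a hypothetical sketch mirroring the earlier one, not the author's code):

```python
# One-hot codes: each word gets its own input dimension.
one_hot = {'apple':  [1, 0, 0, 0], 'orange': [0, 1, 0, 0],
           'table':  [0, 0, 1, 0], 'chair':  [0, 0, 0, 1]}

def predict(x, w, b):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

w, b, lr = [0.0] * 4, 0.0, 0.5
error = 1 - predict(one_hot['chair'], w, b)   # push chair -> furniture
w = [wi + lr * error * xi for wi, xi in zip(w, one_hot['chair'])]

# Only chair's own dimension changed, so orange is unaffected:
print(predict(one_hot['orange'], w, b))  # 0.0: orange left untouched
```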

