Problem description
I have a basic autoencoder structure. I want to change it to a stacked autoencoder. From what I know, a stacked AE differs in two ways:
- It is built of layers of sparse vanilla AEs.
- It is trained layer-wise.
I want to know whether sparsity is a necessity for stacked AEs, or whether simply increasing the number of hidden layers in a vanilla AE structure makes it a stacked AE.
import chainer.functions as F
import chainer.links as L
from chainer import Chain

class Autoencoder(Chain):
    def __init__(self):
        super().__init__()
        with self.init_scope():
            # encoder part
            self.l1 = L.Linear(1308608, 500)
            self.l2 = L.Linear(500, 100)
            # decoder part
            self.l3 = L.Linear(100, 500)
            self.l4 = L.Linear(500, 1308608)
        # activation was not defined in the original snippet; ReLU assumed here
        self.activation = F.relu

    def forward(self, x):
        h = self.encode(x)
        x_recon = self.decode(h)
        return x_recon

    def __call__(self, x):
        x_recon = self.forward(x)
        loss = F.mean_squared_error(x_recon, x)
        return loss

    def encode(self, x):
        # dropout is switched on/off via chainer.config.train in Chainer v2+
        h = F.dropout(self.activation(self.l1(x)))
        return self.activation(self.l2(h))

    def decode(self, h):
        h = self.activation(self.l3(h))
        return self.l4(h)
Recommended answer
It seems to be the case that sparsity is often mentioned in the context of stacked autoencoders, but it is not required. Hence, I don't think it is necessary.
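For illustration, here is a minimal sketch of the "layer-wise training" part mentioned in the question, using plain (non-sparse) AE layers in Chainer. This is not from the original answer; the names DenseAE, pretrain_layerwise, layer_dims and get_batches, as well as the ReLU activation and Adam optimizer, are assumptions made for this sketch:

import chainer
import chainer.functions as F
import chainer.links as L
from chainer import Chain, optimizers

class DenseAE(Chain):
    # one plain (non-sparse) autoencoder layer: in_dim -> hid_dim -> in_dim
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        with self.init_scope():
            self.enc = L.Linear(in_dim, hid_dim)
            self.dec = L.Linear(hid_dim, in_dim)

    def encode(self, x):
        return F.relu(self.enc(x))

    def __call__(self, x):
        # reconstruction loss of this single layer
        return F.mean_squared_error(self.dec(self.encode(x)), x)

def pretrain_layerwise(layer_dims, get_batches, n_epochs=10):
    # greedy layer-wise pretraining: each AE is trained on the codes of the previous one
    trained = []
    for in_dim, hid_dim in zip(layer_dims[:-1], layer_dims[1:]):
        ae = DenseAE(in_dim, hid_dim)
        opt = optimizers.Adam()
        opt.setup(ae)
        for _ in range(n_epochs):
            for x in get_batches():  # get_batches is a hypothetical batch iterator
                # push the raw batch through the already-trained encoders
                with chainer.no_backprop_mode():
                    for prev in trained:
                        x = prev.encode(x)
                loss = ae(x)
                ae.cleargrads()
                loss.backward()
                opt.update()
        trained.append(ae)
    # the stacked encoder is the sequence of trained[i].encode calls;
    # it can then be fine-tuned end-to-end together with a matching decoder
    return trained

Under these assumptions, pretrain_layerwise([1308608, 500, 100], get_batches) would pretrain the two encoder layers of the network from the question one after the other, without any sparsity penalty.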