Problem Description
The weights that I get from training, when applied directly to the input, return different results! I'll show it on a very simple example. Let's say we have an input vector x = 0:0.01:1; and a target vector t = x.^2 (I know it would be better to use a nonlinear network). After training a 2-layer linear network with one neuron in each layer, we get:
sim(net,0.95) = 0.7850
(some error in training - that's ok and should be). The weights from net.IW, net.LW, and net.b are:
IW =
0.4547
LW =
2.1993
b =
0.3328 -1.0620
If I use the weights directly, Out = purelin(purelin(0.95*IW + b(1))*LW + b(2)) = 0.6200, which differs from the result of sim()! How can that be? What's wrong?
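For reference, the naive computation above can be reproduced outside MATLAB. The following Python sketch (the weight values are the ones printed above; purelin is just the identity) plugs them into an unscaled forward pass and arrives at the same wrong answer:

```python
# Naive forward pass with the trained weights, WITHOUT the toolbox's
# input/output rescaling. purelin is the linear (identity) transfer function.
IW = 0.4547              # input-to-hidden weight (from net.IW)
LW = 2.1993              # hidden-to-output weight (from net.LW)
b1, b2 = 0.3328, -1.0620 # biases (from net.b)

def purelin(n):
    return n  # linear transfer function: output equals input

x = 0.95
out = purelin(purelin(x * IW + b1) * LW + b2)
print(round(out, 2))  # 0.62, not the 0.785 that sim() returns
```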
Code:
%Main_TestWeights
close all
clear all
clc
t1 = 0:0.01:1;
x = t1.^2;
hiddenSizes = 1;
net = feedforwardnet(hiddenSizes);
[Xs,Xi,Ai,Ts,EWs,shift] = preparets(net,con2seq(t1),con2seq(x));
net.layers{1,1}.transferFcn = 'purelin';
[net,tr,Y,E,Pf,Af] = train(net,Xs,Ts,Xi,Ai);
view(net);
IW = cat(2,net.IW{1});
LW = cat(2,net.LW{2,1});
b = cat(2,[net.b{1,1},net.b{2,1}]);
%Result from Sim
t2=0.95;
Yk = sim(net,t2)
%Result from Weights
x1 = IW*t2'+b(1)
x1out = purelin(x1)
x2 = purelin(x1out*(LW)+b(2))
Recommended Answer
The Neural Network Toolbox rescales inputs and outputs to the [-1,1] range by default. You must therefore rescale the input and unscale the output so that your manual computation matches the output of sim():
%Result from Weights
x1 = 2*t2 - 1;            % rescale input from [0,1] to [-1,1]
x1 = IW*x1 + b(1);
x1out = purelin(x1);
x2 = purelin(x1out*LW + b(2));
x2 = (x2+1)/2             % unscale output from [-1,1] back to [0,1]
Then
>> x2 == Yk
ans =
1
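The rescaling in the answer is a special case of mapminmax-style preprocessing, which maps a range [xmin, xmax] linearly onto [-1, 1] and back. A minimal Python sketch of the full corrected pass, assuming (as in this example) that both the input and the target lie in [0, 1]:

```python
# Corrected forward pass: rescale the input to [-1,1], run the linear
# network, then map the output back from [-1,1] to the target range.
IW, LW = 0.4547, 2.1993
b1, b2 = 0.3328, -1.0620

def mapminmax_apply(x, xmin, xmax):
    # mapminmax-style scaling: [xmin, xmax] -> [-1, 1]
    return 2 * (x - xmin) / (xmax - xmin) - 1

def mapminmax_reverse(y, tmin, tmax):
    # inverse scaling: [-1, 1] -> [tmin, tmax]
    return (y + 1) * (tmax - tmin) / 2 + tmin

t2 = 0.95
xn = mapminmax_apply(t2, 0.0, 1.0)   # input range [0, 1] -> 0.9
yn = (xn * IW + b1) * LW + b2        # purelin layers are the identity
yk = mapminmax_reverse(yn, 0.0, 1.0) # back to the target range [0, 1]
print(round(yk, 4))  # 0.785, matching sim(net, 0.95)
```

With the general formulas, the same manual check works for any input/output ranges; in MATLAB the actual ranges used by the network are stored in its preprocessing settings rather than hard-coded as here.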