How long should a neural network take to learn to square numbers? (testing results included)

Problem Description


Okay, let me preface this by saying that I am well aware that this depends on MANY factors, I'm looking for some general guidelines from people with experience.

My goal is not to make a Neural Net that can compute squares of numbers for me, but I thought it would be a good experiment to see if I implemented the Backpropagation algorithm correctly. Does this seem like a good idea? Anyways, I am worried that I have not implemented the learning algorithm (fully) correctly.

My Testing (Results):

  • Training Data: 500 randomly generated numbers between .001 and .999 using Java's Random
  • Network Topology: 3 Layers with 1 input neuron, 5 hidden neurons, 1 output neuron
  • Weights: All initialized to random values between -1 and 1 (java.util.Random.nextDouble() * 2 - 1)
  • Uses a bias node: the input array is sized (numOfInputs + 1) so that input[input.length - 1] = 1 (see the setup sketch after this list)
  • Activation Function: Sigmoid
  • Learning Rate: Shown in results code below
  • Have not implemented any sort of momentum, etc
  • Results:
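
As a quick sanity check on the setup described in the list above, here is a minimal Java sketch of how it could look. All names (NetSetup, genTrainingData, initWeights, sigmoid, withBias) are illustrative, not taken from the poster's actual code:

    import java.util.Random;

    public class NetSetup {
        static final Random RNG = new Random();

        // 500 random inputs between .001 and .999, as in the question
        static double[] genTrainingData(int n) {
            double[] data = new double[n];
            for (int i = 0; i < n; i++) {
                data[i] = 0.001 + RNG.nextDouble() * 0.998;
            }
            return data;
        }

        // weights uniformly in [-1, 1], matching nextDouble() * 2 - 1
        static double[] initWeights(int n) {
            double[] w = new double[n];
            for (int i = 0; i < n; i++) {
                w[i] = RNG.nextDouble() * 2 - 1;
            }
            return w;
        }

        // sigmoid activation
        static double sigmoid(double x) {
            return 1.0 / (1.0 + Math.exp(-x));
        }

        // bias as an extra, always-on input: input[input.length - 1] = 1
        static double[] withBias(double x) {
            return new double[] { x, 1.0 };
        }
    }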

Are there any other 'simple' 'things' that I should try to train the network with to check its learning abilities?

Solution

One of the simplest things you can do is compute the XOR function. This is what I normally do to test "normal" multilayer perceptrons. With a learning rate of 0.2, the XOR problem is solved perfectly (99% averaged accuracy) in fewer than 100 epochs with a 2-5-1 neuron topology.
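
Concretely, the XOR test amounts to training on four patterns and checking the outputs. A hypothetical Java loop follows; net stands in for your own MLP class, and train is assumed to take an input, a target, and a learning rate:

    // The four XOR patterns: inputs -> expected output.
    double[][] xorInputs  = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
    double[]   xorTargets = {    0,      1,      1,      0   };

    // MLP net = new MLP(2, 5, 1);  // hypothetical constructor: 2-5-1 topology
    // With learning rate 0.2, the answer reports near-perfect accuracy
    // in fewer than 100 epochs.
    for (int epoch = 0; epoch < 100; epoch++) {
        for (int i = 0; i < xorInputs.length; i++) {
            net.train(xorInputs[i], new double[] { xorTargets[i] }, 0.2);
        }
    }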

With an MLP I have coded (tanh activation; no bias neuron, but a bias value for each neuron; weights initialized between 0.1 and 0.5; biases initialized to 0.5 each; 1,000 training samples from 0.001 to 2.0; activation normalization, i.e. the input/activation of every neuron except the input-layer neurons is divided by the number of neurons in the parent layer; 1-5-1 topology), I tried your problem and got a 95% averaged accuracy in fewer than 2,000 epochs every time, with a learning rate of 0.1.
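
One reading of that "activation normalization", as a sketch (the answerer's actual implementation is not shown): each non-input neuron divides its net input by the number of neurons in the previous layer before applying tanh. Something like:

    // Hypothetical sketch: normalize a neuron's net input by the size
    // of its parent layer, then apply tanh (as the answer describes).
    double normalizedActivation(double[] parentOutputs,
                                double[] weights, double bias) {
        double net = bias;
        for (int i = 0; i < parentOutputs.length; i++) {
            net += parentOutputs[i] * weights[i];
        }
        net /= parentOutputs.length;   // divide by parent-layer size
        return Math.tanh(net);
    }

This keeps the pre-activation values in a range where tanh does not saturate, which is consistent with the faster convergence the answer reports.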

This can have several reasons. For my network, the input range 0.001 to 1.0 needed about twice as many epochs to learn. Also, the activation normalization mentioned above (in most cases) drastically reduces the number of epochs needed to learn a specific problem.

In addition, I have had mostly positive experiences with a bias value per neuron instead of one bias neuron per layer.
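
In code terms, the difference is roughly the following (a sketch, not the answerer's implementation). A bias neuron is an extra input fixed at 1, so the bias becomes just one more weight; a per-neuron bias is a separate trainable value added to each neuron's net input:

    // Per-neuron bias, as the answer recommends; initialized to 0.5
    // to match the answerer's setup.
    class Neuron {
        double[] weights;
        double bias = 0.5;

        double activate(double[] inputs) {
            double net = bias;
            for (int i = 0; i < inputs.length; i++) {
                net += inputs[i] * weights[i];
            }
            return Math.tanh(net);
        }
    }

Mathematically the two are equivalent; the per-neuron form is mostly a bookkeeping convenience, since no layer needs an artificial always-on input appended to it.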

Furthermore, if your learning rate is too high (and you run a lot of epochs), you risk running into overfitting.
