I am trying to learn the Deeplearning4j library by implementing a simple 3-layer neural network with sigmoid activations to solve XOR. What configuration or hyperparameters am I missing? I was able to get correct output using ReLU activations and a softmax output layer based on some MLP examples I found online, but with sigmoid activations the network does not seem to fit the data. Can anyone explain why my network fails to produce the correct output?

    DenseLayer inputLayer = new DenseLayer.Builder()
            .nIn(2)
            .nOut(3)
            .name("Input")
            .weightInit(WeightInit.ZERO)
            .build();

    DenseLayer hiddenLayer = new DenseLayer.Builder()
            .nIn(3)
            .nOut(3)
            .name("Hidden")
            .activation(Activation.SIGMOID)
            .weightInit(WeightInit.ZERO)
            .build();

    OutputLayer outputLayer = new OutputLayer.Builder()
            .nIn(3)
            .nOut(1)
            .name("Output")
            .activation(Activation.SIGMOID)
            .weightInit(WeightInit.ZERO)
            .lossFunction(LossFunction.MEAN_SQUARED_LOGARITHMIC_ERROR)
            .build();

    NeuralNetConfiguration.Builder nncBuilder = new NeuralNetConfiguration.Builder();
    nncBuilder.iterations(10000);
    nncBuilder.learningRate(0.01);
    nncBuilder.optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT);

    NeuralNetConfiguration.ListBuilder listBuilder = nncBuilder.list();
    listBuilder.layer(0, inputLayer);
    listBuilder.layer(1, hiddenLayer);
    listBuilder.layer(2, outputLayer);

    listBuilder.backprop(true);

    MultiLayerNetwork myNetwork = new MultiLayerNetwork(listBuilder.build());
    myNetwork.init();

    INDArray trainingInputs = Nd4j.zeros(4, inputLayer.getNIn());
    INDArray trainingOutputs = Nd4j.zeros(4, outputLayer.getNOut());

    // If 0,0 show 0
    trainingInputs.putScalar(new int[]{0,0}, 0);
    trainingInputs.putScalar(new int[]{0,1}, 0);
    trainingOutputs.putScalar(new int[]{0,0}, 0);

    // If 0,1 show 1
    trainingInputs.putScalar(new int[]{1,0}, 0);
    trainingInputs.putScalar(new int[]{1,1}, 1);
    trainingOutputs.putScalar(new int[]{1,0}, 1);

    // If 1,0 show 1
    trainingInputs.putScalar(new int[]{2,0}, 1);
    trainingInputs.putScalar(new int[]{2,1}, 0);
    trainingOutputs.putScalar(new int[]{2,0}, 1);

    // If 1,1 show 0
    trainingInputs.putScalar(new int[]{3,0}, 1);
    trainingInputs.putScalar(new int[]{3,1}, 1);
    trainingOutputs.putScalar(new int[]{3,0}, 0);

    DataSet myData = new DataSet(trainingInputs, trainingOutputs);
    myNetwork.fit(myData);


    INDArray actualInput = Nd4j.zeros(1,2);
    actualInput.putScalar(new int[]{0,0}, 0);
    actualInput.putScalar(new int[]{0,1}, 0);

    INDArray actualOutput = myNetwork.output(actualInput);
    System.out.println("myNetwork Output " + actualOutput);
    //Output is producing 1.00. Should be 0.0

Best Answer

So, in general, I am going to link you to:
https://deeplearning4j.org/troubleshootingneuralnets

A few specific tips. Never set the weight init to zero; that is why we do not use it in our examples (which I strongly suggest you start from rather than writing code from scratch):
https://github.com/deeplearning4j/dl4j-examples
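
For example, a minimal sketch (assuming the same 0.9.x-era builder API used in the question) of replacing the zero init with Xavier in the hidden layer. With all-zero weights every unit computes the same output and receives the same gradient, so the network can never break symmetry:

    DenseLayer hiddenLayer = new DenseLayer.Builder()
            .nIn(3)
            .nOut(3)
            .name("Hidden")
            .activation(Activation.SIGMOID)
            // Xavier (Glorot) init gives each unit different starting weights,
            // breaking the symmetry that WeightInit.ZERO creates
            .weightInit(WeightInit.XAVIER)
            .build();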

For your output layer, if you want to learn XOR, why not just use binary cross-entropy (XENT):
https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/feedforward/xor/XorExample.java
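
A sketch of what that output layer could look like with a single sigmoid unit and the XENT (binary cross-entropy) loss instead of MSLE; this follows the spirit of the tip rather than the linked example, which uses a two-unit softmax output:

    OutputLayer outputLayer = new OutputLayer.Builder()
            .nIn(3)
            .nOut(1)
            .name("Output")
            .activation(Activation.SIGMOID)
            .weightInit(WeightInit.XAVIER)
            // binary cross-entropy pairs naturally with a single sigmoid output
            .lossFunction(LossFunction.XENT)
            .build();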

Also of note here: turn off minibatch mode (see the example above), and see:
https://deeplearning4j.org/toyproblems
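
In the builder API from the question, disabling minibatch mode would look roughly like this (a sketch, not code taken from the linked example):

    NeuralNetConfiguration.ListBuilder listBuilder = new NeuralNetConfiguration.Builder()
            .iterations(10000)
            .learningRate(0.01)
            .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
            // with only 4 XOR rows there is no real minibatching; disabling it
            // stops DL4J from rescaling the gradient by the (tiny) batch size
            .miniBatch(false)
            .list();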

Regarding "java - Deeplearning4j - 3-layer neural network fails to fit correctly", there is a similar question on Stack Overflow: https://stackoverflow.com/questions/46879409/
