We have just started a project that uses CNTK to build a binary classifier.

Our dataset looks like this:

|attribs 1436000 24246.3124164245 |isMatch 1
|attribs 535000 21685.9351529239 |isMatch 1
|attribs 729000 8988.24232231086 |isMatch 1
|attribs 436000 4787.7521169184 |isMatch 1
|attribs 110000 38236394.456649 |isMatch 0
|attribs 808000 39512500.9870238 |isMatch 0
|attribs 108000 28432968.9161523 |isMatch 0
|attribs 816000 39512231.5629576 |isMatch 0


We are trying to determine whether a school-bus stop matches the planned route. The first value is the delta time between the scheduled stop and the actual stop (in milliseconds), and the second value is the delta distance between the planned location and the actual stop location (in millimeters).

The problem I am running into (possibly a basic misunderstanding of how to use CNTK) is that no matter how I adjust the data, the number of hidden nodes, the batch size, or any other knob, I keep getting almost exactly the same result. I can evaluate the most absurd inputs and still get 1.00 back every time.

How can I modify the data or the model to get more accurate results?

The full code is here:

import numpy as np
import cntk as C
from cntk import Trainer  # to train the NN
from cntk.learners import sgd, learning_rate_schedule, \
    UnitType
from cntk.ops import *  # input_variable() def
from cntk.logging import ProgressPrinter
from cntk.initializer import glorot_uniform
from cntk.layers import default_options, Dense
from cntk.io import CTFDeserializer, MinibatchSource, \
    StreamDef, StreamDefs, INFINITELY_REPEAT


def my_print(arr, dec):
    # print an array of float/double with dec decimals
    fmt = "%." + str(dec) + "f"  # like %.4f
    for i in range(0, len(arr)):
        print(fmt % arr[i] + '  ', end='')
    print("\n")


def create_reader(path, is_training, input_dim, output_dim):
    return MinibatchSource(CTFDeserializer(path, StreamDefs(
        features=StreamDef(field='attribs', shape=input_dim,
                           is_sparse=False),
        labels=StreamDef(field='isMatch', shape=output_dim,
                         is_sparse=False)
    )), randomize=is_training,
                           max_sweeps=INFINITELY_REPEAT if is_training else 1)


def save_weights(fn, ihWeights, hBiases,
                 hoWeights, oBiases):
    f = open(fn, 'w')
    for vals in ihWeights:
        for v in vals:
            f.write("%s\n" % v)
    for v in hBiases:
        f.write("%s\n" % v)
    for vals in hoWeights:
        for v in vals:
            f.write("%s\n" % v)
    for v in oBiases:
        f.write("%s\n" % v)
    f.close()


def do_demo():
    # create NN, train, test, predict
    input_dim = 2
    hidden_dim = 30
    output_dim = 1
    train_file = "trainData_cntk.txt"
    test_file = "testData_cntk.txt"
    input_Var = C.ops.input_variable(input_dim, np.float32)
    label_Var = C.ops.input_variable(output_dim, np.float32)
    print("Creating a 2-21 tanh softmax NN for Stop data ")
    with default_options(init=glorot_uniform()):
        hLayer = Dense(hidden_dim, activation=C.ops.tanh,
                       name='hidLayer')(input_Var)
        oLayer = Dense(output_dim, activation=C.ops.softmax,
                       name='outLayer')(hLayer)
    nnet = oLayer
    # ----------------------------------
    print("Creating a cross entropy mini-batch Trainer \n")
    ce = C.cross_entropy_with_softmax(nnet, label_Var)
    pe = C.classification_error(nnet, label_Var)
    fixed_lr = 0.05
    lr_per_batch = learning_rate_schedule(fixed_lr,
                                          UnitType.minibatch)
    learner = C.sgd(nnet.parameters, lr_per_batch)

    trainer = C.Trainer(nnet, (ce, pe), [learner])
    max_iter = 5000  # 5000 maximum training iterations
    batch_size = 100  # mini-batch size
    progress_freq = 1000  # print error every n minibatches
    reader_train = create_reader(train_file, True, input_dim,
                                 output_dim)
    my_input_map = {
        input_Var: reader_train.streams.features,
        label_Var: reader_train.streams.labels
    }
    pp = ProgressPrinter(progress_freq)
    print("Starting training \n")
    for i in range(0, max_iter):
        currBatch = reader_train.next_minibatch(batch_size,
                                                input_map=my_input_map)
        trainer.train_minibatch(currBatch)
        pp.update_with_trainer(trainer)
    print("\nTraining complete")
    # ----------------------------------
    print("\nEvaluating test data \n")
    reader_test = create_reader(test_file, False, input_dim,
                                output_dim)
    numTestItems = 200
    # map the input variables to the test reader's streams
    # (my_input_map points at reader_train's streams)
    test_input_map = {
        input_Var: reader_test.streams.features,
        label_Var: reader_test.streams.labels
    }
    allTest = reader_test.next_minibatch(numTestItems,
                                         input_map=test_input_map)
    test_error = trainer.test_minibatch(allTest)
    print("Classification error on the test items = %f"
          % test_error)
    # ----------------------------------
    # make a prediction for an unseen stop
    unknown = np.array([[10000002000, 24275329.7232828]], dtype=np.float32)
    print("\nPredicting Stop Match for input features: ")
    my_print(unknown[0], 1)  # 1 decimal
    predicted = nnet.eval({input_Var: unknown})
    print("Prediction is: ")
    my_print(predicted[0], 3)  # 3 decimals
    # ---------------------------------
    print("\nTrained model input-to-hidden weights: \n")
    print(hLayer.hidLayer.W.value)
    print("\nTrained model hidden node biases: \n")
    print(hLayer.hidLayer.b.value)
    print("\nTrained model hidden-to-output weights: \n")
    print(oLayer.outLayer.W.value)
    print("\nTrained model output node biases: \n")
    print(oLayer.outLayer.b.value)
    save_weights("weights.txt", hLayer.hidLayer.W.value,
                 hLayer.hidLayer.b.value, oLayer.outLayer.W.value,
                 oLayer.outLayer.b.value)
    return 0  # success


def main():
    print("\nBegin Stop Match \n")
    np.random.seed(0)
    do_demo()  # all the work is done in do_demo()


if __name__ == "__main__":
    main()
# end script

Best Answer

I think the problem is that your output layer uses the softmax() activation function, but you then use cross_entropy_with_softmax() as the loss function. As a result, during training your output is effectively passed through softmax twice.

Use activation=None in the output layer and see how training goes after that.
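
A minimal sketch of that change against the layer definitions in the question (only the output activation changes):

with default_options(init=glorot_uniform()):
    hLayer = Dense(hidden_dim, activation=C.ops.tanh,
                   name='hidLayer')(input_Var)
    # emit raw logits; cross_entropy_with_softmax applies softmax itself
    oLayer = Dense(output_dim, activation=None,
                   name='outLayer')(hLayer)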

In your prediction code you will then obviously have to apply softmax to the evaluation yourself, so something like C.ops.softmax(nnet).eval({input_Var: unknown}). I recall using C.softmax in an example I wrote, but that may just be a namespace difference between the CNTK version I used then and the one you are using.
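
For example, the prediction step in the question would become something like this (a sketch reusing the question's nnet, input_Var, and unknown):

# the trained network now outputs raw logits, so apply softmax
# explicitly at evaluation time to turn them into probabilities
predicted = C.ops.softmax(nnet).eval({input_Var: unknown})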

PS: If you are doing binary classification, you don't really need softmax, since it is meant for multi-class classification problems. It should still work in the binary case, though.

PPS: During training it would be useful to print out the loss after each minibatch, so you can see whether gradient descent is converging. I suspect you will find that it is not with the current model.
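
One way to do that with the question's training loop (a sketch; previous_minibatch_loss_average is the Trainer property holding the last minibatch's average loss):

for i in range(0, max_iter):
    currBatch = reader_train.next_minibatch(batch_size,
                                            input_map=my_input_map)
    trainer.train_minibatch(currBatch)
    # print the loss after every minibatch to watch for convergence
    print("minibatch %d: loss = %.6f"
          % (i, trainer.previous_minibatch_loss_average))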

PPPS: I just noticed that your variable output_dim is set to 1. I don't know what behavior softmax gives you in that case. Normally softmax is applied to a one-hot encoded output, so in the binary case you would have two outputs giving the likelihood of the correct result being zero or one. Likewise, you would obviously need to one-hot encode your ground truth before training. I can't tell you for certain whether your approach will work, but it looks dubious.
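
A sketch of that two-output, one-hot setup, adapted from the question's code (the data file would then carry two label values per line, e.g. |isMatch 0 1 for a match and |isMatch 1 0 for a non-match):

output_dim = 2  # one-hot: [1, 0] = no match, [0, 1] = match
label_Var = C.ops.input_variable(output_dim, np.float32)
oLayer = Dense(output_dim, activation=None,
               name='outLayer')(hLayer)
ce = C.cross_entropy_with_softmax(oLayer, label_Var)
pe = C.classification_error(oLayer, label_Var)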

Regarding machine-learning - CNTK binary classifier, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45603477/
