Why is the mean absolute percentage error (MAPE) extremely high?

Problem description

I have obtained the code below from machinelearningmastery.

I modified the model.compile() call to add the 'mape' metric, in order to track the Mean Absolute Percentage Error. After running the code, the MAPE at every epoch comes out enormous for a percentage metric. Am I missing something obvious, or is the output right? The output looks like:

Epoch 91/100
0s - loss: 0.0103 - mean_absolute_percentage_error: 1764997.4502
Epoch 92/100
0s - loss: 0.0103 - mean_absolute_percentage_error: 1765653.4924
Epoch 93/100
0s - loss: 0.0102 - mean_absolute_percentage_error: 1766505.5107
Epoch 94/100
0s - loss: 0.0102 - mean_absolute_percentage_error: 1766814.5450
Epoch 95/100
0s - loss: 0.0102 - mean_absolute_percentage_error: 1767510.8146
Epoch 96/100
0s - loss: 0.0101 - mean_absolute_percentage_error: 1767686.9054
Epoch 97/100
0s - loss: 0.0101 - mean_absolute_percentage_error: 1767076.2169
Epoch 98/100
0s - loss: 0.0100 - mean_absolute_percentage_error: 1767014.8481
Epoch 99/100
0s - loss: 0.0100 - mean_absolute_percentage_error: 1766592.8125
Epoch 100/100
0s - loss: 0.0100 - mean_absolute_percentage_error: 1766348.6332
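
For context, the textbook definition of MAPE is

\[
\mathrm{MAPE} = \frac{100}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|
\]

so values in the millions of percent can only arise when some |y_i| in the denominator is vanishingly small.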

The code I ran (omitting the prediction part) is as follows:

import numpy
from numpy import array
import matplotlib.pyplot as plt
from pandas import read_csv
import math
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back=1):
        dataX, dataY = [], []
        for i in range(len(dataset)-look_back-1):
                a = dataset[i:(i+look_back), 0]
                dataX.append(a)
                dataY.append(dataset[i + look_back, 0])
        return numpy.array(dataX), numpy.array(dataY)
# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset
dataframe = read_csv('airlinepassdata.csv', usecols=[1], engine='python', skipfooter=3)
dataset = dataframe.values

#dataset = array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
dataset = dataset.astype('float32')
# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)
# split into train and test sets
train_size = int(len(dataset) * 0.67)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]
# reshape into X=t and Y=t+1
look_back = 1
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
# reshape input to be [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = numpy.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
# create and fit the LSTM network
model = Sequential()
model.add(LSTM(4, input_shape=(1, look_back)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam', metrics=['mape'])
model.fit(trainX, trainY, nb_epoch=100, batch_size=50, verbose=2)  # nb_epoch is the Keras 1.x argument name; Keras 2+ calls it epochs

Solution

I solved this by setting the fuzz factor epsilon to one with keras.backend.set_epsilon(1) before calling compile().
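
A minimal sketch of that workaround (model is the Sequential model built above; note that set_epsilon changes the backend's fuzz factor globally, for every op that consults K.epsilon(), not just this one metric):

from keras import backend as K

# Raise the fuzz factor from its default of 1e-7 to 1, so the MAPE
# denominator K.clip(K.abs(y_true), K.epsilon(), None) is clipped at 1.
# This must run before compile() for the metric to pick it up.
K.set_epsilon(1)
model.compile(loss='mse', optimizer='adam', metrics=['mape'])

One side effect worth noting: with targets scaled into [0, 1], every |y_true| is at most 1, so clipping the denominator at 1 effectively turns the metric into 100 × mean absolute error rather than a true percentage.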

The hint was in the Keras source code:

def mean_absolute_percentage_error(y_true, y_pred):
    diff = K.abs((y_true - y_pred) / K.clip(K.abs(y_true),
                                            K.epsilon(),
                                            None))
    return 100. * K.mean(diff, axis=-1)

Meaning that the K.abs(y_true) term in the MAPE calculation on the training set drops below the fuzz default (1e-7), so the clip substitutes that default as the denominator, hence the huge numbers. The reason is the preprocessing above: MinMaxScaler(feature_range=(0, 1)) maps the minimum of the series to exactly 0, so at least one target value is 0, and dividing its absolute error by the clipped 1e-7 inflates the mean into the millions.
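
To make the failure mode concrete, here is a small NumPy reproduction of the clipped-denominator formula with made-up values (illustrative only; the numbers are not from the question's dataset):

import numpy as np

# Targets as MinMaxScaler(feature_range=(0, 1)) would produce them:
# the series minimum maps to exactly 0.
y_true = np.array([0.0, 0.02, 0.05], dtype='float32')
y_pred = np.array([0.01, 0.03, 0.04], dtype='float32')

eps = 1e-7  # Keras' default fuzz factor
diff = np.abs((y_true - y_pred) / np.clip(np.abs(y_true), eps, None))
print(100.0 * diff.mean())  # ~3333357 -- the first term alone is 0.01 / 1e-7 = 1e5

A single zero (or near-zero) target is enough to dominate the mean, which is why MAPE is a poor choice for data normalized into a range that includes 0.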
