I want to calculate the mean and standard deviation of each column in Hadoop.

I simply adapted the naive single-pass algorithm to MapReduce.
I tested it on multivariate datasets of 455000x90 and 650000x120, and the speedup I get is far below the number of processors. For standalone vs. pseudo-distributed mode with 2 active cores, on the 455000x90 dataset my speedup is 0.4 = 20 sec / 53 sec.
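
By "naive single-pass" I mean accumulating Σx/N and Σx²/N per column and then taking std = sqrt(E[x²] - mean²), which is what the mapper/combiner/reducer below compute. For reference, here is a plain sequential sketch of the same formulas (the method name and in-memory double[][] layout are only illustrative, not part of my job):

// Sequential sketch of the naive single-pass per-column mean/std.
// Assumes data[sample][column] fits in memory.
static double[][] meanAndStd(double[][] data) {
    int cols = data[0].length;
    long n = data.length;
    double[] mean = new double[cols];
    double[] sqMean = new double[cols];
    for (double[] row : data) {
        for (int c = 0; c < cols; c++) {
            mean[c] += row[c] / n;              // accumulates E[x]
            sqMean[c] += row[c] * row[c] / n;   // accumulates E[x^2]
        }
    }
    double[] std = new double[cols];
    for (int c = 0; c < cols; c++) {
        std[c] = Math.sqrt(sqMean[c] - mean[c] * mean[c]); // population std
    }
    return new double[][] { mean, std };
}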

Why is my program so inefficient? Is it possible to improve it?

Mapper:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Mapper;

public class CalculateMeanAndSTDEVMapper extends
       Mapper <LongWritable,
               DoubleArrayWritable,
               IntWritable,
               DoubleArrayWritable> {

    private int dataDimFrom;
    private int dataDimTo;
    private long samplesCount;
    private int universeSize;

@Override
protected void setup(Context context) throws IOException {
    Configuration conf = context.getConfiguration();
    dataDimFrom = conf.getInt("dataDimFrom", 0);
    dataDimTo = conf.getInt("dataDimTo", 0);
    samplesCount = conf.getLong("samplesCount", 0);
    universeSize = dataDimTo - dataDimFrom + 1;
}

@Override
public void map(
        LongWritable key,
        DoubleArrayWritable array,
        Context context) throws IOException, InterruptedException {
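    // Emit one partial-sums array per input row, all under the constant key 1:
    // indices [0, universeSize) hold x/N (for the mean),
    // indices [universeSize, 2*universeSize) hold x*x/N (for E[x^2]).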
    DoubleWritable[] outArray = new DoubleWritable[universeSize*2];
    for (int c = 0; c < universeSize; c++) {
        outArray[c] = new DoubleWritable(
                         array.get(c+dataDimFrom).get() / samplesCount);
    }
    for (int c = universeSize; c < universeSize*2; c++) {
        double val = array.get(c-universeSize+dataDimFrom).get();
        outArray[c] = new DoubleWritable((val*val) / samplesCount);
    }
    context.write(new IntWritable(1), new DoubleArrayWritable(outArray));
}

}

Combiner:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Reducer;

public class CalculateMeanAndSTDEVCombiner extends
       Reducer <IntWritable,
                DoubleArrayWritable,
                IntWritable,
                DoubleArrayWritable> {

   private int dataDimFrom;
   private int dataDimTo;
   private int universeSize;

@Override
protected void setup(Context context) throws IOException {
    Configuration conf = context.getConfiguration();
    dataDimFrom = conf.getInt("dataDimFrom", 0);
    dataDimTo = conf.getInt("dataDimTo", 0);
    universeSize = dataDimTo - dataDimFrom + 1;
}

@Override
public void reduce(
        IntWritable column,
        Iterable<DoubleArrayWritable> partialSums,
        Context context) throws IOException, InterruptedException {
    DoubleWritable[] outArray = new DoubleWritable[universeSize*2];
    boolean isFirst = true;
    for (DoubleArrayWritable partialSum : partialSums) {
        for (int i = 0; i < universeSize*2; i++) {
            if (!isFirst) {
                outArray[i].set(outArray[i].get()
                                  + partialSum.get(i).get());
            } else {
                outArray[i]
                    = new DoubleWritable(partialSum.get(i).get());
            }
        }
        isFirst = false;
    }
    context.write(column, new DoubleArrayWritable(outArray));
}

}

Reducer:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Reducer;

public class CalculateMeanAndSTDEVReducer extends
       Reducer <IntWritable,
                DoubleArrayWritable,
                IntWritable,
                DoubleArrayWritable> {

   private int dataDimFrom;
   private int dataDimTo;
   private int universeSize;

@Override
protected void setup(Context context) throws IOException {
    Configuration conf = context.getConfiguration();
    dataDimFrom = conf.getInt("dataDimFrom", 0);
    dataDimTo = conf.getInt("dataDimTo", 0);
    universeSize = dataDimTo - dataDimFrom + 1;
}

@Override
public void reduce(
        IntWritable column,
        Iterable<DoubleArrayWritable> partialSums,
        Context context) throws IOException, InterruptedException {
    DoubleWritable[] outArray = new DoubleWritable[universeSize*2];
    boolean isFirst = true;
    for (DoubleArrayWritable partialSum : partialSums) {
        for (int i = 0; i < universeSize*2; i++) { // accumulate both halves: sum(x)/N and sum(x^2)/N
            if (!isFirst) {
                outArray[i].set(outArray[i].get() + partialSum.get(i).get());
            } else {
                outArray[i] = new DoubleWritable(partialSum.get(i).get());
            }
        }
        isFirst = false;
    }
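    // The second half of outArray now holds E[x^2] per column;
    // turn it into the population standard deviation: sqrt(E[x^2] - mean^2).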
    for (int i = universeSize; i < universeSize * 2; i++) {
        double mean = outArray[i-universeSize].get();
        outArray[i].set(Math.sqrt(outArray[i].get() - mean*mean));
    }
    context.write(column, new DoubleArrayWritable(outArray));
}

}

where DoubleArrayWritable is a simple class extending ArrayWritable:

import org.apache.hadoop.io.ArrayWritable;
import org.apache.hadoop.io.DoubleWritable;

public class DoubleArrayWritable extends ArrayWritable {

public DoubleArrayWritable() {
    super(DoubleWritable.class);
}

public DoubleArrayWritable(DoubleWritable[] values) {
    super(DoubleWritable.class, values);
}

public DoubleWritable get(int idx) {
    return (DoubleWritable) get()[idx];
}

}
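
For completeness, a driver that wires these classes together might look roughly like the sketch below. Everything in it is an assumption rather than my actual driver: the SequenceFile input/output formats, the paths taken from args, and the concrete dataDimFrom / dataDimTo / samplesCount values are only illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class CalculateMeanAndSTDEVDriver {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical values: columns 0..89, 455000 samples.
        conf.setInt("dataDimFrom", 0);
        conf.setInt("dataDimTo", 89);
        conf.setLong("samplesCount", 455000L);

        Job job = Job.getInstance(conf, "mean-and-stdev");
        job.setJarByClass(CalculateMeanAndSTDEVDriver.class);

        job.setMapperClass(CalculateMeanAndSTDEVMapper.class);
        job.setCombinerClass(CalculateMeanAndSTDEVCombiner.class);
        job.setReducerClass(CalculateMeanAndSTDEVReducer.class);

        job.setMapOutputKeyClass(IntWritable.class);
        job.setMapOutputValueClass(DoubleArrayWritable.class);
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(DoubleArrayWritable.class);

        // Assumes the samples are stored as a SequenceFile of
        // <LongWritable, DoubleArrayWritable> records.
        job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}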

Best answer

I asked a question about another job that showed the same problem in the same environment. David Gruzman guessed that the problem lies in the difference in job start-up time (local vs. cluster). He suggested the optimal data size needed to get good speedup in this environment (5 GB). I tried it, and it turned out to be true.

Why job with mappers only is so slow in real cluster?
