I am using Mahout to run k-means clustering, but I have a problem identifying the data entries in the clusters. For example, I have 100 data entries:
id data
0 0.1 0.2 0.3 0.4
1 0.2 0.3 0.4 0.5
... ...
100 0.2 0.4 0.4 0.5
After clustering, I need to get the IDs back from the clustering result to see which point belongs to which cluster, but there seems to be no way to keep the IDs.
In the official Mahout example that clusters the synthetic control data, only the data is fed into Mahout, without IDs, for example:
28.7812 34.4632 31.3381 31.2834 28.9207 ...
...
24.8923 25.741 27.5532 32.8217 27.8789 ...
and the clustering result only contains a cluster-id and the point values:
VL-539{n=38 c=[29.950, 30.459, ...
Weight: Point:
1.0: [28.974, 29.026, 31.404, 27.894, 35.985...
2.0: [24.214, 33.150, 31.521, 31.986, 29.064
but there is no point-id. So, does anyone have an idea how to keep the point-ids when clustering with Mahout? Thanks a lot!
Best answer
I use NamedVectors for this.
As you know, before doing any clustering, you have to vectorize your data.
This means you have to transform your data into Mahout vectors, because that
is the kind of data the clustering algorithms work with.
The vectorization process depends on the nature of your data, i.e. vectorizing
text is not the same as vectorizing numerical values.
Your data seems easy to vectorize, since each entry only has an ID and 4 numerical values.
You could write a Hadoop job that takes your input data, for example as a CSV file,
and outputs a SequenceFile with your data already vectorized.
Then you apply the Mahout clustering algorithms to this input, and the ID of each
vector (its vector name) will be kept in the clustering results.
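For context, once the vectorized SequenceFile exists, the clustering step itself can be launched from the command line. This is only a hedged sketch assuming the Mahout 0.x CLI; all paths are placeholders, and you should check the flags against your version's `--help`:

```shell
# Run k-means on the vectorized input (paths are placeholders).
# -k samples 5 random points as the initial centroids into the -c directory;
# -cl (--clustering) writes the clusteredPoints output, which is where the
# NamedVector names end up.
bin/mahout kmeans \
  -i vectors/part-r-00000 \
  -c initial-clusters \
  -o kmeans-output \
  -dm org.apache.mahout.common.distance.EuclideanDistanceMeasure \
  -k 5 -x 10 -cl -ow
```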
An example job to vectorize the data could be implemented with the following classes:
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.apache.mahout.math.VectorWritable;

public class DenseVectorizationDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.printf("Usage: %s [generic options] <input> <output>\n",
                    getClass().getSimpleName());
            ToolRunner.printGenericCommandUsage(System.err);
            return -1;
        }
        Job job = new Job(getConf(), "Create Dense Vectors from CSV input");
        job.setJarByClass(DenseVectorizationDriver.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setMapperClass(DenseVectorizationMapper.class);
        job.setReducerClass(DenseVectorizationReducer.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(VectorWritable.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        return job.waitForCompletion(true) ? 0 : 1;
    }
}
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.mahout.math.DenseVector;
import org.apache.mahout.math.NamedVector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.VectorWritable;

public class DenseVectorizationMapper extends Mapper<LongWritable, Text, LongWritable, VectorWritable> {
    /*
     * This mapper takes its input from a CSV file whose fields are separated by TAB
     * and emits the same key it receives (not used afterwards) and a NamedVector as value.
     * The "name" of the NamedVector is the ID of each row.
     */
    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        String[] lineParts = line.split("\t", -1);
        String id = lineParts[0];
        // You should add some checks here to make sure this row is well formed.
        // The vector holds the numeric fields only, so it is one shorter than the row.
        Vector vector = new DenseVector(lineParts.length - 1);
        for (int i = 1; i < lineParts.length; i++) {
            // Shift by one: row field i goes to vector index i - 1.
            vector.set(i - 1, Double.parseDouble(lineParts[i]));
        }
        vector = new NamedVector(vector, id);
        context.write(key, new VectorWritable(vector));
    }
}
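The TAB-splitting logic in the mapper can be exercised on its own, outside Hadoop. A minimal standalone sketch (the class and method names here are made up for illustration, they are not part of Mahout):

```java
// Standalone mirror of the mapper's parsing step: first field is the row ID,
// the remaining fields are the numeric values of the vector.
public class RowParser {

    // Returns the ID, i.e. the first TAB-separated field.
    public static String parseId(String line) {
        return line.split("\t", -1)[0];
    }

    // Returns the numeric fields; note the index shift by one,
    // exactly as in the mapper's loop.
    public static double[] parseValues(String line) {
        String[] parts = line.split("\t", -1);
        double[] values = new double[parts.length - 1];
        for (int i = 1; i < parts.length; i++) {
            values[i - 1] = Double.parseDouble(parts[i]);
        }
        return values;
    }

    public static void main(String[] args) {
        String row = "0\t0.1\t0.2\t0.3\t0.4";
        System.out.println(parseId(row));            // prints "0"
        System.out.println(parseValues(row).length); // prints 4
    }
}
```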
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.mahout.math.VectorWritable;

public class DenseVectorizationReducer extends Reducer<LongWritable, VectorWritable, LongWritable, VectorWritable> {
    /*
     * This reducer simply writes the output without doing any computation.
     * It might be better to define this Hadoop job without a reduce phase.
     */
    @Override
    public void reduce(LongWritable key, Iterable<VectorWritable> values, Context context)
            throws IOException, InterruptedException {
        VectorWritable writeValue = values.iterator().next();
        context.write(key, writeValue);
    }
}
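After the clustering has run (with the clustering option enabled, so Mahout writes out the clustered points), the IDs can be recovered by reading the clusteredPoints SequenceFile and casting each point back to a NamedVector. The following is only a rough sketch assuming a Mahout 0.x layout; the class name `ClusteredPointsReader` is made up, the part-file path is a placeholder, and the package of `WeightedVectorWritable` varies between Mahout versions:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.mahout.clustering.WeightedVectorWritable;
import org.apache.mahout.math.NamedVector;

public class ClusteredPointsReader {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // One part file of the clustered points; adjust to your output directory.
        Path path = new Path("kmeans-output/clusteredPoints/part-m-00000");
        SequenceFile.Reader reader =
                new SequenceFile.Reader(FileSystem.get(conf), path, conf);
        IntWritable clusterId = new IntWritable();                    // key: cluster id
        WeightedVectorWritable point = new WeightedVectorWritable();  // value: the point
        while (reader.next(clusterId, point)) {
            // The vectors we wrote were NamedVectors, so the name (row ID) survives.
            NamedVector nv = (NamedVector) point.getVector();
            System.out.println(nv.getName() + " belongs to cluster " + clusterId.get());
        }
        reader.close();
    }
}
```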
Regarding apache - how to maintain data entry IDs in Mahout k-means clustering, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/8572478/