Hi, I'm trying to run the example from chapter 7 of Mahout in Action (k-means clustering). Can someone guide me on how to run this example with Mahout (0.7) on a Hadoop cluster (single-node CDH-4.2.1)?
These are the steps I followed:
hadoop-common-2.0.0-cdh4.2.1.jar
hadoop-hdfs-2.0.0-cdh4.2.1.jar
hadoop-mapreduce-client-core-2.0.0-cdh4.2.1.jar
mahout-core-0.7-cdh4.3.0.jar
mahout-core-0.7-cdh4.3.0-job.jar
mahout-math-0.7-cdh4.3.0.jar
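For reference, I launch the job with the hadoop jar command, roughly like this (kmeans-example.jar and SimpleKMeansClustering are just placeholders for my own jar and driver class):
hadoop jar kmeans-example.jar SimpleKMeansClustering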
This gives me the following error:
Exception in thread "main" java.lang.NoClassDefFoundError: FileSystem
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2427)
at java.lang.Class.getMethod0(Class.java:2670)
at java.lang.Class.getMethod(Class.java:1603)
at org.apache.hadoop.util.RunJar.main(RunJar.java:202)
Caused by: java.lang.ClassNotFoundException: FileSystem
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
... 5 more
Can anyone help me figure out what I'm missing, or whether I'm running it the wrong way?
Secondly, I'd also like to know how to run k-means clustering on a CSV file.
Thanks in advance :)
Best answer
The code given is misleading. This code:
  Cluster cluster = new Cluster(vec, i, new EuclideanDistanceMeasure());
  writer.append(new Text(cluster.getIdentifier()), cluster);
}
writer.close();
KMeansDriver.run(conf, new Path("testdata/points"), new Path("testdata/clusters"),
    new Path("output"), new EuclideanDistanceMeasure(), 0.001, 10,
    true, false);
SequenceFile.Reader reader = new SequenceFile.Reader(fs,
    new Path("output/" + Cluster.CLUSTERED_POINTS_DIR
        + "/part-m-00000"), conf);
should be replaced with:
  Kluster cluster = new Kluster(vec, i, new EuclideanDistanceMeasure());
  writer.append(new Text(cluster.getIdentifier()), cluster);
}
writer.close();
KMeansDriver.run(conf, new Path("testdata/points"), new Path("testdata/clusters"),
    new Path("output"), new EuclideanDistanceMeasure(), 0.001, 10,
    true, false);
SequenceFile.Reader reader = new SequenceFile.Reader(fs,
    new Path("output/" + Kluster.CLUSTERED_POINTS_DIR
        + "/part-m-00000"), conf);
Cluster is an interface, while Kluster is a class. Check the Mahout API Javadoc for more details.
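If it helps, these are the fully qualified names in Mahout 0.7, so the imports the corrected snippet relies on would look roughly like this (assuming the standard Hadoop and Mahout jars listed above are on the classpath):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.mahout.clustering.kmeans.KMeansDriver;
import org.apache.mahout.clustering.kmeans.Kluster;
import org.apache.mahout.common.distance.EuclideanDistanceMeasure;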
To run k-means on a CSV file, you first have to create a SequenceFile to pass as a parameter to KMeansDriver. The following code reads each line of the CSV file "points.csv", converts it into a vector, and writes it to the SequenceFile "points.seq":
try (
    BufferedReader reader = new BufferedReader(new FileReader("testdata2/points.csv"));
    SequenceFile.Writer writer = new SequenceFile.Writer(fs, conf,
        new Path("testdata2/points.seq"), LongWritable.class, VectorWritable.class)
) {
    String line;
    long counter = 0;
    while ((line = reader.readLine()) != null) {
        // Each CSV line becomes one vector: split on commas and parse the coordinates
        String[] c = line.split(",");
        if (c.length > 1) {
            double[] d = new double[c.length];
            for (int i = 0; i < c.length; i++)
                d[i] = Double.parseDouble(c[i]);
            Vector vec = new RandomAccessSparseVector(c.length);
            vec.assign(d);
            // Wrap the vector and append it to the sequence file, keyed by its line number
            VectorWritable writable = new VectorWritable();
            writable.set(vec);
            writer.append(new LongWritable(counter++), writable);
        }
    }
}
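The snippet above assumes that fs and conf have already been created and that the Hadoop and Mahout math classes are imported; a minimal sketch of that setup, under those assumptions, would be:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.LongWritable;
import org.apache.mahout.math.RandomAccessSparseVector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.VectorWritable;

// Hadoop configuration and file system handle used by the reader and writer above
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
Once points.seq has been written, you can point the KMeansDriver.run call shown earlier at it (for example, use testdata2/points.seq as the input path instead of testdata/points).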
Hope this helps!!
This is based on a similar question found on Stack Overflow (hadoop - How to run k-means clustering from Mahout in Action?): https://stackoverflow.com/questions/17008795/