This article walks through how to deal with a Hadoop Java class that cannot be found (ClassNotFoundException). It should be a useful reference for anyone hitting the same problem.
Problem description
Exception in thread "main" java.lang.ClassNotFoundException: WordCount. So many answers relate to this issue, and it seems I am definitely missing a small point again, one that took me hours to figure out.
I will try to be as clear as possible about the paths, the code itself, and the other possible solutions I tried that did not work.
I am fairly sure I configured Hadoop correctly, as everything was working up until this last stage.
But I am still posting the details:
- Environment variables and paths
#HADOOP VARIABLES START
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export HADOOP_INSTALL=/usr/local/hadoop
export HADOOP_CLASSPATH=/usr/lib/jvm/java-8-oracle/lib/tools.jar
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
#export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
#export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$JAVA_HOME/bin
#HADOOP VARIABLES END
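For a quick sanity check, the variables can be verified in a fresh shell (assuming the exports live in ~/.bashrc, which is an assumption here):
source ~/.bashrc
echo $HADOOP_CLASSPATH    # should print /usr/lib/jvm/java-8-oracle/lib/tools.jar
hadoop version            # confirms the hadoop launcher on PATH resolves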
The class itself:
package com.cloud.hw03;

/**
 * Hello world!
 *
 */
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context
                        ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context
                           ) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "wordcount");
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setJarByClass(WordCount.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
What I did for compiling and running:
Created the jar file in the same folder as my WordCount Maven project (eclipse-workspace):
$ hadoop com.sun.tools.javac.Main WordCount.java
$ jar cf WordCount.jar WordCount*.class
Running the program (I had already created a directory and copied the input and output files in HDFS):
hadoop jar WordCount.jar WordCount /input/inputfile01 /input/outputfile01
The result is: Exception in thread "main" java.lang.ClassNotFoundException: WordCount
Since I am in the same directory as WordCount.class and created my jar file in that same directory, I am not specifying the full path to WordCount; I am running the second command above from that directory.
I already added job.setJarByClass(WordCount.class); to the code, so that did not help. I would appreciate your spending time on an answer!
I am sure I am doing something unexpected again and have not been able to figure it out in four hours.
Solution
The WordCount example code on the Hadoop site does not use a package.
Since you do have one, you would run the fully qualified class, the exact same way as for a regular Java application:
hadoop jar WordCount.jar com.cloud.hw03.WordCount
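With the input and output paths from the question, the full command would be:
hadoop jar WordCount.jar com.cloud.hw03.WordCount /input/inputfile01 /input/outputfile01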
Also, if you actually have a Maven project, then hadoop com.sun.tools.javac.Main is not correct. You would actually use Maven to compile and create the JAR with all the classes, not only the WordCount* files.
For example, from the folder with the pom.xml:
mvn package
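For mvn package to compile this code, the pom.xml needs the Hadoop client libraries on the compile classpath; a minimal dependency sketch (the version is an assumption and should match the installed Hadoop):

<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <!-- assumption: use the version matching your cluster -->
    <version>2.7.3</version>
    <scope>provided</scope>
  </dependency>
</dependencies>

The packaged JAR then lands under target/; its name comes from the artifactId and version in the pom.xml (for example target/hw03-0.0.1-SNAPSHOT.jar, a hypothetical name), and that is the file to pass to hadoop jar.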
Otherwise, you need to be in the parent directory of your package structure:
hadoop com.sun.tools.javac.Main ./com/cloud/hw03/WordCount.java
and run the jar cf command from that directory as well.
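Putting the non-Maven route together, a sketch of the whole sequence, assuming the shell starts in the package root, i.e. the directory containing com/ (the project path below is hypothetical):

cd ~/eclipse-workspace/hw03/src/main/java    # hypothetical package root containing com/
hadoop com.sun.tools.javac.Main ./com/cloud/hw03/WordCount.java
jar cf WordCount.jar com/cloud/hw03/*.class
jar tf WordCount.jar    # entries should read com/cloud/hw03/WordCount.class and the inner classes
hadoop jar WordCount.jar com.cloud.hw03.WordCount /input/inputfile01 /input/outputfile01

The jar tf check exposes the original mistake directly: a class loader resolves com.cloud.hw03.WordCount to the entry com/cloud/hw03/WordCount.class, so class files jarred from inside the package folder end up at the jar root and can never match the fully qualified name.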
That concludes this article on the Hadoop Java class that could not be found. We hope the answer above helps.