I want to run a series of MapReduce jobs, so the simplest solution seemed to be JobControl. Say I have two jobs, job1 and job2, and I want job2 to run after job1. Well, I ran into some problems. After hours of debugging, I narrowed my code down to the following lines:
JobConf jobConf1 = new JobConf();
JobConf jobConf2 = new JobConf();
System.out.println("*** Point 1");
Job job1 = new Job(jobConf1);
System.out.println("*** Point 2");
Job job2 = new Job(jobConf2);
System.out.println("*** Point 3");
When I run the code, I keep getting this output:
*** Point 1
10/12/06 17:19:30 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
*** Point 2
10/12/06 17:19:30 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
*** Point 3
I guess my problem has something to do with the "Cannot initialize JVM Metrics ..." line. What is that? And how can I instantiate more than one job, so that I can pass them to JobControl?
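For reference, the usual way to chain two dependent jobs with the old-API JobControl (the classes under org.apache.hadoop.mapred.jobcontrol, which match the JobConf-based code above) looks roughly like the sketch below; mapper/reducer/path configuration is omitted, so this is a skeleton rather than a drop-in program:

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.jobcontrol.Job;
import org.apache.hadoop.mapred.jobcontrol.JobControl;

public class ChainSketch {
    public static void main(String[] args) throws Exception {
        JobConf conf1 = new JobConf();  // configure mapper/reducer/paths for pass 1 here
        JobConf conf2 = new JobConf();  // configure pass 2 here

        Job job1 = new Job(conf1);
        Job job2 = new Job(conf2);
        job2.addDependingJob(job1);     // job2 will not start until job1 succeeds

        JobControl control = new JobControl("chain");
        control.addJob(job1);
        control.addJob(job2);

        // JobControl implements Runnable; run it on its own thread and poll.
        Thread runner = new Thread(control);
        runner.start();
        while (!control.allFinished()) {
            Thread.sleep(500);
        }
        control.stop();
    }
}
```

Note that this Job is org.apache.hadoop.mapred.jobcontrol.Job (a wrapper that tracks dependencies), not org.apache.hadoop.mapreduce.Job as in the snippet above; mixing the two APIs is a common source of confusion.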
When I add job1.waitForCompletion(true) before initializing the second job, it gives me this error:
10/12/07 11:28:21 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/home/workspace/WikipediaSearch/__TEMP1
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:224)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:241)
at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
at ch.ethz.nis.query.HadoopQuery.run(HadoopQuery.java:353)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at ch.ethz.nis.query.HadoopQuery.main(HadoopQuery.java:308)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
__TEMP1 is the output folder of the first job, which I want to use as the input of the second. Even though I have this waitForCompletion line in my code, it still complains that the path does not exist.
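The sequential pattern itself is sound. A minimal sketch of it using the new-API classes that appear in the stack trace (the path name and job setup here are made up, and mapper/reducer configuration is omitted):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TwoPassSketch {
    public static void main(String[] args) throws Exception {
        Path tmp = new Path("temp1");   // intermediate directory, hypothetical name

        Job job1 = new Job(new Configuration(), "pass-1");
        // ... set mapper, reducer, input path for job1 here ...
        FileOutputFormat.setOutputPath(job1, tmp);
        if (!job1.waitForCompletion(true)) {
            System.exit(1);             // do not start job2 if job1 failed
        }

        Job job2 = new Job(new Configuration(), "pass-2");
        // ... set mapper, reducer, output path for job2 here ...
        FileInputFormat.addInputPath(job2, tmp);  // job1's output feeds job2
        System.exit(job2.waitForCompletion(true) ? 0 : 1);
    }
}
```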
Best answer
Wowww, after two days of debugging, it turns out the problem was Hadoop's internal directory-naming rules. Apparently, for a MapReduce input or output directory, you cannot choose a name that starts with an underscore "_". So lame!
The warnings and errors were not helpful at all.
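For context: FileInputFormat treats paths whose names start with "_" or "." as hidden and silently skips them when listing inputs (this is how markers like _logs and _SUCCESS are excluded), which is the effect observed here with __TEMP1. The rule is easy to state in plain Java (the method name below is ours, not Hadoop's):

```java
public class HiddenPathRule {
    // Mirrors the filter FileInputFormat applies when listing input paths:
    // anything whose name starts with '_' or '.' is treated as hidden.
    static boolean visibleToFileInputFormat(String name) {
        return !name.startsWith("_") && !name.startsWith(".");
    }

    public static void main(String[] args) {
        System.out.println(visibleToFileInputFormat("__TEMP1"));   // false
        System.out.println(visibleToFileInputFormat("temp1"));     // true
        System.out.println(visibleToFileInputFormat("_SUCCESS"));  // false
    }
}
```

So renaming the intermediate directory from __TEMP1 to something like temp1 is enough to make the second job see it.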