I am getting an error while setting up a job for a custom InputFormat.

Here is my code:

package com.nline_delimiter;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;



public class NL_driver {

public static void main(String [] args) throws IOException, InterruptedException, ClassNotFoundException
{
    Configuration conf=new Configuration(true);

    Job job_run =new Job(conf);

    job_run.setJobName("nline input format each line seperate wth delimiter");

    job_run.setJarByClass(NL_driver.class);

    job_run.setMapperClass(NL_mapper.class);
    job_run.setReducerClass(NL_reducer.class);
    job_run.setInputFormatClass(NL_inputformatter.class);


    job_run.setMapOutputKeyClass(Text.class);
    job_run.setMapOutputValueClass(IntWritable.class);
    job_run.setOutputKeyClass(Text.class);
    job_run.setOutputValueClass(IntWritable.class);


    FileInputFormat.setInputPaths(job_run,new Path("/home/hduser/input_formatter_usage.txt"));
    FileOutputFormat.setOutputPath(job_run, new Path("/home/hduser/input_formatter_usage"));

    job_run.waitForCompletion(true);
}
}

The line

job_run.setInputFormatClass(NL_inputformatter.class)

shows an error.

NL_inputformatter is a custom InputFormat class that extends FileInputFormat.

What do I need to import for setInputFormatClass? The default error checking in Eclipse asks me to change setInputFormatClass to setOutputFormatClass, but it does not suggest any import.

The source code for NL_inputformatter is below.
package com.nline_delimiter;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;

public class NL_inputformatter extends FileInputFormat<Text, IntWritable>{

@Override
public RecordReader<Text, IntWritable> getRecordReader(InputSplit input,
        JobConf job_run, Reporter reporter) throws IOException {
    // TODO Auto-generated method stub
    System.out.println("I am Inside the NL_inputformatter class");
    reporter.setStatus(input.toString());
    return new NL_record_reader(job_run, (FileSplit)input);


}

}

Your help would be appreciated.

Best Answer

This is because you are mixing the old FileInputFormat API with the new one: the driver uses the new org.apache.hadoop.mapreduce API, but NL_inputformatter extends the old org.apache.hadoop.mapred version. You have to change the import and the implementation, replacing

import org.apache.hadoop.mapred.FileInputFormat;

with

import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

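For reference, here is a minimal sketch of what NL_inputformatter could look like once ported to the new API. It is only an outline: in the new API the abstract method is createRecordReader instead of getRecordReader, and the no-argument NL_record_reader constructor shown below is hypothetical, since your record reader also has to be rewritten against org.apache.hadoop.mapreduce.RecordReader (the split and context are handed to it later through initialize()).

package com.nline_delimiter;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class NL_inputformatter extends FileInputFormat<Text, IntWritable> {

    @Override
    public RecordReader<Text, IntWritable> createRecordReader(InputSplit split,
            TaskAttemptContext context) throws IOException, InterruptedException {
        // The framework calls initialize(split, context) on the returned reader,
        // so the reader reads the split itself instead of taking a JobConf here.
        return new NL_record_reader(); // hypothetical constructor for the ported reader
    }
}

Once NL_inputformatter extends the new-API base class, job_run.setInputFormatClass(NL_inputformatter.class) compiles in your driver as written, because Job.setInputFormatClass expects an InputFormat from the org.apache.hadoop.mapreduce package.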
About hadoop - error when setting job.setInputFormatClass: a similar question was found on Stack Overflow: https://stackoverflow.com/questions/23180962/
