I'm using Hadoop MapReduce and have a question.
Currently, my mapper's input KV type is LongWritable, LongWritable, and its output KV type is also LongWritable, LongWritable.
The InputFileFormat is SequenceFileInputFormat.
Basically, what I want to do is convert a txt file into SequenceFile format so that I can use it in my mapper.
The input file looks like this:
1\t2 (key = 1, value = 2)
2\t3 (key = 2, value = 3)
and so on...
I looked at the thread How to convert .txt file to Hadoop's sequence file format, but as far as I can tell, TextInputFormat only supports Key = LongWritable and Value = Text.
Is there any way to read a txt file and create a sequence file with KV = LongWritable, LongWritable?
Best answer
Sure, it's basically the same as what I said in the other thread you linked. But you have to implement your own Mapper.
Just to get you started quickly:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class LongLongMapper extends
    Mapper<LongWritable, Text, LongWritable, LongWritable> {

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // assuming that your line contains key and value separated by \t
    String[] split = value.toString().split("\t");
    context.write(new LongWritable(Long.parseLong(split[0])),
        new LongWritable(Long.parseLong(split[1])));
  }

  public static void main(String[] args) throws IOException,
      InterruptedException, ClassNotFoundException {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "Convert Text");
    job.setJarByClass(LongLongMapper.class);
    // use our own mapper, not the identity Mapper
    job.setMapperClass(LongLongMapper.class);
    // map-only job; increase if you need sorting or a special number of files
    job.setNumReduceTasks(0);
    job.setOutputKeyClass(LongWritable.class);
    job.setOutputValueClass(LongWritable.class);
    job.setOutputFormatClass(SequenceFileOutputFormat.class);
    job.setInputFormatClass(TextInputFormat.class);
    FileInputFormat.addInputPath(job, new Path("/input"));
    FileOutputFormat.setOutputPath(job, new Path("/output"));
    // submit and wait for completion
    job.waitForCompletion(true);
  }
}
Each call to the map function receives one line of input as its value, so we just split it on the delimiter (a tab) and parse each part into a long.
That's it.
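For what it's worth, the split-and-parse step inside the map method can be tried out in plain Java with no Hadoop dependencies at all (the class name and sample line below are just illustrative):

```java
public class SplitParseDemo {
    public static void main(String[] args) {
        // a sample input line in the same "key\tvalue" layout the mapper expects
        String line = "1\t2";
        String[] split = line.split("\t");
        long k = Long.parseLong(split[0]);
        long v = Long.parseLong(split[1]);
        System.out.println(k + " -> " + v);
    }
}
```

Inside the real mapper these two longs are simply wrapped in LongWritable instances before being written to the context.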
About hadoop - Creating a sequence file format for Hadoop MR: a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/12242979/