This article explains how to handle the error "java.io.IOException: wrong value class: class org.apache.hadoop.io.Text is not class Myclass". It should be a useful reference for anyone hitting the same problem.

Problem description


I have my mapper and reducer as follows, but I am getting a strange exception and can't figure out why it is thrown.

public static class MyMapper implements Mapper<LongWritable, Text, Text, Info> {

    @Override
    public void map(LongWritable key, Text value,
        OutputCollector<Text, Info> output, Reporter reporter)
        throws IOException {
        Text text = new Text("someText");
        // process
        output.collect(text, infoObject);
    }

}

public static class MyReducer implements Reducer<Text, Info, Text, Text> {

    @Override
    public void reduce(Text key, Iterator<Info> values,
        OutputCollector<Text, Text> output, Reporter reporter)
        throws IOException {
        String value = "xyz"; // derived in some way
        // process
        output.collect(key, new Text(value)); // exception occurs at this line
    }

}

System.out.println("Starting v14 ");
JobConf conf = new JobConf(RouteBuilderJob.class);
conf.setJobName("xyz");

String jarLocation = ClassUtil.findContainingJar(getClass());

System.out.println("path of jar file = " + jarLocation);

conf.setJarByClass(RouteBuilderJob.class);

conf.setMapOutputKeyClass(Text.class);
conf.setMapOutputValueClass(Info.class);

conf.setOutputKeyClass(Text.class);
conf.setOutputValueClass(Text.class);

// am I missing something here?

conf.setMapperClass(RouteBuilderJob.RouteMapper.class);
conf.setCombinerClass(RouteBuilderJob.RouteReducer.class);
conf.setReducerClass(RouteBuilderJob.RouteReducer.class);


conf.setInputFormat(TextInputFormat.class);
conf.setOutputFormat(TextOutputFormat.class);

FileInputFormat.setInputPaths(conf, new Path(args[0]));
FileOutputFormat.setOutputPath(conf, new Path(args[1]));

JobClient.runJob(conf);

I am getting an exception:

Error: java.io.IOException: wrong value class: class org.apache.hadoop.io.Text is not class com.xyz.mypackage.Info
at org.apache.hadoop.mapred.IFile$Writer.append(IFile.java:199)
at org.apache.hadoop.mapred.Task$CombineOutputCollector.collect(Task.java:1307)
at com.xyz.mypackage.job.MyJob$RouteReducer.reduce(MyJob.java:156)
at com.xyz.mypackage.job.MyJob$RouteReducer.reduce(MyJob.java:1)

Internally, the Info object (which implements Writable) is serialized using Text:

@Override
public void write(DataOutput out) throws IOException {
    Gson gson = new Gson();
    String serializedStr = gson.toJson(this);
    Text.writeString(out, serializedStr);
}

@Override
public void readFields(DataInput in) throws IOException {
    String s = Text.readString(in);
    Gson gson = new Gson();
    JsonReader jsonReader = new JsonReader(new StringReader(s));
    jsonReader.setLenient(true);

    Info info = gson.fromJson(jsonReader, Info.class);
    // set fields using this.someField = info.getSomeField()
}
Solution

Technically, a combiner's output types must match the reducer's input types, because the combiner's output is fed back into the reducer. Here the reducer class itself is registered as the combiner, so the combine step emits Text values while reduce() expects Info values — which is exactly what the "wrong value class" exception reports. Either remove the setCombinerClass call, or use a combiner whose output value type is Info.
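A minimal sketch of one possible fix, using the same old mapred API as the question (the class name MyCombiner is assumed; the original job class names are kept for illustration): a pass-through combiner whose output types match the reducer's input types.

```java
// Sketch only: a pass-through combiner with output types <Text, Info>,
// matching the reducer's input, so the combine step no longer feeds
// Text values into reduce(). MyCombiner is a hypothetical name.
public static class MyCombiner extends MapReduceBase
        implements Reducer<Text, Info, Text, Info> {

    @Override
    public void reduce(Text key, Iterator<Info> values,
            OutputCollector<Text, Info> output, Reporter reporter)
            throws IOException {
        // No real aggregation here; just forward each Info unchanged.
        // (A combiner is only worthwhile if it actually pre-aggregates.)
        while (values.hasNext()) {
            output.collect(key, values.next());
        }
    }
}
```

Then register it in place of the reducer, or simply drop the combiner entirely:

```java
conf.setCombinerClass(RouteBuilderJob.MyCombiner.class);
// or: delete the conf.setCombinerClass(...) line altogether
```

If the combiner does no aggregation, deleting the line is the simpler fix, since a pass-through combiner only adds overhead.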
